Google DeepMind

Frontier Lab

Comprehensive overview of DeepMind's history, achievements (AlphaGo, AlphaFold with 200M+ protein structures), and 2023 merger with Google Brain. Documents racing dynamics with OpenAI and new Frontier Safety Framework with 5-tier capability thresholds, but provides limited actionable guidance for prioritization decisions.

Type: Frontier Lab
Founded: 2010
Location: London, UK
Employees: ~2,700 (Dec 2024)
Funding: Google subsidiary

Related
People: Demis Hassabis · Shane Legg
Organizations: OpenAI · Anthropic
Safety Agendas: Scalable Oversight
Risks: Reward Hacking · AI Development Racing Dynamics · AI-Driven Concentration of Power

Overview

Google DeepMind represents one of the world's most influential AI research organizations, formed in April 2023 through the merger of DeepMind and Google Brain. The combined entity has achieved breakthrough results including AlphaGo's defeat of world Go champions, AlphaFold's solution to protein folding, and Gemini's competition with GPT-4.

Founded in 2010 by Demis Hassabis, Shane Legg, and Mustafa Suleyman, DeepMind was acquired by Google in 2014 for approximately $500–650 million. The merger ended DeepMind's unique independence within Google, raising questions about whether commercial pressures will compromise its research-first culture and safety research.

Key achievements demonstrate AI's potential for scientific discovery: AlphaFold has predicted nearly 200 million protein structures, GraphCast outperforms traditional weather prediction, and GNoME discovered 380,000 stable materials. The organization now faces racing dynamics with OpenAI that may affect the pace of safety research relative to capability development.

Risk Assessment

Risk Category | Assessment | Evidence | Timeline
Commercial Pressure | Elevated | Gemini releases accelerated after ChatGPT launch; merger driven by competitive pressure | 2023–2025
Safety Culture Erosion | Moderate–Elevated | Loss of independent governance, product integration pressure post-merger | 2024–2027
Racing Dynamics | Elevated | Explicit competition with OpenAI/Microsoft; Google's "code red" response to ChatGPT | Ongoing
Power Concentration | Elevated | Massive compute resources, potential first-to-AGI advantage | 2025–2030

Historical Evolution

Founding and Early Years (2010–2014)

DeepMind was founded with the stated mission to "solve intelligence, then use that to solve everything else." The founding team brought complementary expertise:

Founder | Background | Contribution
Demis Hassabis | Chess master, game designer, neuroscience PhD | Strategic vision, technical leadership
Shane Legg | AI researcher with Jürgen Schmidhuber | AGI theory, early safety advocacy
Mustafa Suleyman | Social entrepreneur, Oxford dropout | Business strategy, applied focus. Left DeepMind for a role at Google in 2019, co-founded Inflection AI in 2022, became CEO of Microsoft AI in 2024.

The company's early work on deep reinforcement learning with Atari games demonstrated that general-purpose algorithms could master diverse tasks through environmental interaction alone.

Google Acquisition and Independence (2014–2023)

Google's 2014 acquisition was structured to preserve DeepMind's autonomy:

  • Separate brand and culture maintained
  • Ethics board established for AGI oversight
  • Open research publication continued
  • UK headquarters retained independence

This structure allowed DeepMind to pursue long-term fundamental research while accessing Google's substantial computational resources.

The Merger Decision (2023)

The April 2023 merger of DeepMind and Google Brain ended DeepMind's independent governance structure:

Factor | Impact
ChatGPT Competition | Pressure to consolidate AI resources
Resource Efficiency | Eliminate duplication between teams
Product Integration | Accelerate commercial deployment
Talent Retention | Unified career paths and leadership

Major Scientific Achievements

AlphaGo Series: Mastering Strategic Reasoning

DeepMind's early breakthrough came with Go, previously considered intractable for computers:

System | Year | Achievement | Impact
AlphaGo | 2016 | Defeated Lee Sedol 4–1 | 200M+ viewers, demonstrated strategic AI
AlphaGo Zero | 2017 | Self-play only, defeated AlphaGo 100–0 | Learning without human data
AlphaZero | 2017 | Generalized to chess/shogi | Domain-general strategic reasoning

"Move 37" in the Lee Sedol match exemplified unexpected AI strategy — a move no human would conventionally consider that proved strategically effective.

AlphaFold: Revolutionary Protein Science

AlphaFold represents a widely-cited scientific contribution of AI to biology:

Milestone | Achievement | Scientific Impact
CASP13 (2018) | First place in protein prediction | Proof of concept
CASP14 (2020) | ≈90% accuracy on protein folding | Addressed a 50-year grand challenge
Database Release (2021) | 200M+ protein structures freely available | Accelerated global research
Nobel Prize (2024) | Chemistry prize to Hassabis and Jumper (DeepMind); shared with David Baker (University of Washington, independent protein design work) | Major scientific recognition

Gemini: The GPT-4 Competitor

Model & Research Releases

Name | Released | Description
AlphaGo | Jan 2016 | First AI to defeat a professional Go player; beat Lee Sedol 4–1 in March 2016
AlphaFold | Dec 2018 | Won the CASP13 protein structure prediction competition
AlphaFold 2 | Nov 2020 | Solved protein structure prediction; accuracy comparable to experimental methods
Gemini 1.0 | Dec 2023 | Google's first natively multimodal model family (Ultra, Pro, Nano)
Gemini 1.5 | Feb 2024 | Introduced 1M token context window; mixture-of-experts architecture
Gemini 2.0 | Dec 2024 | Next-generation Gemini with improved agentic capabilities

Following the merger, Gemini became DeepMind's flagship product:

Version | Launch | Key Features | Competitive Position
Gemini 1.0 | Dec 2023 | Multimodal from the ground up | Claimed GPT-4 parity or superiority
Gemini 1.5 | Feb 2024 | 1M token context window (2M from May 2024) | Long-context leadership
Gemini 2.0 | Dec 2024 | Enhanced agentic capabilities | Integrated across Google

Sparrow: Alignment and Debate Methods

DeepMind's Sparrow project, published in 2022, applied RLHF and rule-based reward modeling to produce a dialogue agent that avoids harmful outputs more reliably than baseline models. The project incorporated elements of debate-style methods, prompting the model to cite evidence for its claims, as an approach to scalable oversight. Evaluations showed mixed results on truthfulness: Sparrow was rated more helpful and less harmful than baselines, but tended to hedge or give qualified answers in ways that did not always reflect confident factual accuracy. The Sparrow paper remains DeepMind's primary publication on alignment methods using debate and evidence-citing approaches.1
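The combination of a learned preference reward with rule-based penalties can be sketched as follows. The rules, weights, and scoring function here are invented for illustration; Sparrow's actual implementation trained a rule-violation classifier from human annotations rather than using keyword checks.

```python
# Illustrative sketch of Sparrow-style reward shaping: a learned preference
# reward combined with per-rule penalties. All rules and weights below are
# hypothetical stand-ins, not Sparrow's real rule set.

def preference_reward(response: str) -> float:
    """Stand-in for a learned reward model scoring helpfulness (toy proxy)."""
    return min(len(response) / 100.0, 1.0)

# Each rule maps a response to True if it is violated.
RULES = {
    "no_threats": lambda r: "threat" in r.lower(),
    "no_medical_advice": lambda r: "diagnose" in r.lower(),
}

def combined_reward(response: str, penalty: float = 1.0) -> float:
    """Preference score minus a fixed penalty for each violated rule."""
    violations = sum(check(response) for check in RULES.values())
    return preference_reward(response) - penalty * violations

print(combined_reward("Here is some cited evidence..."))  # positive: no rule violated
print(combined_reward("I diagnose you with..."))          # negative: rule penalty applied
```

In the real system, both the preference model and the rule classifiers were neural networks trained on human judgements; the key design choice shown here is only the additive combination of the two signals.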

Leadership and Culture

Current Leadership Structure

Key Leaders

Demis Hassabis
CEO, Co-founder
Shane Legg
Chief AGI Scientist, Co-founder
Koray Kavukcuoglu
VP Research
Pushmeet Kohli
VP Research, AI Safety
Jeff Dean
Chief Scientist, Google Research
Neel Nanda
Research Scientist, Mechanistic Interpretability Lead
Key People

Person | Title | Start | End | Founder?
Demis Hassabis | Co-founder & CEO | Sep 2010 | — | Yes
Shane Legg | Co-founder & Chief AGI Scientist | Sep 2010 | — | Yes
Mustafa Suleyman | Co-founder & Head of Applied AI | Sep 2010 | Jan 2022 | Yes
Pushmeet Kohli | VP of Research | 2017 | — | No

Demis Hassabis: The Scientific CEO

Hassabis combines rare credentials: chess mastery, successful game design, neuroscience PhD, and business leadership. His approach emphasizes:

  • Long-term research over short-term profits
  • Scientific publication and open collaboration
  • Beneficial applications like protein folding
  • Measured AGI development with safety considerations

The 2024 Nobel Prize in Chemistry recognizes the scientific contributions of DeepMind's AlphaFold work.

Research Philosophy: Intelligence Through Learning

DeepMind's core thesis:

Principle | Implementation | Examples
General algorithms | Same methods across domains | AlphaZero mastering multiple games
Environmental interaction | Learning through experience | Self-play in Go, chess
Emergent capabilities | Scale reveals new abilities | Larger models show better reasoning
Scientific applications | AI accelerates discovery | Protein folding, materials science

Safety Research and Framework

Frontier Safety Framework

Safety Milestones

Name | Date | Type | Description
Specification Gaming Research | Apr 2020 | research-paper | Catalogued examples of AI systems exploiting reward misspecification
Dangerous Capability Evaluations | Oct 2023 | safety-eval | Systematic evaluations for dangerous capabilities in frontier models
Frontier Safety Framework | May 2024 | policy-update | Framework for evaluating and mitigating risks from frontier AI models

Launched in May 2024, the Frontier Safety Framework is DeepMind's systematic approach to AI safety, defining critical capability levels with escalating safety measures:

Critical Capability Level | Description | Safety Measures
CCL-0 | No critical capabilities | Standard testing
CCL-1 | Could aid harmful actors | Enhanced security measures
CCL-2 | Could enable catastrophic harm | Deployment restrictions
CCL-3 | Could directly cause catastrophic harm | Severe limitations
CCL-4 | Autonomous catastrophic capabilities | No deployment

This framework parallels Anthropic's Responsible Scaling Policies, representing industry convergence on capability-based safety approaches.
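As a toy illustration of how tiered capability thresholds can gate deployment decisions, the following sketch maps the most severe level triggered by any dangerous-capability evaluation to a required response. The level names and responses mirror the table above, but the code itself is hypothetical, not DeepMind's implementation.

```python
# Illustrative capability-gating sketch: deployment response is driven by the
# most severe critical capability level (CCL) any evaluation triggers.
# Level names/responses follow the table above; the logic is hypothetical.
from enum import IntEnum

class CCL(IntEnum):
    NONE = 0                 # no critical capabilities
    AIDS_HARM = 1            # could aid harmful actors
    ENABLES_CATASTROPHE = 2  # could enable catastrophic harm
    DIRECT_CATASTROPHE = 3   # could directly cause catastrophic harm
    AUTONOMOUS = 4           # autonomous catastrophic capabilities

RESPONSES = {
    CCL.NONE: "standard testing",
    CCL.AIDS_HARM: "enhanced security measures",
    CCL.ENABLES_CATASTROPHE: "deployment restrictions",
    CCL.DIRECT_CATASTROPHE: "severe limitations",
    CCL.AUTONOMOUS: "no deployment",
}

def required_response(eval_results: dict[str, int]) -> str:
    """Gate on the worst (highest) level across all evaluation domains."""
    level = CCL(max(eval_results.values(), default=0))
    return RESPONSES[level]

# A model tripping a level-2 threshold on any single eval is gated at level 2:
print(required_response({"cyber": 1, "bio": 2, "autonomy": 0}))  # deployment restrictions
```

The design point this captures is that gating is monotone in the worst-case evaluation: a single triggered threshold dominates, rather than an average across domains.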

Technical Safety Research Areas

Research Direction | Approach | Key Publications
Scalable Oversight | AI debate, evidence-citing dialogue (Sparrow), recursive reward modeling | Scalable agent alignment via reward modeling
Specification Gaming | Documenting unintended behaviors | Specification gaming examples
Safety Gridworlds | Testable safety environments | AI Safety Gridworlds
Mechanistic Interpretability | Sparse autoencoder features, Gemma Scope open-source tools | Gemma Scope 2 (2024); SAE limitations assessment (2025)

Interpretability Research: Gemma Scope and SAE Work

DeepMind has invested substantially in interpretability research, with Neel Nanda leading the mechanistic interpretability team. Two significant outputs mark 2024–2025:

Gemma Scope 2 (2024): In 2024, DeepMind released Gemma Scope 2, described as the largest open-source interpretability tools release to date — comprising approximately 110 petabytes of data and models up to 1 trillion parameters.2 The release was framed as supporting the AI safety community's ability to study large-scale model internals, including sparse autoencoder (SAE) features trained on Gemma model activations.

Critical Assessment of SAE Limitations (2025): In March 2025, DeepMind's mechanistic interpretability team published a critical assessment of the limitations of sparse autoencoders for safety applications.3 The assessment examined whether SAE-extracted features are sufficiently reliable and interpretable to ground safety-relevant conclusions, identifying conditions under which SAE decompositions may not faithfully represent underlying model computations. This self-critical stance is notable given the field's reliance on SAEs as a primary interpretability tool. The publication reflects a broader research posture of publishing negative and limiting results alongside positive findings.

Neel Nanda's Role in AI Safety

Neel Nanda joined Google DeepMind to lead the mechanistic interpretability research team after earlier work establishing foundational results in the field (including work on grokking and superposition at Anthropic and independently). At DeepMind, the team has focused on sparse autoencoders as a method for decomposing neural network activations into interpretable features, publishing both the Gemma Scope tooling and the 2025 SAE limitations paper. Nanda has been a prominent communicator of mechanistic interpretability methods to the broader AI safety community, including through posts on LessWrong and the Alignment Forum.
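A sparse autoencoder of the kind described here can be sketched in a few lines: model activations are encoded into an overcomplete, non-negative feature space with an L1 sparsity penalty, then linearly decoded back. This is a minimal illustrative sketch with untrained random weights and made-up dimensions, not Gemma Scope code.

```python
# Minimal sparse autoencoder (SAE) sketch: decompose activations into a sparse
# set of learned features. Follows the standard SAE recipe (ReLU encoder,
# linear decoder, L1 penalty); dimensions and weights are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

d_model, d_features = 64, 512   # activation dim; overcomplete feature dim
W_enc = rng.normal(0, 0.1, (d_model, d_features))
b_enc = np.zeros(d_features)
W_dec = rng.normal(0, 0.1, (d_features, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x, l1_coeff=1e-3):
    """Encode activations x into sparse features f, then reconstruct."""
    f = np.maximum(0.0, x @ W_enc + b_enc)   # ReLU keeps features non-negative
    x_hat = f @ W_dec + b_dec                # linear decoder
    recon_loss = np.mean((x - x_hat) ** 2)   # reconstruction error
    sparsity_loss = l1_coeff * np.mean(np.abs(f).sum(axis=-1))  # L1 drives most features to zero
    return f, x_hat, recon_loss + sparsity_loss

x = rng.normal(size=(8, d_model))            # fake batch of residual-stream activations
f, x_hat, loss = sae_forward(x)
print(f.shape, x_hat.shape)                  # (8, 512) (8, 64)
```

Training minimizes the combined loss by gradient descent; interpretability work then inspects which inputs activate each of the learned feature directions. The 2025 limitations assessment concerns whether such decompositions faithfully represent the underlying computation.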

Evaluation and Red Teaming

DeepMind's Frontier Safety Team conducts:

  • Pre-training evaluations for dangerous capabilities
  • Red team exercises testing misuse potential
  • External collaboration with safety organizations
  • Transparency reports on safety assessments

Google Integration: Benefits and Tensions

Resource Advantages

Strategic Partnerships

Partner | Type | Date | Notes
Google (Alphabet) | Acquisition | Jan 2014 | Acquired by Google for approximately $500M; conditions included creation of an AI ethics board
Isomorphic Labs | Spin-off | Nov 2021 | Drug discovery spin-off from DeepMind, led by Demis Hassabis; leverages AlphaFold technology
Google Brain | Merger | Apr 2023 | DeepMind merged with Google Brain to form Google DeepMind; Jeff Dean became Chief Scientist of Google DeepMind and Google Research

Google's backing provides substantial capabilities:

Resource Type | Specific Advantages | Scale
Compute | TPU access, massive data centers | Exaflop-scale training
Data | YouTube, Search, Gmail datasets | Billions of users
Distribution | Google products, Android | 3+ billion active users
Talent | Top engineers, research infrastructure | Competitive salaries/equity

Commercial Pressure Points

The merger introduced new tensions:

Pressure | Source | Impact on Research
Revenue generation | Google shareholders | Pressure to monetize research
Product integration | Google executives | Divert resources to products
Competition response | OpenAI/Microsoft race | Accelerated release timelines
Bureaucracy | Large organization | Slower decision-making

Racing Dynamics with OpenAI

Google's "code red" response to ChatGPT illustrates competitive pressure:

  • December 2022: ChatGPT launch triggers Google emergency response
  • February 2023: Bard released quickly, with a factual error in the launch demo drawing criticism
  • April 2023: DeepMind–Brain merger announced
  • December 2023: Gemini 1.0 released to compete with GPT-4

Critics have characterized some of these releases as rushed; DeepMind and Google leadership have described them as appropriate responses to market conditions. This racing dynamic is a concern among safety researchers who note coordination failures as a risk factor.

Current State and Capabilities

Scientific AI Applications

DeepMind continues applying AI to fundamental science:

Project | Domain | Achievement | Impact
GraphCast | Weather prediction | Outperforms traditional models on medium-range forecast benchmarks | Improved forecasting accuracy
GNoME | Materials science | 380K new stable materials identified | Accelerated materials discovery
AlphaTensor | Mathematics | Novel matrix multiplication algorithms | Algorithmic efficiency improvements
FunSearch | Pure mathematics | Novel combinatorial solutions via evolutionary search | Mathematical discovery

Gemini Deployment Strategy

Google integrates Gemini across its ecosystem:

Product | Integration | User Base
Search | Enhanced search results | 8.5B searches/day
Workspace | Gmail, Docs, Sheets | 3B+ users
Android | On-device AI features | 3B+ devices
Cloud Platform | Enterprise AI services | Major corporations

This distribution advantage provides data collection and feedback loops for model improvement at scale.

Key Uncertainties and Debates

Will Safety Culture Survive Integration?

Safety Culture Debate: Impact of the Merger on Safety

Culture Preserved — Hassabis maintains leadership, the Frontier Safety Framework provides structure, and Google benefits from a responsible-development reputation.
Proponents: DeepMind leadership, Google executives · Confidence: medium (3/5)

Commercial Corruption — Racing pressure overrides safety investment, product demands compete for research resources, and Google's ad-based business model creates misaligned incentives.
Proponents: Safety researchers, former employees · Confidence: high (4/5)

Mixed Outcomes — Some safety progress continues while commercial pressure increases; the outcome depends on specific decisions, regulatory intervention, and external constraints.
Proponents: Independent observers · Confidence: medium (3/5)

Note: Strength scores (3, 4, 3) represent editorial assessment of the relative weight of available public evidence for each position, not results of consensus polling or formal elicitation.

AGI Timeline and Power Concentration

Timeline predictions for when DeepMind might achieve AGI vary significantly based on who's making the estimate and what methodology they're using. Public statements from DeepMind leadership suggest arrival within the next decade, while external observers analyzing capability trajectories point to potentially faster timelines based on recent progress.

Expert/Source | Estimate | Reasoning
Demis Hassabis (2023) | 5–10 years | Hassabis has stated that AGI could arrive within a decade on current trajectories. This reflects DeepMind's direct visibility into its own research pipeline, though it may also be shaped by strategic communication considerations.
Shane Legg (2009, reiterated 2011) | 50% by 2028 | Legg has held this prediction publicly since 2009 and reiterated it in a widely cited 2011 LessWrong post; despite deep learning advances exceeding earlier expectations, he had not revised it as of that reiteration. The 50% framing reflects genuine uncertainty rather than confident prediction.
Capability trajectory analysis | 3–7 years | External extrapolation from the rapid Gemini 1.0 → 2.0 progression suggests faster timelines than official statements, but assumes continued scaling returns, which is itself contested.

If DeepMind develops AGI first, this concentrates substantial power in a single corporation with limited external oversight.

Governance and Accountability

Governance Mechanism | Effectiveness | Limitations
Ethics Board | Unknown | Opaque composition and activities; no public reporting
Internal Reviews | Some oversight | Self-regulation without external validation
Government Regulation | Emerging | Regulatory capture risk, technical complexity
Market Competition | Forces innovation | May accelerate unsafe development

Comparative Analysis

vs OpenAI

Dimension | DeepMind | OpenAI
Independence | Google subsidiary | Microsoft partnership
Research Focus | Scientific applications + commercial | Commercial products + research
Safety Approach | Capability thresholds + evals + interpretability | RLHF + deliberative alignment + evals
Distribution | Google ecosystem | API + ChatGPT

vs Anthropic

Approach | DeepMind | Anthropic
Safety Brand | Research lab with safety component | Safety-first branding
Technical Methods | RL + scaling + evals + mechanistic interpretability | Constitutional AI + interpretability
Resources | Substantial (Google-backed) | Significant but smaller
Independence | Fully integrated into Google | Independent with Amazon investment

Both organizations claim safety leadership but face similar commercial pressures and racing dynamics.

Future Trajectories

Scenario Analysis

Optimistic Scenario: DeepMind maintains research excellence while developing safe AGI. Frontier Safety Framework proves effective. Scientific applications like AlphaFold continue. Google's resources enable both capability and safety advancement. Interpretability research matures into deployable safety tools.

Pessimistic Scenario: Commercial racing overwhelms safety culture. Gemini competition forces compressed timelines. AGI development proceeds without adequate safeguards. Power concentrates in Google without democratic accountability. SAE and interpretability limitations identified in 2025 research persist unresolved.

Mixed Reality: Continued scientific breakthroughs alongside increasing commercial pressure. Some safety measures persist while others erode. Outcome depends on leadership decisions, regulatory intervention, and competitive dynamics.

Key Decision Points (2025–2027)

  1. Regulatory Response: How will governments regulate frontier AI development?
  2. Safety Threshold Tests: Will DeepMind actually pause development when capability thresholds are reached?
  3. Scientific vs Commercial: Will AlphaFold-style applications continue or shift to commercial focus?
  4. Transparency: Will research publication continue or become more proprietary?
  5. AGI Governance: What oversight mechanisms will constrain AGI development?
  6. Interpretability Maturation: Will mechanistic interpretability tools (e.g., Gemma Scope) translate into actionable safety interventions, or remain primarily research artifacts?

Key Questions

  • Can DeepMind's safety culture survive full Google integration and commercial pressure?
  • Will the Frontier Safety Framework meaningfully constrain development or prove to be self-regulation theater?
  • How will democratic societies govern AGI development by large corporations?
  • Will DeepMind continue scientific applications or shift entirely to commercial AI products?
  • What happens if DeepMind achieves AGI first — does this create unacceptable power concentration?
  • Can racing dynamics with OpenAI/Microsoft be resolved without compromising safety margins?
  • Will the SAE limitations identified in 2025 be resolved, or do they indicate fundamental constraints on interpretability-based safety approaches?

Sources & Resources

Academic Papers & Research

Category | Key Publications | Links
Foundational Work | DQN (Nature 2015), AlphaGo (Nature 2016) | Nature DQN
AlphaFold Series | AlphaFold 2 (Nature 2021), database papers | Nature AlphaFold
Safety Research | AI Safety Gridworlds, Specification Gaming | Safety Gridworlds
Recent Advances | Gemini technical reports, GraphCast | Gemini Report

Official Resources

Type | Resource | URL
Company Blog | DeepMind Research | deepmind.google
Safety Framework | Frontier Safety documentation | Frontier Safety
AlphaFold Database | Protein structure predictions | alphafold.ebi.ac.uk
Publications | Research papers and preprints | scholar.google.com

News & Analysis

Source | Focus | Example Coverage
The Information | Tech industry analysis | Merger coverage, internal dynamics
AI Research Organizations | Technical assessment | Future of Humanity Institute
Safety Community | Risk analysis | Alignment Forum
Policy Analysis | Governance implications | Center for AI Safety

Footnotes

  1. Glaese et al. (2022). "Improving alignment of dialogue agents via targeted human judgements." DeepMind. The Sparrow paper describes rule-based reward modeling and evidence-citing as alignment methods, with human evaluation showing improved harmlessness but mixed truthfulness outcomes.

  2. DeepMind Blog (2024). "Gemma Scope 2: Helping the AI safety community with open-source interpretability tools." The release comprised approximately 110 PB of data and models up to 1 trillion parameters, described as the largest open-source interpretability release at that time.

  3. DeepMind Mechanistic Interpretability Team (March 26, 2025). Critical assessment of sparse autoencoder limitations for safety applications. Published on the DeepMind blog and cross-posted to the Alignment Forum.

References

1. Scalable agent alignment via reward modeling — arXiv · Jan Leike et al. · 2018 · Paper
3. AI Safety Gridworlds — arXiv · Jan Leike et al. · 2017 · Paper
4. Nature DQN — Nature (peer-reviewed) · Paper
5. Nature AlphaFold — Nature (peer-reviewed) · Paper
6. Gemini Report — arXiv · Gemini Team et al. · 2023 · Paper
7. Google DeepMind — Google DeepMind
8. Frontier Safety — Google DeepMind
10. scholar.google.com — Google Scholar
11. Future of Humanity Institute — Future of Humanity Institute
12. AI Alignment Forum — Alignment Forum · Blog post
13. CAIS Surveys — Center for AI Safety. The Center for AI Safety conducts technical and conceptual research to mitigate potential catastrophic risks from advanced AI systems, taking a comprehensive approach spanning technical research, philosophy, and societal implications.

Structured Data

Headcount: 2,700 (as of Dec 2024)
Founded Date: Sep 2010



All Facts

Organization
Property | Value | As Of
Founded Date | Sep 2010 |
Headquarters | London, UK |
Legal Structure | Subsidiary of Alphabet Inc. |

Financial
Property | Value | As Of
Headcount | 2,700 | Dec 2024

People
Property | Value | As Of
Founded By | Demis Hassabis, Shane Legg, Mustafa Suleyman |

Safety
Property | Value | As Of
Safety Researchers | 120 | Jun 2025

Model
Property | Value | As Of
Context Window | 2 million tokens | May 2024

Research Areas

Name | Description | Started
Game-Playing AI | Reinforcement learning for complex games including Go, chess, StarCraft, and Diplomacy | Jan 2013
Protein Structure Prediction | Deep learning for predicting 3D protein structures from amino acid sequences | Jan 2016
Neuroscience-Inspired AI | Using insights from neuroscience to improve AI architectures and learning algorithms | Sep 2010
AI Safety | Responsible AI development including alignment, robustness, and societal impact | Jan 2017



Related Pages

Top Related Pages

Safety Research: AI Control
Approaches: Constitutional AI
Analysis: Anthropic Impact Assessment Model · AI Safety Intervention Effectiveness Matrix
Policy: Responsible Scaling Policies (RSPs)
Other: Demis Hassabis · Neel Nanda · Gemini · Gemini 1.0 Ultra
Risks: AI Development Racing Dynamics · Reward Hacking
Key Debates: Corporate Influence on AI Policy · AI Safety Solution Cruxes · Why Alignment Might Be Hard
Historical: Deep Learning Revolution Era · The MIRI Era
Concepts: AGI Timeline · AGI Development · Large Language Models