
Safety Researchers

safety-researcher-count · 2 facts across 2 entities · safety

Definition

Name: Safety Researchers
Description: Number of employees working on safety-related research
Data Type: number
Unit: (none)
Category: safety
Temporal: Yes
Computed: No
Applies To: organization
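
The definition above is effectively a typed schema record. As a minimal sketch of how such a property definition might be modeled in code (the interface and field names are illustrative assumptions, not the wiki's actual data model):

```typescript
// Hypothetical model of the property definition above; names are
// illustrative assumptions, not the wiki's actual schema.
interface PropertyDefinition {
  id: string;                // "safety-researcher-count"
  name: string;
  description: string;
  dataType: "number" | "string" | "boolean";
  unit?: string;             // omitted here: the value is a plain headcount
  category: string;          // "safety"
  temporal: boolean;         // true: each value carries an "as of" date
  computed: boolean;         // false: values are recorded, not derived
  appliesTo: "organization"; // the entity kind this property attaches to
}

const safetyResearchers: PropertyDefinition = {
  id: "safety-researcher-count",
  name: "Safety Researchers",
  description: "Number of employees working on safety-related research",
  dataType: "number",
  category: "safety",
  temporal: true,
  computed: false,
  appliesTo: "organization",
};
```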

All Facts (2)

Anthropic: 265 as of Dec 2025 (1 value)
  As Of: Dec 2025 · Value: 265 · Source: (none) · Fact ID: f_ddR2zmZ1ZQ

Google DeepMind: 120 as of Jun 2025 (1 value)
  As Of: Jun 2025 · Value: 120 · Source: (none) · Fact ID: f_dANFu1eouA
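
Because the property is temporal, each entity can hold several dated values, and the page treats the most recent one as current. A sketch of that resolution rule, under the assumption that facts are plain records keyed by an ISO "as of" date (types and names are hypothetical):

```typescript
// Hypothetical fact record; field names mirror the table above but are assumptions.
interface Fact {
  entity: string;
  asOf: string;   // ISO month, e.g. "2025-12" for Dec 2025
  value: number;
  factId: string;
}

const facts: Fact[] = [
  { entity: "Anthropic", asOf: "2025-12", value: 265, factId: "f_ddR2zmZ1ZQ" },
  { entity: "Google DeepMind", asOf: "2025-06", value: 120, factId: "f_dANFu1eouA" },
];

// The current value per entity is the fact with the latest "as of" date.
// ISO date strings compare correctly with plain lexicographic ordering.
function currentValues(rows: Fact[]): Map<string, Fact> {
  const latest = new Map<string, Fact>();
  for (const row of rows) {
    const prev = latest.get(row.entity);
    if (!prev || row.asOf > prev.asOf) latest.set(row.entity, row);
  }
  return latest;
}

console.log(currentValues(facts).get("Anthropic")?.value); // 265
```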

Coverage

Applies To: organization
Applicable Entities: 100
Have Current Data: 2 of 100 (2%)
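
The coverage figure is straight arithmetic: entities with at least one current fact divided by all entities the property applies to. A one-line sketch (function name hypothetical):

```typescript
// Coverage = entities with current data / applicable entities, as a percentage.
const coveragePercent = (withData: number, applicable: number): number =>
  (withData / applicable) * 100;

console.log(coveragePercent(2, 100)); // 2
```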

Missing (98)

1Day Sooner · 80,000 Hours · ACX Grants · AI Futures Project · AI Impacts · Alignment Research Center · Anthropic (Funder) · Apollo Research · Arb Research · ARC Evaluations · Astralis Foundation · Blueprint Biosecurity · Bridgewater AIA Labs · Center for AI Safety · Center for Applied Rationality · Centre for Effective Altruism · Centre for Long-Term Resilience · CHAI · Chan Zuckerberg Initiative · Coalition for Epidemic Preparedness Innovations · Coefficient Giving · Conjecture · ControlAI · Council on Strategic Risks · CSER (Centre for the Study of Existential Risk) · CSET (Center for Security and Emerging Technology) · EA Global · Elicit (AI Research Tool) · Elon Musk (Funder) · Epoch AI · FAR AI · Forecasting Research Institute (FRI) · Founders Fund · Frontier Model Forum · FTX · FTX Future Fund · Future of Humanity Institute · Future of Life Institute (FLI) · FutureSearch · GiveWell · Giving Pledge · Giving What We Can · Global Partnership on Artificial Intelligence (GPAI) · Good Judgment (Forecasting) · Goodfire · GovAI · Gratified · IBBIS (International Biosecurity and Biosafety Initiative for Science) · Johns Hopkins Center for Health Security · Kalshi (Prediction Market) · Leading the Future super PAC · LessWrong · Lighthaven (Event Venue) · Lightning Rod Labs · Lionheart Ventures · Long-Term Future Fund (LTFF) · Longview Philanthropy · MacArthur Foundation · Machine Intelligence Research Institute · Manifest (Forecasting Conference) · Manifold (Prediction Market) · Manifund · MATS ML Alignment Theory Scholars program · Meta AI (FAIR) · Metaculus · METR · Microsoft AI · NIST and AI Safety · NTI | bio (Nuclear Threat Initiative - Biological Program) · NVIDIA · Open Philanthropy · OpenAI · OpenAI Foundation · Palisade Research · Pause AI · Polymarket · QURI (Quantified Uncertainty Research Institute) · Red Queen Bio · Redwood Research · Rethink Priorities · Safe Superintelligence Inc · Samotsvety · Schmidt Futures · Secure AI Project · SecureBio · SecureDNA · Seldon Lab · Sentinel (Catastrophic Risk Foresight) · Situational Awareness LP · Survival and Flourishing Fund · Swift Centre · The Foundation Layer · Turion · UK AI Safety Institute · US AI Safety Institute · Value Aligned Research Advisors · William and Flora Hewlett Foundation · xAI