Founded By
founded-by · 11 facts across 11 entities · people
Definition
| Field | Value |
|---|---|
| Name | Founded By |
| Description | Person(s) who founded this organization |
| Data Type | refs |
| Unit | — |
| Category | people |
| Temporal | No |
| Computed | No |
| Applies To | organization |
| Inverse | Founded (founder-of) |
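The definition above can be sketched as a record type. This is a minimal illustration, not the catalog's actual schema; the class and field names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the property definition above; the class and
# field names are assumptions, not the catalog's real API.
@dataclass
class PropertyDef:
    id: str                  # slug, e.g. "founded-by"
    name: str
    description: str
    data_type: str           # "refs" = list of entity references
    category: str
    applies_to: str          # entity type the property attaches to
    temporal: bool           # whether values carry "as of" dates
    computed: bool
    inverse: Optional[str]   # inverse property on the referenced entity

FOUNDED_BY = PropertyDef(
    id="founded-by",
    name="Founded By",
    description="Person(s) who founded this organization",
    data_type="refs",
    category="people",
    applies_to="organization",
    temporal=False,
    computed=False,
    inverse="founder-of",
)
```

Because `temporal` is false, facts for this property carry no "As Of" date, which matches the empty As Of column in the tables below.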
All Facts (11)
Anthropic · 1 value
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | zR4nW8xB2f, tKMznr07QA, tom-brown, Tz48rTriBg, jared-kaplan, sam-mccandlish, jack-clark | anthropic.com | f_tAtOfKqDjg |
Alignment Research Center · 1 value
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | vzxfzxBITd | alignment.org | f_tC3M5pbZUg |
Conjecture · 1 value
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | CrXoCsIucX, sid-black, gabriel-alfour | conjecture.dev | f_Tu50nzl3OA |
Google DeepMind · 1 value
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Aqcyu3onCA, shane-legg, mustafa-suleyman | en.wikipedia.org | f_AX7AnnLZQQ |
Meta AI (FAIR) · 1 value
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | cMbVUVK29Q | ai.meta.com | f_lhQD26Idvw |
Machine Intelligence Research Institute · 1 value
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | fS1tEGKuNq | intelligence.org | f_QYWh1IZqjA |
OpenAI · 1 value
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | JKsVHQ5stQ, eYrBgMLJDw, Yq4s2cI7ng, 69vftmr1jg, wojciech-zaremba, john-schulman | openai.com | f_A4ImW0y2sw |
Red Queen Bio · 1 value
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | nikolai-eroshenko, hannu-rajaniemi | techcrunch.com | f_YndqOVXiTA |
Redwood Research · 1 value
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | nate-thomas, buck-shlegeris | redwoodresearch.org | f_yii7fct7Gw |
Safe Superintelligence Inc · 1 value
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | eYrBgMLJDw, daniel-gross, daniel-levy | ssi.inc | f_TUP4Wxpwqg |
xAI · 1 value
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | 69vftmr1jg | x.ai | f_qr3MjbUa4Q |
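Since the definition declares an inverse property (founder-of), each fact above implies a reverse edge from person to organization. A minimal sketch of that derivation, using a small excerpt of the facts listed above (the dict structure is an assumption, not the catalog's storage format):

```python
# Excerpt of the founded-by facts above, keyed by organization.
founded_by = {
    "Redwood Research": ["nate-thomas", "buck-shlegeris"],
    "Red Queen Bio": ["nikolai-eroshenko", "hannu-rajaniemi"],
    "xAI": ["69vftmr1jg"],
}

# Invert the mapping to get the "founder-of" view: person -> organizations.
founder_of: dict[str, list[str]] = {}
for org, founders in founded_by.items():
    for person in founders:
        founder_of.setdefault(person, []).append(org)

print(founder_of["buck-shlegeris"])  # ['Redwood Research']
```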
Coverage
| Field | Value |
|---|---|
| Applies To | organization |
| Applicable Entities | 100 |
| Have Current Data | 11 of 100 (11%) |
Missing (89)
1Day Sooner · 80,000 Hours · ACX Grants · AI Futures Project · AI Impacts · Anthropic (Funder) · Apollo Research · Arb Research · ARC Evaluations · Astralis Foundation · Blueprint Biosecurity · Bridgewater AIA Labs · Center for AI Safety · Center for Applied Rationality · Centre for Effective Altruism · Centre for Long-Term Resilience · CHAI · Chan Zuckerberg Initiative · Coalition for Epidemic Preparedness Innovations · Coefficient Giving · ControlAI · Council on Strategic Risks · CSER (Centre for the Study of Existential Risk) · CSET (Center for Security and Emerging Technology) · EA Global · Elicit (AI Research Tool) · Elon Musk (Funder) · Epoch AI · FAR AI · Forecasting Research Institute (FRI) · Founders Fund · Frontier Model Forum · FTX · FTX Future Fund · Future of Humanity Institute · Future of Life Institute (FLI) · FutureSearch · GiveWell · Giving Pledge · Giving What We Can · Global Partnership on Artificial Intelligence (GPAI) · Good Judgment (Forecasting) · Goodfire · GovAI · Gratified · IBBIS (International Biosecurity and Biosafety Initiative for Science) · Johns Hopkins Center for Health Security · Kalshi (Prediction Market) · Leading the Future super PAC · LessWrong · Lighthaven (Event Venue) · Lightning Rod Labs · Lionheart Ventures · Long-Term Future Fund (LTFF) · Longview Philanthropy · MacArthur Foundation · Manifest (Forecasting Conference) · Manifold (Prediction Market) · Manifund · MATS ML Alignment Theory Scholars program · Metaculus · METR · Microsoft AI · NIST and AI Safety · NTI | bio (Nuclear Threat Initiative - Biological Program) · NVIDIA · Open Philanthropy · OpenAI Foundation · Palisade Research · Pause AI · Polymarket · QURI (Quantified Uncertainty Research Institute) · Rethink Priorities · Samotsvety · Schmidt Futures · Secure AI Project · SecureBio · SecureDNA · Seldon Lab · Sentinel (Catastrophic Risk Foresight) · Situational Awareness LP · Survival and Flourishing Fund · Swift Centre · The Foundation Layer · Turion · UK AI Safety Institute · US AI Safety Institute · Value Aligned Research Advisors · William and Flora Hewlett Foundation