Benchmark Score
benchmark-score · 4 facts across 2 entities · product
Definition
| Field | Value |
|---|---|
| Name | Benchmark Score |
| Description | Performance score on a specific benchmark |
| Data Type | number |
| Unit | — |
| Category | product |
| Temporal | Yes |
| Computed | No |
| Applies To | organization |
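The definition above describes a temporal numeric attribute attached to organizations, where each observation carries a date, a value, an optional source, and a fact ID. As an illustration only, here is a minimal sketch of such a fact record in Python; the `Fact` class is hypothetical, but the field names follow the tables on this page:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Fact:
    """One observation of a temporal attribute (hypothetical schema)."""
    entity: str                   # organization the fact applies to
    as_of: str                    # observation date, e.g. "Sep 2024" or "2025"
    value: float                  # numeric benchmark score
    fact_id: str                  # stable identifier, e.g. "f_W5pFmO2ocw"
    source: Optional[str] = None  # e.g. "x.ai"; None when unknown

# The four facts listed on this page, expressed in this sketch schema:
facts = [
    Fact("OpenAI", "Sep 2024", 83.0, "f_W5pFmO2ocw"),
    Fact("OpenAI", "Sep 2024", 71.7, "f_5MsUCUfbSw"),
    Fact("xAI", "2025", 93.3, "f_tHAA1W30dw", source="x.ai"),
    Fact("xAI", "2025", 1402.0, "f_mwjUCVDWoA", source="x.ai"),
]
```

Because the attribute is marked Temporal, an entity can accumulate multiple values over time, which is why the per-entity rows below group several facts under one organization.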
All Facts (4)
OpenAI · 83 · Sep 2024 · 2 values
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Sep 2024 | 83 | — | f_W5pFmO2ocw |
| Sep 2024 | 71.7 | — | f_5MsUCUfbSw |
xAI · 93.3 · 2025 · 2 values
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| 2025 | 93.3 | x.ai | f_tHAA1W30dw |
| 2025 | 1,402 | x.ai | f_mwjUCVDWoA |
Coverage
| Field | Value |
|---|---|
| Applies To | organization |
| Applicable Entities | 100 |
| Have Current Data | 2 of 100 (2%) |
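The coverage figure is simple arithmetic: the count of entities with current data divided by the count of applicable entities. A one-line sketch (the function name is illustrative, not from this page):

```python
def coverage(have_data: int, applicable: int) -> float:
    """Share of applicable entities with at least one current value."""
    return have_data / applicable

# 2 of 100 organizations have current benchmark-score data.
print(f"{coverage(2, 100):.0%}")
```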
Missing (98)
1Day Sooner, 80,000 Hours, ACX Grants, AI Futures Project, AI Impacts, Alignment Research Center, Anthropic, Anthropic (Funder), Apollo Research, Arb Research, ARC Evaluations, Astralis Foundation, Blueprint Biosecurity, Bridgewater AIA Labs, Center for AI Safety, Center for Applied Rationality, Centre for Effective Altruism, Centre for Long-Term Resilience, CHAI, Chan Zuckerberg Initiative, Coalition for Epidemic Preparedness Innovations, Coefficient Giving, Conjecture, ControlAI, Council on Strategic Risks, CSER (Centre for the Study of Existential Risk), CSET (Center for Security and Emerging Technology), EA Global, Elicit (AI Research Tool), Elon Musk (Funder), Epoch AI, FAR AI, Forecasting Research Institute (FRI), Founders Fund, Frontier Model Forum, FTX, FTX Future Fund, Future of Humanity Institute, Future of Life Institute (FLI), FutureSearch, GiveWell, Giving Pledge, Giving What We Can, Global Partnership on Artificial Intelligence (GPAI), Good Judgment (Forecasting), Goodfire, Google DeepMind, GovAI, Gratified, IBBIS (International Biosecurity and Biosafety Initiative for Science), Johns Hopkins Center for Health Security, Kalshi (Prediction Market), Leading the Future super PAC, LessWrong, Lighthaven (Event Venue), Lightning Rod Labs, Lionheart Ventures, Long-Term Future Fund (LTFF), Longview Philanthropy, MacArthur Foundation, Machine Intelligence Research Institute, Manifest (Forecasting Conference), Manifold (Prediction Market), Manifund, MATS ML Alignment Theory Scholars program, Meta AI (FAIR), Metaculus, METR, Microsoft AI, NIST and AI Safety, NTI | bio (Nuclear Threat Initiative - Biological Program), NVIDIA, Open Philanthropy, OpenAI Foundation, Palisade Research, Pause AI, Polymarket, QURI (Quantified Uncertainty Research Institute), Red Queen Bio, Redwood Research, Rethink Priorities, Safe Superintelligence Inc, Samotsvety, Schmidt Futures, Secure AI Project, SecureBio, SecureDNA, Seldon Lab, Sentinel (Catastrophic Risk Foresight), Situational Awareness LP, Survival and Flourishing Fund, Swift Centre, The Foundation Layer, Turion, UK AI Safety Institute, US AI Safety Institute, Value Aligned Research Advisors, William and Flora Hewlett Foundation