Longterm Wiki

Benchmark Score

benchmark-score · 4 facts across 2 entities · product

Definition

| Field | Value |
|---|---|
| Name | Benchmark Score |
| Description | Performance score on a specific benchmark |
| Data Type | number |
| Unit | |
| Category | product |
| Temporal | Yes |
| Computed | No |
| Applies To | organization |

All Facts (4)

OpenAI · latest 83 · Sep 2024 · 2 values

| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Sep 2024 | 83 | | f_W5pFmO2ocw |
| Sep 2024 | 71.7 | | f_5MsUCUfbSw |

xAI · latest 93.3 · 2025 · 2 values

| As Of | Value | Source | Fact ID |
|---|---|---|---|
| 2025 | 93.3 | x.ai | f_tHAA1W30dw |
| 2025 | 1,402 | x.ai | f_mwjUCVDWoA |

Coverage

| Field | Value |
|---|---|
| Applies To | organization |
| Applicable Entities | 100 |
| Have Current Data | 2 of 100 (2%) |

Missing (98)

1Day Sooner · 80,000 Hours · ACX Grants · AI Futures Project · AI Impacts · Alignment Research Center · Anthropic · Anthropic (Funder) · Apollo Research · Arb Research · ARC Evaluations · Astralis Foundation · Blueprint Biosecurity · Bridgewater AIA Labs · Center for AI Safety · Center for Applied Rationality · Centre for Effective Altruism · Centre for Long-Term Resilience · CHAI · Chan Zuckerberg Initiative · Coalition for Epidemic Preparedness Innovations · Coefficient Giving · Conjecture · ControlAI · Council on Strategic Risks · CSER (Centre for the Study of Existential Risk) · CSET (Center for Security and Emerging Technology) · EA Global · Elicit (AI Research Tool) · Elon Musk (Funder) · Epoch AI · FAR AI · Forecasting Research Institute (FRI) · Founders Fund · Frontier Model Forum · FTX · FTX Future Fund · Future of Humanity Institute · Future of Life Institute (FLI) · FutureSearch · GiveWell · Giving Pledge · Giving What We Can · Global Partnership on Artificial Intelligence (GPAI) · Good Judgment (Forecasting) · Goodfire · Google DeepMind · GovAI · Gratified · IBBIS (International Biosecurity and Biosafety Initiative for Science) · Johns Hopkins Center for Health Security · Kalshi (Prediction Market) · Leading the Future super PAC · LessWrong · Lighthaven (Event Venue) · Lightning Rod Labs · Lionheart Ventures · Long-Term Future Fund (LTFF) · Longview Philanthropy · MacArthur Foundation · Machine Intelligence Research Institute · Manifest (Forecasting Conference) · Manifold (Prediction Market) · Manifund · MATS ML Alignment Theory Scholars program · Meta AI (FAIR) · Metaculus · METR · Microsoft AI · NIST and AI Safety · NTI | bio (Nuclear Threat Initiative - Biological Program) · NVIDIA · Open Philanthropy · OpenAI Foundation · Palisade Research · Pause AI · Polymarket · QURI (Quantified Uncertainty Research Institute) · Red Queen Bio · Redwood Research · Rethink Priorities · Safe Superintelligence Inc · Samotsvety · Schmidt Futures · Secure AI Project · SecureBio · SecureDNA · Seldon Lab · Sentinel (Catastrophic Risk Foresight) · Situational Awareness LP · Survival and Flourishing Fund · Swift Centre · The Foundation Layer · Turion · UK AI Safety Institute · US AI Safety Institute · Value Aligned Research Advisors · William and Flora Hewlett Foundation