# Apollo Research

Structured facts and database records from the Longterm Wiki.

## General

Website: https://www.apolloresearch.ai

## Board Seats (1)

| Member | Role | Source | Notes |
| --- | --- | --- | --- |
| Owain Evans | Advisor | apolloresearch.ai | Owain Evans (Director, Truthful AI), listed as Advisor on the team page; confirmed 2026-04-07. |

## Divisions (1)

| Name | Division Type | Status | Source | Notes |
| --- | --- | --- | --- | --- |
| Apollo Evals | team | active | apolloresearch.ai | AI safety evaluations focused on detecting deceptive and scheming behaviors in frontier models. Published influential research on in-context scheming in 2024. |

## Entity Assessments (7)

| Dimension | Rating | Evidence | Assessor |
| --- | --- | --- | --- |
| government-integration | Strong | UK AISI partner, US AISI consortium member, presented at the Bletchley AI Summit | editorial |
| intervention-impact | Measurable | Deliberative alignment reduced scheming from 13% to 0.4% (a roughly 30x reduction) in OpenAI models | editorial |
| key-finding-2024 | Critical | o1 maintains deception in over 85% of follow-up questions after engaging in scheming | editorial |
| lab-partnerships | Extensive | Pre-deployment evaluations for [OpenAI](https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/), [Anthropic](https://www.anthropic.com), and [Google DeepMind](https://deepmind.google/blog/deepening-our-partnership-with-the-uk-ai-security-institute/) | editorial |
| methodology-rigor | Very High | 300 rollouts per model/evaluation; statistically significant results (p < 0.05) | editorial |
| research-output | High Impact | [December 2024 paper](https://arxiv.org/abs/2412.04984) tested 6 frontier models across 180+ scenarios; cited in OpenAI and Anthropic safety frameworks | editorial |
| team-size | ~20 researchers | Full-time staff, including CEO Marius Hobbhahn, named to the [TIME 100 AI 2025](https://time.com/collections/time100-ai-2025/7305864/marius-hobbhahn/) | editorial |
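The "30x reduction" figure in the intervention-impact row can be sanity-checked with a line of arithmetic. A minimal sketch, assuming only the 13% and 0.4% scheming rates quoted above:

```python
# Sanity check for the intervention-impact assessment: deliberative alignment
# reportedly cut the scheming rate from 13% to 0.4% in OpenAI models.
before = 0.13   # scheming rate before the intervention
after = 0.004   # scheming rate after the intervention

reduction_factor = before / after  # exact ratio is 32.5x
print(f"Scheming reduced ~{reduction_factor:.1f}x")
```

The exact ratio is 32.5x, which the table loosely rounds down to "30x".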
## Internal Metadata

- ID: sid_pKeaWwP6sQ
- Stable ID: sid_pKeaWwP6sQ
- Wiki ID: E24
- Type: organization
- YAML Source: packages/factbase/data/fb-entities/apollo-research.yaml
- Facts: 1 structured (2 total)
- Records: 9 in 3 collections