Arb Research
Quick Assessment
| Attribute | Assessment |
|---|---|
| Type | Research consulting firm |
| Founded | Not specified (active by 2024) |
| Team Size | 4.8 FTE (2025) |
| Annual Output | 37 projects (2025) |
| Key Clients | Stripe, Open Philanthropy, Schmidt Futures, Mercatus Center, FAR AI, Institute for Progress |
| Focus Areas | Forecasting, machine learning, AI safety, policy analysis |
| Notable Work | Shallow Review of AI Safety, AI Safety Camp impact assessment, AI bias research (PNAS) |
Key Links
| Source | Link |
|---|---|
| Official Website | arbresearch.com |
| Wikipedia | en.wikipedia.org |
Overview
Arb Research is a boutique consulting firm that provides original research, evidence gathering, and data pipeline services across forecasting, machine learning, and policy domains1. The organization operates at the intersection of AI safety research and effective altruism, conducting both private consulting work for major technology and philanthropic organizations and public research on AI alignment topics2.
Founded by Gavin Leech, who holds a PhD in AI from the University of Bristol, Arb Research has established itself as a distinctive voice in the AI safety community through its rigorous, empirically grounded approach to forecasting and technical analysis3. The firm’s work spans an unusually broad range of domains, including machine learning methodology, policy strategy, clinical trial logistics, and reviews of generative biology1.
In 2025, the organization completed 37 projects with a small team of 4.8 full-time equivalents, spending three months colocated in Stockholm and London to facilitate intensive collaboration2. Their current focus emphasizes building AI tools for truth-seeking, strategy, and prosthesis, while maintaining a diverse portfolio of consulting engagements1.
History and Development
Arb Research emerged within the effective altruism and AI safety communities, though specific founding dates are not publicly documented. The organization’s trajectory reflects the growing professionalization of AI safety research, transitioning from community-driven projects to formal consulting relationships with major funders and technology companies.
The firm received funding from Open Philanthropy as part of that organization’s forecasting program, launched to support rigorous prediction methodology development4. This relationship positioned Arb Research within a network of forecasting-focused organizations including Metaculus, the Forecasting Research Institute, and ARLIS4. More recently, the organization received a “generous open-ended grant” from Lightspeed Grants for AI and forecasting projects2.
By 2025, Arb Research had established working relationships with prominent clients including Stripe (for whom they wrote a major trade book on AI), the Schmidt Center on Science and Policy (SCSP), and multiple effective altruism-aligned research organizations1. The firm’s evolution reflects broader trends in the AI safety field toward specialized consulting services that bridge technical research, strategy, and policy analysis.
Research Portfolio and Major Projects
Arb Research’s project portfolio demonstrates unusual breadth across technical, policy, and strategic domains. The organization’s 2025 public highlights include several significant contributions to AI safety discourse and methodology2.
AI Safety and Alignment Work
The firm’s most prominent AI safety contribution is the Shallow Review of AI Safety, a comprehensive review that grew to three times the size of the previous year’s iteration, with an editorial component six times larger2. This work received sufficient attention to warrant a keynote presentation at the HAAISS conference in September 2025, delivered by co-founder Gavin Leech2.
Arb Research has also conducted self-funded research on hidden interpolation in frontier AI systems, examining how advanced models may exhibit unexpected behavior patterns2. Additionally, they developed a machine unlearning project as a testbed for evaluating ML consultancy capabilities2.
The organization’s involvement with AI Safety Camp (AISC) proved particularly impactful according to their own evaluation. Survey data from 24 respondents showed that 16.7% (4 individuals) credited AISC as pivotal for entering alignment research, while 33.3% (8 individuals) cited it as an influential nudge3. AISC-produced researchers went on to secure approximately $600,000 in follow-on grants, with a median grant size of $20,000 and some exceeding $100,0003. Notable researchers emerging from this pipeline include Lucius Bushnaq, now a research scientist at Apollo Research3.
Academic and Scientific Contributions
In July 2025, Arb Research published research on AI bias against human text in the Proceedings of the National Academy of Sciences (PNAS), conducted in collaboration with ACS2. This work examined systematic biases in AI systems’ treatment of human-generated content.
The firm also contributed to metascience by collecting 200 of the year’s biggest scientific breakthroughs for Renaissance Philanthropy2, and organized a 48-hour collaborative event with Poseidon to read and review every paper presented at NeurIPS2.
Books and Strategic Analysis
A major milestone came in October 2025 with the publication of a trade book on current AI through Stripe Press, co-authored by Dwarkesh Patel and Gavin Leech12. This represented Arb Research’s most prominent public-facing work to date.
The organization’s consulting portfolio includes diverse strategic projects: experiment design and technical writing for new machine learning methods, location scouting for clinical trials, reviews of progress in generative biology, AI talent investigation for Emergent Ventures, analysis of elite education economics, and historical analysis of Isaac Asimov’s forecasting work1.
Forecasting and Methodology
As suggested by the firm’s name and founding mission, forecasting methodology remains central to Arb Research’s work. The organization has conducted evaluations of forecasting approaches, assessed generalist forecaster implementation, and developed AI forecasting methodologies1. Their work has contributed to debates about forecasting effectiveness, including critical analysis debunking certain superforecaster claims referenced by Open Philanthropy in establishing their forecasting program4.
Impact and Cost-Effectiveness
Arb Research’s impact assessment of AI Safety Camp provides one of the few quantified examples of the organization’s effectiveness metrics. Using a pessimistic model, they estimated a cost of $60,000 per new alignment researcher produced: $300,000 in program costs divided by the five expected new researchers, i.e. a 5% conversion rate across 100 participants3. This calculation assumes that the $10,000 paid to Arb Research for the evaluation represented typical programmatic costs, though the evaluation funding came from separate sources provided by AISC organizers3.
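The pessimistic cost model above reduces to simple arithmetic; a minimal sketch using the reported figures (the function and variable names here are ours, not Arb Research’s):

```python
def cost_per_new_researcher(total_cost, participants, conversion_rate):
    """Cost per net-new alignment researcher under a simple conversion model."""
    expected_new_researchers = participants * conversion_rate
    return total_cost / expected_new_researchers

# Reported inputs: ~$300k program cost, 100 participants, and a
# pessimistic 5% chance that any given participant becomes a researcher.
cost = cost_per_new_researcher(total_cost=300_000,
                               participants=100,
                               conversion_rate=0.05)
print(cost)  # 60000.0
```

Note how sensitive the estimate is to the conversion rate: doubling it to 10% halves the cost per researcher, which is why the 5% figure is flagged as a deliberately pessimistic assumption.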
The downstream impact of AISC participants extended beyond individual career transitions. Follow-on projects resulted in collaborations with Apollo Research, AI Safety Fundamentals, and AI Standards Lab, suggesting network effects beyond direct researcher production3. However, the organization acknowledged limitations in their impact methodology, including potential selection bias in survey responses and difficulty isolating AISC’s counterfactual contribution from other factors influencing participants’ career trajectories.
Funding and Business Model
Arb Research operates through a mixed funding model combining fee-for-service consulting with philanthropic grants for public research. The organization’s client base includes technology companies (Stripe), policy think tanks (Mercatus Center), research organizations (FAR AI, Institute for Progress), and philanthropic funders (Open Philanthropy, Schmidt Futures)1.
Open Philanthropy provided funding through its donor-advised fund, the Open Philanthropy Project Fund (OPPF), as part of broader grantmaking in forecasting and effective altruism initiatives4. While exact grant amounts to Arb Research are not publicly specified, Open Philanthropy and its associated funds disbursed 998 grants totaling $1,171,866,584 by September 2020, primarily through OPPF for scientific research, global health, and catastrophic risks5.
More recently, Arb Research received an open-ended grant from Lightspeed Grants for AI and forecasting work2. The organization also self-funds selected research projects, including their investigation of hidden interpolation in frontier AI2. A small grant from Emergent Ventures supported work on UK medicines auto-approval2.
The firm’s business model appears designed to balance financial sustainability through consulting revenue with research independence through philanthropic support for public-facing projects. This hybrid approach is common among organizations bridging academic research and policy consulting in the AI safety space.
Key People
Gavin Leech serves as co-founder and represents Arb Research’s public face in academic and community settings. He holds a PhD in AI from the University of Bristol and transitioned from being a representative AISC participant to establishing the consulting firm3. Leech delivered the keynote presentation on the Shallow Review of AI Safety at HAAISS in September 20252 and co-authored the Stripe Press book on AI1.
Beyond Leech, the organization’s small team size (4.8 FTE in 2025) suggests a lean operational structure2. Other contributors include Dwarkesh Patel, co-author of the Stripe Press book1, though detailed information about team composition and organizational structure is limited in public sources.
Methodological Approach and Limitations
Arb Research’s work demonstrates characteristic attention to methodological limitations and uncertainty quantification, particularly visible in their forecasting and evaluation projects.
In their analysis of “Big 3” author predictions, the organization acknowledged that their prediction detection regex missed 13% of manually labeled predictions (an 87% detection rate)6. Crowdsourcing accuracy reached 98% on manual subsamples, but vague predictions complicated scoring, since authors rarely specified predictions in structured “By Year X, technology Y on metric Z” formats6. The team used partial correctness coding for ambiguities rather than exhaustive resolution (estimated at 10 minutes per edge case), and imputed author averages for missing data6.
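The two headline metrics in that methodology, regex detection rate against a hand-labeled set and a mean score that admits partial correctness, can be illustrated with a toy sketch (this is not Arb Research’s actual pipeline; the pattern and examples are hypothetical):

```python
import re

# Hypothetical hand-labeled predictions and a toy detection pattern.
labeled = [
    "By 2000, robots will perform most housework.",
    "Fusion power will be commercial before 1990.",
    "Someday machines may think.",  # vague: no year, hard to score
]
pattern = re.compile(r"\b(by|before)\s+\d{4}\b", re.IGNORECASE)

# Detection rate: fraction of labeled predictions the regex catches.
detected = [p for p in labeled if pattern.search(p)]
detection_rate = len(detected) / len(labeled)

# Partial-correctness coding: resolved predictions score 0.0 or 1.0,
# ambiguous ones receive an intermediate score instead of being dropped.
scores = {"housework": 0.0, "fusion": 0.0, "machines think": 0.5}
mean_score = sum(scores.values()) / len(scores)
print(detection_rate, mean_score)
```

The design choice here mirrors the trade-off Arb Research describes: intermediate scores keep ambiguous predictions in the sample at the cost of some subjectivity, whereas dropping them would bias the corpus toward authors who happened to phrase predictions in detectable, resolvable forms.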
These acknowledged limitations reflect a research philosophy that prioritizes transparency about methodological constraints over claims of comprehensive coverage. The organization noted that expanding their prediction corpus, hand-inspecting additional books, or adding calibration levels would be costly to implement6.
Reception in the AI Safety Community
Discussion of Arb Research on the Effective Altruism Forum expresses generally positive opinions about the organization’s contributions. The impact assessment of AI Safety Camp received particular attention, with community members noting transparency in the organization’s disclosure that AISC organizers paid $10,000 for the evaluation using separate funding3.
Forum participants expressed being “impressed with the apparent counterfactual impact” of AISC based on Arb Research’s analysis3. The assessment’s focus on net-new researcher production as the primary contribution metric aligned with community preferences for quantified impact evaluation.
The organization’s forecasting work has influenced broader discussions in the rationalist and effective altruism communities, particularly through critical analysis of forecasting methodologies that informed Open Philanthropy’s approach to funding in this area4.
Key Uncertainties
Several important questions remain about Arb Research’s trajectory and impact:
- Long-term organizational strategy: Whether the firm will scale beyond its current boutique size or maintain its small, high-quality output model
- Public vs. private work balance: The ratio of publicly documented research to private consulting, and how this balance affects the organization’s broader influence on AI safety discourse
- Methodological influence: The extent to which Arb Research’s approaches to forecasting, impact evaluation, and AI safety analysis shape practices at client organizations
- Sustainability: Whether the mixed funding model of consulting revenue and philanthropic grants can support sustained research quality and organizational independence
- Team composition: Limited public information about team members beyond Gavin Leech makes it difficult to assess organizational depth and succession planning
Sources
Footnotes
- Arb Research Work Portfolio
- Impact Assessment of AI Safety Camp - Arb Research (EA Forum)
- New Open Philanthropy Grantmaking Program: Forecasting (EA Forum)
- Big Three Authors Predictions Analysis (Arb Research)