Vidur Kapur
Quick Assessment
| Attribute | Assessment |
|---|---|
| Primary Role | Superforecaster and AI Policy Researcher |
| Key Affiliations | Good Judgment, Swift Centre, Samotsvety, RAND, ControlAI, Sentinel |
| Focus Areas | AI existential risk, biological risks, geopolitical forecasting, global catastrophe early detection |
| Notable Contributions | Sentinel early warning system, AI x-risk forecasting, EA Forum discussions on utilitarianism and bias |
| Education | London School of Economics (LSE), University of Chicago |
| Community Presence | Active on EA Forum; limited LessWrong presence |
Key Links
| Source | Link |
|---|---|
| Official Website | tmcapital.com |
| Wikipedia | en.wikipedia.org |
Overview
Vidur Kapur is a superforecaster and AI policy researcher affiliated with multiple forecasting organizations including Good Judgment, Swift Centre, Samotsvety, and RAND, while also working as an AI Policy Researcher at ControlAI[1]. His work focuses on existential risks from AI, biological threats, and geopolitical instability, with particular emphasis on early detection and rapid response to global catastrophes.
Kapur is a key member of the Sentinel team, a project dedicated to early detection and response for global catastrophes[2]. The project emphasizes rapid foresight on timescales of days to weeks, positioned as increasingly critical as AI capabilities integrate into society and potential dangers escalate. His forecasting work spans AI timelines, catastrophic risk scenarios, and geopolitical events, contributing to organizations that collectively shape risk assessment in the effective altruism and AI safety communities.
Beyond forecasting, Kapur maintains an active presence on the Effective Altruism Forum, where he contributes posts and commentary on utilitarianism, political bias, ethics, and cause prioritization[3]. His dual focus on technical forecasting and philosophical engagement positions him as a bridge between quantitative risk assessment and broader EA community discussions.
Background and Education
Kapur attended the London School of Economics (LSE) and the University of Chicago[4]. His educational background in economics and social sciences informed his later work in forecasting and policy analysis. Before entering the forecasting and AI policy space, he faced personal challenges related to cultural alienation and identity, which he has addressed publicly through comedy and personal essays[5].
His transition into forecasting and effective altruism appears to have occurred in the mid-2010s, with early engagement in Center for Applied Rationality (CFAR) workshops on applied rationality and AI safety around 2014[6]. These programs, designed for machine learning researchers and MIRI Summer Fellows, focused on technical AI safety research and strategy.
Forecasting Work
Superforecasting Organizations
Kapur works with several prominent forecasting platforms and organizations. As a superforecaster with Good Judgment, he contributes to probabilistic forecasting on geopolitical and technological developments[7]. His work with Samotsvety, a group of elite forecasters, has included high-stakes AI risk assessments. Notably, Samotsvety estimated a 30.5% risk of AI catastrophe killing the vast majority of humanity by 2200[8].
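For context on how such group estimates are typically produced: forecasting teams often combine individual probability judgments into a single aggregate, and one common method is the geometric mean of odds. The sketch below illustrates that generic technique with invented numbers; it is not documentation of Samotsvety’s or Good Judgment’s actual aggregation procedure.

```python
# A minimal, generic sketch of aggregating individual probability forecasts via
# the geometric mean of odds. Illustrative only; all numbers are invented.
import math

def geometric_mean_of_odds(probabilities: list[float]) -> float:
    """Convert probabilities to odds, take their geometric mean, convert back."""
    odds = [p / (1.0 - p) for p in probabilities]
    mean_log_odds = sum(math.log(o) for o in odds) / len(odds)
    aggregated_odds = math.exp(mean_log_odds)
    return aggregated_odds / (1.0 + aggregated_odds)

# Hypothetical individual forecasts for a single yes/no question
forecasts = [0.20, 0.35, 0.30, 0.40]
print(f"Aggregated probability: {geometric_mean_of_odds(forecasts):.3f}")  # ~0.31
```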
His involvement with RAND and a hedge fund (name not disclosed in sources) extends his forecasting expertise into both policy and financial domains, suggesting applications of probabilistic reasoning across diverse contexts[9].
Sentinel Project
The Sentinel project represents Kapur’s most prominent contribution to existential risk mitigation. Sentinel focuses on early detection and response for global catastrophes, emphasizing rapid observation-orientation cycles to identify emerging threats[10]. The project improved its content quality and distribution following a November 2024 fundraise, with Kapur listed as a key team member[11].
Sentinel’s work includes forecasting AI developments and geopolitical risks. For example, Kapur participated in Sentinel Minutes podcast episodes discussing OpenAI’s plans for an automated research intern by September 2026 and a true AI researcher by March 2028[12]. He has also contributed forecasts on drone attacks, regime change scenarios (such as Iranian political instability), and AI model capability timelines[13].
In a podcast on Iranian regime change, Kapur estimated a 10% probability of regime change by the end of August and 15% by year-end, drawing on base rates, internal and external factors, economic suffering, corruption, and public disillusionment[14]. This work exemplifies Sentinel’s approach of combining geopolitical analysis with structured forecasting methodologies.
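As a rough illustration of the base-rate-plus-adjustment reasoning described above, a forecaster might start from a historical frequency and update it with case-specific evidence using Bayes’ rule in odds form. The base rate and likelihood ratio in this sketch are invented and do not reconstruct Kapur’s actual analysis.

```python
# A minimal sketch of base-rate reasoning with a Bayesian adjustment in odds form.
# Illustrative numbers only; not a reconstruction of any Sentinel forecast.

def adjust_base_rate(base_rate: float, likelihood_ratio: float) -> float:
    """Multiply prior odds (from the base rate) by a likelihood ratio, return a probability."""
    prior_odds = base_rate / (1.0 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical: a 5% historical base rate, with case-specific evidence judged
# twice as likely if the event were on track to occur than if it were not.
print(f"Adjusted probability: {adjust_base_rate(0.05, 2.0):.3f}")  # ~0.095
```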
AI Safety and Existential Risk
Kapur’s engagement with AI safety encompasses both technical forecasting and conceptual analysis of catastrophic scenarios. He provided comments on a draft post ranking AI existential risk scenarios by “embarrassment level,” discussing how predictive models might enable catastrophic terrorist attacks due to insufficient safeguards and offense-biased scientific understanding[15]. This commentary reflects concern about dual-use AI capabilities and the asymmetry between offensive and defensive applications.
His forecasting work directly addresses AI timelines and capability milestones. Sentinel’s projections on automated AI researchers and model doubling times contribute to the broader AI safety community’s understanding of when transformative AI capabilities might emerge[16]. Kapur’s role bridges the gap between abstract x-risk scenarios and concrete near-term forecasts that can inform policy and research prioritization.
Kapur’s connection to the rationalist and effective altruist communities positions him within networks focused on AI alignment and safety. His participation in CFAR workshops and his engagement with MIRI-related training on technical AI safety research demonstrate early involvement in the movement’s capacity-building efforts[17].
Effective Altruism Forum Contributions
Kapur maintains an active profile on the EA Forum, contributing posts, comments, and “Quick Takes” on various topics[18]. His contributions often challenge conventional EA thinking or introduce methodological refinements.
“EAs are not perfect utilitarians”
In a notable post, Kapur argued that EA participants have diverse motivations beyond pure utilitarianism, including prestige, loyalty, novelty-seeking, and burnout avoidance[19]. He urged caution against over-rationalizing non-utilitarian choices, suggesting that acknowledging human limitations leads to more realistic and sustainable engagement with effective altruism. Community responses noted this aligns with utilitarian self-awareness, as perfect utilitarians would account for human cognitive and motivational constraints.
Political Debiasing
Kapur co-authored a post on political debiasing and the Political Bias Test, addressing criticisms such as ceiling effects and the ease of gaming bias assessments[20]. The work discussed pre-tests on Amazon Mechanical Turk and strategies for inferring bias from belief patterns. This methodological contribution reflects his interest in improving how the EA community identifies and corrects for cognitive biases.
Other Engagements
Kapur has commented on EA’s public perception, noting that while the community welcomes internal criticism, this may not be apparent to outsiders[21]. He has also engaged in discussions on estimating future human value, recommending analyses by Michael Dickens and the book What We Owe the Future, while acknowledging uncertainties such as s-risks and the trajectory of human altruism[22]. His “Quick Takes” include critiques of evidence quality in animal welfare interventions, such as questioning the evidence on electric stunning for shrimp welfare[23].
Community Reception
Within the EA Forum, Kapur is viewed as a constructive contributor whose posts spark discussion on practical EA limitations and methodological improvements[24]. His work on political debiasing has drawn methodological feedback, with commenters refining ideas around gaming effects and pre-test design[25]. No major controversies or criticisms have been noted; interactions focus on collaborative refinement of ideas rather than adversarial debate.
His forecasting work, particularly with Sentinel, has been positively received in EA contexts focused on existential risk. The project’s emphasis on rapid foresight aligns with growing concerns about AI acceleration and the need for early warning systems.
Key Uncertainties
Several aspects of Kapur’s work and influence remain unclear from available sources:
- Quantitative impact: Specific forecasting track records, accuracy metrics, or Brier scores are not publicly documented, making it difficult to assess predictive performance relative to other superforecasters (a brief sketch of how Brier scores are computed appears after this list).
- Funding and compensation: No information is available about personal funding, grants received, or compensation from forecasting organizations or ControlAI.
- Policy influence: The extent to which Kapur’s forecasts inform actual policy decisions or institutional strategies at organizations like RAND or hedge funds is not specified.
- LessWrong presence: Despite engagement with rationalist community institutions like CFAR and MIRI, Kapur has limited documented activity on LessWrong, suggesting either focused engagement on EA Forum or undocumented contributions.
- Relationship to other forecasters: Collaborative dynamics with other prominent superforecasters (e.g., Samotsvety members) and specific division of labor within Sentinel are not detailed.
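For readers unfamiliar with the accuracy metric mentioned under “Quantitative impact,” the Brier score is simply the mean squared error between probabilistic forecasts and resolved binary outcomes, with lower values indicating more accurate forecasts. The sketch below uses hypothetical forecasts and outcomes, not Kapur’s actual record.

```python
# A minimal sketch of the Brier score (lower is better; 0 is perfect).
# Forecasts and outcomes here are hypothetical.

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between forecast probabilities and resolved 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts and resolutions (1 = event occurred)
forecasts = [0.10, 0.70, 0.25, 0.90]
outcomes = [0, 1, 0, 1]
print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # always guessing 0.5 scores 0.25
```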
Sources
Footnotes
- Sentinel: Early Detection and Response for Global Catastrophes
- Sentinel: Early Detection and Response for Global Catastrophes
- The Process of Coming Out - New Indian Express, Oct 2, 2010
- Four Free CFAR Programs on Applied Rationality and AI Safety
- Sentinel: Early Detection and Response for Global Catastrophes
- Sentinel: Early Detection and Response for Global Catastrophes
- Forecasts for Drone Attacks and AI Model Doubling Times - Sentinel Blog
- Iranian Regime Change: Unpacking Broad Forecasts - Sentinel Blog
- AI X-Risk Approximately Ordered by Embarrassment - Alignment Forum
- Forecasts for Drone Attacks and AI Model Doubling Times - Sentinel Blog
- Four Free CFAR Programs on Applied Rationality and AI Safety
- Political Debiasing and the Political Bias Test - EA Forum, Sep 11, 2015
- Can We Estimate the Expected Value of Human’s Future Life? - EA Forum
- Political Debiasing and the Political Bias Test - EA Forum, Sep 11, 2015