
Vidur Kapur

Primary Role: Superforecaster and AI Policy Researcher
Key Affiliations: Good Judgment, Swift Centre, Samotsvety, RAND, ControlAI, Sentinel
Focus Areas: AI existential risk, biological risks, geopolitical forecasting, global catastrophe early detection
Notable Contributions: Sentinel early warning system, AI x-risk forecasting, EA Forum discussions on utilitarianism and bias
Education: London School of Economics (LSE), University of Chicago
Community Presence: Active on EA Forum; limited LessWrong presence
Official Website: tmcapital.com
Wikipedia: en.wikipedia.org

Vidur Kapur is a superforecaster and AI policy researcher affiliated with multiple forecasting organizations including Good Judgment, Swift Centre, Samotsvety, and RAND, while also working as an AI Policy Researcher at ControlAI[1]. His work focuses on existential risks from AI, biological threats, and geopolitical instability, with particular emphasis on early detection and rapid response to global catastrophes.

Kapur is a key member of the Sentinel team, a project dedicated to early detection and response for global catastrophes[2]. The project emphasizes rapid foresight on timescales of days to weeks, which its team frames as increasingly critical as AI capabilities become integrated into society and potential dangers escalate. His forecasting work spans AI timelines, catastrophic risk scenarios, and geopolitical events, contributing to organizations that collectively shape risk assessment in the effective altruism and AI safety communities.

Beyond forecasting, Kapur maintains an active presence on the Effective Altruism Forum, where he contributes posts and commentary on utilitarianism, political bias, ethics, and cause prioritization[3]. His dual focus on technical forecasting and philosophical engagement positions him as a bridge between quantitative risk assessment and broader EA community discussions.

Kapur attended the London School of Economics (LSE) and the University of Chicago[4]. His educational background in economics and social sciences informed his later work in forecasting and policy analysis. Before entering the forecasting and AI policy space, he faced personal challenges related to cultural alienation and identity, which he has addressed publicly through comedy and personal essays[5].

His transition into forecasting and effective altruism appears to have occurred in the mid-2010s, with early engagement in Center for Applied Rationality (CFAR) workshops on applied rationality and AI safety around 2014[6]. These programs, designed for machine learning researchers and MIRI Summer Fellows, focused on components of technical AI safety research and strategy.

Kapur works with several prominent forecasting platforms and organizations. As a superforecaster with Good Judgment, he contributes to probabilistic forecasting on geopolitical and technological developments[7]. His work with Samotsvety, a group of elite forecasters, has included high-stakes AI risk assessments. Notably, Samotsvety estimated a 30.5% risk of AI catastrophe killing the vast majority of humanity by 2200[8].
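Group figures like this are typically produced by aggregating individual forecasters' probabilities. The sketch below illustrates one common aggregation method, the geometric mean of odds, which Samotsvety has discussed in other published forecasts; the individual probabilities shown are hypothetical, and the sources do not state how this particular figure was derived.

```python
import math

def aggregate_geo_mean_odds(probs):
    """Aggregate individual probability forecasts via the geometric mean of their odds."""
    odds = [p / (1 - p) for p in probs]
    geo_mean = math.prod(odds) ** (1 / len(odds))
    return geo_mean / (1 + geo_mean)  # convert the aggregated odds back to a probability

# Hypothetical individual forecasts of AI catastrophe by 2200 (illustrative only).
individual_probs = [0.20, 0.25, 0.30, 0.35, 0.45]
print(f"Aggregated estimate: {aggregate_geo_mean_odds(individual_probs):.3f}")
```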

His involvement with RAND and a hedge fund (name not disclosed in sources) extends his forecasting expertise into both policy and financial domains, suggesting applications of probabilistic reasoning across diverse contexts[9].

The Sentinel project represents Kapur’s most prominent contribution to existential risk mitigation. Sentinel focuses on early detection and response for global catastrophes, emphasizing rapid observation-orientation cycles to identify emerging threats[10]. The project improved content quality and distribution following a November 2024 fundraise, with Kapur listed as a key team member[11].

Sentinel’s work includes forecasting AI developments and geopolitical risks. For example, Kapur participated in Sentinel Minutes podcast episodes discussing OpenAI’s plans for an automated research intern by September 2026 and a true AI researcher by March 2028[12]. He has also contributed forecasts on drone attacks, regime change scenarios (such as Iranian political instability), and AI model capability timelines[13].

In a podcast on Iranian regime change, Kapur provided probability estimates of 10% by the end of August and 15% by year-end, analyzing base rates, internal and external factors, economic suffering, corruption, and public disillusionment[14]. This work exemplifies Sentinel’s approach of combining geopolitical analysis with structured forecasting methodologies.
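As an illustration of how such cumulative estimates relate to one another (not a reconstruction of Kapur’s actual reasoning), the two figures imply a conditional probability for the remainder of the year, and a constant-hazard base rate can be translated into a probability over the same window; the annual base rate used below is hypothetical.

```python
# Illustrative arithmetic only; not a reconstruction of Kapur's model or inputs.
p_by_august = 0.10    # cumulative probability of regime change by end of August
p_by_yearend = 0.15   # cumulative probability by the end of the year

# Implied probability the event occurs in September-December, given it has not by August:
p_rest_of_year = (p_by_yearend - p_by_august) / (1 - p_by_august)
print(f"P(Sep-Dec | not by August) ≈ {p_rest_of_year:.3f}")  # ≈ 0.056

# Translating a hypothetical 20% annual base rate into the ~5 remaining months,
# assuming a constant monthly hazard:
annual_base_rate = 0.20
monthly_hazard = 1 - (1 - annual_base_rate) ** (1 / 12)
p_five_months = 1 - (1 - monthly_hazard) ** 5
print(f"P(event within 5 months at base rate) ≈ {p_five_months:.3f}")  # ≈ 0.089
```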

Kapur’s engagement with AI safety encompasses both technical forecasting and conceptual analysis of catastrophic scenarios. He provided comments on a draft post ranking AI existential risk scenarios by “embarrassment level,” discussing how predictive models might enable catastrophic terrorist attacks due to insufficient safeguards and offense-biased scientific understanding[15]. This commentary reflects concern about dual-use AI capabilities and the asymmetry between offensive and defensive applications.

His forecasting work directly addresses AI timelines and capability milestones. Sentinel’s projections on automated AI researchers and model doubling times contribute to the broader AI safety community’s understanding of when transformative AI capabilities might emerge[16]. Kapur’s role bridges the gap between abstract x-risk scenarios and concrete near-term forecasts that can inform policy and research prioritization.
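A simple way to see why doubling times matter for timeline forecasts is to extrapolate them; the sketch below uses a hypothetical doubling period and growth target, not figures from Sentinel’s work.

```python
import math

# Hypothetical numbers for illustration; not Sentinel's actual projections.
doubling_period_months = 7   # assumed time for the capability metric to double
target_multiple = 32         # projected growth factor of interest

months_needed = doubling_period_months * math.log2(target_multiple)
print(f"~{months_needed:.0f} months (~{months_needed / 12:.1f} years) to reach {target_multiple}x")
```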

Kapur’s connection to the rationalist and effective altruist communities positions him within networks focused on AI alignment and safety. His participation in CFAR workshops and engagement with MIRI-related training on technical AI safety research demonstrates early involvement in the movement’s capacity-building efforts[17].

Kapur maintains an active profile on the EA Forum, contributing posts, comments, and “Quick Takes” on various topics[18]. His contributions often challenge conventional EA thinking or introduce methodological refinements.

In a notable post, Kapur argued that EA participants have diverse motivations beyond pure utilitarianism, including prestige, loyalty, novelty-seeking, and burnout avoidance[19]. He urged caution against over-rationalizing non-utilitarian choices, suggesting that acknowledging human limitations leads to more realistic and sustainable engagement with effective altruism. Community responses noted this aligns with utilitarian self-awareness, as perfect utilitarians would account for human cognitive and motivational constraints.

Kapur co-authored a post on political debiasing and the Political Bias Test, addressing criticisms such as ceiling effects and the ease of gaming bias assessments[20]. The work discussed pre-tests on Amazon Mechanical Turk and strategies for inferring bias from belief patterns. This methodological contribution reflects his interest in improving how the EA community identifies and corrects for cognitive biases.

Kapur has commented on EA’s public perception, noting that while the community welcomes internal criticism, this may not be apparent to outsiders[21]. He has also engaged in discussions on estimating future human value, recommending analyses by Michael Dickens and What We Owe the Future, while acknowledging uncertainties like s-risks and the trajectory of human altruism[22]. His “Quick Takes” include critiques of evidence quality in animal welfare interventions, such as questioning the evidence on electric stunning for shrimp welfare[23].

Within the EA Forum, Kapur is viewed as a constructive contributor whose posts spark discussion on practical EA limitations and methodological improvements[24]. His work on political debiasing has drawn methodological feedback, with commenters refining ideas around gaming effects and pre-test design[25]. No major controversies or criticisms have been noted; interactions focus on collaborative refinement of ideas rather than adversarial debate.

His forecasting work, particularly with Sentinel, has been positively received in EA contexts focused on existential risk. The project’s emphasis on rapid foresight aligns with growing concerns about AI acceleration and the need for early warning systems.

Several aspects of Kapur’s work and influence remain unclear from available sources:

  • Quantitative impact: Specific forecasting track records, accuracy metrics, or Brier scores are not publicly documented, making it difficult to assess predictive performance relative to other superforecasters (see the sketch after this list).
  • Funding and compensation: No information is available about personal funding, grants received, or compensation from forecasting organizations or ControlAI.
  • Policy influence: The extent to which Kapur’s forecasts inform actual policy decisions or institutional strategies at organizations like RAND or hedge funds is not specified.
  • LessWrong presence: Despite engagement with rationalist community institutions like CFAR and MIRI, Kapur has limited documented activity on LessWrong, suggesting either focused engagement on EA Forum or undocumented contributions.
  • Relationship to other forecasters: Collaborative dynamics with other prominent superforecasters (e.g., Samotsvety members) and specific division of labor within Sentinel are not detailed.
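For context on the accuracy metrics mentioned above: a Brier score is the mean squared difference between probability forecasts and binary outcomes, with lower values indicating better accuracy. A minimal sketch on hypothetical data (not Kapur’s actual forecasts, which are not publicly documented):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts and resolved outcomes (1 = event occurred); illustrative only.
forecasts = [0.10, 0.15, 0.70, 0.90]
outcomes = [0, 0, 1, 1]
print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")
# A constant 0.5 forecast would score 0.25; skilled forecasters score well below that.
```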
  1. Sentinel: Early Detection and Response for Global Catastrophes

  2. Sentinel: Early Detection and Response for Global Catastrophes

  3. Vidur Kapur - EA Forum Profile

  4. Vidur Kapur: Closets and Comedy

  5. The Process of Coming Out - New Indian Express, Oct 2, 2010

  6. Four Free CFAR Programs on Applied Rationality and AI Safety

  7. License India Expo - Vidur Kapur Profile

  8. Technical AI Safety Crisis and Security Research Report

  9. License India Expo - Vidur Kapur Profile

  10. Sentinel: Early Detection and Response for Global Catastrophes

  11. Sentinel: Early Detection and Response for Global Catastrophes

  12. Sentinel Minutes Podcast - Spotify

  13. Forecasts for Drone Attacks and AI Model Doubling Times - Sentinel Blog

  14. Iranian Regime Change: Unpacking Broad Forecasts - Sentinel Blog

  15. AI X-Risk Approximately Ordered by Embarrassment - Alignment Forum

  16. Forecasts for Drone Attacks and AI Model Doubling Times - Sentinel Blog

  17. Four Free CFAR Programs on Applied Rationality and AI Safety

  18. Vidur Kapur - EA Forum Profile

  19. EAs Are Not Perfect Utilitarians - EA Forum

  20. Political Debiasing and the Political Bias Test - EA Forum, Sep 11, 2015

  21. EA’s Image Problem - Comment by Vidur Kapur

  22. Can We Estimate the Expected Value of Human’s Future Life? - EA Forum

  23. Vidur Kapur’s Quick Takes - EA Forum

  24. EAs Are Not Perfect Utilitarians - EA Forum

  25. Political Debiasing and the Political Bias Test - EA Forum, Sep 11, 2015