Longterm Wiki
Updated 2026-03-07
Summary

Philip Tetlock is a psychologist who revolutionized forecasting research by demonstrating that expert predictions often perform no better than chance, while identifying systematic methods and "superforecasters" that achieve superior accuracy. His work has significant implications for AI safety and existential risk assessment, though his methods face challenges when applied to long-term, low-probability events with limited feedback loops.

Philip Tetlock (Forecasting Pioneer)


Related
Organizations
Good Judgment (Forecasting) · Forecasting Research Institute (FRI) · Metaculus

Quick Assessment

Primary Achievement: Pioneered forecasting tournaments demonstrating that systematic methods outperform expert intuition; identified "superforecasters" with superior accuracy
Key Publications: Expert Political Judgment (2005), Superforecasting (2015)
Institutional Affiliation: Leonore Annenberg University Professor at the University of Pennsylvania (Wharton and Psychology)
Major Projects: Good Judgment Project (IARPA tournament winner, 2011-2015), Forecasting Research Institute
Influence on AI Safety: Methods applied to existential risk assessment; adversarial collaboration on AI forecasting; EA community adoption of forecasting practices
Key Finding: Most expert predictions perform no better than chance; "fox-like" integrative thinkers outperform "hedgehog" theorists

Sources: Wikiquote (en.wikiquote.org) · Wikipedia (en.wikipedia.org)

Overview

Philip E. Tetlock is a Canadian-born psychologist who revolutionized the study of forecasting accuracy through decades of research demonstrating that expert predictions on political and economic events are often no better than random chance, while identifying systematic methods to achieve superior forecasting performance.[1][2] As the Leonore Annenberg University Professor at the University of Pennsylvania with cross-appointments at the Wharton School and School of Arts and Sciences, Tetlock has authored over 200 peer-reviewed articles and nine books examining judgment, decision-making, and prediction accuracy.[3][4]

Tetlock's most influential work emerged from forecasting tournaments he initiated during the Cold War era through the National Academy of Sciences Committee on Nuclear War Prevention, analyzing over 82,000 predictions from 284 experts between 1984 and 2003.[5][6] This research culminated in his landmark 2005 book Expert Political Judgment, which documented that experts with access to classified information performed no better than Berkeley undergraduates or "dart-throwing chimpanzees" on long-range forecasts.[7][8] However, Tetlock also identified a minority of superior forecasters—"foxes" who integrate diverse perspectives rather than "hedgehogs" who apply single theories—leading to his co-founding of the Good Judgment Project with Barbara Mellers and Don Moore.[9]

The Good Judgment Project won a four-year IARPA-sponsored forecasting tournament (2011-2015) involving thousands of forecasters making over one million predictions on geopolitical events.[10][11] The project identified "superforecasters"—ordinary citizens whose accuracy exceeded that of intelligence analysts with access to classified information by 60-85%.[12][13] This work established systematic methods for improving prediction accuracy, including training protocols, team dynamics, and aggregation algorithms that have influenced intelligence agencies, forecasting platforms like Metaculus, and the effective altruism community's approach to decision-making under uncertainty.[14][15]

History and Academic Career

Education and Early Career

Tetlock was born in Toronto, Canada, and grew up in Winnipeg and Vancouver.[16] He received his B.A. in psychology from the University of British Columbia in 1975, followed by an M.A. in 1976, working with Peter Suedfeld on content analysis of diplomatic communications.[17][18] He completed his Ph.D. in psychology at Yale University in 1979 under the supervision of Phoebe C. Ellsworth.[19]

From 1979 to 1995, Tetlock served on the psychology faculty at the University of California, Berkeley, directing the Institute of Personality and Social Research from 1988 to 1995.[20] He then held the Harold E. Burtt Endowed Chair in Psychology and Political Science at Ohio State University (1996-2001) before returning to Berkeley as the Mitchell Endowed Chair at the Haas School of Business (2001-2010).[21][22] In December 2010, he was appointed Leonore Annenberg University Professor of Democracy and Citizenship at the University of Pennsylvania, becoming a Penn Integrates Knowledge (PIK) Professor with joint appointments in Psychology, Management, and the Annenberg School for Communication.[23][24]

Origins of Forecasting Research

Tetlock's forecasting research originated from his work on the National Academy of Sciences Committee for the Prevention of Nuclear War in the early 1980s, at the height of Cold War tensions.[25] He became concerned that public debate on nuclear policy relied heavily on vague, unverifiable predictions that could not be systematically evaluated.[26] This led him to create his first forecasting tournament to test expert predictions scientifically.[27]

Between 1984 and 2003, Tetlock conducted small-scale forecasting tournaments with 284 experts—including government officials, professors, and journalists spanning ideologies from Marxists to free-market advocates—on geopolitical outcomes.[28][29] These experts made predictions about events such as the Soviet Union's collapse, the future of apartheid in South Africa, and Middle East peace prospects. The results formed the empirical basis for his 2005 book Expert Political Judgment: How Good Is It? How Can We Know?, published by Princeton University Press.[30]

Good Judgment Project

The publication of Expert Political Judgment directly influenced U.S. intelligence agencies to create a four-year geopolitical forecasting tournament sponsored by IARPA (Intelligence Advanced Research Projects Activity).[31] From 2011 to 2015, Tetlock co-led the winning team—the Good Judgment Project—with his spouse Barbara Mellers and UC Berkeley colleague Don Moore.[32][33] The multidisciplinary team included experts in statistics, computer science, economics, psychology, and political science.[34]

The project involved thousands of forecasters making over one million predictions on geopolitical questions.[35] It identified "superforecasters"—high-performing individuals who consistently outperformed both average forecasters and professional intelligence analysts with access to classified information.[36] According to analysis of the project's results, superforecasters were approximately 60-85% more accurate than average forecasters and could distinguish 10-15 degrees of uncertainty while maintaining calibration across hundreds of events.[37][38]
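Accuracy claims like these were scored with the Brier score, the tournament's standard metric: the mean squared difference between probability forecasts and realized outcomes. The sketch below uses the binary form (tournament scoring summed over all answer options, but the idea is the same); the numbers are illustrative, not tournament data.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary
    outcomes (0 or 1). Lower is better: 0 is perfect, and a constant
    50% forecast scores 0.25 regardless of what happens."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A cautious, well-calibrated forecaster vs. an overconfident one
# on the same four events (illustrative numbers only).
outcomes = [1, 0, 1, 1]
calibrated = [0.8, 0.3, 0.7, 0.9]
overconfident = [1.0, 0.0, 0.2, 1.0]

print(brier_score(calibrated, outcomes))     # low score
print(brier_score(overconfident, outcomes))  # worse: one confident miss dominates
```

The asymmetry in the second case is the point: squared error punishes a confident wrong call far more than it rewards confident right ones, which is why superforecasters' habit of granular, hedged probabilities pays off over hundreds of questions.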

The Good Judgment Project's success led to the founding of Good Judgment Inc., a consultancy co-founded by Tetlock that offers bespoke forecasting services, workshops for private clients, and the Good Judgment Open platform for crowd-based forecasts.[39][40] The project's methods have been adapted for use by U.S. intelligence agencies and inspired forecasting platforms including Metaculus and INFER-Public.[41]

Research Contributions

The Fox-Hedgehog Distinction

One of Tetlock's most influential conceptual contributions is the distinction between "fox-like" and "hedgehog-like" thinkers, inspired by Isaiah Berlin's essay "The Hedgehog and the Fox".[42] Hedgehogs organize their thinking around a single grand theory or ideology and make bold, confident predictions. Foxes, by contrast, are modest, self-critical thinkers who draw on diverse perspectives and remain skeptical of grand theories.[43]

Tetlock's research demonstrated that fox-like forecasters consistently outperformed hedgehog forecasters, particularly on long-range forecasts.[44] Foxes showed greater willingness to update their beliefs in response to evidence and were more accurate across a wider range of prediction domains.[45] However, early critiques noted that while foxes outperformed hedgehogs, they only modestly exceeded simple benchmarks such as extrapolation algorithms, rather than achieving substantial superiority over baseline models.[46]

Superforecasting Methodology

The Good Judgment Project identified specific attributes and practices associated with superior forecasting performance. Superforecasters typically exhibit:

  • Probabilistic thinking: Ability to think in granular probabilities rather than binary yes/no predictions
  • Active open-mindedness: Willingness to consider alternative hypotheses and update beliefs based on evidence
  • Intellectual humility: Recognition of uncertainty and limits of their knowledge
  • Pattern recognition: Skill at identifying relevant historical analogies
  • Team collaboration: Ability to productively combine perspectives with other forecasters
  • Regular practice: Consistent engagement with forecasting questions to refine judgment[47][48]

Tetlock's research demonstrated that forecasting accuracy could be improved through training programs focusing on these cognitive habits, team structures that facilitate information sharing, and aggregation algorithms that appropriately weight the judgments of top performers.[49][50] The project developed techniques including extremizing weighted averages (adjusting crowd predictions to account for shared information) and Bayesian question clusters (breaking complex forecasts into component questions).[51][52]
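The extremizing idea can be illustrated with the standard transform p' = p^a / (p^a + (1 - p)^a), which pushes an aggregated probability away from 0.5 on the theory that individual forecasters underweight information they all share. This is a minimal sketch: the exponent a and the plain weighted mean are illustrative choices, not the Good Judgment Project's calibrated parameters.

```python
def extremize(p, a=2.5):
    """Push a probability away from 0.5 using the transform
    p' = p**a / (p**a + (1-p)**a). With a > 1 the output is more
    extreme than the input; a = 2.5 is illustrative, not the
    tournament's fitted value."""
    return p ** a / (p ** a + (1 - p) ** a)

def aggregate(probs, weights=None, a=2.5):
    """Weighted mean of individual forecasts, then extremized."""
    if weights is None:
        weights = [1.0] * len(probs)
    mean = sum(w * p for w, p in zip(weights, probs)) / sum(weights)
    return extremize(mean, a)

# Five forecasters all lean the same way; because they share much of
# their evidence, the plain average (0.70) is underconfident, and the
# transform pushes the crowd estimate further toward 1.
print(aggregate([0.65, 0.70, 0.72, 0.68, 0.75]))
```

Note that the transform is symmetric: a 0.5 input stays at 0.5, and probabilities below 0.5 are pushed toward 0, so only a genuinely one-sided crowd gets sharpened.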

Accountability and Judgment

Beyond forecasting accuracy, Tetlock has extensively researched how accountability affects judgment and decision-making. His 2006 paper "Conflicts of Interest and the Case of Auditor Independence: Moral Seduction and Strategic Issue Cycling" (co-authored with Don Moore, Lloyd Tanlu, and Max Bazerman) analyzed how conflicts of interest in auditing contributed to scandals such as Enron and WorldCom.[53][54] The paper introduced "moral seduction theory"—the idea that professionals can slide into moral compromise through conflicts of interest without becoming aware of it at the micro level—and "issue-cycle theory," which explains how such conflicts persist at the macro level in major accounting firms.[55]

Tetlock has warned that accountability mechanisms can degrade into "bureaucratic rituals" or "Potemkin villages"—symbolic facades designed to deflect critics rather than genuinely improve decision-making.[56] His work emphasizes that outcome accountability requires careful, calibrated implementation through controlled evaluation rather than simple demands to "hold rascals accountable".[57]

Application to Existential Risk and AI Safety

Forecasting Research Institute and X-Risk

In 2022, Tetlock became President and Chief Scientist of the Forecasting Research Institute (FRI), which received over $6 million in funding from Coefficient Giving to develop forecasting techniques applicable to global catastrophic and existential risks.[58][59] From June to October 2022, FRI organized an "Existential Risk Persuasion Tournament" involving 169 participants—80 subject-matter experts and 89 superforecasters—who estimated the probability of catastrophe (death of at least 10% of humanity) or extinction (fewer than 1,000 humans surviving) by 2030, 2050, and 2100.[60]

Tetlock has acknowledged significant challenges in applying forecasting methods to existential risks, including the lack of feedback loops for learning from errors on long-term predictions, the difficulty of recruiting sufficient expertise, and the potential for information hazards when discussing specific risk scenarios.[61] His recent research explores "hybrid persuasion-forecasting tournaments" that combine expert argumentation with probabilistic forecasting to improve judgments about low-probability, high-impact events.[62]

AI Forecasting Work

Tetlock has engaged directly with AI governance concerns through multiple initiatives. With Ezra Karger and others, he conducted a survey of 135 AI safety and governance researchers on advanced AI risks.[63] More recently, his team conducted a two-month intensive adversarial collaboration focused on identifying short-term "cruxes"—questions about AI that could be resolved by 2030—to probe how far disagreements about AI risk can be narrowed through structured debate.[64]

His 2025 research published in ACM Transactions on Interactive Intelligent Systems examined how large language models can match human forecasters' accuracy when their predictions are combined, raising questions about both AI capabilities in prediction tasks and the potential role of AI systems in risk assessment.[65] This work suggests that AI-augmented forecasting—combining human judgment with machine learning—may offer advantages over either approach alone for certain types of predictions.[66]

Influence on Effective Altruism

Tetlock has become a prominent figure in the effective altruism (EA) community, with "Tetlock-style judgmental forecasting" notably more popular within EA than in broader contexts.[67] Coefficient Giving has directly supported forecasting infrastructure influenced by Tetlock's research, funding FRI, Metaculus, and INFER (a program supporting the use of forecasting by U.S. policymakers).[68] Founders Pledge has evaluated Tetlock's forecasting research on existential risk as high-impact work suitable for philanthropic support.[69]

Tetlock has participated in multiple EA Global conferences through fireside chats and Q&A sessions, discussing topics including prediction algorithms, long-term future considerations, epistemic modesty, and the mechanics of belief updating.[70][71] His work on identifying cognitive biases, tracking prediction accuracy, and conducting systematic post-mortems provides methodological tools relevant to assessing the low-probability, high-impact scenarios central to EA priorities.[72]

Criticisms and Limitations

Methodological Concerns

Critics have raised several concerns about the scope and interpretation of Tetlock's forecasting research. While fox-like forecasters outperform hedgehog forecasters, early analyses noted that foxes still only modestly exceed simple benchmarks like extrapolation algorithms, raising questions about whether the framework sufficiently distinguishes skill from noise.[73][74] Hedgehogs performed worse than basic models—in some tests, slightly below random chance—but the practical significance of foxes' advantage over simple algorithms remains debated.[75]

Tetlock's research confronts inherent challenges in evaluating predictions, including exogenous shocks and missing variables that can undermine even sound analyses while lending undue credit to improbable theories.[76] Arbitrary time frames for prediction windows (such as 5 versus 10 years for Soviet collapse predictions) can distort evaluations of forecaster accuracy.[77] Domains involving high combinatorial complexity—such as AI risk debates or complex simulations—reveal blind spots even in skilled forecasters, as the number of relevant variables exceeds human cognitive capacity.[78]

A persistent limitation identified by Tetlock himself is that experts without regular accuracy feedback struggle to convert causal knowledge into probabilistic forecasts.[79] This challenge is particularly acute for long-term existential risk forecasts, where feedback loops for learning from errors may not exist until after catastrophic outcomes.[80]

Misinterpretation and Misuse

Tetlock has expressed frustration that his research has been misinterpreted and misused to justify dismissing expert opinion entirely, rather than improving forecasting practices.[81] He particularly criticized how political figures like Michael Gove cited Expert Political Judgment to justify ignoring expert consensus on Brexit consequences, characterizing this as a "dangerous misreading" of his findings.[82] Tetlock emphasized that "it's not that I'm saying that the experts are going to be right, but I would say completely ignoring them is dangerous".[83]

Populist "know-nothingism" represents a misreading of Tetlock's work, which demonstrates problems with expert forecasting—including systematic overconfidence and reluctance to change minds—without implying that expert opinion should be completely discounted.[84] His more recent work, including Superforecasting, emphasizes that forecasting accuracy can be improved through better methodology and training, rather than arguing that prediction is fundamentally impossible.[85]

Accountability Mechanisms

Tetlock's proposals for improving forecaster accountability face significant practical challenges. Installing respected arbiters to evaluate pundit accuracy runs into difficulties ensuring perceived fairness amid partisan divisions.[86] Process accountability—requiring forecasters to document their reasoning and methods—can degrade into bureaucratic rituals or symbolic facades ("Potemkin villages") rather than genuine improvement, as observed in domains from public education to intelligence analysis.[87] Outcome accountability, while valuable, requires complex and calibrated implementation through controlled evaluation rather than simple demands for accountability.[88]

Scope Limitations

Forecasters are valued for multiple purposes beyond pure accuracy, including ideological comfort, entertainment value, and regret minimization (as in pandemic preparedness).[89] Fox-like thinking helps navigate these conflicting values but is not solely about predictive performance. Tetlock acknowledges that forecasting serves multiple social functions, and that activists may be tempted to exaggerate risks (treating expressed certainty as a signal of group commitment) while ideological groups may exclude those who express doubt.[90]

Some critics argue that Tetlock's findings about expert underperformance, while methodologically sound for short- and medium-term forecasts, have been inappropriately extrapolated to long-range planning domains. Tetlock himself has expressed skepticism about very long-term forecasts (such as IPCC projections to 2100), noting that wide estimate spreads and the lack of feedback mechanisms limit the applicability of his methods to century-scale predictions.[91][92]

Recent Developments

Tetlock continues active research and institutional involvement in forecasting. In January 2026, he was appointed to the Board of Directors of ForecastEx, Interactive Brokers' prediction market platform, where his expertise in forecasting and decision-making under uncertainty aligns with the platform's mission to help market participants trade probabilities of future outcomes.[93][94]

Recent publications include "AI-Augmented predictions: LLM assistants improve human forecasting accuracy" (2025) in ACM Transactions on Interactive Intelligent Systems, "Subjective-probability forecasts of existential risk: Initial Results from a hybrid persuasion-forecasting tournament" (2025) in the International Journal of Forecasting, and "Long-range subjective-probability forecasts of slow-motion variables in world politics: Exploring limits on expert judgment" (2024) in Futures and Foresight Science.[95][96]

According to the Financial Times in October 2025, superforecasters associated with the Good Judgment Project proved 30% more accurate on average than futures markets and continued to beat market predictions on Federal Reserve decisions, demonstrating the continued relevance of Tetlock's forecasting methods.[97] Tetlock received significant media attention throughout 2024-2025, with appearances and coverage in outlets including the Financial Times, Bloomberg, Forbes, Newsweek, The Guardian, and Times Radio.[98]

Key Uncertainties

Several important questions remain about the scope and applicability of Tetlock's forecasting methods:

Scalability to existential risks: How well do forecasting techniques validated on short and medium-term geopolitical questions transfer to low-probability, high-impact scenarios with limited historical precedent? The lack of feedback loops for century-scale predictions presents fundamental challenges for evaluating and improving long-term forecasts.

AI augmentation limits: As large language models achieve forecasting accuracy comparable to human forecasters, what is the optimal division of labor between human and machine intelligence in prediction tasks? Recent research suggests hybrid approaches may be superior, but the specific conditions favoring human versus AI forecasting remain unclear.

Institutional adoption barriers: Despite demonstrated accuracy improvements, why have forecasting tournaments and superforecaster methods seen limited adoption outside intelligence agencies and specialized platforms? Organizational resistance, incentive misalignment, and the multiple non-accuracy functions that expert predictions serve may present barriers beyond methodological validation.

Long-term forecast calibration: Can any systematic methods achieve meaningful calibration for predictions extending decades or centuries into the future, or are such forecasts inherently limited by irreducible uncertainty and the absence of feedback mechanisms for learning?

Information hazards in risk assessment: How should forecasting tournaments balance the value of detailed, specific predictions about existential risks against the potential for such forecasts to provide roadmaps for malicious actors or create self-fulfilling prophecies?

Sources

Footnotes

  1. Philip E. Tetlock, PhD | Annenberg School for Communication at the University of Pennsylvania
  2. Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab
  3. Philip Tetlock - PIK Professors - University of Pennsylvania
  4. Philip Tetlock | Alliance for Decision Education
  5. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  6. Philip Tetlock - PIK Professors - University of Pennsylvania
  7. Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab
  8. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  9. Philip Tetlock - PIK Professors - University of Pennsylvania
  10. Philip Tetlock - PIK Professors - University of Pennsylvania
  11. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  12. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  13. How to win at forecasting - Philip Tetlock | Edge.org
  14. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  15. Philip Tetlock - PIK Professors - University of Pennsylvania
  16. Philip E. Tetlock - Wikipedia
  17. Philip E. Tetlock - Wikipedia
  18. Philip Tetlock wins Grawemeyer Award (2008)
  19. Philip E. Tetlock - Wikipedia
  20. Philip E. Tetlock - Wikipedia
  21. Philip E. Tetlock - Wikipedia
  22. Philip Tetlock - PIK Professors - University of Pennsylvania
  23. Philip Tetlock - PIK Professors - University of Pennsylvania
  24. Philip E. Tetlock, PhD | Annenberg School for Communication at the University of Pennsylvania
  25. Philip E. Tetlock - Wikipedia
  26. Philip Tetlock - PIK Professors - University of Pennsylvania
  27. Philip Tetlock - PIK Professors - University of Pennsylvania
  28. Philip E. Tetlock - Wikipedia
  29. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  30. Philip E. Tetlock - Wikipedia
  31. Philip Tetlock - PIK Professors - University of Pennsylvania
  32. Philip Tetlock - PIK Professors - University of Pennsylvania
  33. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  34. Philip Tetlock - PIK Professors - University of Pennsylvania
  35. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  36. Philip Tetlock - PIK Professors - University of Pennsylvania
  37. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  38. How to win at forecasting - Philip Tetlock | Edge.org
  39. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  40. Good Judgment - About
  41. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  42. How to win at forecasting - Philip Tetlock | Edge.org
  43. Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab
  44. How to win at forecasting - Philip Tetlock | Edge.org
  45. Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab
  46. Overcoming Our Aversion to Acknowledging Our Ignorance | Cato Unbound
  47. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  48. Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab
  49. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  50. Evidence on good forecasting practices from the Good Judgment Project | AI Impacts
  51. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  52. How to win at forecasting - Philip Tetlock | Edge.org
  53. Conflicts of Interest and the Case of Auditor Independence (PDF)
  54. Conflicts of Interest and the Case of Auditor Independence | Semantic Scholar
  55. Conflicts of Interest and the Case of Auditor Independence | Semantic Scholar
  56. Evaluating Intelligence: A Competent Authority | National Academies
  57. Evaluating Intelligence: A Competent Authority | National Academies
  58. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  59. New Coefficient Giving Grantmaking Program: Forecasting | EA Forum
  60. Philip E. Tetlock - Wikipedia
  61. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  62. Philip Tetlock Faculty Page | University of Pennsylvania Psychology
  63. AI Risk Surveys | AI Impacts Wiki
  64. Adversarial Collaboration on AI Risk | Wiley Online Library
  65. Philip Tetlock Faculty Page | University of Pennsylvania Psychology
  66. Philip Tetlock Faculty Page | Wharton School
  67. Why is EA so enthusiastic about forecasting? | EA Forum
  68. Why is EA so enthusiastic about forecasting? | EA Forum
  69. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  70. Philip Tetlock Fireside Chat | EA Forum
  71. Interview with Prof Tetlock on epistemic modesty | EA Forum
  72. Prof. Philip Tetlock's Forecasting Research | Founders Pledge
  73. Overcoming Our Aversion to Acknowledging Our Ignorance | Cato Unbound
  74. Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab
  75. Overcoming Our Aversion to Acknowledging Our Ignorance | Cato Unbound
  76. Evaluating Intelligence: A Competent Authority | National Academies
  77. Evaluating Intelligence: A Competent Authority | National Academies
  78. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast
  79. Adversarial Collaboration on AI Risk | Wiley Online Library
  80. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast
  81. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast
  82. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast
  83. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast
  84. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast
  85. Philip Tetlock interview | Conversations with Tyler
  86. Overcoming Our Aversion to Acknowledging Our Ignorance | Cato Unbound
  87. Evaluating Intelligence: A Competent Authority | National Academies
  88. Evaluating Intelligence: A Competent Authority | National Academies
  89. Philip Tetlock interview | Conversations with Tyler
  90. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast
  91. Fireside chat with Philip Tetlock | Effective Altruism
  92. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast

  93. ForecastEx Appoints Philip Tetlock to Board | Business WireForecastEx Appoints Philip Tetlock to Board | Business Wire

  94. ForecastEx Appoints Philip Tetlock to Board | BarchartForecastEx Appoints Philip Tetlock to Board | Barchart

  95. Philip Tetlock Faculty Page | University of Pennsylvania PsychologyPhilip Tetlock Faculty Page | University of Pennsylvania Psychology

  96. Philip Tetlock Faculty Page | University of Pennsylvania PsychologyPhilip Tetlock Faculty Page | University of Pennsylvania Psychology

  97. Good Judgment Press & NewsGood Judgment Press & News

  98. Good Judgment Press & NewsGood Judgment Press & News

References

Claims (2)
More recently, his team conducted a two-month intensive adversarial collaboration focused on identifying short-term "cruxes"—key questions about AI that could be resolved by 2030—to explore the limits of how disagreements about AI risks can be resolved through structured debate.
A persistent limitation identified by Tetlock himself is that experts without regular accuracy feedback struggle to convert causal knowledge into probabilistic forecasts.
2. Philip E. Tetlock - Wikipedia · en.wikipedia.org · Reference
Claims (9)
Tetlock was born in Toronto, Canada, and grew up in Winnipeg and Vancouver.
in 1976 working with Peter Suedfeld on content analysis of diplomatic communications.
Ellsworth.
+6 more claims
3. Good Judgment Inc. - About · goodjudgment.com
Claims (1)
The Good Judgment Project's success led to the founding of Good Judgment Inc., a consultancy co-founded by Tetlock that offers bespoke forecasting services, workshops for private clients, and the Good Judgment Open platform for crowd-based forecasts.
Minor issues · 80% · Feb 22, 2026
Good Judgment Inc is now making this winning approach to harnessing the wisdom of the crowd available for commercial use.

The source does not explicitly state that Good Judgment Inc. offers workshops for private clients. The source does not explicitly state that Philip Tetlock co-founded Good Judgment Inc.

Claims (1)
Tetlock's research demonstrated that forecasting accuracy could be improved through training programs focusing on these cognitive habits, team structures that facilitate information sharing, and aggregation algorithms that appropriately weight the judgments of top performers.
Accurate · 100% · Feb 22, 2026
For example, they ran an RCT to test the effect of a short training program on forecasting accuracy.
Claims (6)
Tetlock is a Canadian-born psychologist who revolutionized the study of forecasting accuracy through decades of research demonstrating that expert predictions on political and economic events are often no better than random chance, while identifying systematic methods to achieve superior forecasting performance.
Accurate · 90% · Feb 22, 2026
It was psychologist Philip Tetlock who demonstrated that, generally, the accuracy of our predictions is no better than chance, which means that flipping a coin is just as good as our best guess.
This research culminated in his landmark 2005 book Expert Political Judgment, which documented that experts with access to classified information performed no better than Berkeley undergraduates or "dart-throwing chimpanzees" on long-range forecasts.
Foxes, by contrast, are modest, self-critical thinkers who draw on diverse perspectives and remain skeptical of grand theories.
+3 more claims
Claims (2)
According to the Financial Times in October 2025, superforecasters associated with the Good Judgment Project proved 30% more accurate on average than futures markets and continued to beat market predictions on Federal Reserve decisions, demonstrating the continued relevance of Tetlock's forecasting methods.
Minor issues · 85% · Feb 22, 2026
“Superforecasters have continued to beat the market so far this year when it comes to anticipating Fed decisions, as they had also in 2023 and 2024,” writes Financial Times data journalist Joel Suss for FT’s exclusive Monetary Policy Radar service.

The claim that superforecasters were '30% more accurate on average than futures markets' is not directly supported by the source. The source only states that superforecasters 'continued to beat the market so far this year when it comes to anticipating Fed decisions'. The claim mentions 'the continued relevance of Tetlock's forecasting methods', but the source does not explicitly mention this.

Tetlock received significant media attention throughout 2024-2025, with appearances and coverage in outlets including the Financial Times, Bloomberg, Forbes, Newsweek, The Guardian, and Times Radio.
Minor issues · 85% · Feb 22, 2026
Monetary Policy Radar: ‘Superforecasters’ tend to beat the market Financial Times (October 2025) “Superforecasters have continued to beat the market so far this year when it comes to anticipating Fed decisions, as they had also in 2023 and 2024,” writes Financial Times data journalist Joel Suss for FT’s exclusive Monetary Policy Radar service.

The claim covers 2024-2025, but the source also includes media attention from 2023 and 2026. The claim states that Tetlock received the media attention, but the source focuses on Superforecasting and Good Judgement.

Claims (2)
Tetlock is a Canadian-born psychologist who revolutionized the study of forecasting accuracy through decades of research demonstrating that expert predictions on political and economic events are often no better than random chance, while identifying systematic methods to achieve superior forecasting performance.
Minor issues · 85% · Feb 22, 2026
His best-known work, Expert Political Judgment: How Good Is It? How Can We Know? (Princeton University Press, 2005), argued that “expert” predictions of political and economic trends are no more reliable than those of non-experts, based on a 20-year study of more than 82,000 predictions by 284 experts.

The source does not explicitly state that Tetlock is Canadian-born, only that he attended the University of British Columbia. The claim states that Tetlock identified 'systematic methods to achieve superior forecasting performance,' but the source only mentions that his book argued expert predictions are no more reliable than non-experts.

In December 2010, he was appointed Leonore Annenberg University Professor of Democracy and Citizenship at the University of Pennsylvania, becoming a Penn Integrates Knowledge (PIK) Professor with joint appointments in Psychology, Management, and the Annenberg School for Communication.
Minor issues · 85% · Feb 22, 2026
Philip Tetlock, Ph.D. Leonore Annenberg University Professor of Psychology and Management, Wharton School of Business and School of Arts and Sciences

The source does not mention the date of appointment (December 2010). The source states that Tetlock is the Leonore Annenberg University Professor of Psychology and Management, not Democracy and Citizenship. The source does not explicitly state that Tetlock is a Penn Integrates Knowledge (PIK) Professor.

Claims (13)
As the Leonore Annenberg University Professor at the University of Pennsylvania with cross-appointments at the Wharton School and School of Arts and Sciences, Tetlock has authored over 200 peer-reviewed articles and nine books examining judgment, decision-making, and prediction accuracy.
Inaccurate · 30% · Feb 22, 2026
Professor Philip Tetlock was named the Leonore Annenberg University Professor of Democracy and Citizenship in December 2010.

The source does not mention that Tetlock has cross-appointments at the Wharton School and School of Arts and Sciences. The source does not state that Tetlock has authored over 200 peer-reviewed articles. The source only lists 3 books by Tetlock, not nine.

Tetlock's most influential work emerged from forecasting tournaments he initiated during the Cold War era through the National Academy of Sciences Committee on Nuclear War Prevention, analyzing over 82,000 predictions from 284 experts between 1984 and 2003.
Inaccurate · 30% · Feb 22, 2026
Tetlock created the first ever forecasting competition during the Cold War when he grew increasingly concerned that public debate was dominated by vague, unverifiable predictions.

The source does not mention the National Academy of Sciences Committee on Nuclear War Prevention. The source does not mention the analysis of 82,000 predictions from 284 experts. The source does not specify the time frame of 1984 to 2003.

However, Tetlock also identified a minority of superior forecasters—"foxes" who integrate diverse perspectives rather than "hedgehogs" who apply single theories—leading to his co-founding of the Good Judgment Project with Barbara Mellers and Don Moore.
Minor issues · 85% · Feb 22, 2026
That work also served as a pilot for The Good Judgment Project , a prediction tournament among five universities sponsored by the U.S. intelligence community. Tetlock, along with his Penn colleague and spouse, Barbara Mellers , and his UC Berkeley colleague Don Moore, co-led that contest’s winning team, which includes experts in statistics, computer science, economics, psychology, and political science.

The source does not mention the terms 'foxes' and 'hedgehogs'. The source does not explicitly state that Tetlock identified a minority of superior forecasters.

+10 more claims
9. Philip Tetlock | Alliance for Decision Education · alliancefordecisioneducation.org
Claims (1)
As the Leonore Annenberg University Professor at the University of Pennsylvania with cross-appointments at the Wharton School and School of Arts and Sciences, Tetlock has authored over 200 peer-reviewed articles and nine books examining judgment, decision-making, and prediction accuracy.
Minor issues · 90% · Feb 22, 2026
Phil serves on the faculty at the University of Pennsylvania as Annenberg University Professor, with appointments in the Wharton School in Management and the School of Arts and Sciences in Psychology.

The source says Tetlock has written or co-written 10 books, not 9. The source says Tetlock is the Annenberg University Professor, not the Leonore Annenberg University Professor.

Claims (18)
Tetlock has acknowledged significant challenges in applying forecasting methods to existential risks, including the lack of feedback loops for learning from errors on long-term predictions, the difficulty of recruiting sufficient expertise, and the potential for information hazards when discussing specific risk scenarios.
Inaccurate · 60% · Feb 22, 2026
We expect that additional funding at this time would help Professor Tetlock and his collaborators expand their research on forecasting global catastrophic risks. They are seeking support for tackling the ten methodological challenges to X-risk forecasting outlined in their recent paper:
- "Managing Rigor-Relevance Trade-Offs" and finding ways to make forecasting more actionable for decision-makers.
- "Crafting Incisive Forecasting Questions" by building "Bayesian question clusters" and "conditional trees" to enhance question relevance.
- "Incentivizing Persuasive, Predictively Powerful Explanations" to ensure that forecasts are more than just a number.
- "Incentivizing True Reports about X-Risk Mitigation" and developing methods to measure the effect sizes of policy interventions.
- "Recruiting the Right Talent" for second-generation forecasting research.
- "Motivating the Talent".
- "Picking Probability-Elicitation Tools and Scoring Rules" to craft the appropriate measures to forecast low-probability high-impact events.
- "Helping People Prepare for Distinctive Analytic Challenges of X-Risk Assessment".
- "Benchmarking against External Standards".
- "Managing Information Hazards" and making sure that predictions of risks don't do more harm than good.

The claim mentions 'lack of feedback loops for learning from errors on long-term predictions', 'the difficulty of recruiting sufficient expertise', and 'the potential for information hazards'. While the source mentions recruiting the right talent and managing information hazards, it does not explicitly mention the lack of feedback loops for learning from errors on long-term predictions.

According to analysis of the project's results, superforecasters were approximately 60-85% more accurate than average forecasters and demonstrated the ability to distinguish 10-15 degrees of uncertainty while maintaining calibration across hundreds of events.
Unsupported · 0% · Feb 22, 2026
Professor Tetlock identified high-performing forecasters, who were consistently able to finish at the top of the tournaments they entered, whom he and his collaborators dubbed “superforecasters.” The superforecasting team, which Professor Tetlock called the Good Judgment Project, beat teams of other experts and intelligence professionals to win the IARPA tournament.

The source does not contain any information about the accuracy of superforecasters compared to average forecasters or their ability to distinguish degrees of uncertainty.

- Regular practice: Consistent engagement with forecasting questions to refine judgment
Unsupported · 0% · Feb 22, 2026
Between 1984 and 2003, Professor Tetlock ran a number of forecasting tournaments in which predictions about future events were solicited from hundreds of experts.

The source does not mention that consistent engagement with forecasting questions refines judgment.

+15 more claims
Claims (5)
The project identified "superforecasters"—ordinary citizens whose accuracy exceeded intelligence analysts with classified information access by 60-85%.
Inaccurate · 70% · Feb 22, 2026
In our tournament, we've skimmed off the very best forecasters in the first year, the top two percent. We call them "super forecasters."

WRONG NUMBERS: The claim states that superforecasters exceeded intelligence analysts by 60-85%, but the source does not provide these specific numbers.

According to analysis of the project's results, superforecasters were approximately 60-85% more accurate than average forecasters and demonstrated the ability to distinguish 10-15 degrees of uncertainty while maintaining calibration across hundreds of events.
Minor issues · 85% · Feb 22, 2026
In our tournament, we've skimmed off the very best forecasters in the first year, the top two percent. We call them "super forecasters."

The source does not explicitly state that superforecasters were 'approximately 60-85% more accurate than average forecasters'. It only mentions that they are 'far more accurate than I would have ever supposed possible'. The source does not explicitly state that superforecasters demonstrated the ability to distinguish '10-15 degrees of uncertainty'. It does mention that they are skillful at 'finding information, synthesizing it, and applying it, and then updating the response to new information.'

The project developed techniques including extremizing weighted averages (adjusting crowd predictions to account for shared information) and Bayesian question clusters (breaking complex forecasts into component questions).
Minor issues · 85% · Feb 22, 2026
The Intelligence Advance Research Projects Agency about two years ago committed to supporting five university based research teams and funded their efforts to recruit forecasters, set up websites for eliciting forecasts, hire statisticians for aggregating forecasts, and conduct a variety of experiments on factors that might either make forecasters more accurate or less accurate.

The source mentions weighted averaging, but refers to it as a method of aggregating individual forecasts, not of adjusting crowd predictions to account for shared information. The source mentions Bayesian belief adjustment, but not Bayesian question clusters.

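The "extremizing" step described in the claim above can be illustrated with a minimal sketch. This is the common power-transformation form discussed in the forecasting literature, not necessarily the exact algorithm the Good Judgment Project deployed; the exponent value used here is purely illustrative.

```python
def extremize(prob: float, a: float = 2.5) -> float:
    """Push an aggregated probability away from 0.5, toward 0 or 1.

    Simple crowd averages tend to be under-confident because forecasters
    draw on much of the same information; raising the odds to a power
    a > 1 compensates for that shared information. a = 2.5 is an
    illustrative choice, not a calibrated parameter.
    """
    if not 0.0 < prob < 1.0:
        return prob  # 0 and 1 are already fully extreme
    odds = prob / (1.0 - prob)
    ext_odds = odds ** a
    return ext_odds / (1.0 + ext_odds)


# A crowd average of 0.70 becomes a more confident forecast,
# while 0.50 (maximal uncertainty) is left unchanged.
print(round(extremize(0.70), 3))
print(extremize(0.50))
```

Note the transformation is symmetric: probabilities above 0.5 move toward 1, those below 0.5 move toward 0, and the tuning of the exponent in practice would be fit against historical resolution data.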
+2 more claims
Claims (1)
in 1976 working with Peter Suedfeld on content analysis of diplomatic communications.
Claims (4)
However, early critiques noted that while foxes outperformed hedgehogs, they still only modestly exceeded simple benchmarks like extrapolation algorithms, rather than achieving substantial superiority over baseline models.
Minor issues · 85% · Feb 22, 2026
One group would actually have been beaten rather soundly even by the chimp, not to mention the more formidable extrapolation algorithm. The other would have beaten the chimp and sometimes even the extrapolation algorithm, although not by a wide margin.

The source does not explicitly state that foxes only 'modestly' exceeded simple benchmarks. It says they sometimes beat extrapolation algorithms, but not by a wide margin. The claim mentions 'extrapolation algorithms' as a benchmark, while the source mentions 'simple extrapolation algorithms'.

While fox-like forecasters outperform hedgehog forecasters, early analyses noted that foxes still only modestly exceed simple benchmarks like extrapolation algorithms, raising questions about whether the framework sufficiently distinguishes skill from noise.
Minor issues · 85% · Feb 22, 2026
One could say that this latter cluster of experts had real predictive insight, however modest.

The claim states that early analyses noted that foxes only modestly exceed simple benchmarks like extrapolation algorithms, but the source says that foxes sometimes exceed extrapolation algorithms, not that they only modestly exceed them. The claim raises questions about whether the framework sufficiently distinguishes skill from noise, but the source does not explicitly raise this question.

Hedgehogs performed worse than basic models—in some tests, slightly below random chance—but the practical significance of foxes' advantage over simple algorithms remains debated.
Minor issues · 85% · Feb 22, 2026
One group would actually have been beaten rather soundly even by the chimp, not to mention the more formidable extrapolation algorithm.

The source states that hedgehogs would have been beaten by the chimp, not that they performed worse than basic models. The source states that simple extrapolation algorithms performed better than the average expert, not basic models. The source does not mention that the practical significance of foxes' advantage over simple algorithms remains debated.

+1 more claims
Claims (1)
His 2006 paper "Conflicts of Interest and the Case of Auditor Independence: Moral Seduction and Strategic Issue Cycling" (co-authored with Don Moore, Lloyd Tanlu, and Max Bazerman) analyzed how conflicts of interest in auditing contributed to scandals like Enron and WorldCom.
Claims (2)
His 2006 paper "Conflicts of Interest and the Case of Auditor Independence: Moral Seduction and Strategic Issue Cycling" (co-authored with Don Moore, Lloyd Tanlu, and Max Bazerman) analyzed how conflicts of interest in auditing contributed to scandals like Enron and WorldCom.
The paper introduced "moral seduction theory"—the concept that professionals can become unaware of moral compromise from conflicts of interest at a micro level—and "issue-cycle theory" explaining how such conflicts persist at a macro level in major accounting firms.
Claims (6)
Tetlock has warned that accountability mechanisms can degrade into "bureaucratic rituals" or "Potemkin villages"—symbolic facades designed to deflect critics rather than genuinely improve decision-making.
Accurate · 100% · Feb 22, 2026
Proponents of outcome accountability also worry that: (1) process accountability can readily ossify into bureaucratic rituals and mutual backscratching—Potemkin-village facades of process accountability and rigor designed to deflect annoying questions from external critics (Edelman, 1992; Meyer and Rowan, 1977); and (2) process accountability can distract analysts from the central task of understanding the external world by squandering cognitive resources on impression management aimed at convincing superiors of how rigorous their analytical processes are (Lazear, 1989).
His work emphasizes that outcome accountability requires careful, calibrated implementation through controlled evaluation rather than simple demands to "hold rascals accountable".
Accurate · 100% · Feb 22, 2026
If “accountability cures” exist for what ails intelligence analysis, those cures will need to be far more complex and carefully calibrated than cries for “greater accountability” imply—and will need to be implemented in carefully controlled and phased field research trials to ensure that the desired effects outweigh the undesired.
Tetlock's research confronts inherent challenges in evaluating predictions, including the role of exogenous shocks and missing variables that can undermine even sound analyses, giving undue credit to improbable theories.
Accurate · 100% · Feb 22, 2026
Exogenous shocks or missing information on key variables that cause lower probability outcomes to occur—and cast into false doubt fundamentally sound analyses of causal dynamics. Exogenous shocks that cause credit to be assigned to far-fetched theories.
+3 more claims
Claims (1)
In 2022, Tetlock became President and Chief Scientist of the Forecasting Research Institute (FRI), which received over $6 million in funding from Coefficient Giving for developing forecasting techniques applicable to global catastrophic and existential risks.
Minor issues · 90% · Feb 22, 2026
This follows our October 2021 support ($275,000) for planning work by FRI Chief Scientist Philip Tetlock, and falls within our work on global catastrophic risks ( writeup ) two grants totaling $6,305,675 over three years to support the Forecasting Research Institute (FRI)’s work on projects to advance the science of forecasting as a tool to improve public policy and reduce existential risk.

The source does not mention Tetlock becoming President of FRI, only Chief Scientist. The source states that FRI received grants totaling $6,305,675, not 'over $6 million'.

Claims (3)
His recent research explores "hybrid persuasion-forecasting tournaments" that combine expert argumentation with probabilistic forecasting to improve judgments about low-probability, high-impact events.
Accurate · 100% · Feb 22, 2026
Karger, E., Jacobs, Z., Rosenberg, J. & Tetlock, P. E. (2025). Subjective-probability forecasts of existential risk: Initial Results from a hybrid persuasion-forecasting tournament. International Journal of Forecasting .
His 2025 research published in ACM Transactions on Interactive Intelligent Systems examined how large language models can achieve forecasting accuracy comparable to human forecasters when predictions are combined, raising questions about both AI capabilities in prediction tasks and the potential role of AI systems in risk assessment.
Accurate · 100% · Feb 22, 2026
Schoenegger, Coombs, S., Karger, E. & Tetlock, P.E. (2025). AI-Augmented predictions: LLM assistants improve human forecasting accuracy. Association for Computing Machinery (ACM): Transactions on interactive intelligent systems.
Recent publications include "AI-Augmented predictions: LLM assistants improve human forecasting accuracy" (2025) in ACM Transactions on Interactive Intelligent Systems, "Subjective-probability forecasts of existential risk: Initial Results from a hybrid persuasion-forecasting tournament" (2025) in the International Journal of Forecasting, and "Long-range subjective-probability forecasts of slow-motion variables in world politics: Exploring limits on expert judgment" (2024) in Futures and Foresight Science.
Accurate · 100% · Feb 22, 2026
Schoenegger, Coombs, S., Karger, E. & Tetlock, P.E. (2025). AI-Augmented predictions: LLM assistants improve human forecasting accuracy. Association for Computing Machinery (ACM): Transactions on interactive intelligent systems. Karger, E., Jacobs, Z., Rosenberg, J. & Tetlock, P. E. (2025). Subjective-probability forecasts of existential risk: Initial Results from a hybrid persuasion-forecasting tournament. International Journal of Forecasting . Tetlock, P. E., Karvetski, C., Satopää, V. A., & Chen, K. (2024). Long-range subjective-probability forecasts of slow-motion variables in world politics: Exploring limits on expert judgment. Futures and Foresight Science, 6(1), e157. doi:10.1002/ffo2.157
Claims (1)
This work suggests that AI-augmented forecasting—combining human judgment with machine learning—may offer advantages over either approach alone for certain types of predictions.
Accurate · 100% · Feb 22, 2026
New research from Wharton's Philip Tetlock finds that combining predictions from large language models can achieve accuracy on par with human forecasters.
20. Why is EA so enthusiastic about forecasting? | EA Forum · forum.effectivealtruism.org · Blog post
Claims (2)
policymakers).
Unsupported · 0% · Feb 22, 2026
Open Philanthropy has funded the Forecasting Research Institute (research), Metaculus (forecasting platform) and INFER (a program to support the use of forecasting by US policymakers).
Tetlock has become a prominent figure in the effective altruism (EA) community, with "Tetlock-style judgmental forecasting" notably more popular within EA than in broader contexts.
Accurate · 100% · Feb 22, 2026
why is Tetlock-style judgmental forecasting so popular within EA, but not that popular outside of it?
21. Philip Tetlock Fireside Chat | EA Forum · forum.effectivealtruism.org · Blog post
Claims (1)
Tetlock has participated in multiple EA Global conferences through fireside chats and Q&A sessions, discussing topics including prediction algorithms, long-term future considerations, epistemic modesty, and belief updating mechanics.
Accurate · 100% · Feb 22, 2026
Labenz: I've certainly heard some sincerely pro-Trump positions at EA Global in the past.
22. Interview with Prof Tetlock on epistemic modesty | EA Forum · forum.effectivealtruism.org · Blog post
Claims (1)
Tetlock has participated in multiple EA Global conferences through fireside chats and Q&A sessions, discussing topics including prediction algorithms, long-term future considerations, epistemic modesty, and belief updating mechanics.
Claims (4)
Tetlock himself has expressed skepticism about very long-term forecasts (such as IPCC projections to 2100), noting that wide estimate spreads and the lack of feedback mechanisms limit the applicability of his methods to century-scale predictions.
Minor issues · 90% · Feb 22, 2026
Am I a believer in climate change or am I disbeliever, if I say, “Well, when I think about the UN intergovernmental panel on Climate Change forecasts for the year 2100, the global surface temperature forecasts, I’m 72% confident that they’re within plus or minus 0.3 degrees centigrade in their projections”?

The source does not explicitly mention 'lack of feedback mechanisms' as a limitation. The source refers to 'UN intergovernmental panel on Climate Change forecasts for year 2100' rather than 'IPCC projections to 2100'.

Domains involving high combinatorial complexity—such as AI risk debates or complex simulations—reveal blind spots even in skilled forecasters, as the number of relevant variables exceeds human cognitive capacity.
Accurate · 100% · Feb 22, 2026
I think that’s a great question and I was working with that more or less that assumption myself, but it seems that for the counterfactual questions that are being posed in a simulation that is as complex as Civ5 where the combinatorics are staggering and the number of possible states of civilizations and variables probably is greater than number of atoms in the universe, that even very skilled Civ5 players will have serious blind spots that can be exploited by clever question posers.
Tetlock acknowledges that forecasting serves multiple social functions, and that the temptation exists for activists to exaggerate risks (framing certainty as group commitment) or for ideological groups to exclude those expressing doubt.
Accurate · 100% · Feb 22, 2026
And I think that’s one of the key reasons why forecasting tournaments are hard sell. I think people… forecasts do not just serve an accuracy function, people aren’t just interested in accuracy, they’re interested in fitting in, they want to avoid embarrassment, they don’t want their friends to call them names, I don’t want to be called a denialist or a racist or whatever other kind of thing I might be… whatever the epithet you might incur by assigning a probability on the wrong side of maybe.
+1 more claims
Claims (4)
Tetlock has expressed frustration that his research has been misinterpreted and misused to justify dismissing expert opinion entirely, rather than improving forecasting practices.
Accurate · 100% · Feb 22, 2026
I mean I was always a big fan of Monty Python and John Cleese. I think John Cleese was a brilliant comedian, he may still be a brilliant comedian, but the John Cleese, Michael Gove perspective that Expert Political Judgment somehow justified not listening to expert opinion about the consequences of Brexit struck me as somewhat dangerous misreading of the book.
He particularly criticized how political figures like Michael Gove cited Expert Political Judgment to justify ignoring expert consensus on Brexit consequences, characterizing this as a "dangerous misreading" of his findings.
Accurate · 100% · Feb 22, 2026
the John Cleese, Michael Gove perspective that Expert Political Judgment somehow justified not listening to expert opinion about the consequences of Brexit struck me as a somewhat dangerous misreading of the book.
Tetlock emphasized that "it's not that I'm saying that the experts are going to be right, but I would say completely ignoring them is dangerous".
Accurate · 100% · Feb 22, 2026
It’s not that I’m saying that the experts are going to be right, but I would say completely ignoring them is dangerous.
+1 more claims
Claims (2)
His more recent work, including Superforecasting, emphasizes that forecasting accuracy can be improved through better methodology and training, rather than arguing that prediction is fundamentally impossible.
Accurate · 100% · Feb 22, 2026
Forecasters are valued for multiple purposes beyond pure accuracy, including ideological comfort, entertainment value, and regret minimization (such as in pandemic preparedness).
Accurate · 100% · Feb 22, 2026
Accuracy is only one of the things we want from forecasters, says Philip Tetlock, a professor at the University of Pennsylvania and co-author of Superforecasting: The Art and Science of Prediction. People also look to forecasters for ideological assurance, entertainment, and to minimize regret, such as that caused by not taking a global pandemic seriously enough.
Claims (1)
Tetlock himself has expressed skepticism about very long-term forecasts (such as IPCC projections to 2100), noting that wide estimate spreads and the lack of feedback mechanisms limit the applicability of his methods to century-scale predictions.
Accurate · 100% · Feb 22, 2026
These were aggressive, long-range forecasts way beyond anything we look at. In doing political judgment work, our longest forecast is for five to 10 years, and in our work with IARPA, the longest forecasts are 18 to 24 months. Most of them are 12 months ahead or less.
Claims (1)
In January 2026, he was appointed to the Board of Directors of ForecastEx, Interactive Brokers' prediction market platform, where his expertise in forecasting and decision-making under uncertainty aligns with the platform's mission to help market participants trade probabilities of future outcomes.
Minor issues · 90% · Feb 22, 2026
Interactive Brokers (Nasdaq: IBKR), an automated global electronic broker, announced the appointment of Dr. Philip Tetlock to the Board of Directors of ForecastEx. Dr. Tetlock is internationally recognized for his groundbreaking expertise in forecasting, probability-based judgment, and decision-making under uncertainty, which closely aligns with ForecastEx’s prediction market model.

The article was published on January 22, 2026, while the claim gives only "January 2026" without the exact date. The claim also describes ForecastEx as Interactive Brokers' prediction market platform, whereas the source states that ForecastEx is a wholly-owned subsidiary of Interactive Brokers.

Citation verification: 42 verified, 4 flagged, 25 unchecked of 98 total

Related Pages

Top Related Pages

Analysis

XPT (Existential Risk Persuasion Tournament)ForecastBench

Organizations

Coefficient GivingEA GlobalKalshi (Prediction Market)

Approaches

AI Risk Public EducationPrediction Markets (AI Forecasting)AI-Human Hybrid Systems

Concepts

AI TimelinesAI Scaling LawsAGI Timeline

Models

Irreversibility Threshold ModelAI Capability Threshold ModelAI Risk Activation Timeline ModelAI-Bioweapons Timeline Model

Key Debates

AI Risk Critical Uncertainties Model

Risks

Automation Bias (AI Systems)

Other

Eli LiflandRobin Hanson