Philip Tetlock

  • Primary Achievement: Pioneered forecasting tournaments demonstrating that systematic methods outperform expert intuition; identified “superforecasters” with superior accuracy
  • Key Publications: Expert Political Judgment (2005), Superforecasting (2015)
  • Institutional Affiliation: Leonore Annenberg University Professor at the University of Pennsylvania (Wharton and Psychology)
  • Major Projects: Good Judgment Project (winner of the IARPA tournament, 2011-2015), Forecasting Research Institute
  • Influence on AI Safety: Methods applied to existential risk assessment; adversarial collaboration on AI forecasting; EA community adoption of forecasting practices
  • Key Finding: Most expert predictions perform no better than chance; “fox-like” integrative thinkers outperform “hedgehog” theorists

Sources:
  • Wikiquote: en.wikiquote.org
  • Wikipedia: en.wikipedia.org

Philip E. Tetlock is a Canadian-born psychologist who revolutionized the study of forecasting accuracy through decades of research demonstrating that expert predictions on political and economic events are often no better than random chance, while identifying systematic methods to achieve superior forecasting performance[1][2]. As the Leonore Annenberg University Professor at the University of Pennsylvania with cross-appointments at the Wharton School and School of Arts and Sciences, Tetlock has authored over 200 peer-reviewed articles and nine books examining judgment, decision-making, and prediction accuracy[3][4].

Tetlock’s most influential work emerged from forecasting tournaments he initiated during the Cold War era through the National Academy of Sciences Committee for the Prevention of Nuclear War, analyzing over 82,000 predictions from 284 experts between 1984 and 2003[5][6]. This research culminated in his landmark 2005 book Expert Political Judgment, which documented that experts with access to classified information performed no better than Berkeley undergraduates or “dart-throwing chimpanzees” on long-range forecasts[7][8]. However, Tetlock also identified a minority of superior forecasters—“foxes” who integrate diverse perspectives rather than “hedgehogs” who apply single theories—leading to his co-founding of the Good Judgment Project with Barbara Mellers and Don Moore[9].

The Good Judgment Project won a four-year IARPA-sponsored forecasting tournament (2011-2015) involving thousands of forecasters making over one million predictions on geopolitical events[10][11]. The project identified “superforecasters”—ordinary citizens whose accuracy exceeded that of average forecasters by 60-85% and who outperformed intelligence analysts with access to classified information[12][13]. This work established systematic methods for improving prediction accuracy, including training protocols, team dynamics, and aggregation algorithms that have influenced intelligence agencies, forecasting platforms like Metaculus, and the effective altruism community’s approach to decision-making under uncertainty[14][15].

Tetlock was born in Toronto, Canada, and grew up in Winnipeg and Vancouver[16]. He received his B.A. in psychology from the University of British Columbia in 1975, followed by an M.A. in 1976 working with Peter Suedfeld on content analysis of diplomatic communications[17][18]. He completed his Ph.D. in psychology at Yale University in 1979 under the supervision of Phoebe C. Ellsworth[19].

From 1979 to 1995, Tetlock taught psychology at the University of California, Berkeley, where he joined the faculty as an assistant professor and directed the Institute of Personality and Social Research from 1988 to 1995[20]. He then held the Harold E. Burtt Endowed Chair in Psychology and Political Science at Ohio State University (1996-2001) before returning to Berkeley as the Mitchell Endowed Chair at the Haas School of Business (2001-2010)[21][22]. In December 2010, he was appointed Leonore Annenberg University Professor of Democracy and Citizenship at the University of Pennsylvania, becoming a Penn Integrates Knowledge (PIK) Professor with joint appointments in Psychology, Management, and the Annenberg School for Communication[23][24].

Tetlock’s forecasting research originated from his work on the National Academy of Sciences Committee for the Prevention of Nuclear War in the early 1980s, at the height of Cold War tensions[25]. He became concerned that public debate on nuclear policy relied heavily on vague, unverifiable predictions that could not be systematically evaluated[26]. This led him to create his first forecasting tournament to test expert predictions scientifically[27].

Between 1984 and 2003, Tetlock conducted small-scale forecasting tournaments with 284 experts—including government officials, professors, and journalists spanning ideologies from Marxists to free-market advocates—on geopolitical outcomes[28][29]. These experts made predictions about events such as the Soviet Union’s collapse, the future of apartheid in South Africa, and Middle East peace prospects. The results formed the empirical basis for his 2005 book Expert Political Judgment: How Good Is It? How Can We Know?, published by Princeton University Press[30].

The publication of Expert Political Judgment directly influenced U.S. intelligence agencies to create a four-year geopolitical forecasting tournament sponsored by IARPA (Intelligence Advanced Research Projects Activity)[31]. From 2011 to 2015, Tetlock co-led the winning team—the Good Judgment Project—with his spouse Barbara Mellers and UC Berkeley colleague Don Moore[32][33]. The multidisciplinary team included experts in statistics, computer science, economics, psychology, and political science[34].

The project involved thousands of forecasters making over one million predictions on geopolitical questions[35]. It identified “superforecasters”—high-performing individuals who consistently outperformed both average forecasters and professional intelligence analysts with access to classified information[36]. According to analysis of the project’s results, superforecasters were approximately 60-85% more accurate than average forecasters and demonstrated the ability to distinguish 10-15 degrees of uncertainty while maintaining calibration across hundreds of events[37][38].
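
Accuracy in these tournaments was scored with Brier scores, which reward forecasters who use fine-grained probabilities and stay calibrated. The sketch below illustrates the metric; all forecasts and outcomes in it are hypothetical.

```python
# Illustrative sketch of the Brier score, the accuracy metric used in
# forecasting tournaments. All forecasts and outcomes are hypothetical.

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes.
    0.0 is perfect; a constant 0.5 forecast scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A granular forecaster expresses fine degrees of uncertainty; an
# overconfident one rounds every judgment to 0 or 1.
granular = [0.85, 0.15, 0.60, 0.95, 0.30]
binary   = [1.0,  0.0,  1.0,  1.0,  0.0]
outcomes = [1,    0,    1,    1,    1]

print(brier_score(granular, outcomes))  # ~0.14
print(brier_score(binary, outcomes))    # 0.20: one confident miss is costly
```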

The Good Judgment Project’s success led to the founding of Good Judgment Inc., a consultancy co-founded by Tetlock that offers bespoke forecasting services, workshops for private clients, and the Good Judgment Open platform for crowd-based forecasts[39][40]. The project’s methods have been adapted for use by U.S. intelligence agencies and inspired forecasting platforms including Metaculus and INFER-Public[41].

One of Tetlock’s most influential conceptual contributions is the distinction between “fox-like” and “hedgehog-like” thinkers, inspired by Isaiah Berlin’s essay “The Hedgehog and the Fox”[42]. Hedgehogs organize their thinking around a single grand theory or ideology and make bold, confident predictions. Foxes, by contrast, are modest, self-critical thinkers who draw on diverse perspectives and remain skeptical of grand theories[43].

Tetlock’s research demonstrated that fox-like forecasters consistently outperformed hedgehog forecasters, particularly on long-range forecasts[44]. Foxes showed greater willingness to update their beliefs in response to evidence and were more accurate across a wider range of prediction domains[45]. However, early critiques noted that while foxes outperformed hedgehogs, they still only modestly exceeded simple benchmarks like extrapolation algorithms, rather than achieving substantial superiority over baseline models[46].

The Good Judgment Project identified specific attributes and practices associated with superior forecasting performance. Superforecasters typically exhibit:

  • Probabilistic thinking: Ability to think in granular probabilities rather than binary yes/no predictions
  • Active open-mindedness: Willingness to consider alternative hypotheses and update beliefs based on evidence
  • Intellectual humility: Recognition of uncertainty and limits of their knowledge
  • Pattern recognition: Skill at identifying relevant historical analogies
  • Team collaboration: Ability to productively combine perspectives with other forecasters
  • Regular practice: Consistent engagement with forecasting questions to refine judgment[47][48]

Tetlock’s research demonstrated that forecasting accuracy could be improved through training programs focusing on these cognitive habits, team structures that facilitate information sharing, and aggregation algorithms that appropriately weight the judgments of top performers[49][50]. The project developed techniques including extremizing weighted averages (adjusting crowd predictions to account for shared information) and Bayesian question clusters (breaking complex forecasts into component questions)[51][52].
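
One way to picture the extremizing step: take a (weighted) average of the crowd’s probabilities, then push it away from 0.5 in log-odds space. The sketch below is a minimal illustration of that idea, not the project’s production algorithm; the exponent a is an assumed value, whereas in practice such parameters were fit to historical data.

```python
# Minimal sketch of "extremizing" an aggregate forecast. Raising the odds
# to the power `a` is equivalent to scaling log-odds by `a`; a > 1 pushes
# the average away from 0.5. The value a=2.5 is illustrative, not fitted.
import numpy as np

def extremized_mean(probs, weights=None, a=2.5):
    p = np.average(probs, weights=weights)  # (weighted) mean probability
    return p**a / (p**a + (1 - p)**a)       # sigmoid(a * logit(p))

crowd = [0.65, 0.70, 0.60, 0.75]  # hypothetical individual forecasts
print(np.mean(crowd))              # plain average: 0.675
print(extremized_mean(crowd))      # extremized:    ~0.86
```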

Beyond forecasting accuracy, Tetlock has extensively researched how accountability affects judgment and decision-making. His 2006 paper “Conflicts of Interest and the Case of Auditor Independence: Moral Seduction and Strategic Issue Cycling” (co-authored with Don Moore, Lloyd Tanlu, and Max Bazerman) analyzed how conflicts of interest in auditing contributed to scandals like Enron and WorldCom[53][54]. The paper introduced “moral seduction theory”—the idea that conflicts of interest can draw professionals into moral compromise without their conscious awareness at the micro level—and “issue-cycle theory,” which explains how such conflicts persist at the macro level in major accounting firms[55].

Tetlock has warned that accountability mechanisms can degrade into “bureaucratic rituals” or “Potemkin villages”—symbolic facades designed to deflect critics rather than genuinely improve decision-making[56]. His work emphasizes that outcome accountability requires careful, calibrated implementation through controlled evaluation rather than simple demands to “hold rascals accountable”[57].

Application to Existential Risk and AI Safety

In 2022, Tetlock became President and Chief Scientist of the Forecasting Research Institute (FRI), which received over $6 million in funding from Open Philanthropy to develop forecasting techniques applicable to global catastrophic and existential risks[58][59]. From June to October 2022, FRI organized an “Existential Risk Persuasion Tournament” involving 169 participants—80 subject matter experts and 89 superforecasters—who estimated the probabilities of catastrophe (events killing at least 10% of humanity) and human extinction (fewer than 1,000 humans surviving) by 2030, 2050, and 2100[60].

Tetlock has acknowledged significant challenges in applying forecasting methods to existential risks, including the lack of feedback loops for learning from errors on long-term predictions, the difficulty of recruiting sufficient expertise, and the potential for information hazards when discussing specific risk scenarios[61]. His recent research explores “hybrid persuasion-forecasting tournaments” that combine expert argumentation with probabilistic forecasting to improve judgments about low-probability, high-impact events[62].

Tetlock has engaged directly with AI governance concerns through multiple initiatives. With Ezra Karger and others, he surveyed 135 AI safety and governance researchers about risks from advanced AI[63]. More recently, his team conducted a two-month intensive adversarial collaboration focused on identifying short-term “cruxes”—key questions about AI that could be resolved by 2030—to explore how far disagreements about AI risks can be resolved through structured debate[64].

His 2025 research published in ACM Transactions on Interactive Intelligent Systems examined how large language models can achieve forecasting accuracy comparable to human forecasters when predictions are combined, raising questions about both AI capabilities in prediction tasks and the potential role of AI systems in risk assessment[65]. This work suggests that AI-augmented forecasting—combining human judgment with machine learning—may offer advantages over either approach alone for certain types of predictions[66].
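
As a toy illustration of the hybrid intuition (not the cited study’s method), human and model probability estimates for the same binary question can be pooled; the median pooling and the 50/50 weight below are assumptions.

```python
# Toy sketch of AI-augmented forecasting: combine a human panel's and an
# LLM's probability estimates on one binary question. Median pooling and
# the 50/50 weight are illustrative assumptions, not the cited method.
import statistics

def hybrid_forecast(human_probs, model_probs, w_human=0.5):
    """Convex combination of the median human and median model forecast."""
    return (w_human * statistics.median(human_probs)
            + (1 - w_human) * statistics.median(model_probs))

humans = [0.30, 0.40, 0.35]  # hypothetical human forecasts
models = [0.20, 0.50, 0.55]  # hypothetical LLM samples
print(hybrid_forecast(humans, models))  # 0.5*0.35 + 0.5*0.50 = 0.425
```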

Tetlock has become a prominent figure in the effective altruism (EA) community, with “Tetlock-style judgmental forecasting” notably more popular within EA than in broader contexts[67]. Open Philanthropy has directly supported forecasting infrastructure influenced by Tetlock’s research, funding FRI, Metaculus, and INFER (a program supporting forecasting use by U.S. policymakers)[68]. Founders Pledge has evaluated Tetlock’s forecasting research on existential risk as high-impact work suitable for philanthropic support[69].

Tetlock has participated in multiple EA Global conferences through fireside chats and Q&A sessions, discussing topics including prediction algorithms, long-term future considerations, epistemic modesty, and belief updating mechanics[70][71]. His work on identifying cognitive biases, tracking prediction accuracy, and conducting systematic post-mortems provides methodological tools relevant to assessing low-probability, high-impact scenarios central to EA priorities[72].

Critics have raised several concerns about the scope and interpretation of Tetlock’s forecasting research. While fox-like forecasters outperform hedgehog forecasters, early analyses noted that foxes still only modestly exceed simple benchmarks like extrapolation algorithms, raising questions about whether the framework sufficiently distinguishes skill from noise[73][74]. Hedgehogs performed worse than basic models—in some tests, slightly below random chance—but the practical significance of foxes’ advantage over simple algorithms remains debated[75].

Tetlock’s research confronts inherent challenges in evaluating predictions, including the role of exogenous shocks and missing variables that can undermine even sound analyses, giving undue credit to improbable theories[76]. Arbitrary time frames for prediction windows (such as 5 versus 10 years for Soviet collapse predictions) can distort evaluations of forecaster accuracy[77]. Domains involving high combinatorial complexity—such as AI risk debates or complex simulations—reveal blind spots even in skilled forecasters, as the number of relevant variables exceeds human cognitive capacity[78].

A persistent limitation identified by Tetlock himself is that experts without regular accuracy feedback struggle to convert causal knowledge into probabilistic forecasts[79]. This challenge is particularly acute for long-term existential risk forecasts, where feedback loops for learning from errors may not exist until after catastrophic outcomes[80].

Tetlock has expressed frustration that his research has been misinterpreted and misused to justify dismissing expert opinion entirely, rather than improving forecasting practices[81]. He particularly criticized how political figures like Michael Gove cited Expert Political Judgment to justify ignoring expert consensus on Brexit consequences, characterizing this as a “dangerous misreading” of his findings[82]. Tetlock emphasized that “it’s not that I’m saying that the experts are going to be right, but I would say completely ignoring them is dangerous”[83].

Populist “know-nothingism” represents a misreading of Tetlock’s work, which demonstrates problems with expert forecasting—including systematic overconfidence and reluctance to change minds—without implying that expert opinion should be completely discounted[84]. His more recent work, including Superforecasting, emphasizes that forecasting accuracy can be improved through better methodology and training, rather than arguing that prediction is fundamentally impossible[85].

Tetlock’s proposals for improving forecaster accountability face significant practical challenges. Establishing respected arbiters to evaluate pundit accuracy is difficult when partisan divisions undermine perceptions of fairness[86]. Process accountability—requiring forecasters to document their reasoning and methods—can degrade into bureaucratic rituals or symbolic facades (“Potemkin villages”) rather than producing genuine improvement, as observed in domains from public education to intelligence analysis[87]. Outcome accountability, while valuable, requires careful, calibrated implementation through controlled evaluation rather than simple demands for accountability[88].

Forecasters are valued for multiple purposes beyond pure accuracy, including ideological comfort, entertainment value, and regret minimization (such as in pandemic preparedness)[89]. Fox-like thinking helps navigate these conflicting values but isn’t solely about predictive performance. Tetlock acknowledges that forecasting serves multiple social functions, that activists may be tempted to exaggerate risks (treating expressions of certainty as signals of group commitment), and that ideological groups may exclude those who express doubt[90].

Some critics argue that Tetlock’s findings about expert underperformance, while methodologically sound for short and medium-term forecasts, have been inappropriately extrapolated to long-range planning domains. Tetlock himself has expressed skepticism about very long-term forecasts (such as IPCC projections to 2100), noting that wide estimate spreads and the lack of feedback mechanisms limit the applicability of his methods to century-scale predictions[91][92].

Tetlock continues active research and institutional involvement in forecasting. In January 2026, he was appointed to the Board of Directors of ForecastEx, Interactive Brokers’ prediction market platform, where his expertise in forecasting and decision-making under uncertainty aligns with the platform’s mission to help market participants trade probabilities of future outcomes[93][94].

Recent publications include “AI-Augmented predictions: LLM assistants improve human forecasting accuracy” (2025) in ACM Transactions on Interactive Intelligent Systems, “Subjective-probability forecasts of existential risk: Initial Results from a hybrid persuasion-forecasting tournament” (2025) in the International Journal of Forecasting, and “Long-range subjective-probability forecasts of slow-motion variables in world politics: Exploring limits on expert judgment” (2024) in Futures and Foresight Science[95][96].

According to the Financial Times in October 2025, superforecasters associated with the Good Judgment Project proved 30% more accurate on average than futures markets and continued to beat market predictions on Federal Reserve decisions, demonstrating the continued relevance of Tetlock’s forecasting methods[97]. Tetlock received significant media attention throughout 2024-2025, with appearances and coverage in outlets including the Financial Times, Bloomberg, Forbes, Newsweek, The Guardian, and Times Radio[98].

Several important questions remain about the scope and applicability of Tetlock’s forecasting methods:

Scalability to existential risks: How well do forecasting techniques validated on short and medium-term geopolitical questions transfer to low-probability, high-impact scenarios with limited historical precedent? The lack of feedback loops for century-scale predictions presents fundamental challenges for evaluating and improving long-term forecasts.

AI augmentation limits: As large language models achieve forecasting accuracy comparable to human forecasters, what is the optimal division of labor between human and machine intelligence in prediction tasks? Recent research suggests hybrid approaches may be superior, but the specific conditions favoring human versus AI forecasting remain unclear.

Institutional adoption barriers: Despite demonstrated accuracy improvements, why have forecasting tournaments and superforecaster methods seen limited adoption outside intelligence agencies and specialized platforms? Organizational resistance, incentive misalignment, and the multiple non-accuracy functions that expert predictions serve may present barriers beyond methodological validation.

Long-term forecast calibration: Can any systematic methods achieve meaningful calibration for predictions extending decades or centuries into the future, or are such forecasts inherently limited by irreducible uncertainty and the absence of feedback mechanisms for learning?

Information hazards in risk assessment: How should forecasting tournaments balance the value of detailed, specific predictions about existential risks against the potential for such forecasts to provide roadmaps for malicious actors or create self-fulfilling prophecies?

References

  1. Philip E. Tetlock, PhD | Annenberg School for Communication at the University of Pennsylvania

  2. Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab

  3. Philip Tetlock - PIK Professors - University of Pennsylvania

  4. Philip Tetlock | Alliance for Decision Education

  5. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  6. Philip Tetlock - PIK Professors - University of Pennsylvania

  7. Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab

  8. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  9. Philip Tetlock - PIK Professors - University of Pennsylvania

  10. Philip Tetlock - PIK Professors - University of Pennsylvania

  11. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  12. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  13. How to win at forecasting - Philip Tetlock | Edge.org

  14. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  15. Philip Tetlock - PIK Professors - University of Pennsylvania

  16. Philip E. Tetlock - Wikipedia

  17. Philip E. Tetlock - Wikipedia

  18. Philip Tetlock wins Grawemeyer Award (2008)

  19. Philip E. Tetlock - Wikipedia

  20. Philip E. Tetlock - Wikipedia

  21. Philip E. Tetlock - Wikipedia

  22. Philip Tetlock - PIK Professors - University of Pennsylvania

  23. Philip Tetlock - PIK Professors - University of Pennsylvania

  24. Philip E. Tetlock, PhD | Annenberg School for Communication at the University of Pennsylvania

  25. Philip E. Tetlock - Wikipedia

  26. Philip Tetlock - PIK Professors - University of Pennsylvania

  27. Philip Tetlock - PIK Professors - University of Pennsylvania

  28. Philip E. Tetlock - Wikipedia

  29. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  30. Philip E. Tetlock - Wikipedia

  31. Philip Tetlock - PIK Professors - University of Pennsylvania

  32. Philip Tetlock - PIK Professors - University of Pennsylvania

  33. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  34. Philip Tetlock - PIK Professors - University of Pennsylvania

  35. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  36. Philip Tetlock - PIK Professors - University of Pennsylvania

  37. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  38. How to win at forecasting - Philip Tetlock | Edge.org

  39. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  40. Good Judgment - About

  41. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  42. How to win at forecasting - Philip Tetlock | Edge.org

  43. Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab

  44. How to win at forecasting - Philip Tetlock | Edge.org

  45. Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab

  46. Overcoming Our Aversion to Acknowledging Our Ignorance | Cato Unbound

  47. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  48. Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab

  49. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  50. Evidence on good forecasting practices from the Good Judgment Project | AI Impacts

  51. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  52. How to win at forecasting - Philip Tetlock | Edge.org

  53. Conflicts of Interest and the Case of Auditor Independence (PDF)

  54. Conflicts of Interest and the Case of Auditor Independence | Semantic Scholar

  55. Conflicts of Interest and the Case of Auditor Independence | Semantic Scholar

  56. Evaluating Intelligence: A Competent Authority | National Academies

  57. Evaluating Intelligence: A Competent Authority | National Academies

  58. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  59. New Open Philanthropy Grantmaking Program: Forecasting | EA Forum

  60. Philip E. Tetlock - Wikipedia

  61. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  62. Philip Tetlock Faculty Page | University of Pennsylvania Psychology

  63. AI Risk Surveys | AI Impacts Wiki

  64. Adversarial Collaboration on AI Risk | Wiley Online Library

  65. Philip Tetlock Faculty Page | University of Pennsylvania Psychology

  66. Philip Tetlock Faculty Page | Wharton School

  67. Why is EA so enthusiastic about forecasting? | EA Forum

  68. Why is EA so enthusiastic about forecasting? | EA Forum

  69. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  70. Philip Tetlock Fireside Chat | EA Forum

  71. Interview with Prof Tetlock on epistemic modesty | EA Forum

  72. Prof. Philip Tetlock’s Forecasting Research | Founders Pledge

  73. Overcoming Our Aversion to Acknowledging Our Ignorance | Cato Unbound

  74. Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab

  75. Overcoming Our Aversion to Acknowledging Our Ignorance | Cato Unbound

  76. Evaluating Intelligence: A Competent Authority | National Academies

  77. Evaluating Intelligence: A Competent Authority | National Academies

  78. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast

  79. Adversarial Collaboration on AI Risk | Wiley Online Library

  80. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast

  81. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast

  82. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast

  83. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast

  84. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast

  85. Philip Tetlock interview | Conversations with Tyler

  86. Overcoming Our Aversion to Acknowledging Our Ignorance | Cato Unbound

  87. Evaluating Intelligence: A Competent Authority | National Academies

  88. Evaluating Intelligence: A Competent Authority | National Academies

  89. Philip Tetlock interview | Conversations with Tyler

  90. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast

  91. Fireside chat with Philip Tetlock | Effective Altruism

  92. Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast

  93. ForecastEx Appoints Philip Tetlock to Board | Business Wire

  94. ForecastEx Appoints Philip Tetlock to Board | Barchart

  95. Philip Tetlock Faculty Page | University of Pennsylvania Psychology

  96. Philip Tetlock Faculty Page | University of Pennsylvania Psychology

  97. Good Judgment Press & News

  98. Good Judgment Press & News