
Nuño Sempere

  • Primary Role: Superforecaster, researcher, entrepreneur
  • Key Organizations: Co-founder of Samotsvety Forecasting; founder of Sentinel; runs Shapley Maximizers OÜ consultancy
  • Notable Achievements: Samotsvety won the CSET-Foretell competition by approximately a 2x margin over second place; ranked 2nd all-time on the INFER platform
  • Core Expertise: Forecasting methodology, AI timelines, quantified uncertainty, risk assessment
  • Criticisms: Communication style described as “bellicose” and sometimes unproductive; skeptical of high AI existential risk estimates
  • Current Focus: Building Sentinel as an early-warning system for global catastrophes; independent consulting

Sources:
  • Official Website: alignmentforum.org
  • Wikipedia: en.wikipedia.org
  • EA Forum: forum.effectivealtruism.org

Nuño Sempere (born October 10, 1998) is a Spanish forecaster, researcher, and entrepreneur known for exceptional performance in forecasting competitions and for critical analysis of existential risk estimates. He co-founded Samotsvety, a forecasting group that won the CSET-Foretell competition “by an absolutely obscene margin, around twice as good as the next-best team in terms of the relative Brier score,” as one profile of the group put it.1 Scott Alexander has described Samotsvety members as “some of the best superforecasters in the world.”2

Sempere currently leads Sentinel, a non-profit organization focused on early detection and response to global catastrophes including pandemics, wars, and financial crises that could kill over one million people.3 The organization processes millions of news items through automated scrapers to identify emerging risks and publishes weekly “Sentinel minutes” providing curated analysis of global catastrophic risks. He also runs Shapley Maximizers OÜ, a consultancy specializing in “niche estimation, evaluation, and impact auditing” for value-producing organizations.4

Beyond his operational work, Sempere has become a prominent voice critiquing aspects of the Effective Altruism community and questioning high existential risk estimates from AI. His 2023 “skepticism braindump” challenged AI doom probabilities around 80% by 2070, arguing they may reflect “selection effects, social pressures, and methodological issues” within the rationalist and EA communities.5 He has consulted with major AI labs and institutions, managed teams of 10-20 forecasters, and contributed significantly to forecasting methodology research.6

Sempere studied Mathematics and Philosophy but dropped out due to dissatisfaction with the educational system’s inefficiency.7 He subsequently pursued development economics and maintained interests in Spanish poetry and literature, having previously written a popular Spanish literature blog.

His forecasting career began on prediction platforms including Good Judgment Open and CSET-Foretell.7 In 2020, he served as a Summer Research Fellow at Oxford’s Future of Humanity Institute, where he met fellow forecaster Misha Yagudin and both developed their forecasting expertise.8 That same year, he received a grant from the Long Term Future Fund to conduct “independent research on forecasting and optimal paths to improve the long-term.”7

During this period, Sempere worked at the Quantified Uncertainty Research Institute (QURI) on longtermism, forecasting, and quantification research.7 At QURI, he built Metaforecast.org, a search tool that aggregates predictions from multiple forecasting platforms and which he continues to maintain. He also published a Forecasting Newsletter that accumulated thousands of subscribers before he discontinued it, judging his time better spent elsewhere.7

Sempere helped organize the European Summer Program on Rationality in 2017, 2018, 2019, 2020, and 2022.7 He also spent time in the Bahamas as part of the FTX EA Fellowship.7

Sempere is a founding member of Samotsvety, a forecasting collective that achieved extraordinary success in competitive forecasting. The group won the CSET-Foretell forecasting competition by performing approximately twice as well as the second-place team in terms of relative Brier score.2

Samotsvety’s track record includes multiple first-place finishes on the INFER/CSET-Foretell platform:

  • 2020: 1st place with relative Brier score of -0.912 versus -0.062 for second place; Samotsvety members ranked 5th, 6th, and 7th individually9
  • 2021: 1st place with score of -3.259 versus -0.889 for second place and -0.267 for Pro Forecasters; members ranked 1st, 2nd, 4th, and 5th individually9
  • 2022: 1st place despite reduced participation9

As of early 2024, Samotsvety team members held the 1st, 2nd, 3rd, and 4th positions all-time on INFER rankings.9 The team also placed 4th on the Insight Prediction leaderboard as of September 2022, notably due to a correct bet on the Russian invasion of Ukraine.9

Sempere’s personal forecasting achievements include ranking in the top 5 during INFER’s first season, achieving 2nd best performance in the second season, and holding the 2nd place all-time position as of February 2024.9 On Good Judgment Open, his Brier score of 0.206 compared favorably to the median of 0.29 (a ratio of 0.71).9
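For context, the Brier score is the mean squared error between probabilistic forecasts and binary outcomes, so lower is better; “relative” scores on platforms like INFER are reported against a crowd benchmark, which is why the winning scores above are negative (beating the crowd). A minimal sketch in Python, with hypothetical forecasts:

```python
# Brier score for binary questions: mean of (p - outcome)^2; lower is better.
def brier(forecasts):
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical (probability, outcome) pairs -- not real INFER or GJ Open data.
example = [(0.9, 1), (0.2, 0), (0.7, 1), (0.6, 0)]
print(brier(example))  # 0.125

# The Good Judgment Open comparison cited above: 0.206 vs. a 0.29 median.
print(round(0.206 / 0.29, 2))  # 0.71
```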

Sentinel: Global Catastrophe Early Warning


In recent years, Sempere founded Sentinel, described as a “free early-warning system for global catastrophes,” defined as events that could kill over one million people.3 The organization focuses on high-impact catastrophes including pandemics, wars, and major financial crises, using scraping technology to process millions of news items and maintaining a foresight team for large-scale and existential risks.3
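The article does not describe Sentinel’s internals, but the scrape-then-triage pattern it gestures at can be illustrated with a minimal sketch; the feed URL, watchlist terms, and escalation step below are entirely hypothetical:

```python
# Hypothetical sketch of a scrape-and-triage loop; Sentinel's actual feeds,
# keywords, thresholds, and escalation process are not described in this article.
import feedparser

FEEDS = ["https://example.org/world-news.rss"]  # placeholder feed URL
WATCHLIST = {"outbreak", "mobilization", "sovereign default", "reactor"}

def should_escalate(entry) -> bool:
    """Flag items whose title mentions any watchlist term."""
    title = entry.get("title", "").lower()
    return any(term in title for term in WATCHLIST)

for url in FEEDS:
    for entry in feedparser.parse(url).entries:
        if should_escalate(entry):
            # In a real system this would route to a human foresight team.
            print("escalate:", entry["title"])
```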

In late 2024, Sempere brought on Rai Sur as cofounder and CTO.10 Sur previously founded and served as CTO of the crypto fintech startup Alongside (now Universal), which raised over $13 million from a16z and remained operational for at least five years.6 Sur designed systems that secured $6 million of assets in smart contracts with no security breaches and has experience managing large budgets and multiple employees.6

Sentinel’s foresight team includes Lisa (surname redacted), Vidur Kapur, Tolga Bilge, Leif Sigrúnsson, and an anonymous expert geopolitics forecaster.6 The organization publishes weekly “Sentinel minutes” that have gained traction within the risk assessment community, with supporters noting they’ve become a primary news source for major global developments.10

In Q4 2024/Q1 2025, Sentinel sought additional funding to transition to full-time operations, incorporate as a US non-profit, expand the foresight and reserve teams, increase operational capacity, and establish an emergency response fund.10 The organization received early funding via Manifund, which Sempere described as “useful to not have money be a bottleneck.”11 Sentinel is supported by Impact Ops for operations, which enabled the organization to register as a 501(c)(3).6

Recent forecasts from Sentinel’s team in early 2025 included a 73% probability (50-90% range) that the US would carry out an attack on Venezuelan territory before the end of 2025, and a 47% probability (20-70% range) that Nicolás Maduro would remain President of Venezuela through March 2026.12
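Forecasts like these combine several forecasters’ probabilities into a headline number plus a range. One common aggregation method, which Samotsvety members have discussed favorably elsewhere (an attribution not made in this article), is the geometric mean of odds; a sketch with hypothetical inputs:

```python
import numpy as np

# Hypothetical individual forecasts for one binary question (illustrative
# numbers, not Sentinel's actual internal estimates).
probs = np.array([0.50, 0.65, 0.73, 0.80, 0.90])

# Aggregate via the geometric mean of odds, then convert back to probability.
odds = probs / (1 - probs)
agg_odds = np.exp(np.log(odds).mean())
p_agg = agg_odds / (1 + agg_odds)

print(round(float(p_agg), 2))                  # ~0.74, the headline number
print(float(probs.min()), float(probs.max()))  # a crude range, 0.5-0.9
```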

Sempere founded and runs Shapley Maximizers OÜ, a consultancy registered in Estonia on May 3, 2023.4 The company’s mission is “niche estimation, evaluation, and impact auditing for value-producing people/organizations to add clarity and improve prioritization via forecasting and judgment.”4

Core competencies include research on forecasting incentives, AI progress, prediction markets, and scoring rules, as well as project evaluations. Notable work includes an evaluation of the EA Wiki that received praise for rigor.4 Sempere has consulted with major AI labs and other large institutions, and has managed teams of 10-20 forecasters in various contexts.6

Financial data for Shapley Maximizers OÜ (2025 forecast, with change versus the prior year where reported) shows:

  • Turnover: €6,115 (-91%)
  • Average monthly turnover: €510
  • Total profit: €97,274
  • Net profit: €2,846
  • Balance sheet size: €97,283 (+3%)
  • Profit margin: 47% (-12%)

The company has a reputation score of 640 and a credit score of 0.01.13

Sempere has stated that he established Shapley Maximizers as a “very profitable” consultancy, used it to bootstrap initial funding for Sentinel, and is now winding it down as he focuses on Sentinel.14

Sempere has produced significant research on forecasting methodology, AI assessment, and effective altruism evaluation.

Key research outputs include work on incentive problems in forecasting, prediction market design, and technological discontinuities.4 He co-authored a paper with Alex Lawsen on “Alignment Problems With Current Forecasting Platforms,” published on arXiv in June 2021.15 He has also explored practical limitations of forecasting methodologies and their application to AI progress.

In 2023, Sempere created approximately 700 AI safety forecasting questions for Open Philanthropy as part of work with the Arb Research team, along with documents on operationalizing FLOPs (floating point operations) and resolution councils.16 He authored “Hurdles of using forecasting as a tool for making sense of AI progress,” commissioned by Open Philanthropy, outlining challenges in AI forecasting.16
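Operationalizing FLOPs typically means pinning down how compute is counted so that forecasting questions resolve unambiguously. A widely used heuristic in the field (offered purely as an illustration, not a claim about the contents of those documents) estimates training compute as roughly six FLOPs per parameter per training token:

```python
# A widely used heuristic: training compute ~= 6 FLOPs per parameter per
# training token. Illustrative only; not necessarily the operationalization
# in Sempere's Open Philanthropy documents.
params = 175e9   # parameters, roughly GPT-3 scale
tokens = 300e9   # training tokens
flops = 6 * params * tokens
print(f"{flops:.2e}")  # 3.15e+23
```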

Sempere conducted a project through the Quantified Uncertainty Research Institute on valuing research works by eliciting comparisons from EA researchers.17 This work revealed significant disagreements among researchers about research value, sometimes spanning several orders of magnitude.
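The elicitation approach asks researchers for pairwise value ratios (“work A is worth k times work B”) and then reconciles the possibly inconsistent answers into one scale. One simple way to do that, sketched here with hypothetical judgments (an illustration, not necessarily QURI’s exact procedure), is least squares in log-space:

```python
import numpy as np

# Hypothetical pairwise judgments: (i, j, r) means "work i is judged
# r times as valuable as work j". Note the deliberate inconsistency.
comparisons = [(0, 1, 10.0), (1, 2, 5.0), (0, 2, 40.0)]
n_works = 3

# Least-squares system in log-space: log v_i - log v_j = log r.
A = np.zeros((len(comparisons) + 1, n_works))
b = np.zeros(len(comparisons) + 1)
for row, (i, j, r) in enumerate(comparisons):
    A[row, i], A[row, j] = 1.0, -1.0
    b[row] = np.log(r)
A[-1, 0] = 1.0  # pin work 0 at value 1 so the system is identified
log_v, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.exp(log_v))  # reconciled relative values on a common scale
```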

He has performed evaluations of various organizations and grant programs:

  • EA Wiki: External evaluation praised for thoroughness4
  • Long-Term Future Fund: Analysis of 2018-2019 grantees (23 grants totaling $803,650), finding 26% more successful than expected ($178,500), 22% as expected ($147,250), with 5 grants ($195,000) not evaluated due to conflicts of interest18
  • Longtermist organizations: Shallow evaluations of organizations including ALLFED, APPGFG, CSER, CSET, and FLI19

In 2021, Sempere developed cost-effectiveness models for AI safety, noted as a rare quantitative effort comparable to GiveWell-style analysis, though he highlighted issues like long feedback loops and field sensitivity that make such analysis challenging.20
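GiveWell-style analysis reduces to expected value per dollar; a stylized calculation showing the form of such a model (all numbers hypothetical, not Sempere’s actual figures):

```python
# Stylized expected-value-per-dollar calculation. All numbers are
# hypothetical; the article only says Sempere built such models in 2021.
cost = 1_000_000        # dollars spent on a hypothetical AI safety project
p_success = 0.01        # chance the project meaningfully reduces risk
value_if_success = 1e9  # stylized value of that risk reduction, in dollars
ev_per_dollar = p_success * value_if_success / cost
print(ev_per_dollar)    # 10.0 units of value per dollar, on these inputs
```

The long feedback loops Sempere flagged show up here as deep uncertainty in p_success and value_if_success, which is what makes the analysis challenging.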

Sempere has conducted significant work on forecasting the arrival of human-level AI systems and has explored the limitations of current approaches.21 He edited “A Gentle Introduction to Risk Frameworks Beyond Forecasting,” written by Nathaniel Cooke, which covers how risk scholars conceptualize disaster risks, Normal Accident Theory, and methods professionals use to study the future.22

Skepticism Toward High AI Existential Risk Estimates


In January 2023, Sempere published “My highly personal skepticism braindump on existential risk from artificial intelligence,” critiquing AI doom probabilities around 80% by 2070.5 He framed these high estimates as potentially “overhyped due to selection effects, social pressures, and methodological issues,” while acknowledging his views as “highly personal” and reactive to rationalist/EA worldviews from 2016-2019.5 He noted the document has “significant weaknesses” including verbalization-to-rationalization risks and mixing obvious with obscure points.5

Sempere’s main arguments include:

Selection Effects: He argues that high existential risk estimates may arise from communities selecting for alarmism. For example, he claims that CFAR (Center for Applied Rationality) “fetishized the end of the world” to justify its importance, injecting “doomy narratives” into the community.5

Conjunctiveness and Imperfect Concepts: He expresses skepticism about long conjunctive chains (multiple failures needed for doom scenarios) applied to near-term AI, arguing they rely on “in-the-limit” superintelligence assumptions not action-guiding for current systems. He critiques Nate Soares’ rebuttal to Joe Carlsmith’s power-seeking AI report as potentially biased under social pressure.23

Social Dynamics: Sempere describes feeling “uneasy with pressure to dismiss counterarguments probabilistically,” characterizing MIRI and CFAR as “one-sided doomers without paid counterpoints.”23

Forecasting Context: As part of Samotsvety, Sempere contributes to forecasts that tend toward lower AI risk estimates compared to some other forecasting groups.23

Responses to his skepticism have been mixed. Some commenters view it as healthy sanity-checking of “gung-ho advocates,” while others have pushed back on specific claims, arguing for instance that MIRI does provide non-doom arguments and that disagreements on priors shouldn’t be dismissed.23 His own framing, as noted above, acknowledges the document’s methodological limitations.

Effective Altruism Community Engagement and Criticism


Sempere has been an active and increasingly critical voice within the Effective Altruism community. He was formerly a prolific contributor to the EA Forum and LessWrong, though he now primarily posts on his personal site (nunosempere.com/blog) due to dissatisfaction with EA Forum changes.14

In March 2024, Sempere published “Unflattering aspects of Effective Altruism,” outlining several concerns:24

EA Forum Stewardship: He criticized the forum’s shift toward “catering to marginal/newbie users” with more introductory content, comparing it unfavorably to Reddit. He questioned whether recent changes justify $2 million per year in funding and 6-8 full-time staff.25 Sempere argued that expansion during the FTX era was a “bad judgment call” now requiring downsizing, and expressed concern about moderation against “disagreeable voices.”25

Leadership Accountability: He claimed EA leaders like Holden Karnofsky at Open Philanthropy prioritize philosophy and funders over community input, citing instances of leaders ignoring comments.24

Philosophical Seduction: Sempere argued that while EA ideas are appealing, they can lead to ineffective projects, suggesting the philosophy sometimes masks poor execution.24

Open Philanthropy’s Criminal Justice Reform: He analyzed approximately $200 million donated by Open Philanthropy to criminal justice reform from 2013-2021, questioning the sincerity and effectiveness of this focus area.26 He suggested that “politics is the mind-killer” may have led to degraded reasoning, principal-agent problems, and motivated reasoning. Specific grants he critiqued included $2.5 million to Color Of Change Education Fund (approximately 50% of their one-year budget) and $10,000 to Photo Patch Foundation, which he compared unfavorably to deworming interventions costing $0.35-$0.97 per treatment.26

Multiple community members have described Sempere’s communication style as “bellicose” and sometimes unproductive for fostering dialogue on charged topics.27 For example, his phrasing “I disagree with the EA Forum’s approach to life” (later softened) caused confusion among readers.27 Critics also note that he sometimes alternates between obvious and controversial points, a pattern he himself flags as risking rationalization over genuine verbalization of concerns.5

However, defenders emphasize that his critiques often identify real issues even if the framing could be improved. Some argue that the “inferential distance” between Sempere and other community members contributes to misunderstandings, and that his substantive points about feedback loops in “EA machinery” merit serious consideration.27

Sempere created his own “soothing frontend” for the EA Forum (forum.nunosempere.com), which loads in approximately 0.5 seconds versus roughly 5 seconds for the official site and excludes certain users whose contributions he considers low in signal-to-noise ratio.28 He remains subscribed to the EA Forum RSS feed and skims posts, but has largely moved his own writing to his personal blog.14

Sempere has received funding from various sources within the EA and forecasting communities:

  • Long Term Future Fund: Received a grant of undisclosed amount for “independent research on forecasting and optimal paths to improve the long-term” (2020)7
  • Open Philanthropy: Received funding for AI forecasting documents and question creation (2023)16
  • Manifund: Received early funding for Sentinel via Manifund’s regranting program, which he described as “psychologically motivating” and ensuring “money wouldn’t be a bottleneck”11

Through Manifund, Sempere has directed grants to projects including Riesgos Catastróficos Globales (focusing on catastrophic risks in Spanish-speaking communities) and has considered funding for APART research and forecasting experimentation.11

Several aspects of Sempere’s work and impact remain uncertain or subject to debate:

Sentinel’s Long-term Viability: While Sentinel has established infrastructure for processing news and publishing weekly analyses, it remains unclear whether the organization can sustain operations long-term and whether its early-warning approach will prove effective at preventing or mitigating large-scale catastrophes.

AI Risk Assessment Accuracy: Sempere’s skepticism toward high AI existential risk estimates positions him against some prominent voices in the AI safety community. The accuracy of his position versus higher-doom-probability forecasts remains highly uncertain and depends on the development trajectory of AI systems.

Community Influence Trade-offs: Sempere’s direct, critical communication style has made him a polarizing figure. While some value his willingness to challenge consensus views, others argue his approach hinders productive dialogue. The net impact of his communication style on community epistemics remains debatable.

Forecasting Methodology Limitations: While Sempere has achieved exceptional results in forecasting competitions, the applicability of these skills to long-term, low-probability catastrophic risks (where feedback is sparse or nonexistent) remains an open question that he himself has explored in his research on forecasting limitations.

EA Critique Accuracy: The extent to which Sempere’s criticisms of EA institutions accurately identify problems versus reflect idiosyncratic preferences or incomplete information is contested within the community. His critiques of Open Philanthropy’s criminal justice reform funding, for instance, involve complex cause prioritization questions without clear empirical resolution.

  1. Alethios Substack - With Nuño Sempere: Superforecasting
  2. EA Forum - Sentinel: Early Detection and Response for Global Catastrophes
  3. YouTube - With Nuño Sempere: Superforecasting and global catastrophic risks
  4. Nuño Sempere - Consulting
  5. Nuño Sempere - My highly personal skepticism braindump on existential risk from artificial intelligence
  6. EA Forum - Sentinel: Early Detection and Response for Global Catastrophes
  7. LessWrong - NunoSempere user profile
  8. NASDAQ - A Look at Samotsvety Forecasting: One of the World’s Best Predictors of the Future
  9. Samotsvety - Track Record
  10. Manifund - Fund Sentinel for Q4 2024
  11. Manifund - NunoSempere
  12. Sentinel Blog - Rising China-Japan tensions, Iran developments
  13. Inforegister - Shapley Maximizers OÜ
  14. EA Forum - NunoSempere user profile
  15. Semantic Scholar - Alignment Problems With Current Forecasting Platforms
  16. GitHub - NunoSempere/clarivoyance
  17. EA Forum - Valuing research works by eliciting comparisons from EA researchers
  18. EA Forum - 2018-2019 Long-Term Future Fund grantees: How did they do?
  19. Nuño Sempere - Shallow evaluations of longtermist organizations
  20. EA Forum - Is there an AI Safety GiveWell?
  21. Epoch AI - Direct Approach Review - Nuño Sempere
  22. LessWrong - A Gentle Introduction to Risk Frameworks Beyond Forecasting
  23. EA Forum - My highly personal skepticism braindump on existential risk
  24. Nuño Sempere - Unflattering aspects of EA
  25. Nuño Sempere - EA Forum Stewardship
  26. Nuño Sempere - Open Philanthropy’s Criminal Justice Reform bet
  27. EA Forum - Unflattering aspects of Effective Altruism (discussion)
  28. Nuño Sempere Forum - Alternative EA Forum frontend