Nuño Sempere
Quick Assessment
| Dimension | Assessment |
|---|---|
| Primary Role | Superforecaster, researcher, entrepreneur |
| Key Organizations | Co-founder of Samotsvety Forecasting; founder of Sentinel; runs Shapley Maximizers OÜ consultancy |
| Notable Achievements | Samotsvety won the CSET-Foretell competition by an approximately 2x margin over second place; ranked 2nd all-time on INFER platform |
| Core Expertise | Forecasting methodology, AI timelines, quantified uncertainty, risk assessment |
| Criticisms | Communication style described as “bellicose” and sometimes unproductive; skeptical of high AI existential risk estimates |
| Current Focus | Building Sentinel as an early-warning system for global catastrophes; independent consulting |
Key Links
| Source | Link |
|---|---|
| Alignment Forum | alignmentforum.org |
| Wikipedia | en.wikipedia.org |
| EA Forum | forum.effectivealtruism.org |
Overview
Nuño Sempere (born October 10, 1998) is a Spanish forecaster, researcher, and entrepreneur known for exceptional performance in forecasting competitions and critical analysis of existential risk estimates. He co-founded Samotsvety, a forecasting group that won the CSET-Foretell competition “by an absolutely obscene margin, around twice as good as the next-best team in terms of the relative Brier score,” according to descriptions of their performance.1 Scott Alexander has described Samotsvety members as “some of the best superforecasters in the world.”2
Sempere currently leads Sentinel, a non-profit organization focused on early detection and response to global catastrophes including pandemics, wars, and financial crises that could kill over one million people.3 The organization processes millions of news items through automated scrapers to identify emerging risks and publishes weekly “Sentinel minutes” providing curated analysis of global catastrophic risks. He also runs Shapley Maximizers OÜ, a consultancy specializing in “niche estimation, evaluation, and impact auditing” for value-producing organizations.4
Beyond his operational work, Sempere has become a prominent voice critiquing aspects of the Effective Altruism community and questioning high existential risk estimates from AI. His 2023 “skepticism braindump” challenged AI doom probabilities around 80% by 2070, arguing they may reflect “selection effects, social pressures, and methodological issues” within the rationalist and EA communities.5 He has consulted with major AI labs and institutions, managed teams of 10-20 forecasters, and contributed significantly to forecasting methodology research.6
Background and Early Career
Sempere studied Mathematics and Philosophy but dropped out due to dissatisfaction with the educational system’s inefficiency.7 He subsequently pursued development economics and maintained interests in Spanish poetry and literature, having previously written a popular Spanish literature blog.
His forecasting career began on prediction platforms including Good Judgment Open and CSET-Foretell.7 Around 2020, he met fellow forecaster Misha Yagudin at a summer fellowship at Oxford’s Future of Humanity Institute, where both developed their forecasting expertise.8 In 2020, Sempere served as a Future of Humanity Institute Summer Research Fellow and received a grant from the Long Term Future Fund to conduct “independent research on forecasting and optimal paths to improve the long-term.”7
During this period, Sempere worked at the Quantified Uncertainty Research Institute (QURI) on longtermism, forecasting, and quantification research.7 At QURI, he programmed Metaforecast.org, a search tool aggregating predictions from multiple forecasting platforms, which he continues to maintain. He also published a Forecasting Newsletter that accumulated thousands of subscribers before he discontinued it as the opportunity cost of his time grew.7
Sempere has been involved in organizing the European Summer Program on Rationality during multiple years (2017, 2018, 2019, 2020, and 2022).7 He spent time in the Bahamas as part of the FTX EA Fellowship.7
Samotsvety Forecasting
Sempere is a founding member of Samotsvety, a forecasting collective that achieved extraordinary success in competitive forecasting. The group won the CSET-Foretell forecasting competition by performing approximately twice as well as the second-place team in terms of relative Brier score.2
Samotsvety’s track record includes multiple first-place finishes on the INFER/CSET-Foretell platform:
- 2020: 1st place with relative Brier score of -0.912 versus -0.062 for second place; Samotsvety members ranked 5th, 6th, and 7th individually9
- 2021: 1st place with score of -3.259 versus -0.889 for second place and -0.267 for Pro Forecasters; members ranked 1st, 2nd, 4th, and 5th individually9
- 2022: 1st place despite reduced participation9
As of early 2024, Samotsvety team members held the 1st, 2nd, 3rd, and 4th positions all-time on INFER rankings.9 The team also placed 4th on the Insight Prediction leaderboard as of September 2022, notably due to a correct bet on the Russian invasion of Ukraine.9
Sempere’s personal forecasting achievements include ranking in the top 5 during INFER’s first season, achieving 2nd best performance in the second season, and holding the 2nd place all-time position as of February 2024.9 On Good Judgment Open, his Brier score of 0.206 compared favorably to the median of 0.29 (a ratio of 0.71).9
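For readers unfamiliar with the metric, the ordinary Brier score for binary questions is the mean squared error between forecast probabilities and outcomes: 0 is perfect, and an uninformative always-50% forecaster scores 0.25. A minimal sketch (not the platforms’ actual scoring code):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities (0-1) and
    binary outcomes (0 or 1). Lower is better; an always-50%
    forecaster scores exactly 0.25."""
    assert forecasts and len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A sharp, mostly-correct forecaster beats the 0.25 coin-flip baseline:
print(brier_score([0.9, 0.2, 0.7, 0.1], [1, 0, 1, 0]))  # 0.0375

# Sempere's 0.206 against the Good Judgment Open median of 0.29:
print(round(0.206 / 0.29, 2))  # 0.71
```

The “relative Brier” figures quoted for the INFER seasons above are not this raw form but are measured against a crowd baseline, which is why more negative scores indicate better performance (first place at -0.912 versus second place at -0.062).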
Sentinel: Global Catastrophe Early Warning
In recent years, Sempere founded Sentinel, described as a “free early-warning system for global catastrophes” that could kill over one million people.3 The organization focuses on high-impact catastrophes including pandemics, wars, and major financial crises, using advanced scraping technology to process millions of news items and maintain a foresight team for large-scale and existential risks.3
In late 2024, Sempere brought on Rai Sur as cofounder and CTO.10 Sur previously founded and served as CTO of the crypto fintech startup Alongside (now Universal), which raised over $13 million from a16z and remained operational for at least five years.6 Sur designed systems that secured $6 million of assets in smart contracts with no security breaches and has experience managing large budgets and multiple employees.6
Sentinel’s foresight team includes Lisa (surname redacted), Vidur Kapur, Tolga Bilge, Leif Sigrúnsson, and an anonymous expert geopolitics forecaster.6 The organization publishes weekly “Sentinel minutes” that have gained traction within the risk assessment community, with supporters noting they’ve become a primary news source for major global developments.10
In Q4 2024/Q1 2025, Sentinel sought additional funding to transition to full-time operations, incorporate as a US non-profit, expand the foresight and reserve teams, increase operational capacity, and establish an emergency response fund.10 The organization received early funding via Manifund, which Sempere described as “useful to not have money be a bottleneck.”11 Sentinel is supported by Impact Ops for operations, which enabled the organization to register as a 501(c)(3).6
Recent forecasts from Sentinel’s team in early 2025 included a 73% probability (50-90% range) that the US would carry out an attack on Venezuelan territory before the end of 2025, and a 47% probability (20-70% range) that Nicolás Maduro would remain President of Venezuela through March 2026.12
Shapley Maximizers Consultancy
Sempere founded and runs Shapley Maximizers OÜ, a consultancy registered in Estonia on May 3, 2023.4 The company’s mission is “niche estimation, evaluation, and impact auditing for value-producing people/organizations to add clarity and improve prioritization via forecasting and judgment.”4
Core competencies include research on forecasting incentives, AI progress, prediction markets, and scoring rules, as well as project evaluations. Notable work includes an evaluation of the EA Wiki that received praise for rigor.4 Sempere has consulted with major AI labs and other large institutions, and has managed teams of 10-20 forecasters in various contexts.6
Financial data for Shapley Maximizers OÜ shows:
| Metric | 2025 Forecast | Change vs. Prior Year |
|---|---|---|
| Turnover | €6,115 | -91% |
| Average monthly turnover | €510 | N/A |
| Total profit | €97,274 | N/A |
| Net profit | €2,846 | N/A |
| Balance sheet size | €97,283 | +3% |
| Profit margin | 47% | -12% |
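As a quick sanity check on the table, the reported 47% profit margin is consistent with net profit divided by turnover (assuming that is the definition used in the registry data):

```python
# 2025 forecast figures from the table above, in EUR
turnover = 6115
net_profit = 2846

margin_pct = 100 * net_profit / turnover
print(round(margin_pct))  # 47
```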
The company has a reputation score of 640 and a credit score of 0.01.13
Sempere has stated that he established Shapley Maximizers as a “very profitable” consultancy which he used to bootstrap initial funding for Sentinel, and is now winding it down as he focuses on Sentinel.14
Research Contributions
Sempere has produced significant research on forecasting methodology, AI assessment, and effective altruism evaluation.
Forecasting Methodology
Key research outputs include work on incentive problems in forecasting, prediction market design, and technological discontinuities.4 He co-authored a paper with Alex Lawsen on “Alignment Problems With Current Forecasting Platforms,” published on arXiv in June 2021.15 He has also explored practical limitations of forecasting methodologies and their application to AI progress.
In 2023, Sempere created approximately 700 AI safety forecasting questions for Open Philanthropy as part of work with the Arb Research team, along with documents on operationalizing FLOPs (floating point operations) and resolution councils.16 He authored “Hurdles of using forecasting as a tool for making sense of AI progress,” commissioned by Open Philanthropy, outlining challenges in AI forecasting.16
Evaluation and Cost-Effectiveness
Sempere conducted a project through the Quantified Uncertainty Research Institute on valuing research works by eliciting comparisons from EA researchers.17 This work revealed significant disagreements among researchers about research value, sometimes spanning several orders of magnitude.
He has performed evaluations of various organizations and grant programs:
- EA Wiki: External evaluation praised for thoroughness4
- Long-Term Future Fund: Analysis of 2018-2019 grantees (23 grants totaling $803,650), finding 26% more successful than expected ($178,500), 22% as expected ($147,250), with 5 grants ($195,000) not evaluated due to conflicts of interest18
- Longtermist organizations: Shallow evaluations of organizations including ALLFED, APPGFG, CSER, CSET, and FLI19
In 2021, Sempere developed cost-effectiveness models for AI safety, noted as a rare quantitative effort comparable to GiveWell-style analysis, though he highlighted issues like long feedback loops and field sensitivity that make such analysis challenging.20
AI Timelines and Risk Assessment
Sempere has conducted significant work on forecasting the arrival of human-level AI systems and has explored the limitations of current approaches.21 He edited “A Gentle Introduction to Risk Frameworks Beyond Forecasting,” written by Nathaniel Cooke, which covers how disaster risks are conceptualized by risk scholars, Normal Accident Theory, and methods professionals use to study the future.22
Skepticism Toward High AI Existential Risk Estimates
In January 2023, Sempere published “My highly personal skepticism braindump on existential risk from artificial intelligence,” critiquing AI doom probabilities around 80% by 2070.5 He framed these high estimates as potentially “overhyped due to selection effects, social pressures, and methodological issues,” while acknowledging his views as “highly personal” and reactive to rationalist/EA worldviews from 2016-2019.5 He noted the document has “significant weaknesses” including verbalization-to-rationalization risks and mixing obvious with obscure points.5
Sempere’s main arguments include:
Selection Effects: He argues that high existential risk estimates may arise from communities selecting for alarmism. For example, he claims that CFAR (the Center for Applied Rationality) “fetishized the end of the world” to justify its importance, injecting “doomy narratives” into the community.5
Conjunctiveness and Imperfect Concepts: He expresses skepticism about long conjunctive chains (multiple failures needed for doom scenarios) applied to near-term AI, arguing they rely on “in-the-limit” superintelligence assumptions not action-guiding for current systems. He critiques Nate Soares’ rebuttal to Joe Carlsmith’s power-seeking AI report as potentially biased under social pressure.23
Social Dynamics: Sempere describes feeling “uneasy with pressure to dismiss counterarguments probabilistically,” characterizing MIRI and CFAR as “one-sided doomers without paid counterpoints.”23
Forecasting Context: As part of Samotsvety, Sempere contributes to forecasts that tend toward lower AI risk estimates compared to some other forecasting groups.23
Responses to his skepticism have been mixed. Some commenters view it as healthy sanity-checking of “gung-ho advocates,” while others have pushed back on specific claims. For instance, some argue that MIRI does provide non-doom arguments and that disagreements on priors shouldn’t be dismissed.23 No direct claims in the research data label his skepticism as “exaggerated or misleading,” though his own framing acknowledges methodological limitations.
Effective Altruism Community Engagement and Criticism
Sempere has been an active and increasingly critical voice within the Effective Altruism community. He was formerly a prolific contributor to the EA Forum and LessWrong, though he now primarily posts on his personal site (nunosempere.com/blog) due to dissatisfaction with EA Forum changes.14
Critiques of EA Institutions
In March 2024, Sempere published “Unflattering aspects of Effective Altruism,” outlining several concerns:24
EA Forum Stewardship: He criticized the forum’s shift toward “catering to marginal/newbie users” with more introductory content, comparing it unfavorably to Reddit. He questioned whether recent changes justify $2 million per year in funding and 6-8 full-time staff.25 Sempere argued that expansion during the FTX era was a “bad judgment call” now requiring downsizing, and expressed concern about moderation against “disagreeable voices.”25
Leadership Accountability: He claimed EA leaders like Holden Karnofsky at Open Philanthropy prioritize philosophy and funders over community input, citing instances of leaders ignoring comments.24
Philosophical Seduction: Sempere argued that while EA ideas are appealing, they can lead to ineffective projects, suggesting the philosophy sometimes masks poor execution.24
Open Philanthropy’s Criminal Justice Reform: He analyzed approximately $200 million donated by Open Philanthropy to criminal justice reform from 2013-2021, questioning the sincerity and effectiveness of this focus area.26 He suggested that “politics is the mind-killer” may have led to degraded reasoning, principal-agent problems, and motivated reasoning. Specific grants he critiqued included $2.5 million to Color Of Change Education Fund (approximately 50% of their one-year budget) and $10,000 to Photo Patch Foundation, which he compared unfavorably to deworming interventions costing $0.35-$0.97 per treatment.26
Communication Style Concerns
Multiple community members have described Sempere’s communication style as “bellicose” and sometimes unproductive for fostering dialogue on charged topics.27 For example, his phrasing “I disagree with the EA Forum’s approach to life” (later softened) caused confusion among readers.27 Critics note he sometimes alternates between obvious and controversial points, which can risk rationalization over genuine verbalization of concerns.5
However, defenders emphasize that his critiques often identify real issues even if the framing could be improved. Some argue that the “inferential distance” between Sempere and other community members contributes to misunderstandings, and that his substantive points about feedback loops in “EA machinery” merit serious consideration.27
Alternative Infrastructure
Sempere created his own “soothing frontend” for the EA Forum (forum.nunosempere.com) that loads in approximately 0.5 seconds versus the official site’s approximately 5 seconds, and excludes certain users whose posts he considers low in signal-to-noise ratio.28 He remains subscribed to the EA Forum RSS feed and skims posts, but has largely moved his own writing to his personal blog.14
Funding and Financial Information
Sempere has received funding from various sources within the EA and forecasting communities:
- Long Term Future Fund: Received a grant of undisclosed amount for “independent research on forecasting and optimal paths to improve the long-term” (2020)7
- Open Philanthropy: Received funding for AI forecasting documents and question creation (2023)16
- Manifund: Received early funding for Sentinel via Manifund’s regranting program, which he described as “psychologically motivating” and ensuring “money wouldn’t be a bottleneck”11
Through Manifund, Sempere has directed grants to projects including Riesgos Catastróficos Globales (focusing on catastrophic risks in Spanish-speaking communities) and has considered funding for APART research and forecasting experimentation.11
Key Uncertainties
Several aspects of Sempere’s work and impact remain uncertain or subject to debate:
Sentinel’s Long-term Viability: While Sentinel has established infrastructure for processing news and publishing weekly analyses, it remains unclear whether the organization can sustain operations long-term and whether its early-warning approach will prove effective at preventing or mitigating large-scale catastrophes.
AI Risk Assessment Accuracy: Sempere’s skepticism toward high AI existential risk estimates positions him against some prominent voices in the AI safety community. The accuracy of his position versus higher-doom-probability forecasts remains highly uncertain and depends on the development trajectory of AI systems.
Community Influence Trade-offs: Sempere’s direct, critical communication style has made him a polarizing figure. While some value his willingness to challenge consensus views, others argue his approach hinders productive dialogue. The net impact of his communication style on community epistemics remains debatable.
Forecasting Methodology Limitations: While Sempere has achieved exceptional results in forecasting competitions, the applicability of these skills to long-term, low-probability catastrophic risks (where feedback is sparse or nonexistent) remains an open question that he himself has explored in his research on forecasting limitations.
EA Critique Accuracy: The extent to which Sempere’s criticisms of EA institutions accurately identify problems versus reflect idiosyncratic preferences or incomplete information is contested within the community. His critiques of Open Philanthropy’s criminal justice reform funding, for instance, involve complex cause prioritization questions without clear empirical resolution.
Sources
Footnotes
- EA Forum - Sentinel: Early Detection and Response for Global Catastrophes
- YouTube - With Nuño Sempere: Superforecasting and global catastrophic risks
- Nuño Sempere - My highly personal skepticism braindump on existential risk from artificial intelligence
- EA Forum - Sentinel: Early Detection and Response for Global Catastrophes
- NASDAQ - A Look at Samotsvety Forecasting: One of the World’s Best Predictors of the Future
- Sentinel Blog - Rising China-Japan tensions, Iran developments
- Semantic Scholar - Alignment Problems With Current Forecasting Platforms
- EA Forum - Valuing research works by eliciting comparisons from EA researchers
- EA Forum - 2018-2019 Long-Term Future Fund grantees: How did they do?
- Nuño Sempere - Shallow evaluations of longtermist organizations
- LessWrong - A Gentle Introduction to Risk Frameworks Beyond Forecasting
- EA Forum - My highly personal skepticism braindump on existential risk
- Nuño Sempere - Open Philanthropy’s Criminal Justice Reform bet
- EA Forum - Unflattering aspects of Effective Altruism (discussion)