Will MacAskill
William MacAskill (born William David Crouch, 24 March 1987) is a Scottish moral philosopher best known for co-founding the effective altruism (EA) movement and for popularizing the philosophy of longtermism. He has co-founded several influential organizations in the EA ecosystem and has authored widely read books on evidence-based ethics and the long-term future.
Key Links
| Source | Link |
|---|---|
| Official Website | williammacaskill.com |
| Wikipedia | en.wikipedia.org |
| Twitter/X | twitter.com/willmacaskill |
| EA Forum Profile | forum.effectivealtruism.org |
| Forethought Profile | forethought.org |
Quick Assessment
| Field | Detail |
|---|---|
| Born | 24 March 1987, Glasgow, Scotland1 |
| Education | Cambridge, Princeton, Oxford (DPhil, 2014)2 |
| Current Role | Senior Research Fellow, Forethought3 |
| Former Role | Associate Professor of Philosophy, Oxford University2 |
| Co-founded | Giving What We Can (2009), Centre for Effective Altruism (2011), 80,000 Hours (2011), Global Priorities Institute (2017)1 |
| Key Books | Doing Good Better (2015), Moral Uncertainty (2020), What We Owe the Future (2022)2 |
| Research Focus | Moral uncertainty, effective altruism, longtermism, AGI preparedness4 |
Overview
Will MacAskill is one of the most prominent figures in applied ethics and the effective altruism movement. His work centers on the idea that people should use evidence and rational analysis to maximize positive impact—whether through charitable giving, career choices, or policy advocacy.5 He co-founded Giving What We Can in 2009 with Toby Ord, encouraging members to pledge at least 10% of their income to highly effective charities, and followed this with co-founding the Centre for Effective Altruism and 80,000 Hours in 2011.1
MacAskill's academic research focuses on normative uncertainty—how to make good decisions when one is uncertain not just about facts but about which ethical theory is correct. He has argued that moral uncertainty and empirical uncertainty should be treated analogously, and he has developed frameworks for comparing the "choice-worthiness" of actions across different ethical theories.6 His DPhil thesis from Oxford addressed these themes, and related work has appeared in journals including Ethics, Mind, and The Journal of Philosophy.1
More recently, MacAskill has shifted substantial attention toward longtermism—the view that positively shaping humanity's long-term future is among the most important moral priorities—and toward AGI preparedness. He left his Associate Professor position at Oxford, where he had been the youngest person appointed to such a role, to become a Senior Research Fellow at Forethought, focusing on topics such as AI governance, space governance, AI rights, and the ethical challenges posed by a potential intelligence explosion.3
History
Early Life and Education
MacAskill was born William David Crouch on 24 March 1987 in Glasgow, Scotland.1 He attended Hutchesons' Grammar School in Glasgow before studying at Cambridge, Princeton, and Oxford universities.2 He completed his DPhil in Philosophy at St. Anne's College, Oxford in 2014.7
Founding the Effective Altruism Movement (2009–2012)
The origins of the EA movement trace directly to MacAskill's collaborations at Oxford. In 2009, while a graduate student, he co-founded Giving What We Can with fellow philosopher Toby Ord.1 The organization asks members to pledge to donate at least 10% of their income to the most effective charities. MacAskill made a personal commitment at the time to donate everything he earned above £25,000 annually (in 2009 money), which he estimated would total over £1 million across his lifetime.8
In 2011, MacAskill co-founded the Centre for Effective Altruism as an umbrella organization and, in the same year, co-founded 80,000 Hours with Benjamin Todd; the latter is dedicated to providing career advice for those seeking to maximize their social impact.17 He later served as President of the Centre for Effective Altruism.7
Academic Career
Following the completion of his DPhil in 2014, MacAskill became an Associate Professor of Philosophy at Oxford University. According to multiple sources, he was at the time the youngest person in the world to hold such a position in philosophy.27 He also held a Research Fellowship at the Global Priorities Institute at Oxford, which he co-founded.4 He remained at Oxford until 2024, when he moved to Forethought as a Senior Research Fellow.3
Publications
MacAskill published Doing Good Better in 2015, which introduced EA principles to a broad audience and was reviewed in outlets including The New York Times and The Guardian.2 In 2020, he co-authored Moral Uncertainty with Toby Ord and Krister Bykvist, a more technical treatment of decision-making under ethical uncertainty.1 His 2022 book What We Owe the Future became a New York Times bestseller and attracted significant media coverage in Time, The Economist, and The Guardian, among others.2 It advanced the case for longtermism and argued that actions taken today can shape the trajectory of civilization across vast timescales.
Recognition
MacAskill was named to the Forbes 30 Under 30 list of social entrepreneurs in 2017.7 His TED Talk on effective altruism has accumulated close to one million views.9
FTX and Sam Bankman-Fried
MacAskill had a close personal and professional relationship with Sam Bankman-Fried over several years, having introduced him to effective altruism. The FTX Future Fund, with which MacAskill was involved, reportedly granted $160 million to EA causes in 2022, including $33 million to organizations directly connected to MacAskill.1 Following the collapse of FTX and the ensuing bankruptcy proceedings in late 2022, MacAskill and the rest of the FTX Future Fund team resigned.1 Reports also indicate that in 2018, during an effort to remove Bankman-Fried from Alameda Research, MacAskill characterized claims of inappropriate conduct as a "he said-she said" situation.1 The FTX scandal brought significant scrutiny to MacAskill's ties to Bankman-Fried and to the broader EA movement's reliance on large donors from the cryptocurrency sector.
Philosophy and Key Ideas
Effective Altruism
MacAskill's foundational contribution is the popularization of effective altruism: the project of applying evidence and reason to determine how to do the most good with available time and resources. EA as MacAskill describes it is not synonymous with utilitarianism or with any single moral theory—it is, in his framing, a question and an epistemic approach rather than a fixed ideology.5 The movement encompasses cause areas including global health and poverty, animal welfare, and catastrophic and existential risk reduction.
Moral Uncertainty
A central thread in MacAskill's academic work is the treatment of moral uncertainty. He argues that just as rational agents should act under empirical uncertainty by weighing probabilities and expected outcomes, they should treat uncertainty about which ethical theory is correct in an analogous way.6 His 2020 book Moral Uncertainty, co-authored with Toby Ord and Krister Bykvist, develops frameworks for making decisions when one is genuinely unsure which moral view is correct, including statistical normalization methods for comparing choice-worthiness across theories.6
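The "maximize expected choice-worthiness" idea can be illustrated with a toy calculation. This is a minimal sketch, not code from the book: the two theories, the credences, and the choice-worthiness scores are invented for illustration, and the sketch assumes the theories' scores have already been put on an intertheoretically comparable scale (itself a contested step that the book's normalization methods address).

```python
# Toy sketch of maximizing expected choice-worthiness (MEC) under moral
# uncertainty. All theories, credences, and scores are hypothetical.

def expected_choiceworthiness(credences, scores, option):
    """Credence-weighted choice-worthiness of an option across moral theories."""
    return sum(credences[theory] * scores[theory][option] for theory in credences)

# Degrees of belief in each moral theory (must sum to 1).
credences = {"utilitarian": 0.6, "deontological": 0.4}

# Choice-worthiness of each option under each theory, assumed comparable.
scores = {
    "utilitarian":   {"donate": 10, "keep": 0},
    "deontological": {"donate": 2,  "keep": 1},
}

options = ["donate", "keep"]
best = max(options, key=lambda o: expected_choiceworthiness(credences, scores, o))
# donate: 0.6*10 + 0.4*2 = 6.8; keep: 0.6*0 + 0.4*1 = 0.4 → "donate" wins.
```

The structure mirrors expected-utility reasoning under empirical uncertainty: theories play the role of possible world-states, and credences the role of probabilities.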
Longtermism
MacAskill is one of the most prominent advocates for longtermism. In What We Owe the Future, he argues that the long-run future of humanity is vast and that the moral weight of future generations is at least comparable to that of people alive today. He uses a three-factor framework to assess the long-term value of interventions: significance (the average value of an outcome), persistence (how long the outcome endures), and contingency (whether the outcome would have occurred anyway without the intervention).10
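The three-factor framework is presented qualitatively in the book; one simple way to make it concrete is a multiplicative back-of-the-envelope model, sketched below. The multiplicative form and all numbers are assumptions for illustration, not MacAskill's own formalization.

```python
# Back-of-the-envelope sketch (assumed form) of the significance /
# persistence / contingency framework. All inputs are hypothetical.

def longterm_value(significance, persistence_years, contingency):
    """significance: average value per year of the outcome;
    persistence_years: how long the outcome endures;
    contingency: probability the outcome would NOT have occurred anyway."""
    return significance * persistence_years * contingency

# A modest outcome that persists for millennia and is 50% contingent
# on the intervention dominates a larger but fleeting, inevitable one.
durable = longterm_value(significance=2.0, persistence_years=1000, contingency=0.5)
fleeting = longterm_value(significance=50.0, persistence_years=5, contingency=0.1)
```

The point of the model is the interaction: an intervention scores highly only when all three factors are non-negligible, which is why the framework directs attention to malleable historical moments.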
A key concept in his longtermist framework is "civilizational plasticity"—the idea that certain historical moments are more malleable than others and that actions taken during these periods can have disproportionate long-term effects. MacAskill identifies the potential emergence of artificial general intelligence as one such moment of high plasticity.10
AGI Preparedness
MacAskill's recent work has shifted toward the challenges posed by rapid AI progress. In a 2025 paper co-authored with Fin Moorhouse, "Preparing for the Intelligence Explosion," he warns that the amount of AI-driven cognitive labor is growing at a rapid annual rate and that recursive self-improvement in AI research could lead to an intelligence explosion within the next several years.1112 He identifies grand challenges including the concentration of power, destructive technologies, space governance, and AI rights as critical areas requiring preparation.11
In his work on AI safety, MacAskill identifies value alignment—ensuring that AI systems pursue goals compatible with human values—as a central strategy. He has also discussed the importance of corrigible, risk-averse AI design, and monitoring AI systems for signs of deceptive behavior.13 He situates AI safety within the broader EA framework, arguing that governance, biorisk, and other non-technical dimensions of AI risk remain important even if technical alignment challenges are eventually solved.14
Organizations
MacAskill has co-founded or played a founding role in several organizations:
| Organization | Year | Role | Notes |
|---|---|---|---|
| Giving What We Can | 2009 | Co-founder | With Toby Ord; encourages 10% income pledges1 |
| Centre for Effective Altruism | 2011 | Co-founder, President, Trustee | Umbrella organization for EA projects1 |
| 80,000 Hours | 2011 | Co-founder | With Benjamin Todd; career advice for impact1 |
| Global Priorities Institute | 2017 | Co-founder | University of Oxford; prioritization research4 |
| Forethought | — | Senior Research Fellow | AGI preparedness, space governance, AI rights3 |
According to various sources, the organizations MacAskill co-founded have collectively moved over $300 million to effective charities, with Giving What We Can alone reporting over $1.5 billion in pledged donations.18
Criticisms and Controversies
Philosophical Objections to Longtermism
Critics have raised substantial philosophical objections to MacAskill's longtermist framework. Some argue that prioritizing distant future generations—potentially those alive in 25,000 AD—over contemporaries conflicts with natural human partiality and common moral intuitions about obligations to those near us in time.15 Frank Jackson's thought experiment about a policeman allocating scarce resources is sometimes invoked to question whether resource diversion to the far future is clearly justified even on longtermist grounds.15
Population ethics poses particular difficulties. Critics have noted that MacAskill's tentative view that preventing sufficiently good future lives constitutes a moral loss goes beyond standard positions in population ethics, and that his treatment of rival population-ethical views is insufficiently thorough.16
Methodological Criticisms
Critics have challenged MacAskill's use of evidence and his representation of sources. A detailed analysis of Doing Good Better by Alexey Guzey argued that the book applies inconsistent criteria—for example, criticizing textbook distributions for lacking test score impact while not holding deworming to the same standard—and selectively cites benefits of interventions like deworming while omitting evidence of null effects on outcomes such as hemoglobin levels and grades.17 The same analysis noted that MacAskill echoed GiveWell's cost-effectiveness estimates despite warnings from GiveWell's own co-founder against taking such estimates too literally.17
A broader criticism, developed in the Logos Journal, is that MacAskill and EA more generally adopt a narrow positivistic conception of evidence that excludes non-quantifiable moral and historical considerations and that fails to engage deeply with the philosophical tradition.18
The FTX Scandal
The collapse of FTX in late 2022 generated significant criticism of MacAskill and the EA movement. MacAskill's close personal and professional ties to Sam Bankman-Fried—including his role in introducing Bankman-Fried to EA, his involvement with the FTX Future Fund, and his earlier dismissal of misconduct allegations against Bankman-Fried—drew sustained criticism.1 Critics argued that EA's utilitarian logic, which in some interpretations could justify harmful means for beneficial ends, may have created a culture susceptible to rationalized wrongdoing.19 MacAskill has publicly expressed shame over his association with Bankman-Fried and has attributed the misconduct to personal corruption rather than EA's philosophical framework, while acknowledging the need for stronger institutional safeguards.20
Systemic Change and Measurement Bias
A recurring critique from activists and social theorists is that EA's emphasis on measurable, cost-effective interventions systematically undervalues systemic or structural change—organizing, advocacy, institutional reform—in favor of interventions that are easier to quantify.5 Critics also argue that EA's "doing the most good" framing can crowd out funding from social justice movements and community-led efforts, particularly in the Global South, in ways that reflect colonial assumptions.21
Concerns About Longtermism's Practical Implications
Some critics warn that longtermism promotes excessive deliberation and caution around emerging technologies in ways that could lead to policy paralysis and slowed innovation.22 Others argue that the longtermist framework, by emphasizing speculative future harms, risks diverting attention and resources from pressing present-day problems. The risks MacAskill highlights—such as AI takeover or value lock-in—are sometimes dismissed by critics as speculative.15
Key Uncertainties
- The degree to which MacAskill's longtermist framework can be operationalized into concrete, tractable interventions remains contested, both philosophically and practically.
- The timelines MacAskill has advanced for transformative AI—years rather than decades—are subject to significant uncertainty and debate within the broader AI research community. (See When Will AGI Arrive?)
- The long-term institutional and reputational consequences of the FTX scandal for MacAskill and the broader EA movement are still unfolding.
- Whether EA's approach to moral uncertainty provides a genuinely action-guiding framework, or merely formalizes intuitions in utilitarian terms, remains an open philosophical question.
Sources
Footnotes
1. William MacAskill - Wikipedia
2. Will MacAskill Press Page
3. William MacAskill - Forethought
4. William MacAskill - PhilPeople
5. Citation rc-fcad (data unavailable; rebuild with wiki-server access)
6. William MacAskill Research
7. William MacAskill - Chartwell Speakers
8. Citation rc-0a35 (data unavailable; rebuild with wiki-server access)
9. Will MacAskill - Daily Stoic Interview
10. Review of What We Owe the Future - Cambridge Utilitas
11. 80,000 Hours Podcast #213 - Will MacAskill: Century in a Decade
12. AlgorithmWatch - AGI and Longtermist Abstractions
13. How to Make the Future Better - MacAskill (PDF)
14. Effective Altruism in the Age of AI - MacAskill Substack
15. How Effective Is William MacAskill's Altruism? - UnHerd
16. Review of What We Owe the Future - Notre Dame Philosophical Reviews
17. Critique of Doing Good Better - Alexey Guzey
18. What We Owe the Past - Logos Journal
19. A Rant on FTX, William MacAskill, and Utilitarianism - Crooked Timber
20. William MacAskill Interview - Persuasion Community
21. The Predictably Grievous Harms of Effective Altruism - OUP Blog
22. Broughel on Longtermism - EconLib
References
> "It favors welfare-oriented interventions that increase countable measures of well-being and both neglects and diverts funds from social movements that address injustices and agitate for social change, particularly in marginalized communities both in the US and in the Global South." (The Predictably Grievous Harms of Effective Altruism, OUP Blog)

> "The explanation is that MacAskill, along with the rest of the Effective Altruism movement, are restricted to a very narrow positivistic account of what constitutes as evidence and reason, in which only some data is taken seriously, and only some accounts of the future of humanity are taken to be an example of urbane, neutral consideration." (What We Owe the Past, Logos Journal)