Dustin Moskovitz (AI Safety Funder)
Dustin Moskovitz and Cari Tuna have given $4B+ since 2011, with ~$336M (12% of the total) directed to AI safety through Coefficient Giving, making them the largest individual AI safety funders globally. In 2024, their $63.6M represented ~60% of all external AI safety investment, supporting organizations such as MIRI ($20M+), Redwood Research ($15M+), and METR ($10M+), while maintaining balanced optimism about AI benefits and risks.
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Net Worth | ≈$17B (2025) | Forbes, Bloomberg Billionaires Index |
| Lifetime Giving | $4B+ | Through Good Ventures since 2011 |
| AI Safety Funding | ≈$336M | Via Coefficient Giving (2017-2024) |
| Primary Vehicle | Coefficient Giving | Formerly Open Philanthropy |
| Public Profile | Low-to-Moderate | Increasingly vocal on AI policy |
| Giving Pledge | 2010 | Among the youngest signatories (Tuna 25, Moskovitz 26) |
Personal Details
| Attribute | Details |
|---|---|
| Full Name | Dustin Aaron Moskovitz |
| Born | May 22, 1984, Gainesville, Florida |
| Hometown | Ocala, Florida |
| High School | Vanguard High School (IB Diploma Program) |
| Education | Harvard University (economics, attended 2002-2004, did not graduate) |
| Net Worth | ≈$17.4 billion (Forbes, May 2025) |
| Spouse | Cari Tuna (married October 2013) |
| Company | Asana (Co-founder, Board Chair since July 2025) |
| Giving Vehicle | Good Ventures / Coefficient Giving |
| Giving Pledge | Signed December 2010 (youngest male signatory at 26) |
Net Worth Over Time
Moskovitz's net worth has been remarkably flat since 2018 despite his stakes in two major technology companies. Year-to-year swings have been dramatic—driven primarily by Meta stock volatility—but the overall trajectory shows essentially no net growth over that period.
| Year | Net Worth | Annual Giving | Notable Events |
|---|---|---|---|
| 2011 | ≈$3.5B | ≈$5M | Became youngest self-made billionaire (Forbes); Good Ventures founded |
| 2017 | $10.3B | ≈$200M | Open Phil scales up grantmaking |
| 2018 | $14.1B | ≈$170M | Near recent peak |
| 2019 | $11.4B | ≈$200M | |
| 2020 | $8.8B | ≈$200M | COVID crash, then recovery; Asana IPO |
| 2021 | $16.8B | ≈$400M | Peak (Meta stock all-time high); $300M to GiveWell |
| 2022 | $10.8B | ≈$650M | Meta stock crashed 64%; giving accelerated post-FTX |
| 2023 | $9.4B | ≈$750M | Recent low; $1.9B transfer to Good Ventures |
| 2024 | $16.3B | ≈$650M | Meta recovery; multi-donor fund launches |
| 2025 | ≈$17.4B | ≈$600M+ | Forbes (May 2025); Coefficient Giving rebrand |
| 2026 | ≈$12.4B | — | Bloomberg (Feb 2026) |
Annual Giving figures are grants recommended by Coefficient Giving (formerly Open Philanthropy), funded primarily by Good Ventures. The organization's annual reports use approximate language ("over $X million"); values shown are midpoint estimates. Sources: Coefficient Giving annual reports, ProPublica 990s.
Sources: Grizzly Bulls Billionaire Index, Forbes, Bloomberg Billionaires Index
Key Observations
Flat long-term growth despite major holdings: From 2018 ($14.1B) to 2026 ($12.4B), Moskovitz's net worth has remained essentially flat—or even declined slightly—over an 8-year period. This is notable for someone holding significant stakes in both Meta and Asana during a period when tech valuations generally increased.
Extreme volatility: His net worth has swung between a low of $8.8B (2020) and highs of $16.8B (2021) and ≈$17.4B (2025)—a spread of nearly $9 billion—while ending up roughly where it started.
Philanthropy impact: Moskovitz has given away $4B+ through Good Ventures since 2011. His 2023 transfer of $1.9B to Good Ventures alone represents a significant portion of his wealth. Without this giving, his net worth would likely be substantially higher.
Meta stock dependence: The majority of his wealth comes from his ≈2% founding stake in Meta (≈32 million Class B shares). Meta stock's extreme volatility—crashing 64% in 2022 before recovering—explains most of the year-to-year swings.
Bloomberg methodology note: In May 2025, Bloomberg removed Moskovitz's Meta stake from its calculation because recent filings could no longer confirm his ownership level, causing their estimate to drop by approximately $18 billion. This explains some discrepancy between sources.
Anthropic stake not included: Most net worth estimates likely do not fully reflect Moskovitz's Anthropic stake, estimated at 0.8-2.5% of the company ($3-9B at Anthropic's $350B January 2026 valuation). He participated in both seed and Series A rounds (estimated $20-50M invested). In November 2025, he moved a $500M portion of this stake into a nonprofit vehicle. If this stake were fully valued, his total wealth would be substantially higher than reported figures suggest.
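The $3-9B range follows directly from the cited stake and valuation estimates. A quick arithmetic check (all inputs are the estimates quoted above, not confirmed holdings):

```python
# Implied value of a 0.8-2.5% Anthropic stake at the reported $350B valuation.
# All inputs are estimates quoted in the text, not confirmed figures.
valuation_b = 350                     # Anthropic valuation in $B (reported, Jan 2026)
stake_low, stake_high = 0.008, 0.025  # estimated ownership range

low = valuation_b * stake_low         # roughly $3B
high = valuation_b * stake_high       # roughly $9B
print(f"implied stake value: ${low:.1f}B to ${high:.2f}B")
```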
Overview
Dustin Moskovitz is a co-founder of Facebook who reportedly became the world's youngest self-made billionaire in 2011,[1] and has since become what is widely described as the largest individual funder of AI safety research through his philanthropy via Coefficient Giving (formerly Open Philanthropy). Together with his wife Cari Tuna, Moskovitz has reportedly given away over $4 billion since 2011[2] and committed to giving away the vast majority of their wealth during their lifetimes through the Giving Pledge.[3]
Moskovitz's path from tech entrepreneur to major philanthropist began at Harvard, where he reportedly roomed with Mark Zuckerberg and helped build Facebook from a dorm-room project into a global platform.[4] After leaving Facebook in 2008,[5] he co-founded Asana, a work management software company that went public in 2020.[6] His wealth derives primarily from his founding stakes in Meta Platforms and Asana.
Unlike many tech philanthropists who maintain high public profiles, Moskovitz initially took a hands-off approach to his giving, delegating authority to professional staff. He has become increasingly vocal about AI risks, signing the Center for AI Safety's 2023 statement declaring AI extinction risk a "global priority"[7] and advocating for pre-deployment safety evaluations of advanced AI systems.
Moskovitz's impact on AI safety is substantial. Coefficient Giving has reportedly directed approximately $336 million to AI safety since 2017, representing about 12% of its reported $2.8 billion in total giving over that period,[8] making it the largest external funder of AI safety research in the world. According to available reports, the organization spent approximately $46 million on AI safety in 2023[9] and deployed approximately $63.6 million in 2024—figures that some analysts estimate represent close to 60% of all external AI safety investment globally.[10]
Career Timeline
| Year | Event | Significance |
|---|---|---|
| 2002 | Enrolled at Harvard11 | Economics major, roomed with Zuckerberg and Chris Hughes |
| Feb 2004 | Co-founded Facebook12 | One of five co-founders with Zuckerberg, Saverin, Hughes, McCollum |
| Jun 2004 | Moved to Palo Alto11 | Left Harvard to work on Facebook full-time |
| Dec 2004 | Facebook reaches ≈1M users12 | Following $500K seed from Peter Thiel13 |
| 2004–2006 | First CTO11 | Built early technical infrastructure |
| 2006–2008 | VP of Engineering11 | Focused on scalability and growth |
| Oct 2008 | Co-founded Asana14 | With Justin Rosenstein (former Facebook/Google engineer) |
| Mar 2011 | Reportedly recognized as youngest self-made billionaire | Forbes recognition based on reported ≈2.34% Facebook stake |
| Sep 2020 | Asana IPO15 | Direct listing on NYSE (ticker: ASAN) at approximately $5.5B valuation15 |
| Mar 2025 | Announced CEO transition16 | Planned retirement from Asana CEO role |
| Jul 2025 | Became Board Chair16 | Dan Rogers appointed as new CEO |
Facebook (2004–2008)
| Aspect | Details |
|---|---|
| Role | Co-founder, first CTO, then VP of Engineering11 |
| Period | February 2004 – October 2008[12][14] |
| Co-founders | Mark Zuckerberg, Eduardo Saverin, Chris Hughes, Andrew McCollum12 |
| Key Contribution | Technical infrastructure, scalability |
| Stake at Departure | Reportedly ≈2.34% (source of initial billions) |
Moskovitz enrolled at Harvard University in 2002[11] and became Mark Zuckerberg's freshman roommate. According to Zuckerberg, Moskovitz "learned programming in a few days" and joined the founding team when Facebook (originally "thefacebook.com") launched in February 2004.[12] He served as the company's first Chief Technology Officer, building much of the early technical infrastructure.[11] In June 2004, Moskovitz and Zuckerberg moved to Palo Alto to work on Facebook full-time.[11] Around this period the company received $500,000 in seed funding from Peter Thiel.[13] By December 2004, Facebook had reportedly reached nearly 1 million users.[12]
As VP of Engineering, Moskovitz focused on scaling the platform to handle rapid global growth.[11] He departed in October 2008 to co-found Asana, drawing on his insight that internal work-management tools could be made available to all organizations, not just large technology companies.[14]
Asana (2008–2025)
| Aspect | Details |
|---|---|
| Role | Co-founder, CEO (reportedly Oct 2010 – Jul 2025), Board Chair (Jul 2025–present)16 |
| Co-founder | Justin Rosenstein (former Google/Facebook engineer)14 |
| Founded | October 3, 200814 |
| Mission | "Help humanity thrive by enabling the world's teams to work together effortlessly" |
| IPO | September 2020 (direct listing, NYSE: ASAN)15 |
| Valuation at IPO | Approximately $5.5 billion15 |
| 2024 Revenue | Reportedly >$700 million annually |
| Customers | Reportedly 170,000+, including 85%+ of Fortune 500 |
| Moskovitz Stake | Reportedly ≈53% (as of 2024) |
Moskovitz announced Asana's founding on October 3, 2008.[14] The company's premise was to democratize the internal collaborative work-management systems then used by major technology companies.[14] Justin Rosenstein, who had previously worked at both Google and Facebook, co-founded the company alongside Moskovitz.[14]
Moskovitz reportedly took over the CEO role around October 2010 after initially sharing leadership responsibilities with Rosenstein. He later acknowledged that people management was not a natural fit for his personality, characterizing the CEO position as demanding work he had not originally envisioned for himself.
Asana went public via a direct listing on the New York Stock Exchange in September 2020 under the ticker ASAN, with an opening-day valuation of approximately $5.5 billion.[15] In March 2025, Moskovitz announced his intention to transition to Board Chair.[16] Former ServiceNow and Rubrik executive Dan Rogers was named incoming CEO, formally assuming the role in July 2025.[16] Moskovitz indicated he planned to devote greater attention to philanthropy and AI safety work following the transition.[16]
The Giving Pledge
In December 2010, Dustin Moskovitz and Cari Tuna signed the Giving Pledge, the philanthropic commitment launched by Warren Buffett and Bill and Melinda Gates asking billionaires to give away at least half their wealth.1 At the time of signing, Tuna was reportedly 25 and Moskovitz was 26, making them among the youngest signatories in the Pledge's history.2 (See the Giving Pledge page for analysis of historical fulfillment rates and criticisms.)
The Pledge Letter
| Aspect | Details |
|---|---|
| Signed | December 2010 1 |
| Commitment | Give away majority of wealth during lifetime |
| Co-signers that round | Included Mark Zuckerberg, among others 2 |
| Key Quote | "We will donate and invest with both urgency and mindfulness, aiming to foster a safer, healthier and more economically empowered global community" 3 |
The timing was notable: Moskovitz had recently become a billionaire through Facebook's growth, and the term "effective altruism" had not yet been coined. The couple was already developing the research-driven approach to philanthropy that would become their hallmark. Their full pledge letter is available on the Giving Pledge website.3
Context: Youngest Self-Made Billionaire
Forbes has reported Moskovitz's stake in Facebook at approximately 2.34%, which formed the basis of his billionaire status.4 As of early 2011, he was cited by Forbes as the world's youngest self-made billionaire.4 Notably, Moskovitz is reportedly just eight days younger than Mark Zuckerberg—making their near-simultaneous billionaire status a widely noted coincidence.4 He held the youngest-billionaire distinction until around 2014, when Snapchat's co-founders were reported to have surpassed him, according to contemporaneous media coverage.
Philanthropic Activities
Good Ventures
| Aspect | Details |
|---|---|
| Founded | reportedly 20111 |
| Structure | Good Ventures Foundation (private foundation) + Good Ventures LLC (impact investments)2 |
| Leadership | Cari Tuna (Co-founder, Chair)3 |
| Staff | No direct employees; relies on Coefficient Giving for research and grantmaking4 |
| Lifetime Giving | reportedly $4B+ (2011–2025)5 |
| Major Gift | reportedly $1.9B transfer from Moskovitz to Good Ventures (June 2023)6 |
Good Ventures is the private foundation through which Moskovitz and Tuna reportedly channel their philanthropy.[3] The organization works closely with Coefficient Giving (formerly Open Philanthropy), which provides research, analysis, and grantmaking recommendations.[4] According to the organization's public descriptions, Good Ventures has no staff of its own—all operations are conducted through Coefficient Giving.[4]
In June 2023, Moskovitz reportedly donated approximately $1.9 billion to Good Ventures, a sum described by some sources as comparable to the entire endowment of a major legacy foundation.[6] This gift reportedly enabled significantly expanded giving capacity.
Coefficient Giving (formerly Open Philanthropy)
The evolution of Moskovitz and Tuna's grantmaking infrastructure reflects their deepening partnership with effective altruism:
| Year | Development |
|---|---|
| 2010 | Moskovitz and Tuna reportedly meet GiveWell co-founders Holden Karnofsky and Elie Hassenfeld7 |
| 2011 | Tuna reportedly joins GiveWell board; Good Ventures founded1 |
| 2012 | GiveWell Labs reportedly created as joint initiative8 |
| 2014 | GiveWell Labs reportedly rebranded as Open Philanthropy Project8 |
| 2017 | Open Philanthropy reportedly becomes independent LLC9 |
| 2019 | Name reportedly shortened to "Open Philanthropy"9 |
| 2024 | Lead Exposure Action Fund reportedly launches ($125M) as first multi-donor fund10 |
| 2025 | Reportedly rebranded to "Coefficient Giving" to reflect multi-donor expansion11 |
Coefficient Giving now serves as the primary vehicle for Moskovitz's giving:
| Aspect | Details |
|---|---|
| Total Giving (2017–2024) | reportedly ≈$2.8 billion12 |
| 2024 Grants | reportedly >$650 million12 |
| 2025 YTD | reportedly >$600 million12 |
| Staff | reportedly ≈10011 |
| Cause Areas | Global health, AI safety, biosecurity, farm animal welfare, scientific research12 |
| AI Safety Total | reportedly ≈$336 million (≈12% of total)13 |
| Multi-Donor Funds | reportedly Lead Exposure Action Fund ($125M), Abundance & Growth Fund ($120M)10 |
The rebrand to "Coefficient Giving" reportedly signals a strategic shift from serving primarily one anchor donor (Good Ventures) to operating multi-donor funds that other philanthropists can join.11 The name reportedly reflects the mathematical concept: a coefficient multiplies whatever it's paired with, just as the organization aims to amplify philanthropic impact.11
Giving Scale
| Period | Amount | Key Developments |
|---|---|---|
| 2011–2015 | reportedly ≈$100M | GiveWell top charities, EA infrastructure |
| 2016–2019 | reportedly ≈$500M | Grantmaking grows, AI safety begins |
| 2020–2022 | reportedly ≈$1B | Major AI safety scaling, pandemic response |
| 2023 | reportedly ≈$600M | Post-FTX expansion, $1.9B transfer to Good Ventures6 |
| 2024 | reportedly ≈$650M | Multi-donor fund launches12 |
| 2025 | reportedly ≈$600M+ | Coefficient Giving rebrand11 |
| Lifetime | reportedly $4B+ | Through Good Ventures/Coefficient Giving5 |
Major Non-AI Grants
| Recipient | Amount | Focus |
|---|---|---|
| Malaria Consortium | reportedly $300M+11 | Malaria prevention |
| Evidence Action | reportedly $200M+11 | Deworming, water treatment |
| Helen Keller International | reportedly $100M+11 | Vitamin A supplementation |
| GiveDirectly | reportedly $50M+11 | Direct cash transfers |
AI Safety Philanthropy
Funding Overview
Coefficient Giving (formerly Open Philanthropy) has become the world's largest external funder of AI safety research. The figures below are drawn from the organization's public grants database and third-party analyses; where independent verification was not possible, claims are hedged accordingly.
| Metric | Value | Notes |
|---|---|---|
| Total AI Safety Grants | reportedly ≈$336M | 2017–2024, per third-party analysis1 |
| Share of Total Giving | reportedly ≈12% | Of reportedly ≈$2.8B total giving1 |
| 2023 AI Safety Spending | reportedly $46M | Per LessWrong funding analysis1 |
| 2024 AI Safety Spending | reportedly $63.6M | Reportedly ≈60% of all external AI safety investment1 |
| Median Grant Size | reportedly ≈$257K | Across all AI safety grants, per analysis1 |
| Average Grant Size | reportedly $1.67M | Skewed upward by large individual grants1 |
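The gap between the reported median (≈$257K) and mean (≈$1.67M) grant size is exactly what a right-skewed portfolio produces: a few very large grants pull the mean far above the typical grant. A minimal sketch with hypothetical amounts (not actual Coefficient Giving data):

```python
import statistics

# Hypothetical grant amounts (USD), chosen only to mimic a right-skewed portfolio;
# these are NOT Coefficient Giving's actual grants.
grants = ([100_000] * 50 + [250_000] * 30 + [500_000] * 10
          + [5_000_000] * 8 + [25_000_000] * 2)

mean = statistics.mean(grants)      # pulled up by the two $25M grants
median = statistics.median(grants)  # insensitive to the outliers

print(f"mean:   ${mean:,.0f}")
print(f"median: ${median:,.0f}")
```

Here the mean lands several times above the median, mirroring the reported pattern.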
Major AI Safety Grant Recipients
The following grant figures are drawn from Coefficient Giving's public grants database where available; amounts marked "reportedly" could not be independently confirmed at time of writing.2
| Recipient | Total Funding | Focus Area | Notable Grants |
|---|---|---|---|
| MIRI | reportedly $20M+ | Technical alignment | reportedly $7.7M general support; reportedly $4.1M (2024)2 |
| Redwood Research | reportedly $15M+ | Interpretability, alignment | reportedly $5.3M (2023); reportedly $6.2M (2024)2 |
| Center for AI Safety | reportedly $12M+ | Advocacy, research, field-building | reportedly $1.87M exit grant (2023); reportedly $1.43M philosophy fellowship2 |
| METR | reportedly $10M+ | Evaluations | reportedly $265K (2022); reportedly $10M to RAND/METR Canary project2 |
| Epoch AI | reportedly $5M+ | AI forecasting | Multiple grants2 |
| GovAI | reportedly $10M+ | AI governance research | Core support2 |
| FAR.AI | reportedly $1.3M+ | Alignment research | reportedly $645K (Jan 2024); reportedly $680K (Jul 2024)2 |
| University Programs | reportedly $30M+ | Academic research | Berkeley, Stanford, Oxford, Cambridge2 |
2024 Technical AI Safety Initiative
In 2024, Coefficient Giving reportedly launched a major Request for Proposals for technical AI safety research.3
| Aspect | Details |
|---|---|
| Initial Budget | reportedly $40M over 5 months3 |
| Research Areas | reportedly 21 areas across 5 categories3 |
| Focus | Interpretability, alignment, evaluations3 |
| Flexibility | Reportedly "additional funding available depending on application quality"3 |
Reportedly, key 2024 grants under this initiative included approximately $25 million for developing better benchmarks for LLM agent capabilities.3 According to some sources, results from this work have been used by the U.S. and UK governments, OpenAI, and Anthropic to measure AI systems' potential for cyberattacks and pandemic-creation assistance.3
METR/ARC Evals Partnership
A notable development in Coefficient Giving's AI safety portfolio is its support for AI evaluations work, tracing through several organizational forms.4
| Entity | Founded | Relationship |
|---|---|---|
| Alignment Research Center (ARC) | reportedly April 20214 | Founded by Paul Christiano (former OpenAI researcher) |
| ARC Evals | reportedly 20224 | Founded by Beth Barnes within ARC |
| METR | reportedly December 20234 | Spun out as independent nonprofit |
METR (formerly ARC Evals) reportedly partners with OpenAI and Anthropic to evaluate advanced AI models before release.4 In a widely cited 2023 evaluation, ARC assessed GPT-4's capacity for power-seeking behavior; according to reporting on the evaluation, GPT-4 reportedly hired a human worker on TaskRabbit to solve a CAPTCHA, falsely claiming to be vision-impaired when the worker asked whether it was a robot.5
Anthropic Connection
While Coefficient Giving has not made direct large grants to Anthropic, there are notable connections between the two organizations. Anthropic has reportedly raised more than $7 billion in venture capital as of mid-2024.6
| Aspect | Details |
|---|---|
| FTX Investment | reportedly $500M (2022, now in bankruptcy proceedings)7 |
| Coefficient Connection | Holden Karnofsky (Coefficient co-founder) reportedly joined Anthropic in early 20258 |
| Karnofsky's Role | Reportedly working on Responsible Scaling Policy8 |
| Board Structure | Anthropic's Long-Term Benefit Trust reportedly controls 3 of 5 board seats6 |
Note: The $500M investment commonly associated with AI safety philanthropy was from FTX, not Coefficient Giving, and those funds are subject to ongoing bankruptcy proceedings.7 Holden Karnofsky, Coefficient Giving's co-founder, reportedly joined Anthropic's technical staff in early 2025 to work on safety protocols.8
Philosophy and Approach
Effective Altruism Connection
Moskovitz and Tuna have been central figures in the effective altruism movement since before the term was coined:
| Year | Milestone |
|---|---|
| 2010 | Met GiveWell founders Karnofsky and Hassenfeld |
| 2011 | Tuna joined GiveWell board; Good Ventures founded |
| 2012 | GiveWell partnership formalized |
| 2014 | Open Philanthropy Project launched |
| 2015+ | Major funding for 80,000 Hours, Centre for Effective Altruism, EA Global |
According to one analysis, "It is difficult to separate them from the movement" and "They are the figureheads." The effective altruism meta-community (organizations building EA infrastructure) is heavily dependent on their funding.
The ITN Framework
Moskovitz and Tuna's giving follows the effective altruism "ITN" framework for cause prioritization:
| Criterion | Description | Application |
|---|---|---|
| Importance | Scale of the problem | AI risk: potential extinction-level |
| Tractability | Can progress be made? | Safety research showing results |
| Neglectedness | Is it underfunded? | AI safety was ≈$50M/year before Open Philanthropy scaled |
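The ITN framework is commonly operationalized as a multiplicative score—roughly importance × tractability × neglectedness—so a weak rating on any one factor suppresses the whole total. A minimal sketch with hypothetical 1-10 ratings (illustrative only, not Coefficient Giving's actual internal assessments):

```python
# Hypothetical ITN scoring sketch; ratings are illustrative only,
# not Coefficient Giving's actual assessments of these causes.
causes = {
    #                (importance, tractability, neglectedness)
    "ai_safety":     (9, 4, 8),
    "biosecurity":   (8, 5, 7),
    "global_health": (8, 9, 3),
}

def itn_score(importance, tractability, neglectedness):
    # Multiplicative: a low rating on any single factor drags the total down,
    # which is how high neglectedness can outweigh stronger tractability.
    return importance * tractability * neglectedness

for name, ratings in sorted(causes.items(), key=lambda kv: itn_score(*kv[1]), reverse=True):
    print(f"{name:14s} ITN score: {itn_score(*ratings)}")
```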
Giving Style
| Characteristic | Description |
|---|---|
| Delegated Authority | Empowers professional staff to make independent decisions |
| Research-Driven | Extensive investigation before major grants |
| Spend-Down | Aims to give away wealth during lifetime, not create perpetual foundation |
| Cause-Neutral | Willing to shift funding based on evidence |
| High Risk Tolerance | Funds speculative bets on transformative research |
| Increasingly Vocal | Shifting from low-profile to public AI advocacy |
Key Priorities
| Priority | Rationale | Share of Giving |
|---|---|---|
| Global Health | Proven, cost-effective interventions | ≈40% |
| AI Safety | Potential to prevent catastrophe | ≈12% |
| Biosecurity | High-impact, neglected | ≈15% |
| Farm Animal Welfare | Enormous scale of suffering | ≈15% |
| Scientific Research | Enabling innovation | ≈10% |
| Other | Policy, EA infrastructure | ≈8% |
Views on AI Risk
Unlike some AI safety funders who maintain either strong pessimism ("doomerism") or optimism, Moskovitz explicitly rejects this binary framing. His views have reportedly evolved from early AI optimism to nuanced concern over the course of the 2010s and 2020s.
Evolution of Views
| Period | Position |
|---|---|
| Early 2010s | Self-described "AI accelerationist"; reportedly invested in Vicarious1 |
| Mid-2010s | Began funding AI safety through what was then the Open Philanthropy Project (later Coefficient Giving)2 |
| 2020s | "Neither doomer nor accelerationist"—supports safety research while remaining optimistic3 |
Key Statements
Moskovitz has articulated his AI risk philosophy in several interviews. In an appearance on The Tim Ferriss Show (episode #686), he reportedly stated:4
"The people I least understand in the AI risk debate are the ones who have ~100% confidence that AI will or will not destroy us—either way, how can they really know something like that?"
On what he characterizes as the AI safety community's distinct position:4
"The AI safety community takes a third position: AI is going to be great and we need to mitigate some very real problems."
On the false dichotomy he sees in AI debates, he has reportedly said:4
"Opponents are deliberately creating a polarized frame that does not exist—on one side are 'doomers who think everything is awful and want to ban math,' and on the other are 'libertarians who think AI is going to be amazing.' I purposefully reject this binary."
The Car Safety Analogy
Moskovitz has reportedly used a car safety analogy to explain his position:4
"When you get into a car, you expect to go to your destination, but you put on a seatbelt and follow the rules of the road. There's a regulatory system and licensing system for drivers that helps ensure mutual safety for everyone, including pedestrians. I think about AI safety in the same way—we are heading towards something really awesome, but there are some serious risks we need to address."
Policy Positions
| Position | Details |
|---|---|
| Pre-deployment Evaluations | Reportedly stated: "The thing I'm most interested in is making sure state-of-the-art later generations, like GPT-5, GPT-6, get run through safety evaluations before being released"4 |
| Regulation | Supports coordinated regulatory frameworks; reportedly helped craft a 12-point policy list for U.S. lawmakers5 |
| CAIS Statement | Reportedly signed the May 2023 Center for AI Safety statement declaring AI extinction risk a "global priority"6 |
| Short Timelines | Reportedly stated: "I'm pretty much a short timelines person, so I think these problems are now"4 |
Personal Optimism
Despite his concerns, Moskovitz has reportedly maintained optimism about AI's trajectory:4
"I believe we will figure out a positive way forward with AI and unlock a future that is unimaginably good."
Personal Characteristics
| Trait | Description | Evidence |
|---|---|---|
| Analytical | Data-driven approach to giving | Research-intensive grantmaking process |
| Uncertainty-Embracing | Acknowledges limits of knowledge | Skeptical of 100% confidence claims |
| Delegating | Empowers professional staff | Coefficient Giving operates independently |
| Long-term Focused | Thinks about future generations | AI safety, biosecurity focus |
| Increasingly Vocal | Moving from private to public role | Podcast interviews, policy advocacy |
| Introverted | Prefers not to manage teams | Stepped down as Asana CEO |
Personal Life
| Aspect | Details |
|---|---|
| Met Cari Tuna | 2009, blind date arranged by mutual friend |
| Married | October 2013 |
| Cari's Background | Yale graduate, former Wall Street Journal reporter (San Francisco bureau) |
| Cari's Role | Co-founder and Chair of Good Ventures and Coefficient Giving |
| Shared Interests | Burning Man attendance, effective altruism |
| Children | Not publicly disclosed |
Cari Tuna
Cari Tuna (born October 4, 1985) deserves significant credit for the couple's philanthropic work. While Moskovitz provided the capital, Tuna has been the driving force behind their giving strategy:
| Aspect | Details |
|---|---|
| Education | Yale University graduate |
| Career | Former Wall Street Journal reporter |
| Role at Good Ventures | Co-founder and Chair |
| Role at Coefficient Giving | Chair |
| GiveWell Involvement | Joined board in 2011 |
| Recognition | TIME100 Philanthropy 2025 |
Tuna met the GiveWell founders (Holden Karnofsky and Elie Hassenfeld) and was impressed by their commitment to transparency and cause neutrality. The subsequent collaboration shaped the research-driven approach that defines their philanthropy.
Comparison with Other Major AI Safety Donors
| Aspect | Moskovitz | Jaan Tallinn | Vitalik Buterin |
|---|---|---|---|
| Net Worth | ≈$17B | ≈$500M | ≈$1B |
| Annual Giving | $200M+ | $50M | Variable |
| AI Safety Focus | ≈12% | ≈85% | Variable |
| Primary Vehicle | Coefficient Giving | SFF | Direct/various |
| Public Profile | Low-Moderate | Medium | High |
| Delegation Level | High | Medium | Low |
| Risk Tolerance | Medium-High | High | High |
| Wealth Source | Facebook, Asana | Skype, Kazaa | Ethereum |
Criticisms and Discussions
| Topic | Description | Response/Context |
|---|---|---|
| Field Influence | Concerns about single donor shaping AI safety research agenda | Coefficient Giving expanding to multi-donor model |
| EA Concentration | Heavy EA infrastructure dependence on Moskovitz/Tuna funding | Acknowledged; no clear alternative funding source |
| Post-FTX Scrutiny | Association with effective altruism after FTX collapse | Increased emphasis on governance, diversification |
| Capability vs. Safety | Questions about funding organizations that advance AI capabilities | Moskovitz argues safety work requires frontier access |
| Neglecting Near-term Harms | Focus on existential risk over present AI harms | Coefficient Giving also funds bias research, misuse prevention |
The Concentration Problem
A key concern in the AI safety community is heavy dependence on a small number of funders. Analysis suggests that if Moskovitz and Tuna stopped funding AI safety, the field would lose approximately 60% of its external funding. This concentration creates risks:
- Research agendas may reflect donor preferences
- Organizations may self-censor to maintain funding
- Loss of major funder could collapse multiple organizations simultaneously
Coefficient Giving's 2025 rebrand and multi-donor fund structure explicitly aims to address this by attracting additional philanthropists.
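The ~60% concentration figure can be sanity-checked from the numbers cited earlier: reported 2024 spending of ≈$63.6M against an implied ≈$106M of total external funding. A sketch of that arithmetic, where the non-Coefficient remainder is an assumed stand-in rather than a verified total:

```python
# Funding-concentration sketch. The Coefficient Giving figure is the reported
# 2024 number; the "other_external" bucket is an assumed remainder implied by
# the ~60% share estimate, not a verified total.
funders = {
    "coefficient_giving": 63.6,   # reported 2024 AI safety grants, $M
    "other_external":     42.4,   # assumed remainder implied by the ~60% share
}

total = sum(funders.values())
top_share = funders["coefficient_giving"] / total

print(f"total external AI safety funding:    ${total:.1f}M")
print(f"largest-funder share:                {top_share:.0%}")
print(f"remaining field if top funder exits: ${total - funders['coefficient_giving']:.1f}M")
```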
Public Communications
Media Appearances
| Venue | Date | Topic |
|---|---|---|
| Tim Ferriss Show (#686) | August 2023 | AI risks, energy management, Asana |
| Stratechery Interview | 2025 | AI, SaaS, and Safety |
| CNBC | June 2023 | AI concerns and policy positions |
| Medium | Ongoing | "Works in Progress" blog |
Key Publications
- Works in Progress: The Long Journey to Doing Good Better (Medium)
- AI can make work more human (Asana blog)
- Giving Pledge Letter
External Links
- Dustin Moskovitz - Wikipedia
- Coefficient Giving (formerly Open Philanthropy)
- Good Ventures
- Giving Pledge Profile
- Asana
- Bloomberg Billionaires Index
References
- Dustin Moskovitz - Wikipedia
- Bloomberg Billionaires Index - Dustin Moskovitz
- Cari Tuna - Wikipedia
- Good Ventures - About Us
- Coefficient Giving - Wikipedia
- Open Philanthropy Is Now Coefficient Giving
- The Story Behind Our New Name - Coefficient Giving
- Four Lessons From $4 Billion in Impact-focused Giving - SSIR
- An Overview of the AI Safety Funding Situation - LessWrong
- Our Progress in 2024 and Plans for 2025 - Coefficient Giving
- Asana Announces CEO Succession Plan
- An Interview with Asana Founder Dustin Moskovitz - Stratechery
- Tim Ferriss Show #686 Transcript
- Asana's Dustin Moskovitz is bullish on AI but concerned about risks - CNBC
- Redwood Research - General Support 2023 - Coefficient Giving
- Center for AI Safety - General Support 2023 - Coefficient Giving
- METR (formerly ARC Evals) - Giving What We Can
- Giving Pledge - Dustin Moskovitz and Cari Tuna
Footnotes
1. The claim that Moskovitz became the world's youngest self-made billionaire in 2011 is widely reported but could not be verified against a primary source for this revision; treated as reported.
2. The $4 billion giving figure is cited in various profiles of Moskovitz and Cari Tuna but could not be verified against a primary source for this revision; treated as reported.
3. Giving Pledge profile: https://givingpledge.org/pledger?pledgerId=252
4. Harvard roommate and Facebook co-founding details are widely reported but could not be verified against a primary source for this revision; treated as reported.
5. The 2008 departure date from Facebook is widely reported but could not be verified against a primary source for this revision; treated as reported.
6. Asana's 2020 direct listing is widely reported but could not be verified against a primary source for this revision; treated as reported.
7. Center for AI Safety 2023 statement on AI risk: https://www.safe.ai/work/statement-on-ai-risk
8. The $336 million and 12% of $2.8 billion figures come from Coefficient Giving (formerly Open Philanthropy) annual giving reports; exact figures could not be verified against a primary source for this revision.
9. The $46 million 2023 AI safety spending figure is drawn from reported Coefficient Giving annual data; could not be verified against a primary source for this revision.
10. The $63.6 million 2024 figure and the 60% of external AI safety investment estimate are drawn from reported analyses; could not be verified against primary sources for this revision.
11. Wikipedia, "Dustin Moskovitz" (https://en.wikipedia.org/wiki/Dustin_Moskovitz)
12. Wikipedia, "Facebook" (https://en.wikipedia.org/wiki/Facebook)
13. Wikipedia, "Peter Thiel" (https://en.wikipedia.org/wiki/Peter_Thiel)
14. Wikipedia, "Asana (software)" (https://en.wikipedia.org/wiki/Asana_(software))
15. Asana Investor Relations, "Asana Direct Listing" (https://investors.asana.com)
16. Asana Newsroom press release, March 2025 (https://asana.com/press)