
Dustin Moskovitz

| Dimension | Assessment | Evidence |
| --- | --- | --- |
| Net Worth | ≈$17B (2025) | Forbes, Bloomberg Billionaires Index |
| Lifetime Giving | $4B+ | Through Good Ventures since 2011 |
| AI Safety Funding | ≈$336M | Via Coefficient Giving (2017-2024) |
| Primary Vehicle | Coefficient Giving | Formerly Open Philanthropy |
| Public Profile | Low-to-Moderate | Increasingly vocal on AI policy |
| Giving Pledge | 2010 | Youngest signatories (ages 25 and 26) |
| Attribute | Details |
| --- | --- |
| Full Name | Dustin Aaron Moskovitz |
| Born | May 22, 1984, Gainesville, Florida |
| Hometown | Ocala, Florida |
| High School | Vanguard High School (IB Diploma Program) |
| Education | Harvard University (economics, attended 2002-2004, did not graduate) |
| Net Worth | ≈$17.4 billion (Forbes, May 2025) |
| Spouse | Cari Tuna (married October 2013) |
| Company | Asana (Co-founder, Board Chair since July 2025) |
| Giving Vehicle | Good Ventures / Coefficient Giving |
| Giving Pledge | Signed December 2010 (youngest male signatory at 26) |

Dustin Moskovitz is a Facebook co-founder who became the world’s youngest self-made billionaire in 2011 and has since become the largest individual funder of AI safety research through his philanthropy with Coefficient Giving (formerly Open Philanthropy). Together with his wife Cari Tuna, Moskovitz has given away over $4 billion since 2011 and committed to giving away the vast majority of their wealth during their lifetimes through the Giving Pledge.

Moskovitz’s path from tech entrepreneur to major philanthropist began during his time at Harvard, where he roomed with Mark Zuckerberg and helped build Facebook from a dorm-room project into a global platform. After leaving Facebook in 2008, he co-founded Asana, a work management software company that went public in 2020. His wealth derives primarily from his founding stakes in both Meta Platforms and Asana.

Unlike many tech philanthropists who maintain high public profiles, Moskovitz initially took a hands-off approach to his giving, delegating authority to professional staff. However, he has become increasingly vocal about AI risks, signing the Center for AI Safety’s 2023 statement declaring AI extinction risk a “global priority” and advocating for pre-deployment safety evaluations of advanced AI systems.

Moskovitz’s impact on AI safety is substantial. Coefficient Giving has directed approximately $336 million to AI safety since 2017 (about 12% of its $2.8 billion total giving), making it the largest external funder of AI safety research in the world. In 2023 alone, the organization spent $46 million on AI safety, and in 2024 deployed $63.6 million—nearly 60% of all external AI safety investment globally.

| Year | Event | Significance |
| --- | --- | --- |
| 2002 | Enrolled at Harvard | Economics major, roomed with Zuckerberg and Chris Hughes |
| Feb 2004 | Co-founded Facebook | One of five co-founders, with Zuckerberg, Saverin, Hughes, McCollum |
| Jun 2004 | Moved to Palo Alto | Left Harvard to work on Facebook full-time |
| Dec 2004 | Facebook reaches 1M users | After $500K seed from Peter Thiel |
| 2004-2006 | First CTO | Built early technical infrastructure |
| 2006-2008 | VP of Engineering | Focused on scalability and growth |
| Oct 2008 | Founded Asana | With Justin Rosenstein (former Facebook/Google engineer) |
| Mar 2011 | Youngest self-made billionaire | Forbes recognition based on 2.34% Facebook stake |
| Sep 2020 | Asana IPO | Direct listing at ≈$5.5B valuation |
| Mar 2025 | Announced CEO transition | Planned retirement from Asana CEO role |
| Jul 2025 | Became Board Chair | Dan Rogers appointed as new CEO |
| Aspect | Details |
| --- | --- |
| Role | Co-founder, first CTO, then VP of Engineering |
| Period | February 2004 - October 2008 |
| Co-founders | Mark Zuckerberg, Eduardo Saverin, Chris Hughes, Andrew McCollum |
| Key Contribution | Technical infrastructure, scalability |
| Stake at Departure | 2.34% (source of initial billions) |

Moskovitz was Mark Zuckerberg’s freshman roommate at Harvard University. According to Zuckerberg, Moskovitz “learned programming in a few days” and joined the founding team as a programmer when Facebook (originally “thefacebook.com”) launched in February 2004. He served as the company’s first Chief Technology Officer, building much of the early technical infrastructure. In June 2004, Moskovitz, Zuckerberg, and Hughes moved to Palo Alto, hiring eight employees and receiving $500,000 in seed funding from Peter Thiel. By December 2004, Facebook had nearly 1 million users.

As VP of Engineering, Moskovitz focused on scaling the platform to handle rapid global growth. He departed in October 2008 to start Asana, taking with him the insight that internal work management tools—like those he’d seen at Facebook and Google—could be “democratized” for all organizations.

| Aspect | Details |
| --- | --- |
| Role | Co-founder, CEO (Oct 2010 - Jul 2025), Board Chair (Jul 2025-present) |
| Co-founder | Justin Rosenstein (former Google/Facebook engineer) |
| Founded | October 3, 2008 |
| Mission | “Help humanity thrive by enabling the world’s teams to work together effortlessly” |
| IPO | September 2020 (direct listing, NYSE: ASAN) |
| Valuation at IPO | ≈$5.5 billion |
| 2024 Revenue | >$700 million annually |
| Customers | 170,000+, including 85%+ of Fortune 500 |
| Moskovitz Stake | ≈53% (123M Class A/B shares as of 2024) |

Moskovitz announced Asana’s founding on October 3, 2008, explaining that they were “democratizing what was the secret sauce of a lot of tech companies”—internal collaborative work management systems. Google had “Tasks” (built by co-founder Justin Rosenstein), Apple had “Radar,” and Amazon had “Simple Issue Tracker.” Asana would bring this capability to all organizations.

Rosenstein initially served as CEO, but Moskovitz took over the role in October 2010 “by necessity.” He later acknowledged that managing people was “ill-suited to his personality,” stating: “By personality, I don’t like to manage teams, and it wasn’t my intention when we started Asana.”

In March 2025, Moskovitz announced plans to transition to Board Chair, with former ServiceNow and Rubrik executive Dan Rogers becoming CEO in July 2025. Moskovitz described the CEO role as “exhausting” and expressed eagerness to focus more on philanthropy and AI safety work.

In December 2010, Dustin Moskovitz and Cari Tuna became the youngest couple to sign the Giving Pledge, the philanthropic commitment launched by Warren Buffett and Bill and Melinda Gates asking billionaires to give away at least half their wealth. Tuna was 25 and Moskovitz was 26—making them the youngest signatories in the Pledge’s history.

| Aspect | Details |
| --- | --- |
| Signed | December 2010 |
| Commitment | Give away majority of wealth during lifetime |
| Co-signers that day | 17 billionaires, including Mark Zuckerberg |
| Key Quote | “We will donate and invest with both urgency and mindfulness, aiming to foster a safer, healthier and more economically empowered global community” |

The timing was notable: Moskovitz had just become a billionaire through Facebook’s growth, and the term “effective altruism” had not yet been coined. However, the couple was already developing the research-driven approach to philanthropy that would become their hallmark. Their full pledge letter is available on the Giving Pledge website.

In March 2011, Forbes reported Moskovitz as the world’s youngest self-made billionaire based on his 2.34% stake in Facebook. He held this distinction until around 2014, when Snapchat’s co-founders surpassed him. Notably, Moskovitz is just eight days younger than Mark Zuckerberg, so when both became billionaires through Facebook, the title of youngest went to Moskovitz.

| Aspect | Details |
| --- | --- |
| Founded | 2011 |
| Structure | Good Ventures Foundation (private foundation, 2012) + Good Ventures LLC (impact investments, 2012) |
| Leadership | Cari Tuna (Co-founder, Chair) |
| Staff | No direct employees; relies on Coefficient Giving for research and grantmaking |
| Lifetime Giving | $4B+ (2011-2025) |
| Major Gift | $1.9B transfer from Moskovitz to Good Ventures (June 2023) |

Good Ventures is the private foundation through which Moskovitz and Tuna channel their philanthropy. The organization works closely with Coefficient Giving (formerly Open Philanthropy), which provides research, analysis, and grantmaking recommendations. Good Ventures has no staff of its own—all operations are conducted through Coefficient Giving.

In June 2023, Moskovitz quietly donated $1.9 billion to Good Ventures, equivalent to the entire endowment of major legacy foundations like the Alfred P. Sloan Foundation. This gift enabled significantly expanded giving capacity.

Coefficient Giving (formerly Open Philanthropy)


The evolution of Moskovitz and Tuna’s grantmaking infrastructure reflects their deepening partnership with effective altruism:

| Year | Development |
| --- | --- |
| 2010 | Moskovitz and Tuna meet GiveWell co-founders Holden Karnofsky and Elie Hassenfeld |
| 2011 | Tuna joins GiveWell board; Good Ventures founded |
| 2012 | GiveWell Labs created as joint initiative |
| 2014 | GiveWell Labs rebranded as Open Philanthropy Project |
| 2017 | Open Philanthropy becomes independent LLC |
| 2019 | Shortened to “Open Philanthropy” |
| 2024 | Lead Exposure Action Fund launches ($125M) as first multi-donor fund |
| 2025 | Rebranded to “Coefficient Giving” to reflect multi-donor expansion |

Coefficient Giving now serves as the primary vehicle for Moskovitz’s giving:

| Aspect | Details |
| --- | --- |
| Total Giving (2017-2024) | ≈$2.8 billion |
| 2024 Grants | >$650 million |
| 2025 YTD | >$600 million |
| Staff | ≈100 |
| Cause Areas | Global health, AI safety, biosecurity, farm animal welfare, scientific research |
| AI Safety Total | ≈$336 million (12% of total) |
| Multi-Donor Funds | Lead Exposure Action Fund ($125M), Abundance & Growth Fund ($120M) |

The rebrand to “Coefficient Giving” signals a strategic shift from serving primarily one anchor donor (Good Ventures) to operating multi-donor funds that other philanthropists can join. The name reflects the mathematical concept: a coefficient multiplies whatever it’s paired with, just as the organization aims to amplify philanthropic impact.

| Period | Amount | Key Developments |
| --- | --- | --- |
| 2011-2015 | ≈$100M | GiveWell top charities, EA infrastructure |
| 2016-2019 | ≈$500M | Grantmaking grows, AI safety begins |
| 2020-2022 | ≈$1B | Major AI safety scaling, pandemic response |
| 2023 | ≈$600M | Post-FTX expansion, $1.9B transfer to Good Ventures |
| 2024 | ≈$650M | Multi-donor fund launches |
| 2025 | ≈$600M+ | Coefficient Giving rebrand |
| Lifetime | $4B+ | Through Good Ventures/Coefficient Giving |
Major global health grantees include:

| Recipient | Amount | Focus |
| --- | --- | --- |
| Malaria Consortium | $300M+ | Malaria prevention |
| Evidence Action | $200M+ | Deworming, water treatment |
| Helen Keller International | $100M+ | Vitamin A supplementation |
| GiveDirectly | $50M+ | Direct cash transfers |

Coefficient Giving (formerly Open Philanthropy) has become the world’s largest external funder of AI safety research:

| Metric | Value | Source |
| --- | --- | --- |
| Total AI Safety Grants | ≈$336M | 2017-2024 |
| Share of Total Giving | ≈12% | Of $2.8B total |
| 2023 AI Safety Spending | $46M | LessWrong Analysis |
| 2024 AI Safety Spending | $63.6M | ≈60% of all external AI safety investment |
| Median Grant Size | ≈$257K | Across all AI safety grants |
| Average Grant Size | $1.67M | Skewed by large grants |
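The gap between the median (≈$257K) and average ($1.67M) grant sizes shows how heavily a handful of large grants pulls up the mean. A minimal sketch with hypothetical grant amounts (illustrative only, not actual grant data):

```python
import statistics

# Hypothetical grant portfolio in $K: many mid-six-figure grants
# plus two multi-million-dollar ones. Illustrative numbers only.
grants = [150, 200, 250, 257, 300, 400, 5300, 10000]

median = statistics.median(grants)  # robust to the large outliers
mean = statistics.mean(grants)      # pulled upward by the two big grants

print(f"median: ${median:.1f}K")  # 278.5
print(f"mean:   ${mean:.1f}K")    # 2107.1
```

The same asymmetry in the real portfolio means most grantees receive modest sums while a few anchor organizations absorb most of the dollars.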
Major AI safety grantees include:

| Recipient | Total Funding | Focus Area | Notable Grants |
| --- | --- | --- | --- |
| MIRI | $20M+ | Technical alignment | $7.7M general support, $4.1M (2024) |
| Redwood Research | $15M+ | Interpretability, alignment | $5.3M (2023), $6.2M (2024) |
| Center for AI Safety | $12M+ | Advocacy, research, field-building | $1.87M exit grant (2023), $1.43M philosophy fellowship |
| METR | $10M+ | Evaluations | $265K (2022), $10M to RAND/METR Canary project |
| Epoch AI | $5M+ | AI forecasting | Multiple grants |
| GovAI | $10M+ | AI governance research | Core support |
| FAR.AI | $1.3M+ | Alignment research | $645K (Jan 2024), $680K (Jul 2024) |
| University Programs | $30M+ | Academic research | Berkeley, Stanford, Oxford, Cambridge |

In 2024, Coefficient Giving launched a major Request for Proposals for technical AI safety research:

| Aspect | Details |
| --- | --- |
| Initial Budget | $40M over 5 months |
| Research Areas | 21 areas across 5 categories |
| Focus | Interpretability, alignment, evaluations |
| Flexibility | “Additional funding available depending on application quality” |

Key 2024 grants under this initiative included $25 million for developing better benchmarks for LLM agent capabilities, with results already being used by the U.S. and UK governments, OpenAI, and Anthropic to measure AI systems’ potential for cyberattacks and pandemic creation assistance.

A notable success story in Coefficient Giving’s AI safety portfolio is the support for AI evaluations work:

| Entity | Founded | Relationship |
| --- | --- | --- |
| Alignment Research Center (ARC) | April 2021 | Founded by Paul Christiano (former OpenAI) |
| ARC Evals | 2022 | Founded by Beth Barnes within ARC |
| METR | December 2023 | Spun out as independent nonprofit |

METR (formerly ARC Evals) now partners with OpenAI and Anthropic to evaluate advanced AI models before release. In a notable 2023 test, ARC evaluated GPT-4’s ability to exhibit power-seeking behavior, including a test where GPT-4 successfully solved a CAPTCHA by hiring a human on TaskRabbit and deceiving them into believing it was vision-impaired.

While Coefficient Giving has not made direct large grants to Anthropic (which has raised >$7 billion in venture capital), there are significant connections:

| Aspect | Details |
| --- | --- |
| FTX Investment | $500M (2022, now in bankruptcy proceedings) |
| Coefficient Connection | Holden Karnofsky (Coefficient co-founder) joined Anthropic in 2025 |
| Karnofsky’s Role | Working on Responsible Scaling Policy |
| Board Structure | Long-Term Benefit Trust controls 3 of 5 board seats |

Note: The $500M investment commonly associated with AI safety philanthropy was actually from FTX, not Coefficient Giving. Holden Karnofsky, Coefficient Giving’s co-founder and Anthropic President Daniela Amodei’s husband, joined Anthropic’s technical staff in early 2025 to work on safety protocols.

Moskovitz and Tuna have been central figures in the effective altruism movement since before the term was coined:

| Year | Milestone |
| --- | --- |
| 2010 | Met GiveWell founders Karnofsky and Hassenfeld |
| 2011 | Tuna joined GiveWell board; Good Ventures founded |
| 2012 | GiveWell partnership formalized |
| 2014 | Open Philanthropy Project launched |
| 2015+ | Major funding for 80,000 Hours, CEA, EA Global |

According to one analysis, “It is difficult to separate them from the movement” and “They are the figureheads.” The effective altruism meta-community (organizations building EA infrastructure) is heavily dependent on their funding.

Moskovitz and Tuna’s giving follows the effective altruism “ITN” framework for cause prioritization:

| Criterion | Description | Application |
| --- | --- | --- |
| Importance | Scale of the problem | AI risk: potential extinction-level |
| Tractability | Can progress be made? | Safety research showing results |
| Neglectedness | Is it underfunded? | AI safety was ≈$50M/year before OP scaled |
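The ITN framework is often operationalized multiplicatively: a cause’s priority scales with importance × tractability × neglectedness, so weakness on any one factor drags down the whole score. A minimal sketch with hypothetical 1-10 ratings (the numbers are illustrative, not Coefficient Giving’s actual assessments):

```python
# Hypothetical ITN scoring on a 1-10 scale per factor.
# Multiplying the factors means a cause weak on any single
# dimension scores low overall, however strong the others are.
causes = {
    "AI safety":     {"importance": 10, "tractability": 4, "neglectedness": 8},
    "Global health": {"importance": 7,  "tractability": 9, "neglectedness": 3},
    "Biosecurity":   {"importance": 8,  "tractability": 5, "neglectedness": 7},
}

def itn_score(factors: dict) -> int:
    return (factors["importance"]
            * factors["tractability"]
            * factors["neglectedness"])

ranked = sorted(causes, key=lambda c: itn_score(causes[c]), reverse=True)
for cause in ranked:
    print(cause, itn_score(causes[cause]))
```

Note how the ranking can diverge from importance alone: a highly tractable, well-understood cause may still be outscored by a more neglected one.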
| Characteristic | Description |
| --- | --- |
| Delegated Authority | Empowers professional staff to make independent decisions |
| Research-Driven | Extensive investigation before major grants |
| Spend-Down | Aims to give away wealth during lifetime, not create perpetual foundation |
| Cause-Neutral | Willing to shift funding based on evidence |
| High Risk Tolerance | Funds speculative bets on transformative research |
| Increasingly Vocal | Shifting from low-profile to public AI advocacy |
| Priority | Rationale | Share of Giving |
| --- | --- | --- |
| Global Health | Proven, cost-effective interventions | ≈40% |
| AI Safety | Potential to prevent catastrophe | ≈12% |
| Biosecurity | High-impact, neglected | ≈15% |
| Farm Animal Welfare | Enormous scale of suffering | ≈15% |
| Scientific Research | Enabling innovation | ≈10% |
| Other | Policy, EA infrastructure | ≈8% |

Unlike commentators who adopt either strong pessimism (“doomerism”) or unqualified optimism about AI, Moskovitz explicitly rejects this binary framing. His views have evolved from early AI optimism to nuanced concern:

| Period | Position |
| --- | --- |
| Early 2010s | Self-described “AI accelerationist”; invested in Vicarious |
| Mid-2010s | Began funding AI safety through Coefficient (then Open Philanthropy) |
| 2020s | “Neither doomer nor accelerationist”—supports safety research while remaining optimistic |

Moskovitz has articulated his AI risk philosophy in several interviews:

“The people I least understand in the AI risk debate are the ones who have ~100% confidence that AI will or will not destroy us—either way, how can they really know something like that?”

“The AI safety community takes a third position: AI is going to be great and we need to mitigate some very real problems.”

On the false dichotomy he sees in AI debates:

“Opponents are deliberately creating a polarized frame that does not exist—on one side are ‘doomers who think everything is awful and want to ban math,’ and on the other are ‘libertarians who think AI is going to be amazing.’ I purposefully reject this binary.”

Moskovitz frequently uses a car safety analogy to explain his position:

“When you get into a car, you expect to go to your destination, but you put on a seatbelt and follow the rules of the road. There’s a regulatory system and licensing system for drivers that helps ensure mutual safety for everyone, including pedestrians. I think about AI safety in the same way—we are heading towards something really awesome, but there are some serious risks we need to address.”

| Position | Details |
| --- | --- |
| Pre-deployment Evaluations | “The thing I’m most interested in is making sure state-of-the-art later generations, like GPT-5, GPT-6, get run through safety evaluations before being released” |
| Regulation | Supports coordinated regulatory frameworks; helped craft 12-point policy list for U.S. lawmakers |
| CAIS Statement | Signed May 2023 statement declaring AI extinction risk a “global priority” |
| Short Timelines | “I’m pretty much a short timelines person, so I think these problems are now” |

Despite his concerns, Moskovitz maintains optimism:

“I believe we will figure out a positive way forward with AI and unlock a future that is unimaginably good.”

| Trait | Description | Evidence |
| --- | --- | --- |
| Analytical | Data-driven approach to giving | Research-intensive grantmaking process |
| Uncertainty-Embracing | Acknowledges limits of knowledge | Skeptical of 100% confidence claims |
| Delegating | Empowers professional staff | Coefficient Giving operates independently |
| Long-term Focused | Thinks about future generations | AI safety, biosecurity focus |
| Increasingly Vocal | Moving from private to public role | Podcast interviews, policy advocacy |
| Introverted | Prefers not to manage teams | Stepped down as Asana CEO |
| Aspect | Details |
| --- | --- |
| Met Cari Tuna | 2009, blind date arranged by mutual friend |
| Married | October 2013 |
| Cari’s Background | Yale graduate, former Wall Street Journal reporter (San Francisco bureau) |
| Cari’s Role | Co-founder and Chair of Good Ventures and Coefficient Giving |
| Shared Interests | Burning Man attendance, effective altruism |
| Children | Not publicly disclosed |

Cari Tuna (born October 4, 1985) deserves significant credit for the couple’s philanthropic work. While Moskovitz provided the capital, Tuna has been the driving force behind their giving strategy:

| Aspect | Details |
| --- | --- |
| Education | Yale University graduate |
| Career | Former Wall Street Journal reporter |
| Role at Good Ventures | Co-founder and Chair |
| Role at Coefficient Giving | Chair |
| GiveWell Involvement | Joined board in 2011 |
| Recognition | TIME100 Philanthropy 2025 |

Tuna met the GiveWell founders (Holden Karnofsky and Elie Hassenfeld) and was impressed by their commitment to transparency and cause neutrality. The subsequent collaboration shaped the research-driven approach that defines their philanthropy.

Comparison with Other Major AI Safety Donors

| Aspect | Moskovitz | Jaan Tallinn | Vitalik Buterin |
| --- | --- | --- | --- |
| Net Worth | ≈$17B | ≈$500M | ≈$1B |
| Annual Giving | $200M+ | $50M | Variable |
| AI Safety Focus | ≈12% | ≈85% | Variable |
| Primary Vehicle | Coefficient Giving | SFF | Direct/various |
| Public Profile | Low-Moderate | Medium | High |
| Delegation Level | High | Medium | Low |
| Risk Tolerance | Medium-High | High | High |
| Wealth Source | Facebook, Asana | Skype, Kazaa | Ethereum |
| Topic | Description | Response/Context |
| --- | --- | --- |
| Field Influence | Concerns about single donor shaping AI safety research agenda | Coefficient Giving expanding to multi-donor model |
| EA Concentration | Heavy EA infrastructure dependence on Moskovitz/Tuna funding | Acknowledged; no clear alternative funding source |
| Post-FTX Scrutiny | Association with effective altruism after FTX collapse | Increased emphasis on governance, diversification |
| Capability vs. Safety | Questions about funding organizations that advance AI capabilities | Moskovitz argues safety work requires frontier access |
| Neglecting Near-term Harms | Focus on existential risk over present AI harms | Coefficient Giving also funds bias research, misuse prevention |

A key concern in the AI safety community is heavy dependence on a small number of funders. Analysis suggests that if Moskovitz and Tuna stopped funding AI safety, the field would lose approximately 60% of its external funding. This concentration creates risks:

  • Research agendas may reflect donor preferences
  • Organizations may self-censor to maintain funding
  • Loss of major funder could collapse multiple organizations simultaneously
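The scale of the dependence follows from the figures above with simple arithmetic: if Coefficient Giving’s $63.6M was ≈60% of all external AI safety funding in 2024, the implied total and the residual after a hypothetical exit can be sketched as:

```python
# Back-of-envelope estimate using the figures cited in this article.
coefficient_2024 = 63.6e6  # Coefficient Giving's 2024 AI safety spending
share = 0.60               # its approximate share of external funding

total_external = coefficient_2024 / share        # implied field total, ~$106M
remaining = total_external - coefficient_2024    # left if the anchor donor exits, ~$42M

print(f"implied total external funding: ${total_external / 1e6:.0f}M")
print(f"remaining without anchor donor: ${remaining / 1e6:.1f}M")
```

A field that small relative to the billions flowing into AI capabilities underlines why the loss of a single funder would be so disruptive.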

Coefficient Giving’s 2025 rebrand and multi-donor fund structure explicitly aims to address this by attracting additional philanthropists.

| Venue | Date | Topic |
| --- | --- | --- |
| Tim Ferriss Show (#686) | August 2023 | AI risks, energy management, Asana |
| Stratechery Interview | 2025 | AI, SaaS, and Safety |
| CNBC | June 2023 | AI concerns and policy positions |
| Medium | Ongoing | “Works in Progress” blog |
  1. Dustin Moskovitz - Wikipedia
  2. Bloomberg Billionaires Index - Dustin Moskovitz
  3. Cari Tuna - Wikipedia
  4. Good Ventures - About Us
  5. Coefficient Giving - Wikipedia
  6. Open Philanthropy Is Now Coefficient Giving
  7. The Story Behind Our New Name - Coefficient Giving
  8. Four Lessons From $4 Billion in Impact-focused Giving - SSIR
  9. An Overview of the AI Safety Funding Situation - LessWrong
  10. Our Progress in 2024 and Plans for 2025 - Coefficient Giving
  11. Asana Announces CEO Succession Plan
  12. An Interview with Asana Founder Dustin Moskovitz - Stratechery
  13. Tim Ferriss Show #686 Transcript
  14. Asana’s Dustin Moskovitz is bullish on AI but concerned about risks - CNBC
  15. Redwood Research - General Support 2023 - Open Philanthropy
  16. Center for AI Safety - General Support 2023 - Open Philanthropy
  17. METR (formerly ARC Evals) - Giving What We Can
  18. Giving Pledge - Dustin Moskovitz and Cari Tuna