Carnegie Endowment for International Peace

A reference entry on the Carnegie Endowment for International Peace covering its AI governance work, its relationship to the AI safety community, and its institutional limitations; useful as an organizational reference, but it offers little that is not already publicly known about this well-documented institution.


Quick Assessment

Type: Think Tank / Research Institution
Founded: 1910
Headquarters: Washington, D.C. (with global offices)
Focus Areas: International peace, democracy, geopolitics, technology policy, AI governance
Relevance to AI Safety: AI governance, international coordination, disinformation, emerging technology risks
Notable Program: Carnegie AI Program

Sources: Official Website (carnegieendowment.org); Wikipedia (en.wikipedia.org)

Overview

The Carnegie Endowment for International Peace is one of the oldest and most prominent international affairs think tanks in the United States. Founded in 1910 by industrialist and philanthropist Andrew Carnegie, the organization conducts research and analysis on issues including international diplomacy, democracy and the rule of law, geopolitical competition, and, increasingly, technology governance. It operates research centers in Washington, D.C., Brussels, Beirut, and New Delhi, among other locations, positioning itself as a genuinely global research institution rather than a purely American one; its Moscow center was closed by Russian authorities in 2022, with its Russia-focused work continuing through the Berlin-based Carnegie Russia Eurasia Center.

In recent years, the Endowment has expanded its attention to questions surrounding transformative AI and emerging technologies, producing work on AI governance frameworks, the geopolitics of AI competition between major powers, and the implications of AI for democratic institutions and international security. Its AI Program has become a notable voice in policy debates about how governments and international institutions should approach the regulation and governance of artificial intelligence.

The Carnegie Endowment occupies a mainstream foreign policy establishment position. It draws researchers from government, academia, and the private sector, and its work is oriented toward informing policymakers and diplomatic practitioners rather than primarily engaging the AI safety research community. This means its contributions tend to focus on governance, geopolitics, and institutional design rather than on technical alignment questions.

History

Andrew Carnegie established the Endowment in 1910 with an initial gift of $10 million, with the stated aim of hastening the abolition of international war. It was one of the first think tanks established in the United States and has a long history of engagement with multilateral institutions, arms control, and international law. Over the course of the twentieth century, the organization developed programs addressing nuclear nonproliferation, democratization, and global governance reform.

The Endowment's engagement with technology governance questions deepened significantly in the 2010s and 2020s as issues of cyber conflict, AI-driven disinformation, and great-power competition in emerging technologies became central to international security discourse. The creation of dedicated AI-focused programming reflects a broader institutional recognition that artificial intelligence represents a significant challenge to the international order the Endowment has historically sought to strengthen.

AI and Technology Governance Work

The Carnegie Endowment's work on AI governance sits at the intersection of its traditional foreign policy concerns—international stability, great-power competition, and institutional design—and newer questions about how advanced technologies should be governed. The Carnegie AI Program produces research on topics including AI standards and regulation, the U.S.-China technology competition, the use of AI in military applications, and the implications of algorithmic systems for democratic governance and accountability.

A consistent theme in Carnegie's AI work is the challenge of international coordination on AI governance. The institution has examined how existing international regimes and treaty structures might be adapted or extended to address AI risks, and has analyzed the obstacles to achieving meaningful multilateral agreements in an environment of strategic competition between major powers. This connects Carnegie's historical expertise in arms control and nonproliferation to contemporary questions about how powerful AI systems might be governed internationally.

Carnegie researchers have also produced analysis on AI disinformation and the implications of AI-generated content for democratic processes and media ecosystems. This work intersects with broader concerns in the AI safety community about the societal effects of capable language models and synthetic media, though Carnegie's framing tends to emphasize near-term political and institutional effects rather than longer-run existential or catastrophic risks.

The Endowment has engaged with international AI governance processes, including discussions around the International AI Safety Summit Series, and its researchers participate in policy forums where questions of AI risk, compute governance, and international compute regimes are debated. Organizations such as CSET, the Center for a New American Security, and the CSIS Wadhwani Center represent overlapping institutional communities working on similar questions from slightly different angles.

Relationship to the AI Safety Community

The Carnegie Endowment's relationship to the AI safety research community is indirect. The institution does not primarily engage with technical alignment research, and its researchers generally operate within foreign policy and political science frameworks rather than the machine learning or philosophy communities that anchor much AI safety work. Carnegie is more likely to cite work from international relations scholars, government officials, and legal experts than from researchers at organizations like the Alignment Research Center, Center for Human-Compatible AI, or Machine Intelligence Research Institute.

Nonetheless, Carnegie's work is relevant to the AI safety ecosystem in several ways. Questions of international coordination on AI governance—including how to prevent dangerous races to deploy insufficiently tested systems, how to establish shared norms around military AI applications, and how to build verification and monitoring regimes—are areas where Carnegie has genuine expertise and institutional credibility with the policymakers who would need to implement such measures. Organizations like the Simon Institute for Longterm Governance and Institute for AI Policy and Strategy work on adjacent questions with more explicit longtermist framings.

Criticisms and Limitations

As a mainstream foreign policy institution, Carnegie has been subject to critiques that apply broadly to the Washington think tank community. Critics from various directions have argued that established think tanks are too closely tied to government and donor interests to produce genuinely independent analysis, that their work tends toward incrementalism and status quo bias, and that their convening and credentialing functions can crowd out more heterodox perspectives.

From an AI safety standpoint, a distinct concern is that Carnegie's framing of AI risk prioritizes near-term geopolitical competition and democratic stability over the longer-horizon catastrophic and existential risks that motivate much of the AI safety research community. Carnegie's policy recommendations are oriented toward governance frameworks that manage competition between existing actors rather than toward structural interventions that might address scenarios involving highly autonomous or misaligned AI systems. Whether this framing reflects appropriate prioritization or insufficient engagement with tail risks is a matter of ongoing debate within and around the AI governance community.

Key Uncertainties

  • The extent to which Carnegie's policy-oriented AI governance work influences actual government and multilateral decision-making remains difficult to assess.
  • It is unclear how Carnegie's institutional framing will evolve as AI capabilities advance and as AI safety concerns gain broader recognition among policymakers.
  • Carnegie's ability to engage credibly across geopolitical lines (including with Chinese institutions) is a potential asset for international coordination work, though the depth and independence of those relationships is not fully transparent from public materials.

References

Official Website (carnegieendowment.org): The Carnegie Endowment for International Peace is a leading think tank conducting research and policy analysis on AI governance, international coordination, and the geopolitical dimensions of AI development. It examines how nations and institutions can manage the risks of advanced AI through international frameworks and policy mechanisms. ★★★★☆

Wikipedia (en.wikipedia.org): Wikipedia article describing the Carnegie Endowment for International Peace (CEIP), a nonpartisan international affairs think tank founded in 1910 by Andrew Carnegie. CEIP focuses on international cooperation, conflict reduction, and policy research including technology and international affairs. It has been ranked among the world's top think tanks and engages policymakers across the political spectrum. ★★★☆☆

Structured Data


Key People

  • Mariano-Florentino Cuellar: President
  • Andrew S. Weiss: James Family Chair; Vice President for Studies
  • Dan Baer: Senior Vice President for Policy Research; Director, Europe Program
  • Alison Markovitz: Chief Operating Officer
  • Alison Rausch: Vice President for Development
  • Corey Hinderstein: Vice President for Studies
  • Alexander Gabuev: Director, Carnegie Russia Eurasia Center
  • Evan A. Feigenbaum: Vice President for Studies
  • Lynne Sport: Chief Human Resources and Administrative Officer
  • Rosa Balfour: Director, Carnegie Europe
  • Dan Shenk-Evans: Chief Information Officer
  • Aiysha Kirmani Zafar: Chief Financial Officer
  • Frances Z. Brown: Vice President for Studies; Acting Director, Africa Program
  • Maha Yahya: Director, Malcolm H. Kerr Carnegie Middle East Center
  • Katelynn Vogt: Vice President for Communications
  • Marwan Muasher: Vice President for Studies
  • Damien Ma: Director, Carnegie China; Maurice R. Greenberg Director's Chair

All Facts

Organization
  • Headquarters: Washington, DC
  • Founded: 1910

Financial (by fiscal year)

Year    Revenue    Net Assets    Annual Expenses
2024    $87M       $526M         $53M
2023    $60M       $479M         $47M
2022    $76M       $474M         $40M
2021    $73M       $482M         $38M
2020    $57M       $346M         $39M
2019    $51M       $341M         $39M
2018    $58M       $332M         $37M
2017    $47M       $306M         $37M
2016    $47M       $274M         $35M
2015    $43M       $300M         $34M
2014    $27M       $298M         $33M
2013    $39M       $275M         $33M
2012    $27M       $239M         $30M

Other
  • Program: Technology and International Affairs Program. Co-directors: Jon Bateman and Arthur Nelson. ~15+ staff/fellows. Four areas: AI, Information Environment, Cybersecurity, Biotechnology. (As of Mar 2026)
  • Employee Count: 375 (2025)
  • Annual Revenue: $57,574,887 (2025)
  • Publication: AI Global Surveillance (AIGS) Index covering 176 countries, created by Steven Feldstein

Board Seats

31 seats, all held by trustees.

Publications

  • Beyond Open vs. Closed: Foundation AI Model Governance (report, Carnegie Endowment, July 2024, carnegieendowment.org)

Related Wiki Pages

Top Related Pages

Approaches
  • Intervention Evaluation for Political Stability

Organizations
  • Machine Intelligence Research Institute (MIRI)
  • CSET (Center for Security and Emerging Technology)
  • Center for Human-Compatible AI (CHAI)
  • Carnegie Endowment for International Peace AI Program
  • CSIS Wadhwani Center for AI and Advanced Technologies
  • Institute for AI Policy and Strategy

Concepts
  • Transformative AI
  • Governance-Focused Worldview