AI Impacts

Type: Research organization
Founded: ≈2010s
Focus: Empirical analysis of AI timelines, risks, and human-level AI impacts
Key People: Katja Grace (Lead Researcher), Rick Korzekwa (Director)
Approach: Survey research, historical trend analysis, evidence synthesis
Community Ties: LessWrong, Effective Altruism
Official Website: aiimpacts.org

AI Impacts is a research organization dedicated to improving understanding of the likely impacts of human-level artificial intelligence (HLAI), particularly examining how contemporary choices affect long-term outcomes.1 The project organizes and presents considerations informing views on AI’s societal effects, identifies areas of disagreement among experts, and synthesizes empirical evidence through continuously revised research posts.1

The organization positions itself as addressing a gap in public discourse, which it characterizes as “highly fragmented and of limited credibility.”1 Rather than building AI systems or advocating for particular policy positions, AI Impacts focuses on assembling relevant evidence about AI progress, conducting surveys of AI researchers, and analyzing historical trends to inform estimates about AI timelines and risks.1 The project aims to help better estimate social returns on AI investment, highlight neglected research areas, inform policy decisions, and channel public interest toward credible analysis.1

AI Impacts has become a notable contributor to conversations about AI existential risk within the Effective Altruism and rationalist communities, conducting influential surveys of machine learning researchers about AI risk probabilities and timelines for human-level AI development.2

AI Impacts was founded in the 2010s by Katja Grace, who serves as the organization’s Lead Researcher.3 Grace’s motivation stemmed from a desire to organize empirical evidence on AI outcomes and provide more rigorous analysis of questions about AI’s future impacts.3 Her background spans philosophy, economics, and human ecology, with particular interests in anthropic reasoning, AI risk, and game theory.1

The organization began as a collection of research posts examining specific questions about AI development, each explicitly noting disagreements in the field and subject to ongoing revision as new evidence emerges.1 This approach reflected a commitment to presenting provisional conclusions that could be updated rather than definitive predictions.

In 2019, Rick Korzekwa joined AI Impacts as a researcher and later became Director.1 Korzekwa holds a PhD in physics from the University of Texas at Austin and focuses his research on AI progress relative to human performance, technological forecasting, and drawing lessons from historical cases of large-scale technological advances.1

AI Impacts conducts empirical research across several core areas related to artificial intelligence development and its potential consequences. The organization’s work centers on assembling data and evidence rather than building theoretical models or making confident predictions about the future.

A central focus involves surveying AI researchers about their expectations for when human-level AI might be developed. In the organization's 2022 survey of machine learning researchers, the median respondent assigned a 10% probability to existential catastrophe resulting from humanity's inability to control advanced AI, notably higher than the 5% median given when researchers were asked about extremely bad outcomes from AI in general.2 The survey also found that 69% of respondents supported prioritizing AI safety research more, a significant increase from 49% in 2016.2

These surveys have been widely cited in discussions about AI risk within the Effective Altruism community and have informed debates about how much attention and resources should be devoted to AI safety work.

AI Impacts examines historical patterns in AI development, including trends in compute scaling, performance improvements on benchmarks, and comparisons of AI capabilities to human performance across various tasks. The organization tracks metrics like progress in image recognition, game-playing, and other domains where AI systems can be directly compared to human achievement.3

This historical analysis aims to identify patterns that might inform expectations about future progress, though the organization emphasizes the provisional nature of such extrapolations given the potential for discontinuous advances.
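To make this kind of trend extrapolation concrete, the sketch below fits a log-linear trend to a series of benchmark scores and projects it forward. It is a minimal illustration only: the years, error rates, and the projected_error helper are invented for this example and are not drawn from AI Impacts' datasets.

```python
# Minimal sketch of log-linear trend extrapolation, of the kind used in
# historical analyses of AI progress. All numbers below are hypothetical
# placeholders, not AI Impacts data.
import numpy as np

# Hypothetical yearly benchmark error rates (e.g., image-recognition error, %).
years = np.array([2012, 2014, 2016, 2018, 2020, 2022])
error_rate = np.array([26.0, 12.0, 6.0, 3.0, 1.8, 1.2])

# Fit a straight line to log(error), i.e., assume roughly exponential improvement.
slope, intercept = np.polyfit(years, np.log(error_rate), deg=1)

def projected_error(year: int) -> float:
    """Extrapolate the fitted exponential trend to a future year."""
    return float(np.exp(slope * year + intercept))

for future_year in (2024, 2026, 2028):
    print(future_year, round(projected_error(future_year), 2))

# Caveat mirrored from the text: such extrapolations assume continuity and can
# fail badly if progress is discontinuous or the metric saturates.
```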

The organization positions AI risk as a top cause area due to what it characterizes as plausible existential threats from advanced AI.4 However, rather than focusing primarily on theoretical arguments about risk, AI Impacts emphasizes gathering empirical data about expert opinion, historical precedents, and measurable trends in AI capabilities.

Recent surveys conducted across AI safety and governance organizations have explored researcher views on various risk scenarios, contributing to broader understanding of how those working directly on these issues assess the severity and probability of different outcomes.5

AI Impacts occupies a distinctive niche within the AI safety ecosystem. While organizations like Anthropic and DeepMind focus on technical safety research, and others concentrate on governance and policy, AI Impacts emphasizes empirical analysis and evidence gathering.

The organization advocates for “understanding the situation” as a crucial intervention alongside technical safety work and governance efforts.4 This framing suggests that improving collective understanding of AI timelines, risks, and likely outcomes is itself valuable work that complements more direct attempts to make AI systems safer or establish governance frameworks.

Katja Grace’s connection to LessWrong—she blogs at “world spirit sock puppet,” a platform associated with the rationalist community—reflects AI Impacts’ close ties to the intellectual ecosystem examining AI existential risk.1 The organization’s work frequently appears in discussions on the EA Forum and LessWrong, where its surveys and analyses inform debates about AI timelines and risk levels.

AI Impacts’ work exists within a rapidly evolving landscape of AI safety research. Between 2017 and 2022, AI safety research grew by 315%, though it still comprises only approximately 2% of all AI research.6 Despite this small proportion, safety papers receive notably high citation rates—an average of 33 citations compared to 16 for general AI papers—suggesting significant interest in these questions among researchers.6

A 2024-2025 survey of 135 researchers at AI safety and governance organizations, which included AI Impacts, examined views on risk levels across the field.5 More broadly, recent surveys of 2,778 AI researchers found that nearly half estimate at least a 10% chance of catastrophic outcomes, including potential extinction, from advanced AI.7 This growing concern among researchers provides context for AI Impacts’ mission of rigorously analyzing these risks.

The Center for AI Safety, under director Dan Hendrycks, released a statement equating AI extinction risk to threats from pandemics and nuclear war, signed by researchers across multiple disciplines.8 A public poll found that 61% of US respondents view AI as a threat to humanity.8 Against this backdrop, AI Impacts’ emphasis on empirical analysis and evidence synthesis serves to ground discussions that might otherwise remain primarily theoretical.

Limitations and Methodological Considerations

While AI Impacts provides valuable empirical data, several limitations affect the organization’s work and the interpretation of its findings. Survey-based research on AI timelines faces inherent challenges: researchers may have limited ability to predict technological progress in their own field, and survey responses can vary significantly based on question framing.2

The organization’s 2022 survey produced median existential-risk estimates that split between 5% and 10% depending on how the question was framed, raising questions about whether survey methodology might inadvertently inflate such estimates.2 Additionally, the relatively small sample sizes in some surveys of AI researchers, particularly those focused on safety and governance organizations, mean that results may not fully represent the broader research community’s views.
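To make the aggregation issue concrete, the following sketch uses invented response distributions (not actual survey data) to show how the choice of summary statistic, and small shifts in the response distribution, can move headline figures like the 5% versus 10% medians mentioned above. The two "framings" and their response lists are hypothetical.

```python
# Hypothetical illustration of how summary statistics interact with skewed
# probability estimates. The response lists are invented, not survey data.
import statistics

# Two invented sets of responses to differently framed versions of an
# existential-risk question (probabilities in percent).
framing_a = [0, 0, 1, 2, 5, 5, 10, 20, 50, 90]
framing_b = [0, 1, 2, 5, 10, 10, 15, 25, 60, 95]

for label, responses in (("framing A", framing_a), ("framing B", framing_b)):
    print(
        label,
        "median =", statistics.median(responses),
        "mean =", round(statistics.mean(responses), 1),
    )

# Skewed responses pull the mean well above the median, and a modest shift in
# the response distribution is enough to move the median itself (here 5 vs 10).
```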

More fundamentally, AI Impacts faces the challenge that much relevant information about AI development timelines and risks remains uncertain or unknowable. The organization cannot access implicit knowledge about AI capabilities that exists within companies but is not publicly disclosed, nor can it predict potential breakthroughs or obstacles that might dramatically alter development trajectories.1 Historical precedents for technological progress may offer limited guidance for a technology as potentially transformative as human-level AI.

Several major questions remain about AI Impacts’ core research areas:

Timeline Accuracy: How accurately can surveys of AI researchers predict actual timelines for human-level AI development? Historical examples suggest expert predictions often prove unreliable, but it remains unclear whether improved methodologies can overcome these limitations.

Risk Estimation: The organization’s surveys find substantial variation in researcher estimates of existential risk from AI. Whether these probabilities reflect genuine uncertainty, differences in how risk is conceptualized, or limitations in survey methodology remains an open question.

Research Impact: To what extent does improved understanding of AI timelines and risks, as provided by AI Impacts, actually influence development priorities, funding decisions, or policy choices? The pathway from evidence synthesis to changed behavior is not straightforward.

Survey Representativeness: AI Impacts primarily surveys researchers already engaged with AI safety questions. How views among this subset compare to the broader AI research community, policymakers, or industry leaders who make key decisions about AI development remains uncertain.

Methodological Progress: Can empirical analysis of AI progress, expert surveys, and historical trend analysis significantly improve predictions about transformative AI, or are the key uncertainties fundamentally resistant to these methods?

  1. About - AI Impacts
  2. What do ML researchers think about AI in 2022? - AI Impacts
  3. About - AI Impacts
  4. Why work at AI Impacts? - AI Impacts
  5. AI Risk Surveys - AI Impacts Wiki
  6. State of Global AI Safety Research - ETO
  7. The Program - Catalyze Impact
  8. Press Release: AI Risk - Center for AI Safety