AI Impacts
Quick Assessment
| Dimension | Assessment |
|---|---|
| Type | Research organization |
| Founded | ≈2010s |
| Focus | Empirical analysis of AI timelines, risks, and human-level AI impacts |
| Key People | Katja Grace (Lead Researcher), Rick Korzekwa (Director) |
| Approach | Survey research, historical trend analysis, evidence synthesis |
| Community Ties | LessWrong, Effective Altruism |
Key Links
| Source | Link |
|---|---|
| Official Website | aiimpacts.org |
Overview
AI Impacts is a research organization dedicated to improving understanding of the likely impacts of human-level artificial intelligence (HLAI), particularly examining how contemporary choices affect long-term outcomes.1 The project organizes and presents considerations informing views on AI’s societal effects, identifies areas of disagreement among experts, and synthesizes empirical evidence through continuously revised research posts.1
The organization positions itself as addressing a gap in public discourse, which it characterizes as “highly fragmented and of limited credibility.”1 Rather than building AI systems or advocating for particular policy positions, AI Impacts focuses on assembling relevant evidence about AI progress, conducting surveys of AI researchers, and analyzing historical trends to inform estimates about AI timelines and risks.1 The project aims to help better estimate social returns on AI investment, highlight neglected research areas, inform policy decisions, and channel public interest toward credible analysis.1
AI Impacts has become a notable contributor to conversations about AI existential risk within the Effective Altruism and rationalist communities, conducting influential surveys of machine learning researchers about AI risk probabilities and timelines for human-level AI development.2
History and Founding
AI Impacts was founded in the 2010s by Katja Grace, who serves as the organization’s Lead Researcher.3 Grace’s motivation stemmed from a desire to organize empirical evidence on AI outcomes and provide more rigorous analysis of questions about AI’s future impacts.3 Her background spans philosophy, economics, and human ecology, with particular interests in anthropic reasoning, AI risk, and game theory.1
The organization began as a collection of research posts examining specific questions about AI development, each explicitly noting disagreements in the field and subject to ongoing revision as new evidence emerges.1 This approach reflected a commitment to presenting provisional conclusions that could be updated rather than definitive predictions.
In 2019, Rick Korzekwa joined AI Impacts as a researcher and later became Director.1 Korzekwa holds a PhD in physics from the University of Texas at Austin and focuses his research on AI progress relative to human performance, technological forecasting, and drawing lessons from historical cases of large-scale technological advances.1
Research Focus and Methodology
AI Impacts conducts empirical research across several core areas related to artificial intelligence development and its potential consequences. The organization’s work centers on assembling data and evidence rather than building theoretical models or making confident predictions about the future.
AI Timelines and Forecasting
A central focus involves surveying AI researchers about their expectations for when human-level AI might be developed. In the organization’s 2022 survey of machine learning researchers, respondents provided a median estimate of 10% probability for existential risk from AI control failure—notably higher than their median estimate for overall human extinction risk from all causes.2 This survey also found that 69% of researchers supported prioritizing AI safety research more, a significant increase from 49% in 2016.2
These surveys have been widely cited in discussions about AI risk within the Effective Altruism community and have informed debates about how much attention and resources should be devoted to AI safety work.
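To make the aggregation behind such headline figures concrete, the sketch below computes a median probability estimate and the share of respondents favoring greater prioritization from a handful of hypothetical responses. The numbers and field names are invented for illustration and do not reflect AI Impacts’ actual data or survey instrument.

```python
from statistics import median

# Hypothetical responses: each respondent gives a probability of a very bad
# outcome from advanced AI and says whether safety research should be
# prioritized more. Values are illustrative, not real survey data.
responses = [
    {"p_bad_outcome": 0.02, "prioritize_more": True},
    {"p_bad_outcome": 0.10, "prioritize_more": True},
    {"p_bad_outcome": 0.50, "prioritize_more": False},
    {"p_bad_outcome": 0.05, "prioritize_more": True},
    {"p_bad_outcome": 0.01, "prioritize_more": False},
]

# Medians are a natural summary here because probability estimates tend to be
# heavily skewed; a few very high answers would dominate a simple mean.
median_estimate = median(r["p_bad_outcome"] for r in responses)
share_prioritize = sum(r["prioritize_more"] for r in responses) / len(responses)

print(f"median P(bad outcome): {median_estimate:.0%}")                 # -> 5%
print(f"share favoring more safety research: {share_prioritize:.0%}")  # -> 60%
```

Figures like the 10% median and the 69% prioritization share cited above are of these two kinds: a median over individual probability estimates and a simple proportion of respondents.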
Historical Trends and Benchmarks
AI Impacts examines historical patterns in AI development, including trends in compute scaling, performance improvements on benchmarks, and comparisons of AI capabilities to human performance across various tasks. The organization tracks metrics like progress in image recognition, game-playing, and other domains where AI systems can be directly compared to human achievement.3
This historical analysis aims to identify patterns that might inform expectations about future progress, though the organization emphasizes the provisional nature of such extrapolations given the potential for discontinuous advances.
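As a concrete, if simplified, illustration of this kind of extrapolation, the sketch below fits a log-linear trend to fabricated benchmark error rates and projects it forward. It is not AI Impacts’ actual model; as noted above, naive extrapolation of this sort assumes the historical trend continues and breaks down under discontinuous advances or plateaus.

```python
import math

# Fabricated benchmark error rates by year (illustrative only, not real data).
years  = [2014, 2016, 2018, 2020, 2022]
errors = [0.30, 0.15, 0.08, 0.04, 0.02]   # roughly halving every two years

# Ordinary least-squares fit of log(error) = a + b * year.
ys = [math.log(e) for e in errors]
n = len(years)
x_mean = sum(years) / n
y_mean = sum(ys) / n
b = sum((x - x_mean) * (y - y_mean) for x, y in zip(years, ys)) / \
    sum((x - x_mean) ** 2 for x in years)
a = y_mean - b * x_mean

def projected_error(year: int) -> float:
    """Extrapolate the fitted log-linear trend to a future year."""
    return math.exp(a + b * year)

# A baseline projection, useful mainly for framing expectations rather than
# as a prediction of what will actually happen.
for year in (2024, 2026):
    print(year, f"{projected_error(year):.3f}")
```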
Risk Analysis and Evidence Synthesis
The organization positions AI risk as a top cause area due to what it characterizes as plausible existential threats from advanced AI.4 However, rather than focusing primarily on theoretical arguments about risk, AI Impacts emphasizes gathering empirical data about expert opinion, historical precedents, and measurable trends in AI capabilities.
Recent surveys conducted across AI safety and governance organizations have explored researcher views on various risk scenarios, contributing to broader understanding of how those working directly on these issues assess the severity and probability of different outcomes.5
Position Within AI Safety Community
AI Impacts occupies a distinctive niche within the AI safety ecosystem. While organizations like Anthropic and DeepMind focus on technical safety research, and others concentrate on governance and policy, AI Impacts emphasizes empirical analysis and evidence gathering.
The organization advocates for “understanding the situation” as a crucial intervention alongside technical safety work and governance efforts.4 This framing suggests that improving collective understanding of AI timelines, risks, and likely outcomes is itself valuable work that complements more direct attempts to make AI systems safer or establish governance frameworks.
Katja Grace’s connection to LessWrong—she blogs at “world spirit sock puppet,” a platform associated with the rationalist community—reflects AI Impacts’ close ties to the intellectual ecosystem examining AI existential risk.1 The organization’s work frequently appears in discussions on the EA Forum and LessWrong, where its surveys and analyses inform debates about AI timelines and risk levels.
Research Context and Broader Trends
AI Impacts’ work exists within a rapidly evolving landscape of AI safety research. Between 2017 and 2022, AI safety research grew by 315%, though it still comprises only approximately 2% of all AI research.6 Despite this small proportion, safety papers receive notably high citation rates—an average of 33 citations compared to 16 for general AI papers—suggesting significant interest in these questions among researchers.6
A 2024-2025 survey of 135 researchers at AI safety and governance organizations, which included AI Impacts, examined views on risk levels across the field.5 More broadly, recent surveys of 2,778 AI researchers found that nearly half estimate at least a 10% chance of catastrophic outcomes, including potential extinction, from advanced AI.7 This growing concern among researchers provides context for AI Impacts’ mission of rigorously analyzing these risks.
The Center for AI Safety (CAIS), under director Dan Hendrycks, released a statement equating AI extinction risk to threats from pandemics and nuclear war, signed by researchers across multiple disciplines.8 A public poll found that 61% of US respondents view AI as a threat to humanity.8 Against this backdrop, AI Impacts’ emphasis on empirical analysis and evidence synthesis serves to ground discussions that might otherwise remain primarily theoretical.
Limitations and Methodological Considerations
While AI Impacts provides valuable empirical data, several limitations affect the organization’s work and the interpretation of its findings. Survey-based research on AI timelines faces inherent challenges: researchers may have limited ability to predict technological progress in their own field, and survey responses can vary significantly based on question framing.2
The organization’s 2022 survey yielded median existential-risk estimates that differed across questions, ranging from 5% to 10%, raising questions about whether survey methodologies might inadvertently inflate estimates of extremely low-probability outcomes.2 Additionally, the relatively small sample sizes in some surveys of AI researchers—particularly those focused on safety and governance organizations—mean that results may not fully represent the broader research community’s views.
More fundamentally, AI Impacts faces the challenge that much relevant information about AI development timelines and risks remains uncertain or unknowable. The organization cannot access implicit knowledge about AI capabilities that exists within companies but is not publicly disclosed, nor can it predict potential breakthroughs or obstacles that might dramatically alter development trajectories.1 Historical precedents for technological progress may offer limited guidance for a technology as potentially transformative as human-level AI.
Key Uncertainties
Several major questions remain about AI Impacts’ core research areas:
Timeline Accuracy: How accurately can surveys of AI researchers predict actual timelines for human-level AI development? Historical examples suggest expert predictions often prove unreliable, but it remains unclear whether improved methodologies can overcome these limitations.
Risk Estimation: The organization’s surveys find substantial variation in researcher estimates of existential risk from AI. Whether these probabilities reflect genuine uncertainty, differences in how risk is conceptualized, or limitations in survey methodology remains an open question.
Research Impact: To what extent does improved understanding of AI timelines and risks, as provided by AI Impacts, actually influence development priorities, funding decisions, or policy choices? The pathway from evidence synthesis to changed behavior is not straightforward.
Survey Representativeness: AI Impacts primarily surveys researchers already engaged with AI safety questions. How views among this subset compare to the broader AI research community, policymakers, or industry leaders who make key decisions about AI development remains uncertain.
Methodological Progress: Can empirical analysis of AI progress, expert surveys, and historical trend analysis significantly improve predictions about transformative AI, or are the key uncertainties fundamentally resistant to these methods?