Longterm Wiki

AI Impacts is a research organization that conducts empirical analysis of AI timelines and risks through surveys and historical trend analysis, contributing data to AI safety discourse. While its work provides useful evidence synthesis and expert opinion surveys, it faces the inherent limitations of survey methodology, including response bias.

Key Metrics

Funding Rounds

$1.9M total raised

- Open Philanthropy General Support 2016 (2016): $32K
- Open Philanthropy General Support 2018 (2018): $100K
- Open Philanthropy General Support 2020 (2020): $50K
- SFF General Support 2021-H1 (Jaan Tallinn) (2021): $221K
- Open Philanthropy Expert Survey on Progress in AI (2022): $345K
- SFF General Support 2022-H2 (Jaan Tallinn) (2022): $546K
- Open Philanthropy General Support 2022 (2022): $620K

Total: $1.9M

Other Data

Entity Assessments
6 entries
| Dimension | Rating | Assessor |
| --- | --- | --- |
| approach | Survey research, historical trend analysis, evidence synthesis | editorial |
| community-ties | LessWrong, Effective Altruism | editorial |
| focus | Empirical analysis of AI timelines, risks, and human-level AI impacts | editorial |
| founded | ~2010s | editorial |
| key-people | Katja Grace (Lead Researcher), Rick Korzekwa (Director) | editorial |
| type | Research organization | editorial |

Related Wiki Pages

Top Related Pages

Analysis

- Capability-Alignment Race Model
- Deceptive Alignment Decomposition Model
- AI Safety Research Value Model

Policy

- Council of Europe Framework Convention on Artificial Intelligence
- Texas Responsible AI Governance Act (TRAIGA)

Other

- AI Control
- Dan Hendrycks
- Vidur Kapur

Organizations

- Anthropic
- Palisade Research
- Center for AI Safety (CAIS)
- Global Partnership on Artificial Intelligence (GPAI)
- LessWrong
- Google DeepMind

Key Debates

- The Case For AI Existential Risk
- The Case Against AI Existential Risk
- Is AI Existential Risk Real?