Why work at AI Impacts? - AI Impacts
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: AI Impacts
A 2022 blog post by AI Impacts founder Katja Grace articulating the strategic rationale behind AI Impacts' research-first approach, useful for understanding the organization's philosophy and how it situates itself within the broader AI safety ecosystem.
Metadata
Importance: 35/100 · homepage · commentary
Summary
Katja Grace explains why she considers AI Impacts a high-impact place to work, describing the organization's mission as a research library on AI futures and arguing that 'understanding the situation' around AI risk is currently more valuable on the margin than direct technical or governance interventions.
Key Points
- AI Impacts maintains a hierarchical library of best-guess answers to questions about AI futures, from high-level existential questions down to tractable sub-questions.
- Grace argues AI risk is a top cause area even under uncertainty, and that demonstrating it's not severe could redirect effort to other important problems.
- The core thesis is that 'understanding the situation' is more valuable on the margin than additional technical safety or governance intervention work.
- AI Impacts functions as a research group, blog, and community hub for researchers interested in forecasting and analyzing AI development trajectories.
- The post reflects a personal perspective from the founder rather than an official organizational statement.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Impacts | Organization | 53.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 7, 2026 · 24 KB
Why work at AI Impacts? – AI Impacts
Katja Grace, 6 March 2022
AI Impacts is beginning a serious hiring round (see here for job postings), so I’d like to explain a bit why it has been my own best guess at the highest impact place for me to work. (As in, this is a personal blog post by Katja on the AI Impacts blog, not some kind of officialesque missive from the organization.)
But first—
What is AI Impacts?
AI Impacts is a few things:
An online library of best-guess answers to questions about the future of AI. Including big questions, like ‘how likely is a sudden jump in AI progress at around human-level performance?’, and sub-questions informing those answers (‘are discontinuities common in technological trends?’), and sub-sub-questions (‘did penicillin cause any discontinuous changes in syphilis trends?’), and so on. Each page ideally has a high-level conclusion at the top, and reasoning supporting it below, which will often call on the conclusions of other pages. These form something like a set of trees, with important, hard, decision-relevant questions at the root and low-level, tractable, harder-to-use-on-their-own questions at the leaves. This isn’t super obvious at the moment, because a lot of the trees are very incomplete, but that’s the basic idea.
A research group focused on finding such answers, through a mixture of original research and gathering up that which has been researched by others.
A blog on these topics, for more opinionated takes, conversational guides to the research, updates, and other things that don’t fit in the main library (like this!).
A locus of events for people interested in this kind of research, e.g. dinners and workshops, a Slack with other researchers, online coffees.
Why think working on AI Impacts is among the best things to do?
1. AI risk looks like a top-notch cause area
It seems plausible to me that advanced AI poses a substantial risk to humanity’s survival. I don’t think this is clear, but I do think there’s enough evidence that it warrants a lot of attention. I hope to write more about this; see here for recent discussion. Furthermore, I don’t know of other similarly serious risks (see Ord’s The Precipice for a review), or of other intervention areas that look clearly more valuable than reducing existential risk to humanity.
I actually also think AI risk is a potentially high-impact area to work in (for a little while at least) even if AI isn’t a huge existential risk to humanity, because so many capable and well-intentioned people are dedicating themselves to it. Demonstrating that it wasn’t that bad could redirect mountains of valuable effort to real problems.
2. Understanding the situation beats intervening on the current margin
Within the area of mitigating AI risk, there are several broad classes of action being taken. Technical safety research focuses on
... (truncated, 24 KB total)
Resource ID: 3f27a8a39aa8dbd3 | Stable ID: sid_6nJfhTq0mx