80,000 Hours: Updates to Our Research About AI Risk and Careers
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: 80,000 Hours
Data Status
Not fetched
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Safety Field Building Analysis | Approach | 65.0 |
| AI Safety Field Building and Community | Crux | 0.0 |
Cached Content Preview
HTTP 200 | Fetched Mar 7, 2026 | 9 KB
Updates to our research about AI risk and careers | 80,000 Hours
On this page:
Introduction
1. We now rank AI governance and policy at the top of our list of impactful career paths
2. New interview about California's AI bill
3. Catastrophic misuse of AI
4. Working at a frontier AI company: opportunities and downsides
5. Emerging approaches in AI governance
Learn more
This week, we’re sharing new updates on:
Top career paths for reducing risks from AI
An AI bill in California that’s getting a lot of attention
The potential for catastrophic misuse of advanced AI
Whether to work at frontier AI companies if you want to reduce catastrophic risks
The variety of approaches in AI governance
Here’s what’s new:
1. We now rank AI governance and policy at the top of our list of impactful career paths
It’s swapped places with AI technical safety research, which is now second.
Here are our reasons for the change:
Many experts in the field have been increasingly excited about “technical AI governance” — people using technical expertise to inform and shape policies. For example, people can develop sophisticated compute governance policies and norms around evaluating increasingly advanced AI models for dangerous capabilities.
We know of many people with technical talent and track records choosing to work in governance right now because they think it’s where they can make a bigger difference.
It’s become more clear that policy-shaping and governance positions within key AI organisations can play critical roles in how the technology progresses.
We’re seeing a particularly large increase in the number of roles available in AI governance and policy, and we’re excited to encourage even more people to get involved now. Governments also appear more poised to take action than they did just a few years ago.
AI governance is still a less developed field than AI safety technical research.
We now see clear industry pushback against efforts to create risk-reducing AI policy, so it’s plausible that more work is needed to advocate for sensible approaches.
Good AI governance will be needed to reduce a range of risks from AI — not just misalignment but also catastrophic misuse (discussed below), as well as emerging societal risks, like the potential suffering of digital minds or stable totalitarianism . It’s plausible (though highly uncertain) that these other risks could make up the majority of the potential bad outcomes in worlds with transformative AI.
As AI progress ac
... (truncated, 9 KB total)
Resource ID: 7d10a79dcca9750a | Stable ID: NWE5ZTliZG