
How MATS addresses “mass movement building” concerns


Author: Ryan Kidd

Credibility Rating: 3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: LessWrong

Relevant to debates about optimal AI safety field-building strategy, talent pipelines, and whether growing the safety researcher base risks accelerating capabilities or diluting research quality.

Forum Post Details

Karma: 63
Comments: 9
Forum: LessWrong
Forum Tags: MATS Program, AI, Community

Metadata

Importance: 38/100 | blog post | commentary

Summary

This post defends MATS (ML Alignment Theory Scholars) against criticisms that AI safety movement-building programs grow the field too rapidly, risk producing more researchers than the field can absorb, or inadvertently accelerate AI capabilities. MATS argues that its recruitment targets already safety-motivated individuals, that most of its scholars would enter AI/ML regardless, and that a marginal safety researcher provides significant net benefit compared with the same person working in capabilities.

Key Points

  • Most MATS scholars would pursue AI/ML careers regardless, so the program redirects existing talent rather than creating new AI labor supply.
  • Recruitment focuses on EA-adjacent, safety-motivated individuals rather than drawing in capability-focused researchers.
  • The program is deliberately less financially attractive than industry alternatives, which filters for genuine commitment to safety.
  • MATS estimates one safety researcher offsets 5-10 capabilities researchers in terms of net impact on AI risk.
  • Alumni-founded organizations and expected ecosystem growth are cited as solutions to concerns about job scarcity for graduates.

Cited by 1 page

Page: MATS ML Alignment Theory Scholars program | Type: Organization | Quality: 60.0

Cached Content Preview

HTTP 200 | Fetched Apr 10, 2026 | 7 KB
# How MATS addresses “mass movement building” concerns
By Ryan Kidd
Published: 2023-05-04
Recently, many AI safety movement-building programs have been criticized for attempting to grow the field too rapidly and thus:

1.  Producing more aspiring alignment researchers than there are jobs or training pipelines;
2.  Driving the wheel of AI hype and progress by encouraging talent that ends up furthering capabilities;
3.  Unnecessarily diluting the field’s epistemics by introducing too many naive or overly deferent viewpoints.

At [MATS](https://serimats.org), we think these are real and important concerns, and we support efforts to mitigate them. Here is how we currently address them.

Claim 1: There are not enough jobs/funding for all alumni to get hired/otherwise contribute to alignment
--------------------------------------------------------------------------------------------------------

How we address this:

*   Some of our alumni’s projects are attracting funding and hiring further researchers. Three of our alumni have started alignment teams/organizations that absorb talent (Vivek’s MIRI team, [Leap Labs](https://www.lesswrong.com/posts/Q44QjdtKtSoqRKgRe/introducing-leap-labs-an-ai-interpretability-startup), Apollo Research), and more are planned (e.g., a Paris alignment hub).
*   With the elevated interest in AI and alignment, we expect more organizations and [funders](https://nonlinearnetwork.org/) to enter the ecosystem. We believe it is important to install competent, aligned safety researchers at new organizations early, and our program is positioned to help capture and upskill interested talent.
*   Sometimes, it is hard to distinguish truly promising researchers in two months, hence our four-month extension program. We likely provide more benefits through accelerating researchers than can be seen in the immediate hiring of alumni.
*   Alumni who return to academia or industry are still a success for the program if they do more alignment-relevant work or acquire skills for later hiring into alignment roles.

Claim 2: Our program gets more people working in AI/ML who would not otherwise be doing so, and this is bad as it furthers capabilities research and AI hype
------------------------------------------------------------------------------------------------------------------------------------------------------------

How we address this:

*   Considering that the median MATS scholar is a Ph.D./Masters student in ML, CS, maths, or physics and only 10% are undergrads, we believe most of our scholars would have ended up working in AI/ML regardless of their involvement with the program. In general, mentors select highly technically capable scholars who are already involved in AI/ML; scholars outside this profile are outliers.
*   Our outreach and selection processes are designed to attract applicants who are motivated by reducing global catastrophic risk from AI. We principally advertise via word-of-mouth, AI safety Slack workspaces, AGI Safety Fundamentals and 80,000

... (truncated, 7 KB total)
Resource ID: d236b5655e33e109 | Stable ID: sid_kVE949nZtJ