Longterm Wiki

MATS Funding - Extruct


This Extruct-aggregated page covers funding details for the MATS program, relevant for researchers seeking financial support to enter or continue AI safety research careers.

Metadata

Importance: 35/100
Tags: homepage, reference

Summary

This page provides information about funding available through the ML Alignment Theory Scholars (MATS) program, which supports researchers working on AI safety and alignment. It likely outlines stipends, grants, or financial support structures for program participants pursuing technical AI safety research.

Key Points

  • MATS (ML Alignment Theory Scholars) is a research program focused on AI safety and alignment
  • The page details funding mechanisms available to MATS scholars and participants
  • Financial support helps enable researchers to pursue AI safety work full-time or part-time
  • MATS is a pipeline program aimed at growing the AI safety research talent base

Cited by 1 page

| Page | Type | Quality |
|------|------|---------|
| MATS (ML Alignment Theory Scholars program) | Organization | 60.0 |

Cached Content Preview

HTTP 200 | Fetched Apr 9, 2026 | 2 KB
ML Alignment & Theory Scholars Funding | Complete Analysis | Extruct AI

ML Alignment & Theory Scholars Analysis: $3M Raised

 What is ML Alignment & Theory Scholars?

 ML Alignment & Theory Scholars (MATS) Program connects scholars with mentors in AI alignment, governance, and security. Their unique approach combines research with educational seminars and community networking. MATS empowers researchers to address the urgent challenge of unaligned artificial intelligence.

Employees: 11-50 | Founded: 2021 | Industry: EdTech, AI/ML

Product Features & Capabilities

  • Research and educational seminars in AI alignment
  • Networking events with the AI alignment community
  • Workshops on research strategy
  • Mentorship from leading AI alignment researchers
  • Financial support for scholars
Use Cases

  • Conduct research on AI alignment challenges
  • Attend workshops and seminars on AI governance
  • Network with professionals in AI safety
  • Collaborate with mentors on research projects
  • Pursue independent research with funding support

How much has ML Alignment & Theory Scholars raised?

  • Grant, $1,008,127 (April 2022). Lead investor: Open Philanthropy
  • Grant, $1,538,000 (November 2022). Lead investor: Open Philanthropy
  • Grant, $428,942 (June 2023). Lead investor: Open Philanthropy

Other Considerations

  • Supported 357 scholars and 75 mentors since 2021
  • Received funding from notable organizations such as Open Philanthropy
  • Alumni have co-founded AI safety organizations

Resource ID: efc509be661efaa0 | Stable ID: sid_c7zWIayyFx