Longterm Wiki

Ryan Kidd - TAIS 2024

web

Speaker profile from the TAIS 2024 conference, covering Ryan Kidd's background and his talk on AI safety field-building at MATS. TAIS conferences bring together technical AI safety researchers to share current work.

Metadata

Importance: 25/100 · conference paper · reference

Summary

This is the speaker profile page for Ryan Kidd at the Technical AI Safety (TAIS) 2024 conference. Kidd is Co-Director of the ML Alignment & Theory Scholars (MATS) Program and a Board Member and Co-Founder of the London Initiative for Safe AI. His talk, "Insights from two years of AI safety field-building at MATS", summarized lessons from selecting and developing AI safety research talent.

Key Points

  • Ryan Kidd spoke at the TAIS 2024 (Technical AI Safety) conference on Friday, April 5th
  • TAIS 2024 is a conference focused on technical approaches to AI safety research
  • Kidd has been Co-Director of MATS since early 2022 and a Board Member and Co-Founder of the London Initiative for Safe AI since early 2023
  • His talk covered MATS's five seasonal programs, which have supported 213 scholars and 47 mentors

Cited by 1 page

Page | Type | Quality
MATS ML Alignment Theory Scholars program | Organization | 60.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 2 KB
Ryan Kidd – Technical AI Safety Conference
Ryan Kidd
キッド・ライアン (Kidd Ryan)

ML Alignment & Theory Scholars Program (MATS)

Ryan is Co-Director of the ML Alignment & Theory Scholars Program (since early 2022) and a Board Member and Co-Founder of the London Initiative for Safe AI (since early 2023). Previously, he completed a PhD in Physics at the University of Queensland.

Insights from two years of AI safety field-building at MATS

Friday, April 5th, 10:00–10:30
The ML Alignment & Theory Scholars (MATS) Program is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment and connect them with AI safety research communities in the SF Bay Area and London. Since early 2022, MATS has run five seasonal programs, supporting 213 scholars and 47 mentors, and alumni have joined nearly every major AI safety initiative (and founded several new ones). This talk will summarize our insights into selecting and developing AI safety research talent and our plans for future projects.
Resource ID: bf3e9e701a226d04 | Stable ID: sid_kHmJuSR9NX