Skip to content
Longterm Wiki
Back

Year One of AI Safety Tokyo – Impact Certificate

web

This is an impact certificate on Manifund for the first year of AI Safety Tokyo, a community-building organization in Japan running study groups, public talks, and events to grow the AI safety field in Asia.

Metadata

Importance: 28/100 · other · primary source

Summary

AI Safety Tokyo is a special interest group launched in Japan to build an AI safety community from scratch, running 47 weekly study groups in its first year with 52 unique attendees from major tech and academic institutions. The founder is seeking retroactive funding via an impact certificate for activities funded out-of-pocket. The group also organized speaking engagements, hosted guest lecturers including Eliezer Yudkowsky, and co-organized an international technical AI safety conference.

Key Points

  • Ran 47 weekly AI safety study groups in year one, averaging 7 attendees, with 52 unique participants from Google, Amazon, University of Tokyo, RIKEN, and others.
  • Guest lecturers included Eliezer Yudkowsky, Colin Rowat, Nicky Pochinkov, and Stephen Fowler.
  • Delivered public talks at TEDxOtemachi, Shibaura Institute of Technology, and Meritas Asia on AI safety and LLM risks.
  • Co-organized the Technical AI Safety conference (TAIS 2024) in Tokyo with Noeon Research.
  • Seeking retroactive impact certificate funding on Manifund for community-building work done without prior grants.

Cached Content Preview

HTTP 200 · Fetched Apr 11, 2026 · 16 KB
Year one of AI Safety Tokyo | Manifund
The Wayback Machine - http://web.archive.org/web/20250515135756/https://manifund.org/projects/impact-certifica

 


Year one of AI Safety Tokyo

Technical AI safety

ACX Grants 2024

AI governance

EA community

Blaine William Rogers

Active

Impact certificate

$600 raised

$1,000 funding goal

$60,000 valuation

Longer description of your proposed project

AI Safety Tokyo (https://aisafety.tokyo) is a special interest group for AI Safety in Japan. We run reading groups, social events, and generally act to get people in Japan interested in and educated about safety to the point that they could make a career move. I started AI Safety Tokyo with the aim of building a safety community in Tokyo from zero. My highest hopes were to become the AI safety hub for Asia, finding talent and funnelling it to where it can do the most good.

This proposal is for an impact certificate for the activities of the first year of AI Safety Tokyo. I did not seek funding when starting the organization (visa issues, now resolved), instead funding the project out of pocket. I would now like to sell the impact in exchange for cold hard cash.

AI Safety Tokyo’s central activity is a multidisciplinary AI safety study group (benkyoukai). You can find a selection of past topics on our site: https://aisafety.tokyo/benkyoukai. In the first year we held 47 weekly study groups, with between 4 and 13 attendees, averaging 7. We had 52 unique attendees, 31 of whom attended more than once. Our attendees include professionals and academics from the Tokyo area (Google, Amazon, Rakuten, University of Tokyo, Shibaura Institute of Technology, RIKEN), independent safety researchers travelling through Tokyo, and others. We had four guest lecturers: Eliezer Yudkowsky, Colin Rowat, Nicky Pochinkov, and Stephen Fowler.

We had three speaking engagements this year:

- TEDxOtemachi: I gave a talk to the general public on the need for caution around large language models.

- Shibaura Institute of Technology: I gave a talk to undergrads on the mathematics behind large language models, touching on safety topics in the process.

- Meritas Asia: I gave a talk to legal professionals on intuitions behind generative AI, what applications are more or less risky, and how to mitigate robustness issues in large language models to use them effectively in your professional li

... (truncated, 16 KB total)
Resource ID: d6dff73d36b4f92e | Stable ID: sid_xl5qnE3hBw