AI Safety Knowledge Base
A structured reference covering risks, technical approaches, governance, organizations, and key people shaping the future of AI safety.
Explore by topic
AI Safety
Governance
Recently updated
AI Misuse Risk Cruxes
Comprehensive analysis of AI misuse cruxes with quantified evidence across bioweapons (RAND bio study found no significant difference; novice uplif...
Bioweapons Attack Chain Model
Multiplicative attack chain model estimates catastrophic bioweapons probability at 0.02-3.6%, with state actors (3.0%) showing highest estimated ri...
AI Uplift Assessment Model
Quantitative assessment estimating AI provides modest knowledge uplift for bioweapons (1.0-1.2x per RAND 2024) but more substantial evasion capabil...
Dustin Moskovitz
Dustin Moskovitz and Cari Tuna have given $4B+ since 2011, with ~$336M (12% of total) directed to AI safety through Coefficient Giving (formerly ...
Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
California's SB 1047 required safety testing, shutdown capabilities, and third-party audits for AI models exceeding 10^26 FLOP or $100M training c...
Forecasting
Predictions and forecasts about AI development and safety
History
Timeline of AI safety as a field
Knowledge Base
Comprehensive documentation of AI safety risks, responses, organizations, and key debates
679 pages · Continuously updated