Omidyar Network is a philanthropic investment firm established in 2004 by eBay founder Pierre Omidyar and his wife, Pam Omidyar. The organization makes grants and investments to support responsible technology, digital infrastructure, and governance innovation. Focus areas include beneficial AI, platform accountability, and strengthening democratic institutions against technology-driven threats.
Approaches

- AI Safety Cases (Approach): Safety cases are structured arguments adapted from nuclear/aviation to justify AI system safety, with UK AISI publishing templates in 2024 and 3 of 4 frontier labs committing to implementation. Apo... (Quality: 91/100)
- AI Evaluation (Approach): Comprehensive overview of AI evaluation methods spanning dangerous capability assessment, safety properties, and deception detection, with categorized frameworks from industry (Anthropic Constituti... (Quality: 72/100)
- Third-Party Model Auditing (Approach): Third-party auditing organizations (METR, Apollo, UK/US AISIs) now evaluate all major frontier models pre-deployment, discovering that AI task horizons double every 7 months (GPT-5: 2h17m), 5/6 mod... (Quality: 64/100)
Analysis
- OpenAI Foundation Governance Paradox (Analysis): The OpenAI Foundation holds Class N shares giving it exclusive power to appoint/remove all OpenAI Group PBC board members. However, 7 of 8 Foundation board members also serve on the for-profit boar... (Quality: 75/100)
- Long-Term Benefit Trust (Anthropic) (Analysis): Anthropic's Long-Term Benefit Trust represents an innovative but potentially limited governance mechanism where financially disinterested trustees can appoint board members to balance public benefi... (Quality: 70/100)
- US Government Authority Over Commercial AI Infrastructure (Analysis): Surveys US legal authority (DPA, IEEPA, CLOUD Act, FISA 702) over $700B+ in commercial AI infrastructure concentrated in 5-6 companies, concluding the government has extensive but not unlimited pow... (Quality: 64/100)
Organizations
- US AI Safety Institute (Organization): The US AI Safety Institute (AISI), established November 2023 within NIST with $10M budget (FY2025 request $82.7M), conducted pre-deployment evaluations of frontier models through MOUs with OpenAI a... (Quality: 91/100)
- OpenAI (Organization): Comprehensive organizational profile of OpenAI documenting evolution from 2015 non-profit to Public Benefit Corporation, with detailed analysis of governance crisis, 2024-2025 ownership restructuri... (Quality: 62/100)
Risks
- AI Authoritarian Tools (Risk): Comprehensive analysis documenting AI-enabled authoritarian tools across surveillance (350M+ cameras in China analyzing 25.9M faces daily per district), censorship (22+ countries mandating AI conte... (Quality: 91/100)
- AI-Driven Institutional Decision Capture (Risk): Comprehensive analysis of how AI systems could capture institutional decision-making across healthcare, criminal justice, hiring, and governance through systematic biases. Documents 85% racial bias... (Quality: 73/100)
- Compute Concentration (Risk): All six major AI infrastructure spenders (Amazon, Alphabet, Microsoft, Meta, Oracle, xAI) are US companies subject to CLOUD Act and FISA 702, giving the US government effective legal access to the ... (Quality: 70/100)
- AI Development Racing Dynamics (Risk): Racing dynamics analysis shows competitive pressure has shortened safety evaluation timelines by 40-60% since ChatGPT's launch, with commercial labs reducing safety work from 12 weeks to 4-6 weeks... (Quality: 72/100)
- AI-Driven Concentration of Power (Risk): Documents how AI development is concentrating in ~20 organizations due to $100M+ compute costs, with 5 firms controlling 80%+ of cloud infrastructure and projections reaching $1-10B per model by 20... (Quality: 65/100)
Concepts
- EA Shareholder Diversification from Anthropic (Concept): The EA ecosystem faces extreme portfolio concentration risk with $27-76B in risk-adjusted capital tied to Anthropic stock. This page analyzes diversification strategies across three time horizons: ... (Quality: 60/100)
- Governance-Focused Worldview (Concept): This worldview argues governance/coordination is the bottleneck for AI safety (not just technical solutions), estimating 10-30% P(doom) by 2100. Evidence includes: compute export controls reduced H... (Quality: 67/100)
Historical
- International AI Safety Summit Series (Event): Three international AI safety summits (2023-2025) achieved first formal recognition of catastrophic AI risks from 28+ countries, established 10+ AI Safety Institutes with $100-400M combined budgets... (Quality: 63/100)
Key Debates
- Open vs Closed Source AI (Crux): Comprehensive analysis of open vs closed source AI debate, documenting that open model performance gap narrowed from 8% to 1.7% in 2024, with 1.2B+ Llama downloads by April 2025 and DeepSeek R1 dem... (Quality: 60/100)
- Government Regulation vs Industry Self-Governance (Crux): Comprehensive comparison of government regulation versus industry self-governance for AI, documenting that US federal AI regulations doubled to 59 in 2024 while industry lobbying surged 141% to 648... (Quality: 54/100)
Other
- Dustin Moskovitz (Person): Dustin Moskovitz and Cari Tuna have given $4B+ since 2011, with ~$336M (12% of total) directed to AI safety through Coefficient Giving (formerly Open Philanthropy), making them the largest individu... (Quality: 49/100)
- Yoshua Bengio (Person): Comprehensive biographical overview of Yoshua Bengio's transition from deep learning pioneer (Turing Award 2018) to AI safety advocate, documenting his 2020 pivot at Mila toward safety research, co... (Quality: 39/100)