Longterm Wiki
Updated 2026-03-13
Astralis Foundation

Astralis Foundation is a Swedish philanthropic organization focused on AI safety and governance, operating the Shared Horizons Fund and supporting initiatives like the International Dialogues on AI Safety (IDAIS) and the Nordics AI Safety Summit. Led by CEO Jona Glade and President Vilhelm Skoglund, it maintains offices in Stockholm, London, New York, and Vienna.

Type: Funder
Related organizations: Longview Philanthropy, Rethink Priorities, METR
Related people: Beth Barnes

Quick Assessment

| Dimension | Assessment | Evidence |
| --- | --- | --- |
| Scale | Emerging | Operates the Shared Horizons Fund; total grantmaking volume not publicly disclosed |
| Role | Funder + Convener | Funds AI safety initiatives, convenes multi-stakeholder dialogues |
| Focus | AI safety and governance | West-Asia AI governance bridges, European AI safety leadership, risk communication |
| Team | ≈6 core staff + advisors | Stockholm HQ; offices in London, New York, Vienna |
| Key People | Jona Glade (CEO), Vilhelm Skoglund (President) | Board connections to Longview Philanthropy and Rethink Priorities |

Organization Details

| Attribute | Details |
| --- | --- |
| Full Name | Astralis Foundation |
| Type | Philanthropic foundation |
| Leadership | Jona Glade (CEO), Vilhelm Skoglund (President) |
| Headquarters | Stockholm, Sweden |
| Other Offices | London, New York, Vienna |
| Key Programs | Shared Horizons Fund, support for IDAIS, Nordics AI Safety Summit |
| Funding Policy | Does not accept unsolicited funding requests |
| Website | astralisfoundation.org |

Overview

Astralis Foundation is a philanthropic organization with the stated vision of "a flourishing world with secure and beneficial AI for all."[1] The foundation unites funders, experts, and entrepreneurs to develop and expand interventions in AI safety and governance, providing funding, strategic guidance, and network access to steer AI development toward beneficial outcomes.

The foundation's operational approach emphasizes venture-style high-risk/high-reward philanthropic bets, multi-funder collaboration, evidence-based decision-making, and expert-driven strategy.[1] Astralis operates from four offices across Stockholm, London, New York, and Vienna, reflecting its global orientation.

Astralis has notable connections to the broader effective altruism and AI safety ecosystem. CEO Jona Glade serves on the board of Longview Philanthropy, and Senior Advisor Tim Shavers sits on the board of Rethink Priorities. Board member Beth Barnes founded METR (formerly ARC Evals), one of the leading AI model evaluation organizations.

Focus Areas

Astralis organizes its work around three primary focus areas:[1]

Building West-Asia Bridges

The foundation supports efforts to establish global governance structures for trustworthy AI innovation. Its flagship initiative in this area is support for the International Dialogues on AI Safety (IDAIS), which brings together AI scientists and policymakers from Western nations and China to discuss safety standards and cooperation. According to the foundation, IDAIS is now in its fourth session.[1]

European AI Safety Leadership

Astralis works to strengthen Europe's capacity for safe AI development while preventing catastrophic risks. In this area, the foundation supported the Langsikt Centre in producing policy recommendations for Norwegian decision-makers on AI safety and governance.[1]

AI Risk and Opportunity Communication

The foundation aims to inform stakeholders and decision-makers about AI progress and associated dangers. Astralis co-hosted the 2024 Nordics AI Safety Summit and the 2025 Nordic AI Retreat in Stockholm.[1][2]

Shared Horizons Fund

Astralis operates the Shared Horizons Fund, a multi-funder vehicle for AI safety grantmaking. The fund is led by Senior Advisor Tim Shavers.[3] Specific details about the fund's size, grantees, and disbursement strategy have not been publicly disclosed.

Leadership and Team

Jona Glade (CEO)

Glade provides strategic counsel on AI safety to decision-makers and leads both Astralis and cFactual, an organization focused on high-impact philanthropy. According to the foundation, Glade has led over 20 strategic initiatives focused on AI safety and high-impact philanthropy. He serves on the board of Longview Philanthropy and previously consulted at BCG and founded Consultants for Impact. He studied organizational psychology at the University of Vienna.[3]

Vilhelm Skoglund (President)

Skoglund leads funder relationships for the foundation. He co-founded Impact Academy, which prepares talent to address catastrophic risks, and serves as a board member of the FROS Foundation. He also runs a Swedish family office. His background includes law, developmental economics, and sustainability studies at Uppsala University and Cornell University.[3]

Senior Advisors

| Name | Role | Background |
| --- | --- | --- |
| Abraham Rowe | Senior Advisor | Former COO at Rethink Priorities (scaled from 10 to 100+ staff); founded Good Structures (operations consultancy); co-founded Wild Animal Initiative |
| Tim Shavers | Senior Advisor | Leads grantmaking and Shared Horizons Fund; nearly 20 years at McKinsey; 10+ years in venture capital; board member of Rethink Priorities |

Core Staff

| Name | Role | Background |
| --- | --- | --- |
| Henri Thunberg | Chief of Staff | Former Development Officer at Rethink Priorities; founded Bigheart and Ge Effektivt |
| Nathaniel Andreae | Executive Assistant & Operations | Research and writing experience at The Cricketer and The Times |

Board Members and Advisors

The foundation's board and advisory circle includes Simran Dhaliwal, Samuel Hargestam, Oliver Edholm, Beth Barnes (founder of METR), and Andreas Ehn (first CTO of Spotify).[3]

Ecosystem Connections

Astralis occupies a distinctive niche in the AI safety funding ecosystem, bridging Scandinavian philanthropic networks with the global AI safety community. Several team members have prior experience at Rethink Priorities (Abraham Rowe as COO, Tim Shavers as board member, Henri Thunberg as Development Officer), and CEO Jona Glade's board seat at Longview Philanthropy positions the foundation near major longtermist funding flows.

The foundation's focus on European and West-Asia governance bridges distinguishes it from US-centric AI safety funders. Its support for IDAIS and the Nordics AI Safety Summit suggests a strategy centered on international cooperation and policy influence rather than direct technical research funding.

Key Uncertainties

  • Grantmaking scale: Astralis has not publicly disclosed its total funding deployed or annual budget, making it difficult to assess its philanthropic weight relative to other AI safety funders.
  • Track record: As a relatively new entrant in the AI safety funding space, there is limited public information about the outcomes and impact of its grants.
  • Shared Horizons Fund specifics: The fund's size, grantee portfolio, and decision-making process are not publicly documented.
  • Relationship to cFactual: CEO Jona Glade leads both Astralis and cFactual, but the operational relationship between the two organizations is unclear.

Sources and Further Reading

Footnotes

  1. Claim reference cr-a3f0 (data unavailable — rebuild with wiki-server access)

  2. Longview Philanthropy wiki page - reference to Astralis Foundation as co-host of Nordic AI Retreat in Stockholm 2025.

  3. Claim reference cr-a2ec (data unavailable — rebuild with wiki-server access)

References

  1. Astralis Foundation - Our People, astralisfoundation.org (verified Feb 22, 2026)
     > "Tim leads our grantmaking and Shared Horizons Fund, while also advising Halcyon Ventures, our partner fund investing in impact-first for-profits."

  2. Astralis Foundation website, astralisfoundation.org (verified Feb 22, 2026)
     > "Our vision is a flourishing world with secure and beneficial AI for all. Mission Our mission is to help navigate transformative AI by uniting funders, experts and entrepreneurs to seed and scale high-impact interventions. We back exceptional people and ideas with the funding, strategic guidance, and networks they need to steer transformative AI toward beneficial outcomes."

