FlexHEG (Flexible Hardware-Enabled Guarantees)

Status: active

Research project led by Yoshua Bengio developing open-source hardware-enabled guarantees for AI governance. It proposes a "Guarantee Processor" that monitors accelerator usage and verifies compliance, combined with a "Secure Enclosure" providing physical tamper protection. Funded with $4.1M from the Survival and Flourishing Fund (2024). The project has published an interim report (September 2024), a technical options paper (arXiv:2506.03409), and a paper on international security applications (arXiv:2506.15100). The interlock-based design gives the Guarantee Processor direct access to the accelerator data path; integrated hardware deployment is estimated at 3.7-7.9 years.
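The published FlexHEG materials describe the interlock at an architectural level rather than as a concrete interface, so the Rust sketch below is only illustrative: the Policy and Telemetry types, their fields, and interlock_allows are hypothetical names standing in for whatever compliance rules and data-path signals a real Guarantee Processor would use. What it shows is the interlock idea itself: workloads reach the accelerator only if every policy check against the observed data path passes.

```rust
/// Usage policy the Guarantee Processor enforces (fields are illustrative).
struct Policy {
    max_flop_budget: u64,       // total compute the license authorizes
    allowed_firmware: [u8; 32], // hash of the approved accelerator firmware
}

/// Signals the processor reads directly off the accelerator data path.
struct Telemetry {
    flops_used: u64,
    firmware_hash: [u8; 32],
}

/// Interlock decision: forward work to the accelerator only if every
/// check passes; otherwise the data path stays blocked.
fn interlock_allows(policy: &Policy, t: &Telemetry) -> bool {
    t.firmware_hash == policy.allowed_firmware
        && t.flops_used < policy.max_flop_budget
}

fn main() {
    let policy = Policy {
        max_flop_budget: 1_000_000,
        allowed_firmware: [0xAB; 32],
    };
    let telemetry = Telemetry {
        flops_used: 250_000,
        firmware_hash: [0xAB; 32],
    };
    println!("forward workload: {}", interlock_allows(&policy, &telemetry));
}
```

The design choice the sketch highlights is that the check sits in-line on the data path (an interlock) rather than in a side-channel monitor, so non-compliant workloads are blocked rather than merely logged.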

Organizations

CHIPS Alliance: Open-source hardware development organization under the Linux Foundation. Hosts the Caliptra project (an open-source silicon Root of Trust) with contributors including AMD, Google, Microsoft, NVIDIA, and Samsung. Develops open standards for chip security and interoperability.
Epoch AI: A research organization dedicated to producing rigorous, data-driven forecasts and analysis of artificial intelligence progress, with particular focus on compute trends, training datasets, algorithmic efficiency, and AI timelines.

People

Yoshua Bengio: A pioneer of deep learning who shared the 2018 Turing Award with Geoffrey Hinton and Yann LeCun for their foundational work on neural networks. As Scientific Director of Mila, the Quebec AI Institute, he leads one of the world's largest academic AI research centers. His technical contributions include fundamental work on neural network optimization, recurrent networks, and attention mechanisms. In recent years, Bengio has increasingly focused on AI safety and governance. He was an early signatory of the 2023 Statement on AI Risk and has become a prominent voice arguing that frontier AI development requires more caution and oversight. His concerns span both near-term harms (misinformation, job displacement) and longer-term risks from systems that might become difficult to control. Unlike some AI researchers who dismiss existential risk concerns, Bengio has engaged seriously with these arguments. His research agenda has evolved to include safety-relevant directions such as causal representation learning, which could help AI systems develop more robust and generalizable understanding of the world. He has advocated for international governance mechanisms for AI, including proposals for compute governance and safety standards. His position as one of the founding figures of modern AI gives his safety advocacy significant weight with policymakers and the broader research community.
Aaron Scher: Researcher at MIRI leading SPAR (Supervised Program for Alignment Research) projects on hardware-based verification. These assess whether existing chip security features can support treaty-grade verification, examine interconnect limits for networking equipment, and evaluate the feasibility of a US-China bilateral AI agreement.

Related Projects

Caliptra: Open-source silicon Root of Trust (RoT) security subsystem designed for integration into SoCs. Provides identity, measured boot, and attestation capabilities. Founded by Microsoft, AMD, Google, and NVIDIA in 2022 under the CHIPS Alliance. Version 2.1 adds quantum-resilient cryptography (ML-DSA for post-quantum signatures, ML-KEM for key exchange). Roughly 1.6M logic gates with dual RISC-V cores. Potentially the most promising cross-vendor foundation for hardware-enabled AI governance; a toy sketch of its measured-boot pattern follows below.
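To make the measured-boot capability concrete, here is a minimal Rust sketch of the pattern a silicon Root of Trust like Caliptra implements: each boot stage is hashed into a running measurement before it executes, and the final value is what an attestation report would sign. This is an illustration only: std's DefaultHasher stands in for the real SHA-384 engine, ML-DSA signing is omitted, and the extend function and stage names are hypothetical rather than Caliptra's actual API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Fold the next boot stage into the running measurement, mirroring the
/// extend-style register a silicon Root of Trust maintains (toy hash only).
fn extend(measurement: u64, stage_image: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    measurement.hash(&mut h);
    stage_image.hash(&mut h);
    h.finish()
}

fn main() {
    // Illustrative stage images; "fmc" stands for First Mutable Code,
    // the first updatable firmware stage in Caliptra's boot flow.
    let stages: [&[u8]; 3] = [b"rom", b"fmc", b"runtime-firmware"];

    let mut measurement = 0u64;
    for stage in stages {
        // Measure each stage *before* it runs; any tampered image
        // changes every subsequent measurement.
        measurement = extend(measurement, stage);
    }

    // A verifier compares this value against a known-good ("golden")
    // measurement before trusting the device's signed attestation.
    println!("final measurement: {measurement:016x}");
}
```

Because each extension folds in the previous value, a verifier who trusts the immutable first stage can detect modification of any later stage from the final measurement alone.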

Related Wiki Pages

Top Related Pages

Concepts

International Compute Regimes
Compute Thresholds
Governance-Focused Worldview

Risks

AI-Enabled Authoritarian Takeover
AI-Enabled Biological Risks
AI Proliferation
AI-Driven Trust Decline

Organizations

Epoch AI

Approaches

Compute Monitoring
AI Governance Coordination Technologies
AI-Era Epistemic Infrastructure
Open Source AI Safety
AI Content Authentication

Key Debates

AI Safety Solution Cruxes
Open vs Closed Source AI
AI Risk Critical Uncertainties Model

Analysis

Authentication Collapse Timeline Model
Caliptra

Policy

AI Safety Institutes (AISIs)

Other

Aaron Scher