Longterm Wiki

Mithril Security – Confidential AI & Model Provenance Tools

Web: mithrilsecurity.io

Mithril Security is a company building confidential AI infrastructure tools, including cryptographic model provenance (AICert) and privacy-preserving LLM deployment (BlindLlama), relevant to AI safety through supply chain transparency and data confidentiality.

Metadata

Importance: 42/100 · homepage · tool

Summary

Mithril Security develops privacy-first AI infrastructure tools that use secure hardware (TPMs, secure enclaves) to provide cryptographic proof of model provenance and data confidentiality. Their flagship products include AICert for verifiable AI training provenance and BlindLlama for secure open-source LLM deployment. They are supported by the OpenAI Cybersecurity Grant Program.

Key Points

  • AICert creates cryptographic 'ID cards' for AI models, binding model hashes to training procedure hashes to prove provenance and detect tampering.
  • BlindLlama enables privacy-preserving LLM deployment using secure enclaves, ensuring user data is never exposed to the model provider.
  • Uses hardware-backed security (TPMs, secure enclaves) to simultaneously protect user privacy and developer IP, and to prevent misuse.
  • Open-source and independently audited, with support from OpenAI Cybersecurity Grant and collaboration with Future of Life Institute.
  • Addresses AI supply chain security risks including model poisoning (demonstrated via PoisonGPT experiment on Hugging Face).
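The tamper-detection idea behind the first and last points can be sketched as a simple hash check. This is an illustrative stand-in only: AICert's real attestation is produced by secure hardware and is more involved than a bare digest comparison.

```python
import hashlib


def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large model weights fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_model(path: str, published_hash: str) -> bool:
    """Compare downloaded weights against the hash published by the trainer.

    A mismatch means the weights were modified somewhere in the supply
    chain -- the scenario the PoisonGPT experiment demonstrated, where a
    tampered model was re-uploaded under a trusted-looking name.
    """
    return sha256_file(path) == published_hash
```

A consumer would fetch the published hash out-of-band (e.g., from the provider's signed release notes) rather than from the same place as the weights, otherwise an attacker could swap both.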

4 FactBase facts citing this source

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 5 KB
Mithril Security 

 

Bringing Transparency and Privacy in AI

Mithril Security helps AI providers build models users can trust, with a secure supply chain offering provenance traceability, model protection, and data confidentiality.

We are supported by the OpenAI Cybersecurity Grant Program to build confidential AI tooling. We cannot see your data.

 Privacy-first AI

AICert for verifiable training of AI models

AICert is the first AI provenance solution to provide cryptographic proof that a model results from applying a specific algorithm to a specific training set.

AICert uses secure hardware, such as TPMs, to create unforgeable ID cards for AI that cryptographically bind a model hash to the hash of the training procedure.

This ID card serves as irrefutable proof of a model's provenance, ensuring it comes from a trustworthy and unbiased training procedure.
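A minimal sketch of such an ID card, binding the model hash to the training-procedure hash, might look like the following. The structure is hypothetical: in AICert the binding is produced and attested inside secure hardware, whereas here it is just a combined digest.

```python
import hashlib
import json


def sha256_bytes(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def make_id_card(model_weights: bytes, training_procedure: dict) -> dict:
    """Bind a model hash to the hash of its training procedure.

    `training_procedure` stands in for whatever describes the run
    (code hash, dataset hash, hyperparameters). Serializing it with
    sorted keys makes the digest deterministic.
    """
    model_hash = sha256_bytes(model_weights)
    procedure_hash = sha256_bytes(
        json.dumps(training_procedure, sort_keys=True).encode()
    )
    # The binding digest ties the two hashes together; changing either
    # the weights or the claimed procedure invalidates it.
    binding = sha256_bytes((model_hash + procedure_hash).encode())
    return {
        "model_hash": model_hash,
        "training_procedure_hash": procedure_hash,
        "binding": binding,
    }


def check_id_card(card: dict, model_weights: bytes) -> bool:
    """Detect tampering by recomputing the model hash and the binding."""
    model_hash = sha256_bytes(model_weights)
    expected = sha256_bytes(
        (model_hash + card["training_procedure_hash"]).encode()
    )
    return card["model_hash"] == model_hash and card["binding"] == expected
```

The hardware's role, omitted here, is to sign the card with a key that never leaves the TPM, so the binding cannot be forged by whoever publishes the model.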

What our clients say

"Sometimes cryptographic techniques allow us to have our cake and eat it too. With hardware-backed compute governance we can hope to protect the privacy of AI users, the intellectual property of AI developers, and the public interest by preventing misuse — all at the same time. We are very excited to be working with the experts at Mithril to try to make this happen."

 - Anthony Aguirre, Executive Director of Future of Life Institute

"Mithril Security appears to offer a solution to the privacy issues that come with creating AI-assisted tools for clinicians in areas such as the NHS."

 - CEO of Stealth healthcare AI startup

"By using BlindBox, we can now leverage LLMs to help investigators and reviewers analyze documents and speed up investigations."

 - CEO of Avian Digital forensics

"Mithril Security provided a way to deploy our model on-premise while ensuring our IP was protected thanks to their secure enclaves"

 - Louis Combaldieu, CTO of Auxlia, editor of an AI to detect dangerous objects at airports

"Mithril Security confidential AI seemed one obvious choice to develop our Zero Trust search solution"

 - Thierry Leblond, CEO of Scille, editor of PARSEC, a zero trust & zero knowledge solution for sharing sensitive data on the cloud

BlindLlama to deploy large language models

 Effortless Open-Source LLM Integration with Secure, Transparent APIs and End-to-End Data Protection

Confidentiality: We serve AI models in a hardened environment that ensures data is never exposed, as all external access is removed.

Verifiability: We use secure hardware to provide cryptographic proof that your data will remain confidential.
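The verifiability claim can be illustrated with a client-side trust decision: before sending any data, the client compares the measurement reported by the serving stack against a published, audited value. This is a deliberately simplified stand-in for a real TPM quote, which is additionally signed by hardware keys; the constant and function names here are hypothetical.

```python
import hashlib

# Hypothetical published measurement of the audited serving stack;
# in a real deployment this would come from the open-source release
# and its independent audit, not be hardcoded like this.
EXPECTED_MEASUREMENT = hashlib.sha256(b"audited-serving-stack-v1").hexdigest()


def client_accepts(reported_measurement: str) -> bool:
    """Refuse to send data unless the server proves it runs the audited code.

    A real TPM quote is signed by hardware-held keys; this sketch keeps
    only the hash comparison to show where the trust decision happens.
    """
    return reported_measurement == EXPECTED_MEASUREMENT
```

The point of the hardware signature, dropped from this sketch, is that a compromised server cannot simply report the expected measurement while running different code.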

Why trust us?

 Open source

 Our solution is open-source and can be publicly audited. Audit report

 Our produc

... (truncated, 5 KB total)
Resource ID: kb-5bd45afc076a0e72 | Stable ID: sid_GQI9JbmAnc