Longterm Wiki

Center for AI Standards and Innovation (CAISI)

government

Credibility Rating

5/5 (Gold)

Gold standard. Rigorous peer review, high editorial standards, and strong institutional reputation.

Rating inherited from publication venue: NIST

CAISI is the institutional home for NIST's AI safety and standards work, directly relevant to AI governance, evaluation frameworks, and policy efforts, and a key U.S. government body for understanding official AI safety infrastructure.

Metadata

Importance: 62/100

Summary

CAISI is NIST's dedicated center serving as the U.S. government's primary interface with industry on AI testing, security standards, and evaluation. It develops voluntary AI safety and security guidelines, conducts evaluations of AI capabilities posing national security risks (including cybersecurity and biosecurity threats), and represents U.S. interests in international AI standardization efforts.

Key Points

  • Serves as the primary U.S. government point of contact for industry collaboration on AI testing and security standards
  • Conducts unclassified evaluations of AI capabilities that pose national security risks, including cybersecurity and biosecurity threats
  • Develops voluntary standards and guidelines for AI system security in coordination with federal agencies
  • Assesses vulnerabilities in foreign AI systems to protect against adversarial AI threats
  • Represents U.S. interests in international AI standards bodies to maintain technological competitiveness

Cited by 7 pages

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 4 KB
Center for AI Standards and Innovation (CAISI) | NIST
https://www.nist.gov/caisi
The Center for AI Standards and Innovation (CAISI) will serve as industry's primary point of contact within the U.S. government to facilitate testing and collaborative research related to harnessing and securing the potential of commercial AI systems. To that end, CAISI will:

  • Work with NIST organizations to develop guidelines and best practices to measure and improve the security of AI systems, and work with NIST staff to assist industry in developing voluntary standards.
  • Establish voluntary agreements with private-sector AI developers and evaluators, and lead unclassified evaluations of AI capabilities that may pose risks to national security, focusing on demonstrable risks such as cybersecurity, biosecurity, and chemical weapons.
  • Lead evaluations and assessments of the capabilities of U.S. and adversary AI systems, the adoption of foreign AI systems, and the state of international AI competition.
  • Lead evaluations and assessments of potential security vulnerabilities and malign foreign influence arising from use of adversaries' AI systems, including the possibility of backdoors and other covert, malicious behavior.
  • Coordinate with other federal agencies and entities, including the Department of Defense, the Department of Energy, the Department of Homeland Security, the Office of Science and Technology Policy, and the Intelligence Community, to develop evaluation methods and to conduct evaluations and assessments.
  • Represent U.S. interests internationally to guard against burdensome and unnecessary regulation of American technologies by foreign governments, and collaborate with NIST staff to ensure U.S. dominance of international AI standards.

Read the statement from Secretary of Commerce Howard Lutnick about the Center for AI Standards and Innovation.

CAISI Research Blog
AI security red-teaming competitions – in which participants compete to develop new attacks against AI models and defenses – provide a unique way to assess how…

In December, CAISI published a write-up on how AI models can cheat on agentic evaluations, including lessons from our experience building and using AI-enabled…

Building gol
... (truncated, 4 KB total)
Resource ID: 84e0da6d5092e27d | Stable ID: ZjdiNWQzND