IAPS 2025 Year in Review — Institute for AI Policy and Strategy
Credibility Rating
4/5 — High. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Institute for AI Policy and Strategy
IAPS (Institute for AI Policy and Strategy) is a think tank focused on AI governance; this annual review summarizes their 2025 work, useful for tracking policy-side AI safety institutional activity.
Metadata
Importance: 35/100 · organizational report · news
Summary
This page appears to be the Institute for AI Policy and Strategy's (IAPS) annual review for 2025, summarizing the organization's key activities, research outputs, and milestones in AI governance and policy work over the year. IAPS focuses on bridging technical AI safety research with policy recommendations for governments and international bodies.
Key Points
- IAPS is a policy-focused think tank working at the intersection of AI safety and governance
- The year-in-review likely covers major research publications, policy engagements, and institutional growth in 2025
- IAPS work typically spans compute governance, frontier AI regulation, and international AI coordination
- The organization contributes analysis to inform government and multilateral AI policy decisions
1 FactBase fact citing this source
| Entity | Property | Value | As Of |
|---|---|---|---|
| Institute for AI Policy and Strategy | media-coverage | 40+ features in the Economist, NYT, Real Clear Politics, TIME in 2025 | 2025 |
Cached Content Preview
HTTP 200 · Fetched Apr 7, 2026 · 7 KB
IAPS 2025 Year in Review — Institute for AI Policy and Strategy
IAPS 2025 Year in Review
Dec 31
Written By Institute for AI Policy and Strategy
Dear Friends,
2025 has been an extraordinary year for AI—in technology, policy, and for IAPS.
We started with DeepSeek's R1 shaking up the conversation in January and ended with Gemini 3, GPT-5.2, and Opus 4.5 releasing within weeks of each other. Models became faster, more efficient, and more capable of genuine reasoning. Highly capable Chinese models now dominate open source, driving urgent debates about U.S.-China competition. The policy landscape shifted just as fast, with emphasis moving toward innovation, infrastructure, and diffusion.
Something else shifted: public awareness. Over 50% of Americans now use AI tools, adopting them faster for personal use than for work. Friends and family ask me about the latest models in ways they never did before. And yet only a small proportion are paying attention to what's happening at the frontier.
2025 was supposed to be "the year of the agent." We got mixed delivery. AI can automate software engineering to a very large degree, but it still struggles with computer use and isn't yet fulfilling the personal assistant role some had expected. Still, it is worth remembering that this time last year AI couldn't even search the web; now agentic AI systems string together complex tasks and operate with increasing autonomy. These capabilities are transformative, and we want this growth to continue. AI has enormous potential to solve hard problems, accelerate discovery, and improve lives.
Progress is spiky. One week brings a breakthrough; the next reveals unexpected limitations. And the same capabilities enabling scientific advances can be misused. We need visibility into what these systems can do. We need institutions prepared for capabilities advancing faster than our understanding. We need protections against misuse and dangerous concentrations of power.
None of this is straightforward. We're navigating a landscape where updates come fast and sometimes contradict each other, and where the same technology can be both beneficial and risky depending on how it's used. There are trade-offs all the way down.
This is what IAPS does. We provide technically grounded research that helps policymakers make sense of emerging capabilities and trade-offs: what they mean today and what they signal for tomorrow. Below, you'll find highlights from our recent work: research on agentic AI, chain-of-thought monitoring, cybersecurity, and more.
As we head into 2026, the questions are only getting harder. How should oversight be divided between federal, state, and private actors? How do we govern AI in critical infrastructure wit
... (truncated, 7 KB total)
Resource ID:
kb-480d23d4a2b20c49