Import AI Newsletter
jack-clark.net
A widely-followed newsletter by Anthropic co-founder Jack Clark; useful for tracking AI capability and policy developments as they emerge, and understanding how prominent safety-oriented figures contextualize new research.
Metadata
Importance: 55/100 · blog post · news
Summary
Import AI is a weekly newsletter by Jack Clark (co-founder of Anthropic and former OpenAI policy director) covering the latest developments in artificial intelligence research, policy, and safety. It curates and analyzes significant AI papers, industry trends, and governance developments, offering expert commentary on their implications. The newsletter is widely read in the AI research and policy community.
Key Points
- Weekly curated roundup of significant AI research papers with accessible summaries and critical commentary
- Covers AI policy, governance, and international competition dynamics alongside technical developments
- Written by Jack Clark, a prominent figure in AI safety and policy with experience at OpenAI and Anthropic
- Tracks compute trends, capability advances, and deployment risks relevant to AI safety considerations
- Serves as an influential signal-aggregator for researchers, policymakers, and safety-focused practitioners
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Governance and Policy | Crux | 66.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 98 KB
Import AI
April 6, 2026
Import AI 452: Scaling laws for cyberwar; rising tides of AI automation; and a puzzle over GDP forecasting
by Jack Clark
Welcome to Import AI, a newsletter about AI research. Import AI runs on arXiv and feedback from readers. If you’d like to support this, please subscribe.
Uh oh, there’s a scaling war for cyberattacks as well!:
…The smarter the system, the better the ability to cyberattack…
AI safety research organization Lyptus Research has looked at how well AI systems can perform a variety of cyberoffense tasks and found a clear trend of more advanced models being able to do more advanced forms of cyberattack.
“Across frontier models released since 2019, the doubling time is 9.8 months. Restricting to models released since 2024, it steepens to 5.7 months. The most recent frontier models in our study, GPT-5.3 Codex and Opus 4.6, sit above both fitted trendlines, achieving 50% success on tasks taking human experts 3.1h and 3.2h respectively,” they write. “Our most recent open-weight model, GLM-5, lags the closed-source frontier by 5.7 months, suggesting that frontier offensive-cyber capability may diffuse into open-weight form on relatively short timelines.”
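The quoted doubling times imply a simple exponential extrapolation. A minimal sketch of that arithmetic (the function names are mine, not the paper's, and this assumes the trend simply continues), using the 3.2h horizon and 5.7-month doubling time from the passage above:

```python
from math import log2


def horizon_hours(h0: float, months_elapsed: float, doubling_months: float) -> float:
    """Extrapolate a 50%-success task horizon forward in time, assuming
    the exponential trend continues: h(t) = h0 * 2^(t / doubling_time)."""
    return h0 * 2 ** (months_elapsed / doubling_months)


def months_to_reach(h0: float, target_hours: float, doubling_months: float) -> float:
    """Months until the horizon reaches target_hours under the same trend."""
    return doubling_months * log2(target_hours / h0)


# Figures from the quoted passage: 3.2h horizon, 5.7-month doubling time.
print(horizon_hours(3.2, 5.7, 5.7))        # one doubling -> 6.4h
print(months_to_reach(3.2, 40.0, 5.7))     # time to ~one working week of expert tasks
```

Under those assumptions, a roughly week-long (40h) expert-task horizon would arrive in under two years; the point is how fast exponentials with sub-6-month doubling times compound, not a forecast.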
What benchmarks did they study? CyBashBench, NL2Bash, InterCode CTF, NYUCTF, CyBench, CVEBench, and CyberGym.
They also created a new dataset consisting of 291 tasks with completion transcripts and time estimates calibrated by 10 offensive cybersecurity professionals.
Evaluated models: 2019: GPT-2. 2020: GPT-3. 2022: GPT-3.5. 2024: Claude 3 Opus, GPT-4o. 2025: o3, Opus 4, Gemini 2.5 Pro, DeepSeek V3.1, GPT-5.1 Codex Max, GPT-5.2 Codex. 2026: Opus 4.6, GPT-5.3 Codex, GLM-5, Sonnet 4.6.
Results: AI systems are getting good at hacking. “The best current models achieve 50% success on tasks that take human experts 3.2h, roughly half a working day of professional offensive security work”, they write.
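The "50% success on tasks that take human experts 3.2h" framing is a time-horizon metric: fit success probability against log task duration and read off where the fitted curve crosses 50%. A toy sketch of one standard way to do that fit, logistic regression via gradient descent (this is not Lyptus Research's code, and the data below are invented for illustration):

```python
import math


def fit_time_horizon(durations_h, successes, steps=20000, lr=0.5):
    """Fit P(success) = sigmoid(a + b * log2(duration_hours)) by gradient
    descent on the logistic loss, then return the duration at which the
    fitted success probability crosses 50%."""
    xs = [math.log2(d) for d in durations_h]
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for x, y in zip(xs, successes):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            grad_a += (p - y) / n
            grad_b += (p - y) * x / n
        a -= lr * grad_a
        b -= lr * grad_b
    # 50% point: a + b * log2(d) = 0  =>  d = 2 ** (-a / b)
    return 2 ** (-a / b)


# Invented outcomes for one model: reliable on short tasks, failing on long ones.
durations = [0.25, 0.5, 1, 2, 4, 8, 16, 32]   # human-expert hours per task
outcomes = [1, 1, 1, 1, 0, 1, 0, 0]           # 1 = model solved the task
print(fit_time_horizon(durations, outcomes))   # horizon in hours
```

With time estimates calibrated by human professionals (as in the 291-task dataset above), this single number lets capability be compared across model generations, which is what makes the doubling-time trendlines possible.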
Why this matters – everything is getting better, including the inconvenient stuff: AI that can perform biology research can also perform biological weapon research. AI that can help you learn about high-energy physics can also help you with high-energy physics for weapons development. AI that is especially good at helping you find vulnerabilities in code for defensive purposes can easily be repurposed for offensive purposes. The most challenging part of AI is that it is an ‘everything machine’, and as capabilities expand across a broad front with each successive model generation, so too do the policy issues multiply.
Read more: Offensive Cybersecurity Time Horizons (Lyptus Research).
Get the data here: Offensive Cyber Task Horizons: Data and Analysis (Lyptus Research, GitHub).
***
Startups that adopt AI for internal use are more successful than those that don’t:
…Business school
... (truncated, 98 KB total)