Open-source models closed to within 1.70%
red-line.ai/p/state-of-open-source-ai-2025
Relevant to AI governance discussions around open-weight model release policies, export controls, and the geopolitical dimensions of AI development; provides empirical data on the shifting open-model ecosystem as of late 2025.
Metadata
Importance: 45/100 · blog post · analysis
Summary
A 2025 year-end analysis of open-model AI trends showing China surpassing the US in Hugging Face downloads for the first time, with Chinese models like Qwen and DeepSeek gaining significant ground. The piece examines shifts in open-weight vs. open-source dynamics, the rise of small language models, and geopolitical implications for AI governance and export controls.
Key Points
- China surpassed the US in Hugging Face model downloads in November 2025 (17.1% vs 15.8%), with Qwen and DeepSeek accounting for 14% combined.
- Closed models still dominate with 80% of token usage and 95% of revenue, but open-weight models set competitive baselines that affect the entire field.
- US firms released fewer open-weight models in 2024-25, citing commercial and safety constraints, creating space for Chinese labs pursuing open-weight leadership as a catch-up strategy.
- Open-source models closed to within 1.70% of closed models on performance benchmarks, significantly narrowing the capability gap.
- Export controls show limited effectiveness: model weights, once released, enable broad access and reverse knowledge transfer to foreign developers.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Safety Multi-Actor Strategic Landscape | Analysis | 79.0 |
| AI Proliferation | Risk | 60.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 7, 2026 · 13 KB
State of Open-Source AI, 2025 - by Arwen Smit - Red Line
The Brief: China ↑; US ↓; SLM/LLM ↑; open source ↓; open weight ↑; non-industry developers ↑; industry developers ↓
Arwen Smit · Dec 04, 2025
The Situation: Open Model 2025 Trends Are In
As 2025 ends, open-model dynamics show a clear pattern: US ↓, China ↑, SLM/LLM ↑, open-source ↓, open-weight ↑, non-industry developers ↑, and corporate developers ↓.
Closed models dominate with 80% of token usage and 95% of revenue.¹ Why does it matter what happens in open models?
Because open weights determine who catches up, when, and at what cost.
The Red Line: China Has Momentum In Open Models
2025 Trend I: US ↓ | China ↑
Until 2022, the open ecosystem was highly concentrated and US-led. Around 60% of open models originated in the US, with Google, Meta and OpenAI accounting for 40-60% of cumulative downloads. From 2022 onwards this share fell sharply.²
Llama 3 outperformed Chinese open models until mid-2024. From late 2024 China’s position strengthened.
In November 2025, Hugging Face data showed China surpassing the US in downloads for the first time (17.1% compared with 15.8%).
Qwen and DeepSeek together accounted for 14%.
Alongside these, model families such as Mistral (France), Gemma (US), Phi (US) and Yi (China) now anchor the open ecosystem.
Paper by MIT and Hugging Face: Economies of Open Intelligence, November 2025. China ↑: who leads in open models sets the pace.
Why this matters
The leading open model sets the baseline that levels the field
In 2024-25, US firms released fewer open-weight models, citing commercial and safety constraints. Llama’s licences became more restrictive. This created space for Chinese laboratories, which treated open-weight leadership as a deliberate catch-up strategy.
Rising Western use of Chinese models
Startups use open models to avoid vendor lock-in, cut costs and extend runway, and retain control over model governance. Some US venture capital estimates suggest that around 80% of new start-ups use Chinese alternatives at some point. This produces reverse knowledge transfer.
Open models feed on global innovation
Large user bases provide bug reports, fine-tuning and rapid iteration. DeepSeek’s January release showed how a frontier open model can accelerate domestic innovation and narrow foreign advantage.
Limited effectiveness of export controls
Weights, once released, can be freely used by others, including parties that may not have been able to train a cutting-edge model themselves because of export controls on hardware.
... (truncated, 13 KB total)
Resource ID: 42b42eecf63e696b | Stable ID: sid_EvvoYxejey