
Open vs Closed Source AI

| Dimension | Assessment | Evidence |
|---|---|---|
| Market Trajectory | Open models closing gap rapidly | Performance difference narrowed from 8% to 1.7% in one year (Stanford HAI 2025) |
| Adoption Scale | 1.2B+ Llama downloads by April 2025 | Meta reports 53% growth in Q1 2025; 50%+ of Fortune 500 experimenting |
| Enterprise Share | Open source declining slightly | 11-13% of enterprise workloads use open models, down from 19% in 2024 (Menlo Ventures) |
| Cost Efficiency | Open dramatically cheaper | DeepSeek R1 runs 20-50x cheaper than comparable closed models; 90-95% training cost reduction |
| Safety Guardrails | Significant vulnerability | Fine-tuning can remove safety training in hours; “uncensored” variants appear within days of release |
| Regulatory Status | Cautiously permissive | NTIA 2024: insufficient evidence to restrict; EU AI Act: exemptions for non-systemic open models |
| Geopolitical Impact | Complicates Western restraint | DeepSeek demonstrates frontier capabilities from China; unilateral restrictions less effective |
Key Crux

Question: Should frontier AI model weights be released publicly?
Stakes: Balance between safety, innovation, and democratic access
Current Trend: Major labs increasingly keeping models closed

One of the most heated debates in AI: Should powerful AI models be released as open source (weights publicly available), or kept closed to prevent misuse? The debate intensified following Meta’s Llama releases, Mistral’s emergence as a European open-weights champion, and DeepSeek’s 2025 disruption demonstrating Chinese open models at the frontier.

| Argument | For Open Weights | For Closed Models |
|---|---|---|
| Safety | Enables external scrutiny and vulnerability discovery; “security through transparency” parallels open-source software | Prevents removal of safety guardrails; maintains ability to revoke access; enables monitoring for misuse |
| Innovation | Accelerates research through global collaboration; enables startups and academics to build on frontier work | Controlled deployment allows careful capability assessment before wider release |
| Security | Distributed development reduces single points of failure | Prevents adversaries from accessing and weaponizing capabilities |
| Power Concentration | Prevents AI monopoly by a few corporations; LeCun argues concentration is “a much bigger danger than everything else” | Responsible actors can implement safety measures that open release cannot |
| Accountability | Public weights enable third-party auditing and bias detection | Clear liability chain; developers can update, patch, and control deployment |
| Misuse Potential | Knowledge democratization; misuse happens regardless of openness | RAND research shows weights are theft targets; “uncensored” derivatives appear within days of release |
| Stakeholder | Position | Key Rationale | Evidence |
|---|---|---|---|
| Meta (Yann LeCun) | Strong open | Power concentration is the real existential risk; open source enables safety through scrutiny | Released Llama 2, Llama 3 (8B-405B parameters) |
| Anthropic (Dario Amodei) | Cautious closed | Irreversibility of release; responsible scaling requires control | Claude models closed; Responsible Scaling Policy |
| OpenAI (Sam Altman) | Closed (shifted) | Safety concerns grew with capabilities; GPT-4 too capable for open release | Shifted from GPT-2 open to GPT-4 closed |
| Mistral AI | Strong open | European AI sovereignty; innovation through openness | Mistral 7B/8x7B/Large released with minimal restrictions |
| DeepSeek (China) | Strategic open | Demonstrates Chinese frontier capabilities; signed AI Safety Commitments alongside 16 Chinese firms | DeepSeek-R1 open weights, though with documented censorship and security issues |
| U.S. Government (NTIA) | Cautiously pro-open | 2024 report found insufficient evidence to restrict open weights; recommends monitoring | Called for research and risk indicators, not immediate restrictions |
| EU Regulators | Risk-based | AI Act applies stricter rules to “foundation models” including open ones | Foundation models face transparency and safety testing requirements |
| Eliezer Yudkowsky | Strongly closed | Open-sourcing powerful AI is existential risk | Public advocacy against any frontier model release |

Open weights (often called “open source” though technically distinct) means releasing model weights so anyone can download, modify, and run the model locally. Meta clarified in 2024 that Llama models are “open weight” rather than fully open source, as the training data and code remain proprietary. Examples include Llama 2/3, Mistral, Falcon, and DeepSeek-R1. As of April 2025, Llama models alone had been downloaded over 1.2 billion times, with 20,000+ derivative models published on Hugging Face. Once released, weights cannot be recalled or controlled, and anyone can fine-tune for any purpose—including removing safety features. Research shows that “jailbreak-tuning” can remove essentially all safety training within hours using modest compute (FAR.AI 2024). Within days of Meta releasing Llama 2, “uncensored” versions appeared on Hugging Face with safety guardrails stripped away.
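
To make "download, modify, and run locally" concrete, the sketch below loads an open-weights checkpoint with the Hugging Face transformers library and generates text entirely on local hardware. The model ID, prompt, and generation settings are illustrative assumptions, not details drawn from this article.

```python
# Minimal sketch of local inference with an open-weights model (illustrative
# model ID; gated checkpoints such as Llama additionally require accepting a
# license and authenticating with Hugging Face first).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Once the weights are on local disk, inference needs no API key and cannot be
# monitored, rate-limited, or revoked by the original developer.
prompt = "Explain the difference between open-weights and open-source AI."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```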

Closed source means keeping weights proprietary, providing access only via API. Examples include GPT-4, Claude, and Gemini. Labs maintain control and can monitor usage patterns, update models, revoke access for policy violations, and refuse harmful requests. However, this concentrates power in a small number of corporations.
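
By contrast, a closed model is reachable only through the provider's hosted API, where every request can be logged, filtered, billed, and refused. A minimal sketch using the OpenAI Python SDK (model name and prompt are illustrative):

```python
# Minimal sketch of API-only access to a closed model. The weights never leave
# the provider's servers; the provider can log requests, apply usage policies,
# and revoke the API key at any time.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize the open-weights debate."}],
)
print(response.choices[0].message.content)
```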

The landscape shifted dramatically with DeepSeek’s January 2025 release of R1, demonstrating that Chinese labs could produce frontier-competitive open models. Before DeepSeek, Meta’s Llama family dominated the open-weights ecosystem, with models ranging from 7B to 405B parameters.

| Metric | Value | Source | Trend |
|---|---|---|---|
| Open source model downloads | 30,000-60,000 new models/month on Hugging Face | Red Hat | Exponential growth |
| Llama cumulative downloads | 1.2 billion (April 2025) | Meta | +53% in Q1 2025 |
| Enterprise open source share | 11-13% of LLM workloads | Menlo Ventures | Down from 19% in 2024 |
| Performance gap (open vs closed) | 1.7% on Chatbot Arena | Stanford HAI | Narrowed from 8% in Jan 2024 |
| Global AI spending | $17B in 2025 | Menlo Ventures | 3.2x YoY increase from $11.5B |
| DeepSeek R1 training cost | Under $1 million | World Economic Forum | 90-95% below Western frontier models |
| Fortune 500 Llama adoption | 50%+ experimenting | Meta | Including Spotify, AT&T, DoorDash |
Open vs Closed Models
| Name | Openness | Access | Safety | Customization | Cost | Control |
|---|---|---|---|---|---|---|
| GPT-4/4o | Closed | API only | Strong guardrails, monitored | Limited fine-tuning via API | Pay per token | OpenAI maintains full control |
| Claude 3/3.5 | Closed | API only | Constitutional AI, monitored | Limited | Pay per token | Anthropic maintains full control |
| Llama 3.1 405B | Open weights | Download and run locally | Responsible Use Guide (often ignored) | Full fine-tuning possible | Free (need substantial compute) | No control after release |
| Mistral Large 2 | Open weights | Download and run locally | Transparent “no moderation mechanism” | Full fine-tuning possible | Free (need own compute) | No control after release |
| DeepSeek-R1 | Open weights | Download and run locally | Censors Chinese-sensitive topics; security vulnerabilities on political prompts | Full fine-tuning possible | Free (need own compute) | Subject to Chinese regulatory environment |
Perspectives (6), where different actors stand on releasing model weights: Dario Amodei (Anthropic, high confidence); Demis Hassabis (Google DeepMind, medium confidence); Eliezer Yudkowsky (high confidence); Sam Altman (OpenAI, high confidence); Stability AI (high confidence); Yann LeCun (Meta, high confidence).

Research on open model safety reveals significant challenges in maintaining guardrails once weights are released.

| Factor | Finding | Implication | Source |
|---|---|---|---|
| Guardrail bypass techniques | Emoji smuggling achieves 100% evasion against some guardrails | Even production-grade defenses can be bypassed | arXiv |
| Fine-tuning vulnerability | “Jailbreak-tuning” enables removal of all safety training | Every fine-tunable model has an “evil twin” potential | FAR.AI |
| Open model guardrail scores | Best open model (Phi-4): 84/100; worst (Gemma-3): 57/100 | Wide variance in baseline safety | ADL |
| Larger models more vulnerable | Tested 23 LLMs: larger models more susceptible to poisoning | Capability-safety tradeoff worsens at scale | FAR.AI |
| Time to “uncensored” variants | Hours to days after release | Community rapidly removes restrictions | Hugging Face observations |
| Multilingual guardrails | OpenGuardrails supports 119 languages | Safety coverage possible but not universal | Help Net Security |
Key Questions (4)
  • Can safety guardrails be made robust to fine-tuning?
  • Will open models leak or be recreated anyway?
  • At what capability level does open source become too dangerous?
  • Do the benefits of scrutiny outweigh misuse risks?

Several proposals aim to capture benefits of both approaches while mitigating risks:

| Approach | Description | Adoption Status | Effectiveness Estimate |
|---|---|---|---|
| Staged Release | 6-12 month delay after initial deployment before open release | Proposed; not yet implemented at scale | Medium (allows risk monitoring) |
| Structured Access | Weights provided to vetted researchers under agreement | GPT-2 XL initially; some academic partnerships | Medium-High for research |
| Differential Access | Smaller models open, frontier models closed | Current de facto standard | Medium (capability gap narrows) |
| Safety-Contingent Release | Release only if safety evaluations pass thresholds | Anthropic RSP (for deployment, not release) | High if thresholds appropriate |
| Hardware Controls | Release weights but require specialized hardware to run | Not implemented | Low-Medium (hardware becomes accessible) |
| Capability Thresholds | Open below certain compute/parameter thresholds (see worked example below) | EU AI Act: 10²⁵ FLOPs as “systemic risk” cutoff | Uncertain (thresholds may become obsolete) |
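
To make the EU AI Act's 10²⁵ FLOP cutoff in the last row concrete, the back-of-the-envelope check below uses the common approximation that training compute is roughly 6 × parameters × training tokens. The Llama 3.1 405B figures (405B parameters, roughly 15.6T training tokens) are Meta's published numbers; treating the 6ND rule as exact is an assumption.

```python
# Back-of-the-envelope check of the EU AI Act "systemic risk" compute cutoff,
# using the rough rule of thumb: training FLOPs ~ 6 * parameters * tokens.

EU_SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute via the 6*N*D rule of thumb."""
    return 6 * params * tokens

# Llama 3.1 405B: ~405B parameters trained on ~15.6T tokens (Meta's figures).
llama_405b = training_flops(params=405e9, tokens=15.6e12)
print(f"Estimated training compute: {llama_405b:.1e} FLOPs")  # ~3.8e+25
print("Above EU systemic-risk cutoff:", llama_405b > EU_SYSTEMIC_RISK_THRESHOLD)  # True
```

By this estimate the largest open-weights releases already sit above the cutoff, so the exemption mainly benefits smaller open models.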

The geopolitical calculus shifted sharply in 2025. DeepSeek’s R1 release demonstrated that keeping Western models closed does not prevent capable open models from emerging globally. The market impact was immediate: NVIDIA reportedly lost nearly $600 billion in market capitalization in a single day, and by month’s end DeepSeek had overtaken ChatGPT as the most downloaded free app on the U.S. Apple App Store.

DeepSeek’s Impact: DeepSeek’s January 2025 release sent “shockwaves globally” by demonstrating frontier capabilities in an open Chinese model at a fraction of Western costs—reportedly under $1 million in training costs compared to hundreds of millions for comparable Western models. The model runs 20-50x cheaper at inference than OpenAI’s comparable offerings. However, NIST/CAISI evaluations found significant issues: DeepSeek models were 12x more susceptible to agent hijacking attacks than U.S. frontier models, and CrowdStrike research showed the model produces insecure code when prompted with politically sensitive terms (Tibet, Uyghurs). Several countries including Italy, Australia, and Taiwan have banned government use of DeepSeek.

If US/Western labs stay closed:

  • May slow dangerous capabilities domestically
  • But China has demonstrated strategic open-sourcing (DeepSeek)
  • Could lose innovation race and talent to more open ecosystems
  • Does not prevent proliferation given global competition

If US/Western labs open source:

  • Loses monitoring capability over deployment
  • But levels playing field globally and enables allies
  • Benefits developing world and academic research
  • May shape global norms through responsible release practices

Coordination problem:

  • Optimal if all major powers coordinate on release thresholds
  • Carnegie research notes emerging convergence on risk frameworks
  • Unilateral Western restraint may simply cede ground to less safety-conscious actors
  • DeepSeek’s signing of AI Safety Commitments suggests potential for Chinese engagement

The open vs closed question has different implications for different risks:

Misuse risks (bioweapons, cyberattacks):

  • Clear case for closed: irreversibility, removal of guardrails
  • Open source dramatically increases risk once capabilities cross danger thresholds
  • However, the March 2024 “ShadowRay” attack on Ray (an open-source AI framework used by Uber, Amazon, OpenAI) showed that open ecosystems create additional attack surfaces

Accident risks (unintended behavior):

  • Mixed: Open source enables external safety research and red-teaming
  • But also enables less careful deployment by actors who may not understand risks
  • Depends on whether scrutiny benefits or proliferation risks dominate

Structural risks (power concentration):

  • Clear case for open: prevents AI monopoly by a few corporations
  • But only if open source is actually accessible (frontier models require substantial compute)
  • LeCun’s concern: “a very bad future in which all of our information diet is controlled by a small number of companies”

Race dynamics:

  • Open source may accelerate race (lower barriers to entry)
  • But also may reduce duplicated effort (can build on shared base)
  • DeepSeek’s cost-efficient training suggests open release may not slow capability development

U.S. Policy: The NTIA’s July 2024 report concluded that evidence is “insufficient to definitively determine either that restrictions on such open-weight models are warranted, or that restrictions will never be appropriate in the future.” It recommended monitoring and research rather than immediate restrictions.

California SB-1047: In September 2024, Governor Newsom vetoed the bill, which would have imposed safety and liability requirements on developers of the largest AI models. The veto cited concerns about stifling innovation without meaningfully improving safety.

EU AI Act: Takes a risk-based approach, entered into force August 2024 with GPAI model obligations applicable from August 2025. Open-source models receive exemptions from transparency obligations if they use permissive licenses and publicly share architecture information—but models with “systemic risk” (training compute exceeding 10²⁵ FLOPs) face full compliance requirements regardless of openness. France, Germany, and Italy initially opposed applying strict rules to open models, citing innovation concerns.

Emerging Consensus: Carnegie Endowment research in July 2024 found it is “no longer accurate to cast decisions about model and weight release as an ideological debate between rigid ‘pro-open’ and ‘anti-open’ camps.” Instead, different camps have begun to converge on recognizing open release as a “positive and enduring feature of the AI ecosystem, even as it also brings potential risks.”

| Jurisdiction | Policy Stance | Open Model Treatment | Enforcement Status |
|---|---|---|---|
| United States (NTIA) | Cautiously pro-open | No restrictions recommended without clearer risk evidence | Monitoring via US AISI |
| EU AI Act | Risk-based | Exemptions for non-systemic models; full rules above 10²⁵ FLOPs | Applicable August 2025 |
| California (SB-1047) | Proposed liability | Would have imposed developer liability; vetoed September 2024 | Not enacted |
| China | Strategic openness | Leading Chinese labs releasing competitive open models (DeepSeek, Qwen) | Active support |
| UK | Light-touch | No specific open model restrictions; voluntary commitments | Monitoring via UK AISI |