Claude (language model) - Wikipedia
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Wikipedia
Wikipedia overview of Claude, Anthropic's LLM series, covering its Constitutional AI training approach, safety alignment techniques, and governance controversies including refusal to enable mass surveillance and autonomous weapons use.
Metadata
Summary
This Wikipedia article describes Claude, Anthropic's large language model series, detailing its training methodology including RLHF and Constitutional AI. It covers the evolution of Claude's 'constitution' from 2022 to 2026, and notable governance events such as Anthropic's refusal to allow Claude's use for mass surveillance or autonomous weapons, resulting in a DoD supply chain risk designation.
Key Points
- Claude uses Constitutional AI, a training technique developed by Anthropic to improve ethical and legal compliance without extensive human feedback.
- The 2026 constitution grew to 23,000 words (from 2,700 in 2023), explaining the rationale behind guidelines such as refraining from undermining democracy.
- Anthropic refused to remove contractual prohibitions on use for mass domestic surveillance and fully autonomous weapons, leading to a DoD "supply chain risk" designation.
- The constitution is authored by philosopher Amanda Askell with contributions from AI safety researchers including Chris Olah and Holden Karnofsky, and is released under CC0.
- Claude models are generative pre-trained transformers fine-tuned with RLHF and Constitutional AI to enforce ethical guidelines.
1 FactBase fact citing this source
| Entity | Property | Value | As Of |
|---|---|---|---|
| Claude | Wikipedia | https://en.wikipedia.org/wiki/Claude_(language_model) | — |
Cached Content Preview
From Wikipedia, the free encyclopedia
Large language model developed by Anthropic
| Field | Value |
|---|---|
| Developer | Anthropic |
| Initial release | March 2023 |
| Stable releases | Claude Opus 4.6 (February 5, 2026); Claude Sonnet 4.6 (February 17, 2026); Claude Haiku 4.5 (October 15, 2025) |
| Platform | Cloud computing platforms |
| Type | Large language model; generative pre-trained transformer; foundation model |
| License | Proprietary |
| Website | claude.ai |
Claude is a series of large language models developed by Anthropic and first released in 2023. Its name has been described both as a tribute to Claude Shannon, who pioneered information theory, and as a friendly, male-gendered counterpart to AI assistants like Alexa and Siri.[1]
Claude is used for software development via Claude Code.[2] Claude uses constitutional AI, a training technique developed by Anthropic to improve ethical and legal compliance (AI alignment).
US federal agencies began phasing out the use of Claude after Anthropic refused to remove contractual prohibitions on the use of Claude for mass domestic surveillance and fully autonomous weapons.[3][4] Following the refusal, the Department of Defense designated the company a "supply chain risk" and barred U.S. military contractors, suppliers, and partners from doing business with the firm. On March 26, 2026, a federal judge issued a temporary injunction against the ban.
Training
See also: Anthropic § Legal issues
Claude models are generative pre-trained transformers that have been trained to predict the next word on large amounts of text. They have then been fine-tuned using reinforcement learning from human feedback (RLHF) and constitutional AI in an attempt to enforce ethical guidelines.[5][6] ClaudeBot, Anthropic's web crawler, gathers content from the web. It respects a site's robots.txt, but in 2024, before iFixit had added the relevant robots.txt rules, iFixit criticized the crawler for placing excessive load on its site by scraping content.[7]
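The robots.txt compliance mentioned above can be sketched with Python's standard `urllib.robotparser`, which evaluates crawl rules per user agent. The site policy below is purely hypothetical (it is not iFixit's or any real site's configuration); `ClaudeBot` is the crawler's published user-agent token:

```python
# Sketch: how a compliant crawler honors robots.txt rules,
# using the standard-library robots.txt parser.
from urllib.robotparser import RobotFileParser

# Hypothetical site policy: block ClaudeBot from /guides/,
# allow every other crawler everywhere.
rules = """
User-agent: ClaudeBot
Disallow: /guides/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A polite crawler calls can_fetch() before each request.
print(parser.can_fetch("ClaudeBot", "https://example.com/guides/fix"))  # False
print(parser.can_fetch("ClaudeBot", "https://example.com/about"))       # True
```

Sites that wish to opt out of scraping entirely would instead use `Disallow: /` under the `ClaudeBot` user agent.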
Constitutional AI
Anthropic introduced an approach to AI alignment called "Constitutional AI". The constitution is a document used to train Claude to be harmless and helpful without relying on extensive or expensive human feedback.[8] The original version was a list of principles, whereas the 2026 constitution explains more thoroughly how Claude is intended to behave and why. Anthropic has said it intends Claude's constitution to serve as a model for others in the industry.[9]
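In the published Constitutional AI recipe, the supervised phase has the model draft a response, critique its own draft against constitutional principles, and revise it; the revised outputs then serve as fine-tuning data. A toy sketch of that loop follows, in which every function is an illustrative stand-in (a real pipeline would call the language model at each step; none of this is Anthropic's code):

```python
# Illustrative critique-and-revision loop from Constitutional AI.
# The "model" functions below are toy stand-ins for LLM calls.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that could undermine democratic institutions.",
]

def generate(prompt):
    # Stand-in for sampling an initial draft from the model.
    return f"draft answer to: {prompt}"

def critique(response, principle):
    # Stand-in for asking the model to critique its own draft
    # against a single constitutional principle.
    return f"does '{response}' satisfy: {principle}?"

def revise(response, critique_text):
    # Stand-in for asking the model to rewrite the draft in
    # light of the critique.
    return f"revised[{response}]"

def constitutional_revision(prompt, constitution):
    """Draft once, then critique and revise against each principle.
    The revised outputs form the supervised fine-tuning data."""
    response = generate(prompt)
    for principle in constitution:
        response = revise(response, critique(response, principle))
    return response

result = constitutional_revision("Explain elections.", CONSTITUTION)
# Two principles, so the draft is revised twice.
assert result.count("revised[") == 2
```

The point of the loop is that the feedback signal comes from the model's own critiques guided by the written principles, rather than from per-example human labels, which is why the approach scales without extensive human feedback.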
The first
... (truncated, 45 KB total)