
OpenAI

Company Blog

GPT developer, leading AI lab

Credibility Rating: 4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.
62 Resources · 109 Citing Pages · 1 Tracked Domain

Tracked Domains

openai.com

Resources (62)

Citing Pages (109)

AI Accident Risk Cruxes, Agentic AI, AGI Development, AGI Timeline, AI-Assisted Alignment, AI Alignment, Alignment Evaluations, Apollo Research, Alignment Research Center, Authentication Collapse, Bioweapons Risk, AI Uplift Assessment Model, Center for AI Safety, Capabilities-to-Safety Pipeline Model, Capability Elicitation, AI Capability Threshold Model, Autonomous Coding, AI-Driven Concentration of Power, Constitutional AI, Corporate AI Safety Responses, Corrigibility Failure, Cyberweapons Risk, Dangerous Capability Evaluations, Deceptive Alignment, AI Safety Defense in Depth Model, AI Disinformation, Emergent Capabilities, Epistemic Sycophancy, EU AI Act, Eval Saturation & The Evals Gap, AI Evaluations, Evals-Based Deployment Gates, AI Evaluation, Goal Misgeneralization Probability Model, AI Governance and Policy, Heavy Scaffolding / Agentic Systems, AI-Human Hybrid Systems, Instrumental Convergence, Instrumental Convergence Framework, Is Interpretability Sufficient for Safety?, AI Safety Intervention Effectiveness Matrix, AI Safety Intervention Portfolio, AI Knowledge Monopoly, AI Lab Safety Culture, Large Language Models, AI Value Lock-in, Long-Horizon Autonomous Tasks, Mesa-Optimization, Mesa-Optimization Risk Analysis, Metaculus, Minimal Scaffolding, AI Misuse Risk Cruxes, Third-Party Model Auditing, Multipolar Trap Dynamics Model, OpenAI, Optimistic Alignment Worldview, Paul Christiano, Should We Pause AI Development?, Persuasion and Social Manipulation, Process Supervision, AI Proliferation, AI Proliferation Risk Model, AI Development Racing Dynamics, Racing Dynamics Impact Model, Reasoning and Planning, Red Teaming, AI Alignment Research Agendas, Responsible Scaling Policies (RSPs), Reward Hacking, Reward Hacking Taxonomy and Severity Model, AI Risk Activation Timeline Model, AI Risk Cascade Pathways Model, AI Risk Interaction Network Model, RLHF, Responsible Scaling Policies, AI Safety Cases, AI Safety Research Allocation Model, AI Safety Research Value Model, AI Safety Researcher Gap Model, Sam Altman, AI Capability Sandbagging, Sandboxing / Containment, Scalable Eval Approaches, Scalable Oversight, Is Scaling All You Need?, AI Scaling Laws, Scheming, Scheming & Deception Detection, Scheming Likelihood Assessment, Self-Improvement and Recursive Enhancement, Sharp Left Turn, Situational Awareness, Sleeper Agent Detection, AI Safety Solution Cruxes, Sparse Autoencoders (SAEs), AI Model Steganography, Sycophancy, AI Safety Technical Pathway Decomposition, Technical AI Safety Research, Compute Thresholds, Tool Use and Computer Use, Treacherous Turn, Voluntary AI Safety Commitments, AI Risk Warning Signs Model, Weak-to-Strong Generalization, Why Alignment Might Be Hard, World Models + Planning, Worldview-Intervention Mapping
Publication ID: openai