Longterm Wiki
Updated 2026-03-13

Resources

External resources (papers, articles, reports) are tracked in data/resources/*.yaml. 4,948 total resources.

| Count | Status | Share |
|-------|--------|-------|
| 431 | Full text fetched | 9% |
| 133 | Metadata only | 3% |
| 4,384 | Unfetched | 89% |
| 418 | With summary | 8% |
| 382 | With review | 8% |
| 4,315 | Cited by pages | 87% |

4,948 resources total.
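The status breakdown above could be recomputed from the tracked YAML records with a short script. This is a minimal sketch, assuming each resource record carries a `fetch_status` field (a hypothetical field name, not confirmed by the schema); the sample data mirrors the counts shown on this page.

```python
from collections import Counter

def fetch_status_breakdown(resources):
    """Count resources by fetch status and compute rounded percentages.

    `resources` is a list of dicts, e.g. as parsed from data/resources/*.yaml.
    The "fetch_status" key is an assumed field name for illustration.
    Returns {status: (count, percent)}.
    """
    counts = Counter(r.get("fetch_status", "unfetched") for r in resources)
    total = sum(counts.values())
    return {status: (n, round(100 * n / total)) for status, n in counts.items()}

# Hypothetical records matching the dashboard counts (431 / 133 / 4,384).
sample = (
    [{"fetch_status": "full_text"}] * 431
    + [{"fetch_status": "metadata"}] * 133
    + [{"fetch_status": "unfetched"}] * 4384
)
print(fetch_status_breakdown(sample))
```

Running this on the sample reproduces the 9% / 3% / 89% split shown above.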
Content

| Title | Type | Fetch status | Fetched | Organization | Rating | Cited by |
|-------|------|--------------|---------|--------------|--------|----------|
| NIST AI Risk Management Framework | government | Unfetched | - | NIST | 5/5 | 40 |
| Anthropic | web | Unfetched | - | Anthropic | 4/5 | 38 |
| Anthropic's Work on AI Safety | paper | Full text | 2025-12-28 | Anthropic | 4/5 | 36 |
| metr.org | web | Unfetched | - | METR | 4/5 | 33 |
| Future of Humanity Institute | web | Unfetched | - | Future of Humanity Institute | 4/5 | 32 |
| CAIS Surveys | web | Full text | 2025-12-28 | Center for AI Safety | 4/5 | 27 |
| FLI AI Safety Index Summer 2025 | web | Full text | 2025-12-28 | Future of Life Institute | 3/5 | 27 |
| AI Safety Institute | government | Unfetched | - | UK AI Safety Institute | 4/5 | 27 |
| Partnership on AI | web | Full text | 2025-12-28 | - | - | 26 |
| OpenAI | web | Unfetched | - | OpenAI | 4/5 | 24 |
| EU AI Act | web | Full text | 2025-12-28 | - | - | 21 |
| - | paper | Unfetched | - | - | - | 21 |
| International AI Safety Report 2025 | web | Full text | 2025-12-28 | - | - | 21 |
| CSET: AI Market Dynamics | web | Full text | 2025-12-28 | CSET Georgetown | 4/5 | 21 |
| AISI Frontier AI Trends | government | Full text | 2025-12-28 | UK AI Safety Institute | 4/5 | 20 |
| RAND | web | Full text | 2025-12-28 | RAND Corporation | 4/5 | 19 |
| OpenAI Preparedness Framework | web | Unfetched | - | OpenAI | 4/5 | 19 |
| - | paper | Unfetched | - | - | - | 19 |
| RAND: AI and National Security | web | Unfetched | - | RAND Corporation | 4/5 | 18 |
| miri.org | web | Unfetched | - | MIRI | 3/5 | 17 |
| Stanford HAI: AI Companions and Mental Health | web | Unfetched | - | Stanford HAI | 4/5 | 17 |
| Risks from Learned Optimization | paper | Unfetched | - | arXiv | 3/5 | 17 |
| GovAI | government | Full text | 2025-12-28 | Centre for the Governance of AI | 4/5 | 17 |
| Redwood Research: AI Control | web | Full text | 2025-12-28 | - | - | 16 |
| Epoch AI | web | Full text | 2025-12-28 | Epoch AI | 4/5 | 15 |
| Apollo Research | web | Unfetched | - | Apollo Research | 4/5 | 15 |
| - | web | Unfetched | - | - | - | 15 |
| OpenAI: Model Behavior | paper | Full text | 2025-12-28 | OpenAI | 4/5 | 15 |
| Google DeepMind | web | Unfetched | - | Google DeepMind | 4/5 | 14 |
| Responsible Scaling Policy | web | Unfetched | - | Anthropic | 4/5 | 14 |
| C2PA Explainer Videos | web | Full text | 2025-12-28 | - | - | 14 |
| alignment.org | web | Unfetched | - | - | - | 13 |
| AI Alignment Forum | blog | Unfetched | - | Alignment Forum | 3/5 | 13 |
| CNAS | web | Unfetched | - | CNAS | 4/5 | 13 |
| OpenAI Safety Updates | web | Unfetched | - | OpenAI | 4/5 | 13 |
| AI Safety Index Winter 2025 | web | Full text | 2025-12-28 | Future of Life Institute | 3/5 | 13 |
| Center for Human-Compatible AI | web | Full text | 2025-12-28 | - | - | 13 |
| Anthropic's 2024 alignment faking study | web | Unfetched | - | Anthropic | 4/5 | 13 |
| Metaculus | web | Full text | 2025-12-28 | Metaculus | 3/5 | 13 |
| - | paper | Unfetched | - | - | - | 13 |
| Anthropic's follow-up research on defection probes | web | Unfetched | - | Anthropic | 4/5 | 12 |
| UK AISI | government | Unfetched | - | UK Government | 4/5 | 12 |
| Frontier Models are Capable of In-Context Scheming | web | Unfetched | - | Apollo Research | 4/5 | 12 |
| EU AI Office | web | Unfetched | - | European Union | 4/5 | 11 |
| Anthropic: Recommended Directions for AI Safety Research | web | Full text | 2025-12-28 | Anthropic Alignment | 4/5 | 11 |
| More capable models scheme at higher rates | web | Unfetched | - | Apollo Research | 4/5 | 11 |
| METR's analysis of 12 companies | web | Unfetched | - | METR | 4/5 | 11 |
| Stanford AI Index 2025 | web | Full text | 2025-12-28 | Stanford HAI | 4/5 | 11 |
| Open Philanthropy grants database | web | Full text | 2025-12-28 | - | - | 11 |
| Constitutional AI: Harmlessness from AI Feedback | paper | Full text | 2025-12-28 | Anthropic | 4/5 | 11 |
Page 1 of 99