Auto-Update News
Shows the individual news items pulled from RSS feeds and web searches, grouped by source. Use this page to check what the auto-update system is seeing, to verify routing decisions (which wiki pages each item was matched to), and to spot sources that return low-quality or irrelevant results (a sketch after the News Items table shows one way to compute per-source routing rates). Sources are configured in data/auto-update/sources.yaml.
News items discovered by the auto-update pipeline, with the wiki pages they were routed to: 200 items across 9 runs; all 200 scored high-relevance, and 31 were routed to pages.
News Items
| Score | Title | Source | Published | Routed To | Run |
|---|---|---|---|---|---|
| 95 | Introducing OpenAI OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to g | openai-blog | Fri, 11 Dec 2015 08:00:00 GMT | Alignment Robustness Trajectory (polish) | 2026-04-19 |
| 95 | OpenAI technical goals OpenAI’s mission is to build safe AI, and ensure AI’s benefits are as widely and evenly distributed as possible. | openai-blog | Mon, 20 Jun 2016 07:00:00 GMT | not routed | 2026-04-19 |
| 95 | Our approach to alignment research We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alig | openai-blog | Wed, 24 Aug 2022 07:00:00 GMT | Alignment Robustness Trajectory (polish) | 2026-04-19 |
| 95 | Advancing independent research on AI alignment OpenAI commits $7.5M to The Alignment Project to fund independent AI alignment research, strengthening global efforts to address AGI safety and security risks. | openai-blog | Thu, 19 Feb 2026 10:00:00 GMT | Alignment Robustness Trajectory (polish) | 2026-04-19 |
| 95 | Taking a responsible path to AGI We’re exploring the frontiers of AGI, prioritizing technical safety, proactive risk assessment, and collaboration with the AI community. | deepmind-blog | Wed, 02 Apr 2025 13:31:00 +0000 | not routed | 2026-03-17 |
| 95 | Introducing OpenAI OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to g | openai-blog | Fri, 11 Dec 2015 08:00:00 GMT | not routed | 2026-03-17 |
| 95 | Concrete AI safety problems We (along with researchers from Berkeley and Stanford) are co-authors on today’s paper led by Google Brain researchers, Concrete Problems in AI Safety. The paper explores many research problems around | openai-blog | Tue, 21 Jun 2016 07:00:00 GMT | not routed | 2026-03-17 |
| 95 | Why responsible AI development needs cooperation on safety We’ve written a policy research paper identifying four strategies that can be used today to improve the likelihood of long-term industry cooperation on safety norms in AI: communicating risks and bene | openai-blog | Wed, 10 Jul 2019 07:00:00 GMT | not routed | 2026-03-17 |
| 95 | Safety Gym We’re releasing Safety Gym, a suite of environments and tools for measuring progress towards reinforcement learning agents that respect safety constraints while training. | openai-blog | Thu, 21 Nov 2019 08:00:00 GMT | not routed | 2026-03-17 |
| 95 | OpenAI Microscope We’re introducing OpenAI Microscope, a collection of visualizations of every significant layer and neuron of eight vision “model organisms” which are often studied in interpretability. Microscope make | openai-blog | Tue, 14 Apr 2020 07:00:00 GMT | not routed | 2026-03-17 |
| 95 | Governance of superintelligence Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI. | openai-blog | Mon, 22 May 2023 07:00:00 GMT | not routed | 2026-03-17 |
| 95 | Superalignment Fast Grants We’re launching $10M in grants to support technical research towards the alignment and safety of superhuman AI systems, including weak-to-strong generalization, interpretability, scalable oversight, a | openai-blog | Thu, 14 Dec 2023 08:00:00 GMT | not routed | 2026-03-17 |
| 95 | Preparing for future AI risks in biology Advanced AI can transform biology and medicine—but also raises biosecurity risks. We’re proactively assessing capabilities and implementing safeguards to prevent misuse. | openai-blog | Wed, 18 Jun 2025 10:00:00 GMT | Bioweapons Attack Chain Model (standard) | 2026-03-17 |
| 95 | Concrete AI safety problems We (along with researchers from Berkeley and Stanford) are co-authors on today’s paper led by Google Brain researchers, Concrete Problems in AI Safety. The paper explores many research problems around | openai-blog | Tue, 21 Jun 2016 07:00:00 GMT | AI Accident Risk Cruxes (standard) | 2026-03-15 |
| 95 | Our approach to alignment research We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alig | openai-blog | Wed, 24 Aug 2022 07:00:00 GMT | not routed | 2026-03-15 |
| 95 | Planning for AGI and beyond Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. | openai-blog | Fri, 24 Feb 2023 08:00:00 GMT | not routed | 2026-03-15 |
| 95 | Governance of superintelligence Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI. | openai-blog | Mon, 22 May 2023 07:00:00 GMT | not routed | 2026-03-15 |
| 95 | Frontier risk and preparedness To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge. | openai-blog | Thu, 26 Oct 2023 07:00:00 GMT | not routed | 2026-03-15 |
| 95 | Introducing OpenAI OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to g | openai-blog | Fri, 11 Dec 2015 08:00:00 GMT | not routed | 2026-03-14 |
| 95 | Lessons learned on language model safety and misuse We describe our latest thinking in the hope of helping other AI developers address safety and misuse of deployed models. | openai-blog | Thu, 03 Mar 2022 08:00:00 GMT | Large Language Models (standard) | 2026-03-14 |
| 95 | Planning for AGI and beyond Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. | openai-blog | Fri, 24 Feb 2023 08:00:00 GMT | not routed | 2026-03-14 |
| 95 | Governance of superintelligence Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI. | openai-blog | Mon, 22 May 2023 07:00:00 GMT | not routed | 2026-03-14 |
| 95 | OpenAI’s Approach to Frontier Risk An Update for the UK AI Safety Summit | openai-blog | Thu, 26 Oct 2023 07:00:00 GMT | Existential Risk from AI (standard) | 2026-03-14 |
| 95 | OpenAI and Anthropic share findings from a joint safety evaluation OpenAI and Anthropic share findings from a first-of-its-kind joint safety evaluation, testing each other’s models for misalignment, instruction following, hallucinations, jailbreaking, and more—highli | openai-blog | Wed, 27 Aug 2025 10:00:00 GMT | not routed | 2026-03-14 |
| 95 | Navigating AI Risks — Homepage A Substack publication offering **news and analysis about the governance of transformative AI risks**, aimed at policymakers, tech enthusiasts, and engaged citizens. | navigating-ai-risks | 2026-03-13 | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | Taking a responsible path to AGI We’re exploring the frontiers of AGI, prioritizing technical safety, proactive risk assessment, and collaboration with the AI community. | deepmind-blog | Wed, 02 Apr 2025 13:31:00 +0000 | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | Introducing OpenAI OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to g | openai-blog | Fri, 11 Dec 2015 08:00:00 GMT | not routed | 2026-03-13 |
| 95 | Aligning language models to follow instructions We’ve trained language models that are much better at following user intentions than GPT-3 while also making them more truthful and less toxic, using techniques developed through our alignment researc | openai-blog | Thu, 27 Jan 2022 08:00:00 GMT | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | Lessons learned on language model safety and misuse We describe our latest thinking in the hope of helping other AI developers address safety and misuse of deployed models. | openai-blog | Thu, 03 Mar 2022 08:00:00 GMT | Large Language Models (standard) | 2026-03-13 |
| 95 | Frontier risk and preparedness To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge. | openai-blog | Thu, 26 Oct 2023 07:00:00 GMT | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | OpenAI’s Approach to Frontier Risk An Update for the UK AI Safety Summit | openai-blog | Thu, 26 Oct 2023 07:00:00 GMT | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | Estimating worst case frontier risks of open weight LLMs In this paper, we study the worst-case frontier risks of releasing gpt-oss. We introduce malicious fine-tuning (MFT), where we attempt to elicit maximum capabilities by fine-tuning gpt-oss to be as ca | openai-blog | Tue, 05 Aug 2025 00:00:00 GMT | not routed | 2026-03-13 |
| 95 | Advancing independent research on AI alignment OpenAI commits $7.5M to The Alignment Project to fund independent AI alignment research, strengthening global efforts to address AGI safety and security risks. | openai-blog | Thu, 19 Feb 2026 10:00:00 GMT | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | Planning for AGI and beyond Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. | openai-blog | Fri, 24 Feb 2023 08:00:00 GMT | not routed | 2026-03-10 |
| 95 | Governance of superintelligence Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI. | openai-blog | Mon, 22 May 2023 07:00:00 GMT | not routed | 2026-03-10 |
| 95 | Detecting and reducing scheming in AI models Apollo Research and OpenAI developed evaluations for hidden misalignment (“scheming”) and found behaviors consistent with scheming in controlled tests across frontier models. The team shared concrete | openai-blog | Wed, 17 Sep 2025 00:00:00 GMT | Why Alignment Might Be Hard (standard) | 2026-03-10 |
| 95 | Advancing independent research on AI alignment OpenAI commits $7.5M to The Alignment Project to fund independent AI alignment research, strengthening global efforts to address AGI safety and security risks. | openai-blog | Thu, 19 Feb 2026 10:00:00 GMT | not routed | 2026-03-10 |
| 92 | AI Safety Newsletter #69: Department of War, Anthropic, and National Security Also, Anthropic Removes a Core Safety Commitment | cais-newsletter | Fri, 13 Mar 2026 14:15:54 GMT | not routed | 2026-03-15 |
| 92 | OpenAI’s Approach to Frontier Risk An Update for the UK AI Safety Summit | openai-blog | Thu, 26 Oct 2023 07:00:00 GMT | not routed | 2026-03-15 |
| 92 | Frontier risk and preparedness To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge. | openai-blog | Thu, 26 Oct 2023 07:00:00 GMT | not routed | 2026-03-14 |
| 90 | Last Week in AI #336 - Sonnet 4.6, Gemini 3.1 Pro, Anthropic vs Pentagon Anthropic releases Sonnet 4.6, Google Rolls Out Latest AI Model Gemini 3.1 Pro, Pentagon threatens to cut off Anthropic in AI safeguards dispute | last-week-in-ai | Tue, 24 Feb 2026 11:43:23 GMT | not routed | 2026-04-19 |
| 90 | Last Week in AI #338 - Anthropic sues Trump, xAI starting over, Iran AI Fakes Anthropic sues Trump administration in AI dispute with Pentagon, ‘Not built right the first time’ — Musk’s xAI is starting over again, again, Cascade of A.I. Fakes About War Wi | last-week-in-ai | Mon, 16 Mar 2026 04:18:14 GMT | not routed | 2026-04-19 |
| 90 | Taking a responsible path to AGI We’re exploring the frontiers of AGI, prioritizing technical safety, proactive risk assessment, and collaboration with the AI community. | deepmind-blog | Wed, 02 Apr 2025 13:31:00 +0000 | Alignment Robustness Trajectory (polish) | 2026-04-19 |
| 90 | Strengthening our Frontier Safety Framework We’re strengthening the Frontier Safety Framework (FSF) to help identify and mitigate severe risks from advanced AI models. | deepmind-blog | Thu, 23 Oct 2025 23:44:10 +0000 | not routed | 2026-04-19 |
| 90 | Measuring progress toward AGI: A cognitive framework We’re introducing a framework to measure progress toward AGI, and launching a Kaggle hackathon to build the relevant evaluations. | deepmind-blog | Tue, 17 Mar 2026 16:03:47 +0000 | not routed | 2026-04-19 |
| 90 | Statement from Dario Amodei on Discussions with the Department of War *(Published: February 26, 2026)* Anthropic was the first frontier AI company to deploy its models in the U.S. government's classified networks and at the National Laboratories, with Claude extensivel | anthropic-blog | 2026-04-19 | not routed | 2026-04-19 |
| 90 | Concrete AI safety problems We (along with researchers from Berkeley and Stanford) are co-authors on today’s paper led by Google Brain researchers, Concrete Problems in AI Safety. The paper explores many research problems around | openai-blog | Tue, 21 Jun 2016 07:00:00 GMT | not routed | 2026-04-19 |
| 90 | AI safety via debate We’re proposing an AI safety technique which trains agents to debate topics with one another, using a human to judge who wins. | openai-blog | Thu, 03 May 2018 07:00:00 GMT | not routed | 2026-04-19 |
| 90 | Why responsible AI development needs cooperation on safety We’ve written a policy research paper identifying four strategies that can be used today to improve the likelihood of long-term industry cooperation on safety norms in AI: communicating risks and bene | openai-blog | Wed, 10 Jul 2019 07:00:00 GMT | not routed | 2026-04-19 |
| 90 | Aligning language models to follow instructions We’ve trained language models that are much better at following user intentions than GPT-3 while also making them more truthful and less toxic, using techniques developed through our alignment researc | openai-blog | Thu, 27 Jan 2022 08:00:00 GMT | not routed | 2026-04-19 |
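One quick way to act on this data is to compute a per-source routing rate: the share of a source's items that were matched to any page. A source that produces many items but almost no routings is a candidate for review or removal. Below is a minimal sketch, assuming the items are exported as a JSON list with `source` and `routed_to` fields; the filename and field names are illustrative assumptions, not the pipeline's actual export format.

```python
import json
from collections import defaultdict

# Assumed export: a JSON list of items, each with a "source" name and a
# "routed_to" value that is null, empty, or "not routed" when unmatched.
# The filename and field names are assumptions, not the pipeline's schema.
with open("news-items.json") as f:
    items = json.load(f)

totals = defaultdict(int)   # items seen per source
routed = defaultdict(int)   # items matched to a page per source
for item in items:
    totals[item["source"]] += 1
    if item.get("routed_to") not in (None, "", "not routed"):
        routed[item["source"]] += 1

# Print sources from lowest to highest routing rate; consistently low
# outliers are the ones worth auditing in sources.yaml.
for source in sorted(totals, key=lambda s: routed[s] / totals[s]):
    rate = routed[source] / totals[source]
    print(f"{source:30s} {routed[source]:3d}/{totals[source]:3d} routed ({rate:.0%})")
```

With the numbers above (31 of 200 items routed), most sources will show low absolute rates, so the useful signal is a source whose rate sits well below its peers', not any fixed threshold.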
Configured Sources
30 sources configured (27 enabled). Edit data/auto-update/sources.yaml to add, remove, or toggle sources; an illustrative entry follows the table.
| Status | Name | Type | Frequency | Reliability | Categories | Last Fetched |
|---|---|---|---|---|---|---|
| ON | OpenAI Blog | rss | daily | high | ai-labs, models, safety, policy | — |
| ON | Anthropic Blog / Research | web-search | daily | high | ai-labs, safety, models, interpretability | — |
| ON | Google DeepMind Blog | rss | daily | high | ai-labs, models, safety, research | — |
| ON | Meta AI Blog | web-search | daily | high | ai-labs, models, open-source | — |
| ON | Alignment Forum | rss | daily | high | safety, alignment, research | — |
| ON | LessWrong | rss | daily | medium | safety, alignment, rationality, research | — |
| ON | EA Forum | rss | daily | medium | safety, policy, governance, funding | — |
| ON | AI Safety Policy News | web-search | daily | medium | policy, governance, regulation | — |
| ON | AI Executive Orders & Legislation | web-search | daily | medium | policy, governance, regulation | — |
| ON | ML Safety Newsletter | rss | daily | high | safety, alignment, research | — |
| ON | AI Safety Newsletter (CAIS) | rss | daily | high | safety, alignment, policy, research | — |
| ON | Last Week in AI | rss | daily | medium | ai-labs, models, industry, research | — |
| ON | Navigating AI Risks | web-search | daily | medium | safety, governance, policy, risk | — |
| ON | arXiv cs.AI (Artificial Intelligence) | rss | daily | high | research, safety, alignment, models, interpretability | — |
| ON | arXiv cs.CL (Computation and Language) | rss | daily | high | research, models, interpretability, capabilities | — |
| ON | arXiv cs.LG (Machine Learning) | rss | daily | high | research, models, safety, alignment | — |
| ON | AI Industry News | web-search | daily | medium | compute, industry, funding | — |
| ON | Jeffrey Epstein AI Researcher Connections | web-search | weekly | medium | safety, funding, history, governance | — |
| ON | AI Policy in Congress | web-search | daily | medium | policy, governance, legislation | — |
| ON | AI PAC & Election Spending | web-search | weekly | medium | policy, funding, governance | — |
| ON | State AI Legislation | web-search | weekly | medium | policy, governance, legislation | — |
| ON | Biosecurity Policy | web-search | weekly | medium | policy, governance, biosecurity | — |
| ON | Anthropic System Cards | web-search | weekly | high | ai-labs, models, safety, system-cards | — |
| ON | OpenAI System Cards | web-search | weekly | high | ai-labs, models, safety, system-cards | — |
| ON | Google DeepMind Model Cards | web-search | weekly | high | ai-labs, models, safety, system-cards | — |
| ON | Meta Llama Model Cards | web-search | weekly | high | ai-labs, models, open-source, system-cards | — |
| ON | xAI Model Cards | web-search | weekly | medium | ai-labs, models, system-cards | — |
| OFF | Import AI Newsletter | rss | daily | high | ai-labs, models, policy, research | — |
| OFF | The Gradient | rss | daily | high | research, models, safety | — |
| OFF | Zvi Mowshowitz (Don't Worry About the Vase) | rss | daily | high | safety, policy, models, ai-labs, governance | — |
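For orientation when editing, here is a hedged sketch of what one entry in data/auto-update/sources.yaml might look like, built from the Import AI Newsletter row above. The field names are inferred from the table columns, not taken from the pipeline's schema, so check them against an existing entry before adding a source.

```yaml
# Illustrative entry only. Field names (name, type, frequency, reliability,
# categories, enabled) are inferred from the table columns above; the
# actual schema in data/auto-update/sources.yaml is authoritative.
- name: Import AI Newsletter
  type: rss                  # "rss" or "web-search", per the Type column
  frequency: daily           # "daily" or "weekly", per the Frequency column
  reliability: high          # "high" or "medium", per the Reliability column
  categories: [ai-labs, models, policy, research]
  enabled: false             # shown as OFF in the table above
```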