
Help keep AI under human control: Palisade Research 2026 fundraiser

Type: web

Authors

Jeffrey Ladish · benwr · Eli Tyre · John Steidley

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: LessWrong

This is a 2026 fundraiser post for Palisade Research, an AI safety organization. It is useful for understanding the AI safety funding landscape and the kinds of control-focused technical research pursued by smaller safety-focused groups.

Metadata

Importance: 35/100
Tags: news

Summary

Fundraiser announcement for Palisade Research, an AI safety organization focused on maintaining human control over AI systems. The post outlines their 2026 research agenda and makes the case for supporting their work on technical AI safety and control. It serves as both an organizational overview and a public appeal for financial contributions.

Key Points

  • Palisade Research focuses on technical research aimed at ensuring AI systems remain under meaningful human control.
  • The fundraiser presents their research priorities and accomplishments to justify continued community financial support.
  • Their work includes empirical evaluations of frontier AI systems (such as the shutdown-resistance and chess-hacking experiments described in the cached post below) and briefings for policymakers.
  • The post is part of the broader AI safety community fundraising ecosystem on LessWrong.
  • Supporting such organizations is framed as a concrete way for individuals to contribute to AI safety outcomes.

Cached Content Preview

HTTP 200 · Fetched Apr 10, 2026 · 15 KB
# Help keep AI under human control: Palisade Research 2026 fundraiser 
By Jeffrey Ladish, benwr, Eli Tyre, John Steidley
Published: 2025-12-18
**TL;DR:** Please consider donating to Palisade Research this year, especially if you care about reducing catastrophic AI risks via research, science communications, and policy. [SFF](https://survivalandflourishing.fund/2025/recommendations) is matching donations to Palisade 1:1 up to $1.1 million! You can donate via [Every.org](https://www.every.org/palisade-research) or reach out at [donate@palisaderesearch.org](mailto:donate@palisaderesearch.org).

Who We Are
==========

[Palisade Research](https://palisaderesearch.org/) is a nonprofit focused on reducing civilization-scale risks from agentic AI systems. We conduct empirical research on frontier AI systems, and inform policymakers and the public about AI capabilities and the risks to human control.

This year, we found that some frontier AI agents [resist being shut down](https://arxiv.org/abs/2509.14260) even when instructed otherwise—and that they sometimes [cheat at chess](https://arxiv.org/abs/2502.13295) by hacking their environment. These results were covered in [Time](https://time.com/7259395/ai-chess-cheating-palisade-research/), [The Wall Street Journal](https://www.wsj.com/opinion/ai-is-learning-to-escape-human-control-technology-model-code-programming-066b3ec5), [Fox News](https://www.youtube.com/watch?v=R9WpHc7l2V8), [BBC Newshour](https://www.bbc.com/audio/play/w172zssbc6lhkd3), and [MIT Technology Review](https://www.technologyreview.com/2025/04/04/1114228/cyberattacks-by-ai-agents-are-coming/).

We've also built relationships in Washington, briefing officials in the executive branch and members of the House and Senate. We've introduced policymakers to key evidence like METR's [capability trend lines](https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/) and Apollo’s [antischeming.ai](http://antischeming.ai) chains of thought. Our own research has been [cited](https://www.youtube.com/live/wKkk-uWi7HM?si=I4mStjrKsZiiV7c2&t=7791) repeatedly by members of Congress and in congressional hearings.

With additional funding, we'll grow our research team—both continuing to evaluate frontier model behavior and beginning more systematic investigation into what drives and motivates AI systems. We're building out a communications team to bring the strategic picture to the public through video and other media. And we’ll continue to brief policymakers on the evolving state of the AI risk landscape.

We have matching grants from [the Survival and Flourishing Fund](https://survivalandflourishing.fund/2025/recommendations) that will double every donation up to $1,133,000. Right now we have about seven months of runway. Achieving our matching goal will help us maintain operations through 2026, hire 2–4 additional research engineers, and bring on 2–3 people for science communication.

2025 track record
=================

Research
----

... (truncated, 15 KB total)
Resource ID: db8efd724b178326 | Stable ID: sid_qzeRvvMDzt