Longterm Wiki

AI Safety Workshop @ EA Hotel: Autostructures — Interface Design for Conceptual Sensemaking

web

A Manifund crowdfunding project for an AI safety workshop at EA Hotel focused on developing novel interface design and sensemaking tools for conceptual AI safety research, exploring adaptive methodologies for navigating transformative AI futures.

Metadata

Importance: 18/100 · other · tool

Summary

This project seeks to create culture and technology around AI interfaces for conceptual sensemaking, particularly for research methodologies suited to a world with widely-deployed AI systems. It focuses on 'live theory'—adaptive theoretical frameworks powered by near-term AI to help researchers navigate extreme AI transformation. The project aims to partner with alignment researchers, build sensemaking tools, and generate engineering and cultural outputs toward a new organization.

Key Points

  • Develops AI interface design paradigms for conceptual AI safety research and sensemaking in a world with widely-adopted AI systems.
  • Introduces 'live theory'—adaptive theories enabled by near-term AI to make sense of increasingly complex AI-driven futures.
  • Goals include validating hypotheses around live theory, building sensemaking tools, and partnering with conceptual alignment researchers.
  • Explores non-abusive tech design and threat models where tools themselves can be harmful or frustrating to users.
  • Crowdfunding project on Manifund with $8,555 raised toward a $17,815 goal, linked to a workshop at the EA Hotel.

Cached Content Preview

HTTP 200 · Fetched Apr 11, 2026 · 9 KB
[AI Safety Workshop @ EA Hotel] Autostructures | Manifund

 

 
 
 
 

About this capture: captured February 7, 2026. Collected by Common Crawl (web crawl data).
The Wayback Machine - http://web.archive.org/web/20260207072349/https://manifund.org/projects/ai-safety-workshop--ea-hotel-autostructures

 

Manifund


[AI Safety Workshop @ EA Hotel] Autostructures

🐝

Sahil

Active

Grant

$8,555 raised of $17,815 funding goal


See the LessWrong post: Live Machinery: Interface Design Workshop for AI Safety @ EA Hotel — AI Alignment Forum

Project summary

This is a project for creating culture and technology around AI interfaces for conceptual sensemaking.

Specifically, creating for the near future where our infrastructure is embedded with realistic levels of intelligence (i.e., only mildly creative but widely adopted), yet full of novel, wild design paradigms anyway.

The focus is especially on interfaces for new sensemaking and research methodologies that can feed into a rich and wholesome future.

Huh?

It’s a project for AI interfaces that don’t suck, for the purposes of (conceptual AI safety) research that doesn’t suck.

Wait, so you think AI can only be mildly intelligent?

Nope.

But you only care about the short term, of “mild intelligence”?

Nope, the opposite. We expect AI to be very, very, very transformative. And therefore, we expect intervening periods to be very, very transformative. Additionally, we expect even “very, very transformative” intervening periods to be crucial, and quite weird themselves. 

In preparing for this upcoming intervening period, we want to work on the newly enabled design ontologies of sensemaking that can keep pace with a world replete with AIs and their prolific outputs. Using the near-term crazy future to meet the even crazier far-off future is the only way to go. 

(As you’ll see in the FAQ, we will specifically move towards adaptive sensemaking meeting even more adaptive phenomena.)

So you don’t care about risks?

Nope, the opposite. This is all about research methodological opportunities meeting risks of infrastructural insensitivity.

More.

See the rest in the LessWrong post at the top, with lots of examples. Highly recommended if you like weird but fleshed-out approaches to alignment. Or watch a 10-minute video here for a little more background: Scaling What Doesn’t Scale: Teleattention Tech.

***

What are this project's goals? How will you achieve them?

Some goals:

Validate so

... (truncated, 9 KB total)
Resource ID: 50d2354050058851 | Stable ID: sid_10VpW2Ew5M