Longterm Wiki

Measuring the Epistemic Cost-Effectiveness of AI Safety Content

web

A Manifund grant project proposing to measure the epistemic cost-effectiveness of AI safety communication content using a Quality-Adjusted Viewer Minute (QAVM) framework, combining watch-party experiments and human rater evaluations to help funders assess which communication efforts genuinely change minds.

Metadata

Importance: 32/100 · other · tool

Summary

This project seeks to establish a practical, evidence-based framework for measuring the epistemic impact of AI safety communication by introducing the Quality-Adjusted Viewer Minute (QAVM) metric. It combines structured watch-party sessions in Berkeley, human rater evaluations, and expert validation to calibrate a quantitative model linking engagement metrics to genuine understanding. The resulting open-source methodology aims to help funders compare AI safety communication projects by epistemic value per dollar.

Key Points

  • Introduces the Quality-Adjusted Viewer Minute (QAVM) framework to measure not just viewership but depth of understanding and reflective engagement with AI safety content.
  • Plans eight structured watch-party sessions at UC Berkeley with 10 participants each to collect behavioral data linking engagement metrics to genuine curiosity and learning.
  • Employs three trained human raters and four domain experts to score content for epistemic accuracy, fidelity, and reflective depth.
  • Aims to produce open-source calibration data and replication scripts to enable transparent, reproducible measurement of epistemic effectiveness.
  • Results intended to inform AI safety content fellowships (e.g., Mox Populi, Signal Creators) and serve as a decision-support tool for field-building funders.

Cached Content Preview

HTTP 200 · Fetched Apr 12, 2026 · 9 KB
Measuring the Epistemic Cost-Effectiveness of AI Safety Content | Manifund

 

 
 
 
 

Collected by Common Crawl (web crawl data).
The Wayback Machine - https://web.archive.org/web/20251119110903/https://manifund.org/projects/measuring-the-epistemic-cost-effectiveness-of-ai-safety-content

 


Measuring the Epistemic Cost-Effectiveness of AI Safety Content

EA community

Aditya Mehta

Active

Grant

$6,060 raised

$14,800 funding goal


Project summary

Funders spend millions on AI safety communication—videos, podcasts, explainer content—but lack a consistent way to measure which efforts actually change minds or build epistemic engagement. 

This project establishes a practical, evidence-based way to measure the epistemic impact of AI-safety communication. It builds on the Quality-Adjusted Viewer Minute (QAVM) framework introduced by Austin Chen and Marcus Abramovitch, which captures not just how many people watch AI-safety content, but how deeply they understand, reflect on, and continue engaging with it.

The study will collect new human data through small, structured watch parties in Berkeley, paired with expert and rater evaluations of message accuracy and reflective depth. These data will calibrate a quantitative model (read more at https://drive.google.com/file/d/16KxeA1ZdFrKRAa10owLStunk5huAAyLr/view?usp=sharing), allowing future funders and researchers to assess communication efforts using a transparent, reproducible measure of epistemic effectiveness.
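The core idea of the QAVM metric — weighting raw watch time by rated quality — can be sketched as follows. This is a minimal illustration, not the project's actual model: the equal weighting of the three rater dimensions and the score ranges are hypothetical assumptions.

```python
def qavm(viewer_minutes: float,
         accuracy: float,
         fidelity: float,
         reflective_depth: float) -> float:
    """Quality-Adjusted Viewer Minutes: raw watch time scaled by a
    quality weight derived from rater scores.

    Assumes each rater score lies in [0, 1]; the equal weighting
    below is a hypothetical choice, not the calibrated model.
    """
    quality = (accuracy + fidelity + reflective_depth) / 3
    return viewer_minutes * quality

# Example: 10,000 raw viewer minutes with moderate rater scores.
print(qavm(10_000, accuracy=0.8, fidelity=0.7, reflective_depth=0.6))  # 7000.0
```

The watch-party and rater data described above would, in this framing, supply the calibration that turns raw engagement metrics into the quality weight.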

Beyond the immediate research contribution, the results can also directly inform upcoming AI-safety content fellowships such as Mox Populi and Signal Creators by helping evaluate training outcomes. 

In this way, the project serves as both a research tool and a decision-support system for emerging field-building initiatives.

What are this project's goals? How will you achieve them?

Goals

To produce a human-grounded model that estimates the epistemic cost-effectiveness of AI-safety media.

To generate open-source calibration data that link YouTube engagement metrics to genuine understanding and reasoning depth.

To provide funders with a tool to compare communication projects based on epistemic value per dollar.
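The funder-facing comparison described in the last goal reduces to a simple ratio of QAVM to spend. A minimal sketch — the project names and figures below are invented for illustration:

```python
def qavm_per_dollar(qavm_total: float, cost_usd: float) -> float:
    """Epistemic value per dollar: total QAVM divided by spend."""
    if cost_usd <= 0:
        raise ValueError("cost must be positive")
    return qavm_total / cost_usd

# Hypothetical projects: (name, total QAVM, cost in USD).
projects = [
    ("explainer-video", 120_000, 15_000),
    ("podcast-series", 90_000, 6_000),
]

# Rank projects by epistemic value per dollar, highest first.
ranked = sorted(projects, key=lambda p: qavm_per_dollar(p[1], p[2]), reverse=True)
for name, q, cost in ranked:
    print(f"{name}: {qavm_per_dollar(q, cost):.1f} QAVM/$")
```

Note that the ranking can invert the raw-reach ordering: the cheaper project with fewer total QAVM wins on a per-dollar basis.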

How these goals will be achieved

Run eight watch-party sessions in Berkeley, observing how real viewers engage with AI-safety content and capturing follow-up curiosity and comprehension.

Employ three trained human raters

... (truncated, 9 KB total)
Resource ID: d579aa4700c7e6db