Longterm Wiki

Help a Bootstrapped AI Risk Literacy Founder Get To IASEAI 2026 in Paris

web

A crowdfunding project on Manifund seeking travel funding for an AI risk literacy advocate to attend the IASEAI 2026 conference in Paris, focused on AI transparency frameworks and expanding AI safety discourse in underserved markets like India.

Metadata

Importance: 18/100 · othernews

Summary

Aashka Patel seeks $3,036 in travel funding to attend the IASEAI 2026 conference in Paris, where she was invited to present her 'AI Nutrition Labels' framework for consumer-facing AI transparency. The project also aims to expand AI risk literacy outreach through her podcast 'On AIR with Aashka,' targeting audiences in India where AI safety discourse is nascent. The project was fully funded.

Key Points

  • Proposes 'AI Nutrition Labels' framework to make AI system transparency accessible to everyday consumers, analogous to food nutrition labels.
  • Targets AI safety outreach to Indian audiences (56% of her podcast base), where AI safety discourse is described as nearly absent.
  • Conference attendance at IASEAI 2026 (invitation-only, Nobel Laureates and leading researchers) to validate framework and build governance connections.
  • Podcast 'On AIR with Aashka' has previously redirected students toward AI safety careers, aiming to scale this pipeline.
  • Project was fully funded at $3,036, covering registration, flights, accommodation, visa, and travel insurance.

Cached Content Preview

HTTP 200 · Fetched Apr 11, 2026 · 10 KB
Collected by Common Crawl (web crawl data).
The Wayback Machine - https://web.archive.org/web/20260217113803/https://manifund.org/projects/help-a-bootstrapped-ai-risk-literacy-founder-get-to-paris-iaseai-2026

 

Manifund


Help a Bootstrapped AI Risk Literacy Founder Get To IASEAI 2026 in Paris

Science & technology

Technical AI safety

AI governance

EA community

Global catastrophic risks

Aashkaben Kalpesh Patel

Active

Grant

$3,036 raised

$3,036 funding goal

Fully funded and not currently accepting donations.

Project summary:

I'm seeking funding to attend the International Association for Safe and Ethical AI (IASEAI) Conference 2026 in Paris (February 24-26), where I was invited based on my talk proposal "AI Nutrition Labels For Everyone". This invitation-only conference convenes Nobel Laureates and leading AI safety and ethics researchers committed to ensuring that AI technologies are safe, ethical, and beneficial.

What are this project's goals? How will you achieve them?

Two interconnected goals:

1. Research Advancement - AI Nutrition Labels

Just as nutrition labels transformed food safety through consumer choice, my AI Nutrition Labels framework aims to do the same for AI governance. Currently, everyday consumers cannot make informed choices about the AI systems shaping their lives; 80-page model cards remain inaccessible, creating an accountability vacuum. At IASEAI'26, I will:

  • Validate my framework with AI transparency researchers and refine Performance Value, Safety Value, and environmental impact measurements
  • Connect with governance experts, ISO specialists, policymakers, and practitioners to identify regulatory implementation pathways, as IASEAI intends to become a liaison organization for standards-forming bodies

When people can choose AI products based on understandable labels, companies compete on safety and sustainability, not just capability. This consumer-driven accountability reduces catastrophic risk alongside technical safety work.
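To make the framework's stated dimensions concrete, here is a minimal sketch of what such a label could look like as a data structure. The field names follow the dimensions named in the proposal (Performance Value, Safety Value, environmental impact); the schema, scales, and example figures are illustrative assumptions, not part of the actual framework.

```python
from dataclasses import dataclass


@dataclass
class AINutritionLabel:
    """Hypothetical consumer-facing label for an AI system.

    Scales and fields are assumptions for illustration only;
    the real framework's measurements may differ.
    """
    system_name: str
    performance_value: float    # assumed benchmark-derived score, 0-100
    safety_value: float         # assumed safety-evaluation score, 0-100
    energy_per_query_wh: float  # assumed environmental-impact proxy

    def summary(self) -> str:
        # One-line, plain-language rendering a consumer could compare
        # across products, analogous to scanning a food label.
        return (f"{self.system_name}: performance {self.performance_value:.0f}/100, "
                f"safety {self.safety_value:.0f}/100, "
                f"~{self.energy_per_query_wh:.1f} Wh/query")


label = AINutritionLabel("ExampleChat", 82, 74, 0.3)
print(label.summary())
```

The point of the one-line `summary()` is the same as a food label's front panel: a handful of comparable numbers, so that choosing between two AI products does not require reading an 80-page model card.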

2. Expanding AI Risk Literacy Network:

As an AI Product Manager, I've witnessed "safety later" culture at US AI startups and "blinded optimism" in India. Through "On AIR with Aashka," I'm building AI risk literacy infrastructure for audiences primarily in India (56%), where AI safety discourse barely exists. IASEAI'26 concentrates ex

... (truncated, 10 KB total)
Resource ID: 348106369ff905f2 | Stable ID: sid_LQE47uo4Wk