
QURI (Quantified Uncertainty Research Institute)

Last edited: 2026-01-29
| Dimension | Assessment | Evidence |
|---|---|---|
| Innovation | High | Squiggle is a unique probabilistic language with native distribution algebra; SquiggleAI generates 100-500 line models |
| Practical Impact | Growing | GiveWell CEA quantification projects (≈300 hours of work), AI timeline models, EA cause prioritization |
| Open Source | Fully | All projects MIT licensed on GitHub; monorepo includes Squiggle, Hub, and Metaforecast |
| Funding | Stable | $850K+ total: SFF ($650K through 2022), Future Fund ($200K, 2022), LTFF (ongoing) |
| Team Size | Small | ≈3-5 core contributors including Ozzie Gooen (founder), with fiscal sponsorship from Rethink Priorities |
| AI Integration | Active | SquiggleAI uses Claude Sonnet 4.5 (20K-token cached context); RoastMyPost uses Claude + Perplexity |
| Community | Niche but engaged | EA Forum presence, Forecasting & Epistemics Slack (#squiggle-dev), $300 Fermi competitions |
| Data Scale | Significant | Metaforecast indexes 2,100+ forecasts; 17,000+ public Guesstimate models |

| Attribute | Details |
|---|---|
| Full Name | Quantified Uncertainty Research Institute |
| Founded | 2019 (evolved from Guesstimate, founded 2016) |
| Founder & Executive Director | Ozzie Gooen |
| Location | Berkeley, California (primarily remote) |
| Status | 501(c)(3) nonprofit; fiscally sponsored by Rethink Priorities |
| EIN | 84-3847921 |
| Website | quantifieduncertainty.org |
| GitHub | github.com/quantified-uncertainty (monorepo structure) |
| Substack | quri.substack.com |
| Primary Funders | Survival and Flourishing Fund (SFF), Long-Term Future Fund (LTFF), Future Fund (pre-FTX) |
| Total Funding | $850,000+ through 2022 |

The Quantified Uncertainty Research Institute (QURI) is a nonprofit research organization focused on developing tools and methodologies for probabilistic reasoning and forecasting. Founded in 2019 by Ozzie Gooen, QURI aims to make uncertainty quantification more accessible and rigorous, particularly for decisions affecting the long-term future of humanity. The organization evolved from Gooen’s earlier work on Guesstimate, a spreadsheet tool for Monte Carlo simulations that demonstrated strong demand for accessible uncertainty tools in the effective altruism community.

QURI’s flagship project is Squiggle, a domain-specific programming language designed specifically for probabilistic estimation. Unlike general-purpose languages, Squiggle provides first-class support for probability distributions, enabling analysts to express complex uncertainties naturally. The language powers Fermi estimates, cost-effectiveness analyses, and forecasting models used throughout the effective altruism and rationalist communities. QURI’s experience building tools like Squiggle and Guesstimate has revealed a significant challenge: even highly skilled domain experts frequently struggle with basic programming requirements and often make errors in their probabilistic models.

The organization has expanded beyond Squiggle to encompass a suite of epistemic tools: Squiggle Hub for collaborative model sharing (hosting 17,000+ Guesstimate models), Metaforecast for aggregating predictions across 10+ platforms, SquiggleAI for LLM-assisted model generation, and RoastMyPost for AI-powered blog post evaluation. Together, these tools form an ecosystem aimed at improving quantitative reasoning in high-stakes domains.

Organizations in the effective altruism and rationalist communities regularly rely on cost-effectiveness analyses and Fermi estimates to guide their decisions. QURI’s mission is to make these probabilistic tools more accessible and reliable for altruistic causes, bridging the gap between sophisticated quantitative methods and practical decision-making.

Ozzie Gooen is the founder and Executive Director of QURI, with a background spanning engineering, effective altruism, and forecasting research.

| Period | Role | Focus |
|---|---|---|
| 2008-2012 | Harvey Mudd College | B.S. General Engineering (Economics concentration) |
| ≈2014 | Founding Engineer, ZenTrust | Estate planning web service |
| 2015-2016 | .impact co-founder | EA community building infrastructure |
| 2016 | Guesstimate founder | Monte Carlo spreadsheet tool |
| 2017-2019 | Research Scholar, Future of Humanity Institute | Forecasting infrastructure research |
| 2019-present | Executive Director, QURI | Epistemic tools development |

Gooen discovered effective altruism in college after learning he “sounded like a utilitarian.” His path from engineering to forecasting research reflects a consistent interest in applying quantitative methods to important decisions. At FHI, he focused on forecasting infrastructure research, exploring how prediction markets and aggregation methods could improve institutional decision-making.

Beyond QURI, Gooen serves on the boards of Rethink Charity and Rethink Priorities. He has contributed to discussions on AI governance, forecasting methodology, and quantitative cause prioritization through extensive writing on the EA Forum and LessWrong.

  • Guesstimate: Created the first widely-used Monte Carlo spreadsheet, which runs 5,000 simulations per model and has accumulated 17,000+ public models
  • Forecasting Research: Developed proposals for prediction market improvements and forecast aggregation methods
  • QURI Tools Ecosystem: Oversaw development of Squiggle, Metaforecast, SquiggleAI, and RoastMyPost
  • EA Community: Helped build early EA infrastructure through .impact and ongoing community engagement

Squiggle is QURI’s primary project—a domain-specific programming language for probabilistic estimation that runs in the browser via JavaScript. It can be useful to think of Squiggle as similar to SQL, Excel, or probabilistic programming languages like WebPPL: there are simple ways to declare variables and write functions, but don’t expect classes, inheritance, or monads.

Squiggle is meant for intuitively-driven quantitative estimation rather than data analysis or data-driven statistical techniques. It’s designed for situations where there is very little data available, and most variables will be intuitively estimated by domain experts. The syntax is forked from Guesstimate and Foretold, optimized for readable probabilistic expressions.

| Feature | Description | Example |
|---|---|---|
| Distribution types | Native support for normal, lognormal, uniform, beta, cauchy, gamma, logistic, exponential, bernoulli, triangular | `normal(10, 2)`, `beta(2, 8)` |
| The `to` syntax | Intuitive lognormal creation from 5th/95th percentiles | `1 to 10` equals `lognormal({p5: 1, p95: 10})` |
| Distribution algebra | Mathematical operations propagate through distributions | `normal(10, 2) * uniform(0.8, 1.2)` |
| Multiple parameterizations | Same distribution, different inputs | `normal({p5: 5, p95: 15})`, `normal({mean: 10, stdev: 3})` |
| Functions with domains | Type-checked parameter ranges | `f(x: [1,10]) = x * 2` fails if called with `f(15)` |
| Three representations | Sample Set (default, supports correlations), Point Set, Symbolic | `Sym.normal(10, 2)` for symbolic |
| Monte Carlo sampling | Automatic sampling and propagation | Built into runtime |
| Visualization | Native plotting of distributions | Integrated in playground |
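
The `to` syntax row hides a small piece of math worth making explicit: `p5 to p95` is a lognormal whose log is normally distributed, so its parameters can be solved from the two percentiles. A standard-library Python sketch of that conversion (illustrative, not QURI code):

```python
from math import exp, log
from statistics import NormalDist

Z95 = NormalDist().inv_cdf(0.95)  # ≈ 1.645, z-score of the 95th percentile

def lognormal_from_percentiles(p5, p95):
    """Solve for the log-space (mu, sigma) of a lognormal whose 5th and
    95th percentiles are p5 and p95 -- the arithmetic behind `p5 to p95`."""
    mu = (log(p5) + log(p95)) / 2
    sigma = (log(p95) - log(p5)) / (2 * Z95)
    return mu, sigma

mu, sigma = lognormal_from_percentiles(1, 10)
p5_check = exp(mu - Z95 * sigma)   # round-trips to 1
p95_check = exp(mu + Z95 * sigma)  # round-trips to 10
```

Because the log of the distribution is symmetric, `mu` is simply the midpoint of the log-percentiles, which is why the `to` shorthand is so convenient for order-of-magnitude estimates.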

Squiggle offers flexible distribution creation with multiple parameterization options:

```squiggle
// Normal distribution - multiple ways to specify
normal(10, 2)                // mean, standard deviation
normal({mean: 10, stdev: 2}) // explicit parameters
normal({p5: 5, p95: 15})     // percentile-based
normal({p10: 6, p90: 14})    // different percentiles
normal({p25: 8, p75: 12})    // quartile-based

// Lognormal distribution - the workhorse for positive quantities
lognormal(2, 0.5)               // mu, sigma (log-scale parameters)
lognormal({mean: 10, stdev: 5}) // arithmetic mean/stdev
5 to 50                         // shorthand for lognormal({p5: 5, p95: 50})

// Beta distribution - for probabilities between 0 and 1
beta(2, 8)                       // alpha, beta parameters (~20% mean)
beta({mean: 0.3, stdev: 0.1})    // mean/stdev parameterization

// Other distributions
uniform(5, 15)          // uniform between bounds
triangular(5, 10, 15)   // min, mode, max
exponential(0.1)        // rate parameter
```

Basic Cost-Effectiveness Model:

```squiggle
// Cost-effectiveness model for an AI safety intervention
// Note: lognormal's positional arguments are log-scale, so we pass
// log(1e6) to get a $1M median rather than lognormal(1e6, 1.5).
interventionCost = lognormal(log(1e6), 1.5)   // median $1M, high uncertainty
probabilityOfSuccess = beta(2, 8)             // ~20% base rate
valueIfSuccessful = lognormal(log(1e12), 2)   // high but uncertain value
expectedValue = probabilityOfSuccess * valueIfSuccessful
costEffectiveness = expectedValue / interventionCost
```
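
For readers outside the Squiggle ecosystem, the same model can be approximated with a plain-Python Monte Carlo loop (an illustrative sketch, not QURI code; `lognormvariate` also takes log-space parameters, hence `log(1e6)`):

```python
import random
from math import log
from statistics import median

rng = random.Random(0)
N = 20_000

def sample_cost_effectiveness():
    cost = rng.lognormvariate(log(1e6), 1.5)   # median $1M, high uncertainty
    p_success = rng.betavariate(2, 8)          # ~20% mean base rate
    value = rng.lognormvariate(log(1e12), 2)   # high but uncertain value
    return (p_success * value) / cost          # value per dollar spent

samples = [sample_cost_effectiveness() for _ in range(N)]
median_ce = median(samples)  # summary of the output distribution
```

Squiggle does this sampling and propagation automatically; the loop above just makes the underlying mechanics visible.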

Function with Domain Constraints:

```squiggle
// Calculate expected value with bounded probability
calculateEV(probability: [0, 1], value) = {
  adjustedProb = probability * normal(1, 0.1) // add estimation uncertainty
  truncate(adjustedProb, 0, 1) * value
}

// Use the function (log(1e9) gives a $1B median; lognormal's
// positional arguments are log-scale)
projectValue = calculateEV(0.3, lognormal(log(1e9), 2))
```
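
A plain-Python equivalent of the domain-checked function, under the assumption that the domain annotation's job is input validation (a sketch; `value_samples` stands in for the sampled lognormal argument):

```python
import random

def calculate_ev(probability, value_samples, rng=None):
    """Mirror of the Squiggle calculateEV: validate the domain, jitter the
    probability with ~10% estimation noise, clamp back to [0, 1], and
    multiply through the sampled values."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must lie in [0, 1]")  # domain check
    rng = rng or random.Random(42)
    results = []
    for v in value_samples:
        p = probability * rng.gauss(1, 0.1)  # add estimation uncertainty
        p = min(max(p, 0.0), 1.0)            # truncate to [0, 1]
        results.append(p * v)
    return results

# Flat stand-in values for illustration; real use would pass lognormal samples.
project_value = calculate_ev(0.3, [1e9] * 1_000)
```

Calling `calculate_ev(1.5, ...)` raises immediately, which is the behavior the Squiggle domain `[0, 1]` enforces at the language level.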

Multi-Variable Fermi Estimate:

```squiggle
// Estimate: number of piano tuners in Chicago
chicagoPopulation = 2.7M to 2.9M
householdsPerPerson = 0.35 to 0.45
pianoOwnershipRate = 0.02 to 0.05
tuningsPerYear = 0.5 to 2
hoursPerTuning = 1.5 to 2.5
workingHoursPerYear = 1800 to 2200

totalTunings = chicagoPopulation * householdsPerPerson *
  pianoOwnershipRate * tuningsPerYear
tunerCapacity = workingHoursPerYear / hoursPerTuning
numberOfTuners = totalTunings / tunerCapacity
```
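
Translated into standard-library Python as a sanity check (a sketch; the `to` helper reimplements Squiggle's percentile-to-lognormal shorthand):

```python
import random
from math import log
from statistics import NormalDist, median

rng = random.Random(1)
Z95 = NormalDist().inv_cdf(0.95)

def to(p5, p95, n=10_000):
    """Mimic Squiggle's `p5 to p95`: lognormal samples with those percentiles."""
    mu = (log(p5) + log(p95)) / 2
    sigma = (log(p95) - log(p5)) / (2 * Z95)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

population = to(2.7e6, 2.9e6)
hh_per_cap = to(0.35, 0.45)
ownership  = to(0.02, 0.05)
tunings_yr = to(0.5, 2)
hours_tune = to(1.5, 2.5)
work_hours = to(1800, 2200)

tuners = [
    (p * h * o * t) / (w / ht)  # total tunings / one tuner's annual capacity
    for p, h, o, t, w, ht in zip(population, hh_per_cap, ownership,
                                 tunings_yr, work_hours, hours_tune)
]
tuner_median = median(tuners)  # on the order of tens of tuners
```

The wide `0.02 to 0.05` ownership rate and `0.5 to 2` tunings-per-year ranges dominate the output uncertainty, which is exactly the kind of observation these models are built to surface.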

| Version | Date | Key Changes |
|---|---|---|
| 0.10.0 | January 2025 | SqProject rewrite, Web Workers by default, compile-time type inference, unit type annotations, UI overhaul |
| 0.9.4-0.9.5 | 2024 | Experimental Web Worker runner, version selection in playground |
| 0.8.6 | 2024 | Import/export support, multi-model projects |
| 0.8.x | 2023 | Performance improvements, Squiggle Hub integration |
| 0.7.0 | 2023 | SquiggleAI integration foundations |
| Early Access | 2020 | Initial public release |

The January 2025 release of Squiggle 0.10.0 represented six months of development with significant architectural changes:

Web Workers: All Squiggle code now runs in a separate Web Worker thread by default, with results marshaled back asynchronously. This prevents UI freezes during complex calculations.

Type System: New compile-time type inference transforms AST to typed AST, enabling earlier error detection. The pipeline now includes semantic analysis for type checks.

Unit Type Annotations (experimental, contributed by Michael Dickens): Variables can be annotated with physical units like kilograms, dollars, or compound units like m/s^2.

UI Changes: The output viewer now defaults to collapsed variables. Use the @startOpen decorator to expand variables by default.

Strengths:

  • Simple, readable syntax optimized for probabilistic math
  • Fast for small-to-medium models
  • Strong for rapid prototyping
  • Optimized for numeric and symbolic approaches, not just Monte Carlo
  • Embeddable in JavaScript applications
  • Free and open-source (MIT license)

Limitations:

  • Does not support Bayesian inference (cannot do backwards inference from data)
  • Much slower than languages like Stan or PyMC on large models
  • Limited scientific computing ecosystem
  • Beta distributions display poorly when alpha or beta are below 1.0

SquiggleAI integrates large language models to assist with probabilistic model creation, addressing a key challenge QURI identified: even highly skilled domain experts frequently struggle with basic programming requirements for probabilistic modeling.

SquiggleAI uses prompt caching to cache approximately 20,000 tokens of information about the Squiggle language, ensuring the LLM has deep knowledge of syntax, distributions, and best practices. Most workflows complete within 20 seconds to 3 minutes, producing models typically 100-500 lines long depending on the model used.
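
A hedged sketch of what such a cached request payload can look like, following Anthropic's documented `cache_control` convention (the model id, token budget, and placeholder documentation string are assumptions, not QURI's actual configuration):

```python
# Placeholder for QURI's ~20K-token Squiggle language reference.
SQUIGGLE_DOCS = "<syntax, distribution reference, and best practices go here>"

request = {
    "model": "claude-sonnet-4-5",  # hypothetical model id
    "max_tokens": 8192,
    "system": [
        {
            "type": "text",
            "text": SQUIGGLE_DOCS,
            # Anthropic's prompt-caching marker: this block is cached and
            # reused across calls rather than re-processed every request.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [
        {
            "role": "user",
            "content": "Estimate the cost-effectiveness of hiring an AI safety researcher.",
        }
    ],
}
```

Only the short user message varies between calls, which is why caching the large language reference keeps latency and cost manageable.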

| Model | Status | Output Size | Speed | Best For |
|---|---|---|---|---|
| Claude Sonnet 4.5 | Primary | ≈500 lines | Medium | Complex multi-variable models |
| Claude Haiku 4.5 | Available | ≈150 lines | Fast | Quick prototypes |
| Grok Code Fast 1 | Available | ≈200 lines | Fast | Alternative provider |
| Claude Sonnet 3.5 | Legacy | ≈200 lines | Medium | Stable fallback |

| Capability | Description | Example Use Case |
|---|---|---|
| Model Generation | Describe a problem in natural language, receive executable Squiggle code | "Estimate the cost-effectiveness of hiring an AI safety researcher" |
| Iterative Refinement | Conversation-based model improvement | "Add more uncertainty to the timeline assumptions" |
| Fermi Estimation | Generate complete uncertainty models from vague questions | "How many piano tuners are in Chicago?" |
| Code Debugging | Identify and fix syntax and logic errors | Fix distribution domain mismatches |
| Model Explanation | Explain what existing Squiggle code does | Document inherited models |

SquiggleAI outputs on Squiggle Hub are private by default. Users who want to share models or make them public can explicitly do so by creating new public models. This privacy-first approach encourages experimentation without concern about incomplete drafts being visible.

SquiggleAI is directly accessible from within Squiggle Hub, allowing users to generate models and immediately save, version, and share them through the platform. The tight integration reduces friction between ideation and publication.

Squiggle Hub is a platform for creating, sharing, and collaborating on Squiggle models. It was announced in 2024 as a central repository for probabilistic models in the EA and rationalist communities.

| Feature | Description | Details |
|---|---|---|
| Model Hosting | Publish and share Squiggle models | Public or private visibility options |
| Versioning | Track model history and changes | Git-like version control for models |
| Imports/Exports | Multi-model projects | Import functions and distributions between models |
| Collaboration | Multiple contributors per project | Group-based permissions |
| Embedding | Embed models in external sites | React components for documentation |
| Version Selection | Pick Squiggle version per model | Test on different versions, avoid breaking changes |
| SquiggleAI Integration | AI-assisted model creation | Direct access within the platform |

Squiggle Hub organizes models into groups for easier discovery:

  • Meta-Squiggle: Models about Squiggle itself and QURI operations
  • Innovation: Models exploring novel estimation techniques
  • User Collections: Individual user portfolios

The platform hosts various types of probabilistic models:

| Category | Example Models | Typical Complexity |
|---|---|---|
| Cost-Effectiveness | GiveWell charity evaluations, AI safety interventions | 200-500 lines |
| Fermi Estimates | Population estimates, market sizing | 50-150 lines |
| Forecasting | Election probabilities, technology timelines | 100-300 lines |
| Decision Models | Career choice analysis, funding allocation | 150-400 lines |

Squiggle Hub also provides access to 17,000+ public models from Guesstimate, Gooen’s earlier Monte Carlo spreadsheet tool. This archive represents years of probabilistic modeling by the EA community and serves as a reference library for common estimation patterns.

Metaforecast aggregates forecasts from multiple prediction platforms into a single searchable interface. The initial version was created by Nuño Sempere, with help from Ozzie Gooen, at QURI.

Forecasting is a public good, but platform fragmentation reduces its utility. Metaforecast addresses this by combining data from 10+ platforms into a unified search interface. Data is fetched daily, showing immediate forecasts without historical data—optimized for quick lookups rather than trend analysis.

| Platform | Type | Questions | Notes |
|---|---|---|---|
| Metaculus | Reputation-based | ≈1,200 (55% of total) | Largest source; research-focused |
| Manifold | Play-money market | Varies | Research-focused, no profit motive |
| Polymarket | Real-money market (crypto) | Varies | Rose to prominence in 2024 US election |
| Good Judgment Open | Superforecaster platform | ≈100-200 | Connected to Good Judgment Project |
| PredictIt | Political market | ≈50-100 | US political focus |
| Kalshi | CFTC-regulated market | ≈100-200 | Regulatory compliance focus |
| Insight Prediction | Prediction market | Varies | Additional source |
| INFER | Policy forecasting | Varies | Government-adjacent |
Scale: Currently indexes approximately 2,100 active forecasting questions across all platforms, plus access to 17,000+ public Guesstimate models.

| Feature | Description | Technical Details |
|---|---|---|
| Unified Search | Find forecasts across all platforms | Elasticsearch-powered |
| Platform Comparison | See estimates from different sources | Side-by-side probability views |
| GraphQL API | Programmatic access | Available for integrations |
| Daily Updates | Fresh data fetched automatically | Scheduled scraping pipeline |
| Open Source | GitHub repository | Part of Squiggle monorepo |

Metaforecast has been integrated with several external services:

  • Twitter: Bot posting notable forecasts
  • Fletcher: Discord bot integration
  • GlobalGuessing: Forecasting community platform
  • Elicit (previously): AI research assistant

The open-source nature and GraphQL API enable custom integrations for research and analysis workflows.
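
As a sketch of what a programmatic lookup might look like, the helper below assembles a generic GraphQL-over-HTTP payload. The endpoint path, query fields (`searchQuestions`, `title`, `platform`), and variable names are illustrative assumptions — consult Metaforecast's own GraphQL explorer for the real schema:

```python
def build_graphql_request(endpoint, query, variables):
    """Package a GraphQL query as the kwargs for an HTTP POST
    (e.g. requests.post(**req)); kept offline here on purpose."""
    return {
        "url": endpoint,
        "json": {"query": query, "variables": variables},
        "headers": {"Content-Type": "application/json"},
    }

# Hypothetical query shape -- field names are illustrative, not verified.
SEARCH_QUERY = """
query ($text: String!) {
  searchQuestions(text: $text) { title platform probability url }
}
"""

req = build_graphql_request(
    "https://metaforecast.org/api/graphql",  # assumed endpoint path
    SEARCH_QUERY,
    {"text": "AGI by 2030"},
)
```

Passing variables separately from the query string, as above, is standard GraphQL practice and avoids string-interpolation bugs in search terms.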

During the 2024 US presidential election, Polymarket (one of Metaforecast’s sources) demonstrated the value of prediction market aggregation. Polymarket strongly favored Trump’s victory even as traditional polls showed a closer race. The market’s prediction proved correct, with reports of approximately $30 million in bets from individual traders helping shape the odds.

RoastMyPost is a QURI application that uses LLMs and code to evaluate blog posts and research documents. It was announced as a tool for improving writing quality in the EA and rationalist communities.

Evaluators Available:

| Evaluator | Focus | Technical Implementation |
|---|---|---|
| Fact Check | Verify claims against sources | Perplexity API via OpenRouter |
| Spell Check | Identify typos and errors | Programmatic checks |
| Fallacy Check | Detect logical fallacies | Claude Sonnet 4.5 |
| Math Check | Verify calculations | Claude + symbolic verification |
| Link Check | Validate URLs | HTTP requests |
| Forecast Check | Evaluate probabilistic claims | Squiggle integration |
| Custom Evaluators | User-submitted checks | Community contributions |

How It Works:

  1. Import document via markdown text or URL (optimized for EA Forum and LessWrong)
  2. Select evaluators (system-recommended or custom)
  3. Processing takes 1-5 minutes
  4. Review flagged issues for human judgment

Use Cases:

  • Draft polishing: Catch errors before publication
  • Public trust signaling: Link to evaluations in blog posts (like GitHub badges)
  • LLM-assisted workflows: Automated quality checks on AI-generated content

Limitations: The false positive rate for error detection is significant. RoastMyPost is best for flagging issues for human review, not for treating results as authoritative. Fact Check and Fallacy Check work best on fact-dense, rigorous articles.
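
To make the Link Check idea concrete, here is a minimal sketch (not RoastMyPost's implementation): extract URLs from markdown links, then classify them with an injected status-fetching function so the example stays offline — in real use that callable would issue an HTTP HEAD request:

```python
import re

MD_LINK = re.compile(r"\[[^\]]*\]\((https?://[^)\s]+)\)")

def extract_links(markdown_text):
    """Pull candidate URLs out of markdown links -- the first stage of a
    Link Check-style evaluator."""
    return MD_LINK.findall(markdown_text)

def check_links(urls, fetch_status):
    """Classify each URL as ok/broken given a callable returning an HTTP
    status code (injected so the sketch needs no network access)."""
    return {url: ("ok" if fetch_status(url) < 400 else "broken") for url in urls}

doc = "See [the paper](https://example.org/paper) and [data](https://example.org/404)."
statuses = {"https://example.org/paper": 200, "https://example.org/404": 404}
report = check_links(extract_links(doc), lambda u: statuses[u])
```

Even this toy version illustrates the false-positive problem noted above: a 403 from a bot-blocking server would be flagged "broken" despite the link working for humans, which is why results are best treated as prompts for review.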

QURI runs periodic $100 Fermi Model Competitions to encourage creative Fermi estimation using AI tools. The February 2025 competition used Claude 3.5 Sonnet and the QURI team as judges.

Evaluation Criteria:

| Criterion | Weight | Description |
|---|---|---|
| Surprise | 40% | How unexpected/novel are the findings? |
| Topic Relevance | 20% | Relevance to rationalist/EA communities |
| Robustness | 20% | Reliability of methodology and assumptions |
| Other | 20% | Presentation, creativity, clarity |

The competition explicitly encourages novel approaches over exhaustively researched calculations, promoting experimentation with AI-assisted quantitative reasoning.

Squigglepy is a Python package developed by Rethink Priorities that implements Squiggle-like functionalities for those who prefer working in the Python ecosystem. While Squiggle is implemented in JavaScript, Squigglepy allows integration with Python’s scientific computing stack (NumPy, SciPy, pandas).

| Source | Amount | Period | Notes |
|---|---|---|---|
| Survival and Flourishing Fund | $650,000+ | 2019-2022 | Primary early funder; Jaan Tallinn's philanthropic vehicle |
| Future Fund | $200,000 | 2022 | Pre-FTX collapse; lost future commitments |
| Long-Term Future Fund | Ongoing | 2023-present | Current primary funder |
| Individual Donors | Various | Ongoing | Via Every.org and direct giving |
| Total Historical | $850,000+ | Through 2022 | Documented funding |

The Survival and Flourishing Fund (SFF) is financed primarily by Jaan Tallinn (Skype co-founder, AI safety advocate). In 2025, SFF allocated $34.33 million to organizations working on existential risk, with ~86% ($29M) going to AI-related projects. QURI represents one of several epistemic infrastructure investments in SFF’s portfolio.

The Future Fund’s 2022 grant was affected by the FTX collapse, which eliminated potential future funding from that source. QURI has since diversified funding through LTFF and individual donors.

| Aspect | Details |
|---|---|
| Legal Status | 501(c)(3) nonprofit (EIN: 84-3847921) |
| Fiscal Sponsor | Rethink Priorities Special Projects Program |
| Sponsorship Model | QURI maintains autonomy; RP provides operational support, fiduciary oversight, tax-exempt status |
| Team Size | ≈3-5 core contributors |
| Key Contributors | Ozzie Gooen (founder), Sam Nolan, Nuño Sempere (Metaforecast), Michael Dickens (unit types) |
| Communication | EA Forecasting & Epistemics Slack (#squiggle-dev), QURI Substack |
| Code Repository | Monorepo at github.com/quantified-uncertainty/squiggle |

QURI works with Rethink Priorities under their Special Projects Program, which provides:

  • Fiscal sponsorship and tax-exempt status
  • Financial administration and payroll
  • Recruitment support
  • Operational infrastructure

This model allows QURI to focus on tool development while RP handles administrative overhead. The arrangement is common in the EA ecosystem for small research organizations.

QURI collaborated with Arb Research on their 2025 technical AI safety review, building the interactive shallowreview.ai website. The Shallow Review is “a shallow-by-design review of technical AI safety research in 2025: 800+ papers and posts across 80+ research agendas.”

The website was created by QURI (Ozzie and Tomáš) to make the research more navigable. The review covers alignment, control, capability restraint, and risk awareness work, processing every arXiv paper on alignment and all Alignment Forum posts. The project was funded by Coefficient Giving.

GiveWell Cost-Effectiveness Quantification


Multiple projects have used Squiggle to add uncertainty quantification to GiveWell’s cost-effectiveness analyses:

| Project | Authors | Effort | Key Finding |
|---|---|---|---|
| GiveDirectly CEA | Sam Nolan | ≈40 hours | Mean cost to double consumption: $469 (95% CI: $131-$1,185) |
| Against Malaria Foundation | Various | Research project | GiveWell point estimate $7,759 vs. Squiggle mean $6,980 |
| GiveWell Change Our Mind Submission | Sam Nolan, Hannah Rokebrand, Tanae Rao | ≈300 hours | Full CEA uncertainty quantification |

These projects demonstrate Squiggle’s utility for making cost-effectiveness analyses more transparent and explorable.

QURI tools are widely used across the effective altruism and rationalist communities:

| Use Case | Tools Used | Example Organizations |
|---|---|---|
| Cost-effectiveness analyses | Squiggle, Squiggle Hub | GiveWell evaluators, EA Funds |
| Fermi estimates for cause prioritization | SquiggleAI, Squiggle | Coefficient Giving, 80,000 Hours |
| AI timeline modeling | Squiggle | AI safety researchers |
| Forecast aggregation | Metaforecast | Forecasting community |
| Writing quality checks | RoastMyPost | EA Forum authors |

| Tool | Focus | Strengths | Limitations | Learning Curve |
|---|---|---|---|---|
| Squiggle | Probabilistic estimation | Native distributions, web-based, readable syntax | No Bayesian inference, smaller ecosystem | Low-Medium |
| Guesstimate | Spreadsheet Monte Carlo | Familiar spreadsheet UI, 5,000 simulations | Less programmable, limited functions | Low |
| Stan | Bayesian inference | Powerful MCMC, HMC sampling | Steep learning curve, slower iteration | High |
| PyMC | Bayesian Python | Full Python ecosystem, PyTensor/JAX backends | Requires Python expertise | Medium-High |
| WebPPL | Probabilistic programming | Inference, conditioning | Academic focus, limited tooling | Medium |
| Excel | General spreadsheets | Ubiquitous, familiar | Poor uncertainty support, no distributions | Low |
| Causal | Business modeling | Scenario planning, team collaboration | Less probabilistic focus | Low |

Use Squiggle when:

  • You need intuition-driven estimation without much data
  • Rapid prototyping of uncertainty models is priority
  • Web-based sharing and collaboration is important
  • You want readable, auditable probabilistic code

Use Stan/PyMC when:

  • You have data and need Bayesian inference
  • Model complexity requires advanced MCMC methods
  • Performance on large models is critical

Use Guesstimate when:

  • Spreadsheet interface is preferred
  • Quick one-off calculations
  • Non-programmers need to contribute

Organizations use Squiggle for intervention comparisons with explicit uncertainty:

| Use Case | Example | Typical Model Size |
|---|---|---|
| Charity evaluation | GiveDirectly, AMF cost per life saved | 200-500 lines |
| AI safety interventions | Research funding, field-building ROI | 150-400 lines |
| Policy cost-benefit | Regulation impacts, safety standards | 200-600 lines |
| Career decisions | Expected value of different paths | 100-300 lines |

The GiveWell CEA quantification project demonstrated that Squiggle can make charity evaluations more transparent by showing full probability distributions rather than point estimates.

Squiggle enables structured forecasts with explicit uncertainty:

  • AI timeline models: When will specific capabilities emerge?
  • Technology adoption curves: S-curves with uncertainty bounds
  • Risk scenario analysis: Probability-weighted outcome trees
  • Market sizing: Fermi estimates for business planning

Squiggle models embedded in publications enable:

  • Transparent assumptions: Every input visible and adjustable
  • Reproducible calculations: Same code produces same outputs
  • Interactive exploration: Readers can modify parameters
  • Sensitivity analysis: Identify which inputs matter most
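
The sensitivity-analysis bullet can be sketched with nothing but the standard library: sample each input, then rank inputs by the absolute correlation of their samples with the output — a crude but common screening measure. This is an illustrative toy model, not a QURI workflow:

```python
import random

rng = random.Random(7)
N = 5_000

def pearson(xs, ys):
    """Plain Pearson correlation -- no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Toy model: output = a * b + c, where input `a` carries most of the variance.
a = [rng.lognormvariate(0, 1.0) for _ in range(N)]  # wide
b = [rng.lognormvariate(0, 0.1) for _ in range(N)]  # narrow
c = [rng.gauss(0, 0.1) for _ in range(N)]           # narrow
out = [ai * bi + ci for ai, bi, ci in zip(a, b, c)]

sensitivity = {name: abs(pearson(xs, out))
               for name, xs in [("a", a), ("b", b), ("c", c)]}
most_influential = max(sensitivity, key=sensitivity.get)
```

Readers of an embedded model can run exactly this kind of check to see which assumptions deserve scrutiny before debating the conclusion.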

While primarily used in EA/rationalist communities, Squiggle has potential applications in:

  • Academic research requiring uncertainty quantification
  • Policy analysis requiring transparent assumptions
  • Risk assessment in regulated industries
  • Teaching probabilistic reasoning

| Year | Milestone | Significance |
|---|---|---|
| 2016 | Guesstimate launched | First Monte Carlo spreadsheet for EA community |
| 2017-2019 | Gooen at FHI | Forecasting infrastructure research |
| 2019 | QURI founded | 501(c)(3) nonprofit established |
| 2020 | Squiggle Early Access | First public release of Squiggle language |
| 2021 | Metaforecast launched | Forecast aggregation across 10+ platforms |
| 2022 | $650K+ SFF funding | Major funding milestone |
| 2022 | Future Fund grant ($200K) | Expansion funding (pre-FTX) |
| 2023 | Squiggle 0.8.x | Performance improvements, Hub integration |
| 2024 | Squiggle Hub launch | Collaborative model platform |
| 2024 | SquiggleAI released | LLM-powered model generation |
| 2024 | RoastMyPost launch | LLM blog evaluation tool |
| 2025 | Squiggle 0.10.0 | Major release with type inference, Web Workers |
| 2025 | shallowreview.ai collaboration | Arb Research AI safety review website |
| 2025 | Fermi Competition | $300 prizes for creative Fermi estimates |
Strengths:

  • Purpose-built tools: Each product addresses specific epistemic needs
  • Accessible: Browser-based, no installation needed
  • Open source: All code MIT licensed, community contributions welcome
  • AI integration: SquiggleAI and RoastMyPost leverage frontier LLMs
  • Community: Active EA/rationalist user base, responsive to feedback
  • Ecosystem approach: Tools work together (Squiggle + Hub + AI + Metaforecast)

Weaknesses:

  • Niche adoption: Limited use outside EA/rationalist communities
  • Small team: ≈3-5 core contributors limits development velocity
  • Funding dependency: Reliant on EA-adjacent funders (SFF, LTFF)
  • Learning curve: New syntax requires investment to learn
  • Documentation gaps: Some features under-documented
  • Ecosystem size: Fewer libraries than general-purpose languages

| Challenge | Description | Mitigation |
|---|---|---|
| Adoption ceiling | EA community is finite | Explore academic/policy applications |
| LLM competition | General LLMs can do Fermi estimates | Integrate LLMs (SquiggleAI) rather than compete |
| Sustainability | Small team, concentrated funding | Fiscal sponsorship, diverse funders |
| Feature scope | Pressure to add features vs. maintain simplicity | Clear design philosophy prioritizing estimation |

Based on QURI’s development trajectory and stated priorities:

  • Squiggle language improvements: Continued type system development, performance optimization
  • SquiggleAI expansion: Support for more LLM providers, longer context windows
  • Squiggle Hub features: Better collaboration tools, API improvements
  • Community building: More Fermi competitions, educational content
  • Academic adoption: Partnerships with universities for teaching probabilistic reasoning
  • Policy applications: Tools for government cost-benefit analysis
  • Integration depth: Tighter connections between Squiggle, Metaforecast, and RoastMyPost

Open questions:

  • Can QURI expand beyond the EA community without losing focus?
  • How should the tools evolve as general LLMs become better at estimation?
  • What role should QURI play in AI safety evaluation/forecasting infrastructure?