Issa Rice
Quick Assessment
| Aspect | Assessment |
|---|---|
| Primary Role | Independent researcher and knowledge infrastructure developer |
| Key Contributions | Timelines Wiki, AI Watch, Org Watch, custom LessWrong/EA Forum readers |
| Focus Areas | AI safety, effective altruism, organizational tracking, timeline research |
| Work Model | Contract work (primarily with Vipul Naik), self-funded projects |
| Community Status | Respected data aggregator and researcher in EA/rationalist communities |
| Activity Level | Reduced since ≈2022 due to health issues; maintains existing projects |
Key Links
| Source | Link |
|---|---|
| Official Website | issarice.com |
| Wikipedia | en.wikipedia.org |
| EA Forum | forum.effectivealtruism.org |
| GitHub | github.com |
| Timelines Wiki | timelines.issarice.com |
| AI Watch | aiwatch.issarice.com |
Overview
Issa Rice is an independent researcher, writer, and software developer who has created extensive knowledge infrastructure for the effective altruism and AI safety communities. His work centers on building tools and databases that make information about organizations, people, and ideas more accessible and trackable. Rather than producing traditional academic publications, Rice focuses on creating public resources like timeline databases, organizational trackers, and custom wiki readers that serve as reference materials for researchers and community members.[1]
Rice’s projects reflect a distinctive approach to knowledge work: systematic documentation of fields and movements through structured data collection, timeline construction, and tool-building. His major contributions include Timelines Wiki (hosting chronological histories of AI safety organizations and concepts), AI Watch (tracking people and organizations in the AI safety field), and custom readers for LessWrong and the EA Forum.[2] These projects emerged from personal frustration with unsubstantiated claims and information gaps; for example, he created AI Watch after encountering vague assertions like “only around 60 AI safety researchers exist” that lacked transparent sourcing.[3]
From 2015 through at least 2018, Rice worked primarily as a contractor for Vipul Naik, earning approximately $80,000 total for writing, programming, data collection, and worker recruitment.[4] This arrangement allowed him to build savings while developing his portfolio of open-source projects. Since 2022, Rice has experienced significant health challenges that limit his capacity for work, living on past savings and parental support while maintaining minimal contract work.[5]
Education and Early Career
Rice attended the University of Washington beginning in autumn 2014 as a computer science and math major.[6] He originally planned to graduate in 2017 but went on indefinite leave starting in spring 2016, shifting focus to independent research and contract work.[7]
His involvement with effective altruism and rationalist communities predates his formal university studies. Rice describes his engagement with EA in phases: from 2011-2013 he was primarily a “lurker,” more focused on LessWrong and rationality than EA itself; 2014-2015 marked a period of high excitement when he attended Seattle EA meetups and studied canonical EA texts; from 2016 onward, his views became more refined and critical of certain EA dynamics.[8]
Major Projects and Contributions
Timelines Wiki
Timelines Wiki represents Rice’s most extensive timeline research, hosting chronological histories across multiple domains. The project includes timelines of AI safety (covering events from 2003 onward), major organizations like the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute, and concepts like decision theory and Bitcoin.[9] These timelines synthesize information from multiple sources into structured chronological narratives, providing researchers with consolidated historical references.
Rice’s timeline work extends beyond AI safety to include libertarian organizations (Niskanen Center, Ludwig von Mises Institute, Cato Institute) and global health entities like the Against Malaria Foundation.[10] Effort levels vary significantly: some timelines represent minimal work (under 8 hours), while others, like the MIRI timeline, involved approximately 75 hours of research.[11]
AI Watch
AI Watch emerged from Rice’s desire for greater transparency about who works on AI safety and what technical agendas exist in the field. The platform tracks people, organizations, products, and technical approaches in the AI safety, alignment, and existential risk communities.[12] According to Rice’s stated motivations, he created AI Watch to support his own career decision-making about whether to work on AI safety and to meet observed demand for field landscape overviews similar to annual AI safety literature reviews.[13]
The database explicitly covers individuals affiliated with organizations like the Future of Humanity Institute, the Centre for the Study of Existential Risk, and the Center for Security and Emerging Technology (CSET), providing structured information about researchers’ organizational affiliations and focus areas.[14]
Org Watch and Other Infrastructure Tools
Org Watch serves as a centralized information hub about effective altruism organizations, designed to help researchers and job seekers understand the organizational landscape.[15] Rice also developed custom readers for LessWrong 2.0 and the EA Forum, improving access to rationalist and EA content through alternative interfaces.[16]
His portfolio includes numerous other projects: Issawiki (personal notes and articles), Machine Learning Subwiki pages, the Cause Prioritization Wiki, a daily AI safety learning blog, and an Analysis Solutions blog working through Terence Tao’s mathematical texts.[17] Many of these projects remain accessible through public GitHub repositories with visible version histories, reflecting Rice’s commitment to transparent knowledge work.[18]
Contract Work and Data Collection
From early 2016 onward, Rice worked regularly for Vipul Naik on the Donations List Website, scraping grants data, adding features, and authoring tutorials.[19] His contract work extended to recruiting and managing other workers, with monthly earnings reaching approximately $1,816-$1,895 during peak periods in late 2016 and early 2017.[20] Rice was not directed or reviewed by Naik except in broad terms, choosing the specifics of his work independently, and, notably, was not paid for pageviews on the Wikipedia pages he wrote.[21]
Much of this contract work involved topics Rice did not personally prioritize, particularly global health research for the Development Economics Subwiki. Rice openly described this work as “emotionally challenging” and “demotivating” because he was not a “true believer” in global health as the most important cause area, instead prioritizing existential risk reduction.[22] He viewed the experience as valuable for research skill-building and financial savings despite the misalignment with his core priorities.
Views and Priorities
Rice explicitly considers unaligned artificial intelligence “the biggest problem in the world right now,” a worldview that significantly shapes his research priorities and project selection.[23] This focus on AI existential risk led him away from global health work despite its prevalence in his contract obligations, reflecting a strong commitment to cause prioritization based on his personal assessment of importance.
His views on effective altruism evolved from initial excitement to more nuanced criticism. From 2016 onward, Rice came to see EA as driven by a small number of “Serious People,” with most participants as followers, and critiqued what he perceived as an “irrefutable” definition that hindered substantive criticism of the movement.[24] He noted that outsiders often struggle to parse EA arguments but valued the movement for its concentration of people with argument-following capacity.[25]
Rice has written critically about prediction calibration in rationalist and EA communities, noting that without standardized questions (unlike formal forecasting tournaments), individuals can manipulate their apparent calibration by selectively predicting obvious outcomes.[26] He observed that calibration graphs on platforms like PredictionBook may not reflect “real” calibration because they assume all predictions are independent.[27]
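The selection effect can be made concrete with a small simulation. The sketch below uses hypothetical numbers and is not drawn from Rice’s writing; it shows how a forecaster who chooses their own questions can fill a confidence bucket with near-certain events and appear well calibrated, while the same stated confidence is exposed on standardized questions of the kind used in formal tournaments.

```python
import random

random.seed(0)

def calibration_bucket(predictions):
    """Return (stated confidence, observed frequency) for a list of
    (stated_confidence, outcome) pairs that share one confidence bucket."""
    stated = predictions[0][0]
    observed = sum(outcome for _, outcome in predictions) / len(predictions)
    return stated, observed

# Self-selected questions: predict 1,000 near-certain events (true
# probability 0.95 by construction) and state 95% on each. No forecasting
# skill is needed to pick such events, yet the bucket looks well calibrated.
gamed = [(0.95, random.random() < 0.95) for _ in range(1000)]
print("self-selected: stated=%.2f observed=%.2f" % calibration_bucket(gamed))

# Standardized questions the forecaster did not choose (true probability
# 0.60 here): the same stated 95% now shows up as overconfidence.
fixed = [(0.95, random.random() < 0.60) for _ in range(1000)]
print("standardized:  stated=%.2f observed=%.2f" % calibration_bucket(fixed))
```

The same framing illustrates Rice’s independence point: if many entries in a bucket resolve from one underlying event, the observed frequency reflects a handful of correlated draws rather than many independent ones, so a smooth calibration graph overstates the evidence behind it.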
Community Reception and Activity
Within EA and rationalist communities, Rice is regarded as a respected data aggregator and reliable compiler of resources. Vipul Naik credited him with donation data collation that aids landscape understanding, and Metaculus referenced his EA Forum karma leaderboard as authoritative.[28] His EA links page, which aggregates discussions by topic (existential risks, cause prioritization), and his maintenance of various leaderboards demonstrate sustained contributions to community infrastructure.[29]
Rice maintains active participation on the EA Forum, with thoughtful engagement on topics like AI timelines, Shapley value applications, and forum norms. He has criticized certain trends, such as aggressive disagree-voting on questions.[30] His GitHub account hosts 168 repositories, indicating substantial technical output beyond his most visible projects.[31]
Community members view Rice primarily as a knowledge infrastructure provider rather than a thought leader or theorist. His projects fill practical gaps, providing structured data where previously only scattered information existed, rather than advancing new theoretical frameworks or research agendas.
Current Status and Recent Developments
Since 2022, Rice has dealt with a mysterious chronic illness; managing its symptoms consumes the majority of his time.[32] This substantially limits his capacity for work and has shifted his activity from regular production to maintenance of existing projects. As of 2023, he does not earn substantial income, living on savings from past contract work and parental support while continuing only minimal monthly work for Vipul Naik.[33]
Despite reduced capacity, Rice maintains his existing projects and continues to participate in some online discussions. His daily AI safety learning blog and other activity feeds remain accessible, though the volume of new content has declined significantly from his peak productivity in 2016-2017.[34] These health challenges are a major constraint on what was previously a highly productive practice of knowledge infrastructure development.
Criticisms and Limitations
Rice’s work, while valued for data aggregation, has limitations inherent to its scope and methodology. His approach prioritizes breadth of coverage and accessibility over depth of analysis: timelines and organizational databases provide structured information but typically do not include critical evaluation or synthesis of competing perspectives. The projects serve as reference materials rather than analytical contributions to debates within AI safety or effective altruism.
His timeline research varies significantly in rigor: some timelines represent minimal effort (under 8 hours) and may lack comprehensive coverage of significant events.[35] The focus on documenting observable events and organizational developments means Rice’s projects capture easily trackable information while potentially missing informal influence networks, private communications, or subtle intellectual developments that don’t leave clear public traces.
Rice’s stated personal biases, particularly his strong conviction that AI existential risk is “the biggest problem in the world,” could influence how he selects, frames, and prioritizes information in projects like AI Watch.[36] While his work provides transparency about who works in the field, the framing of these resources around “AI safety/alignment/AI existential risk communities” reflects particular assumptions about how to categorize this work that not all researchers in the field would endorse.
His community participation shows limited engagement with global health topics despite significant past contract work in this area, suggesting selective allocation of volunteer effort toward causes he personally prioritizes.[37] This creates potential gaps in his knowledge infrastructure coverage, with extensive resources for AI safety but minimal parallel development for other cause areas that remain important to many EA community members.
Key Uncertainties
Several aspects of Rice’s work and influence remain uncertain or incompletely documented:
- Impact assessment: While Rice’s projects are frequently cited and used by community members, systematic data on usage patterns, influence on decision-making, or comparative value versus alternative resources does not appear in available sources. The actual impact of his knowledge infrastructure work on research directions or career choices remains largely anecdotal.
- Methodological rigor: The processes Rice uses for determining timeline completeness, selecting which organizations to track in AI Watch, and ensuring the accuracy of aggregated data are not fully transparent. Different researchers might make different inclusion/exclusion decisions, but documentation of his selection criteria appears limited.
- Sustainability: Given Rice’s health challenges and reduced capacity since 2022, the long-term maintenance and updating of his projects remains uncertain. Many of his resources derive value from being current, but mechanisms for ongoing updates or community contribution are unclear.
- Financial sustainability: Rice’s reliance on past savings and parental support raises questions about the long-term viability of his independent research model, particularly given his minimal current income and ongoing health costs.[38]
Sources
Footnotes
1. EA Forum and Vipul Naik sources (multiple references)