
AI Watch

Creator: Issa Rice
URL: aiwatch.issarice.com
Primary Focus: Tracking AI safety organizations, people, funding, and publications
Related Projects: Timelines Wiki, Org Watch
Funding Source: Likely Vipul Naik (pattern from other Rice projects)
Purpose: Knowledge infrastructure for AI safety research

Sources:
Official Website: aiwatch.issarice.com
Related: Timelines Wiki (timelines.issarice.com)
Related: Org Watch (orgwatch.issarice.com)

AI Watch is a tracking database created by Issa Rice that monitors the AI safety field, including organizations, people, funding flows, and publications. The project is part of Rice’s broader ecosystem of knowledge infrastructure tools, which includes Timelines Wiki (documenting chronological histories) and Org Watch (tracking organizational information). Together, these tools provide systematic documentation of the AI safety and effective altruism communities.

According to user-provided context, AI Watch focuses on what key entities in the field are doing, how funding flows between organizations, who works where, and what research is being published. This makes it a reference tool for researchers, funders, and community members seeking to understand the structure and activity of the AI safety ecosystem.

The project reflects Rice’s characteristic approach to knowledge work: systematic data collection, transparent documentation, and creation of public goods that serve research communities. Like his other projects, AI Watch likely receives funding from Vipul Naik, who has paid Rice approximately $80,000 for contract work on various tracking and documentation projects.

Role in AI Safety Knowledge Infrastructure


AI Watch serves as a complement to other tracking tools in the AI safety space. While Timelines Wiki focuses on chronological histories and sequences of events, AI Watch appears to emphasize current-state tracking of entities and relationships. Org Watch provides organizational information, while AI Watch may offer broader coverage including individuals, publications, and funding patterns.

For AI safety researchers, these tracking tools reduce the effort required to understand who is working on what, which organizations are active, how funding flows through the ecosystem, and what research outputs are being produced. This systematic documentation is particularly valuable in a rapidly evolving field where organizational landscapes and research priorities shift frequently.

The project fits within the broader category of “epistemic tools” for the AI safety community—infrastructure that helps researchers and practitioners understand the field itself, track developments, and identify relevant work and organizations. Other examples include the AI Alignment Forum for technical research discussion and various AI safety newsletters that curate developments.

Relationship to Issa Rice’s Other Projects


AI Watch is part of a coordinated set of tracking and documentation tools created by Issa Rice:

Timelines Wiki documents chronological histories of AI safety organizations like MIRI and the Center for Applied Rationality, as well as broader topics like the Timeline of AI Safety. Created in March 2017 with Vipul Naik funding, it provides granular historical documentation cited by sources including LongtermWiki.

Org Watch tracks organizational information and appears to focus on entities within the AI safety and effective altruism spaces. The Vipul Naik page references “orgwatch.issarice.com” as a source for information about individuals’ organizational affiliations and activities.

AI Watch adds current-state tracking of the contemporary AI safety field to this ecosystem, complementing the historical focus of Timelines Wiki with information about active organizations, researchers, funding, and publications.

This division of labor allows each tool to specialize while together providing comprehensive coverage of the AI safety ecosystem’s history, structure, and current activity. Researchers can consult Timelines Wiki for “when did X happen,” Org Watch for “what organizations are involved,” and AI Watch for broader field-level tracking.

Comparison to Other AI Safety Tracking Efforts


Several other initiatives track aspects of the AI safety field, though with different scopes and approaches:

The Alignment Newsletter (created by Rohin Shah, later discontinued) provided curated summaries of AI alignment research papers and developments, focusing on synthesizing technical work rather than tracking organizational or funding information.

AI Alignment Forum and LessWrong serve as venues for research discussion and community coordination, but don’t systematically track organizations, funding, or comprehensive publication records.

The AI Safety Research Organizations list and related directories provide snapshots of organizations but typically lack systematic tracking of changes over time and funding flows, as well as comprehensive coverage of individuals and publications.

According to EA Forum discussions, community members have expressed a need for better tracking of AI safety funding and organizational activities. One 2024 post introduced “AI Lab Watch” as a project to evaluate frontier AI labs’ safety actions, suggesting ongoing demand for monitoring infrastructure. AI Watch may serve complementary purposes by providing broader field-level tracking beyond just frontier labs.

Rice’s approach emphasizes comprehensive data collection and systematic documentation rather than analysis or evaluation. This makes AI Watch useful as a reference source and data layer that others can build upon for analysis, similar to how Vipul Naik’s Donations List Website provides raw funding data that analysts can interpret.

Likely Methodology and Limitations

Based on the pattern from Rice’s other projects, AI Watch likely employs:

Systematic data collection from public sources including organizational websites, grant announcements, publication databases, social media, and news articles. The approach emphasizes comprehensiveness within the defined scope rather than selective curation.

Structured data formats enabling querying and analysis (see the sketch after this list). Timelines Wiki uses MediaWiki; other Rice projects may use databases or structured file formats that support programmatic access.

Citation and verification to enable users to check sources and assess reliability. Timelines Wiki provides citations for entries; AI Watch likely follows similar practices.

Focus on AI safety and adjacent communities rather than comprehensive coverage of all AI research. This reflects the project’s origins in and service to the AI safety community specifically.
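
To make the structured-data point concrete, here is a minimal sketch of what records and a query in a tracker of this kind might look like. This is an illustrative assumption only: the field names, example values, and the current_positions helper are invented here, not taken from AI Watch’s actual schema.

```python
# Hypothetical records in an AI Watch-style tracker. The schema below is
# an illustrative assumption, not the project's actual data format.
records = [
    {
        "type": "position",
        "person": "Example Researcher",         # hypothetical entry
        "organization": "Example Safety Org",   # hypothetical entry
        "role": "Research Fellow",
        "start_date": "2021-06",
        "end_date": None,                       # None = currently held
        "source": "https://example.org/team",   # citation for verification
    },
    {
        "type": "grant",
        "funder": "Example Foundation",
        "recipient": "Example Safety Org",
        "amount_usd": 250_000,
        "date": "2022-03",
        "source": "https://example.org/grants/42",
    },
]

def current_positions(org):
    """Return currently held positions at a given organization."""
    return [
        r for r in records
        if r["type"] == "position"
        and r["organization"] == org
        and r["end_date"] is None
    ]

print(current_positions("Example Safety Org"))
```

The point of such a structure is that questions like “who currently works at X” or “how much has Y granted to Z” become simple filters over the data rather than fresh research.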

However, several limitations likely apply:

Scope constraints due to the project being primarily created and maintained by a single individual (Issa Rice). Coverage depends on what Rice identifies as relevant and has capacity to document.

Funding dependence on Vipul Naik’s continued support, creating sustainability questions similar to those affecting Rice’s other contract work projects.

Verification challenges as a single maintainer must verify information across many organizations and individuals, potentially leading to gaps or delays in updates.

Potential for incompleteness in tracking private funding, unpublished research, or organizational activities not publicly announced.

The project provides valuable infrastructure but should be understood as a curated tracking effort rather than guaranteed complete coverage of the AI safety field.

Usage and Impact

The specific usage patterns and impact of AI Watch are not extensively documented in available sources. However, based on the pattern from related projects:

Research use: Likely consulted by AI safety researchers, funders, and analysts seeking to understand the field’s structure and activity patterns.

Citation as a source: May be referenced in EA Forum posts, research papers, and blog posts about the AI safety ecosystem, similar to how Timelines Wiki is cited in LongtermWiki.

Complementing other tools: Used alongside Timelines Wiki for historical context and Org Watch for organizational details, providing a comprehensive view when all tools are consulted together.

Community awareness: Helps community members stay informed about organizational developments, funding patterns, and who is working on what, potentially facilitating coordination and collaboration.

The project’s value derives from aggregating dispersed information into a centralized, structured format. Without such tools, researchers would need to manually track many sources to maintain awareness of field developments.
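
As a rough sketch of that aggregation step, and assuming nothing about AI Watch’s real pipeline, the code below merges records from two hypothetical sources into a single entity-keyed view; the source names, record fields, and aggregate helper are all invented for illustration.

```python
# Sketch of aggregating dispersed records into one centralized view,
# keyed by entity name. Sources and records are hypothetical.
from collections import defaultdict

source_a = [{"entity": "Example Safety Org", "fact": "hiring for 2 roles"}]
source_b = [{"entity": "Example Safety Org", "fact": "received $250k grant"},
            {"entity": "Another Org", "fact": "published agenda v2"}]

def aggregate(*sources):
    """Merge per-source records into a single entity-keyed dictionary."""
    merged = defaultdict(list)
    for source in sources:
        for record in source:
            merged[record["entity"]].append(record["fact"])
    return dict(merged)

print(aggregate(source_a, source_b))
# {'Example Safety Org': ['hiring for 2 roles', 'received $250k grant'],
#  'Another Org': ['published agenda v2']}
```

The real value of a tracker lies less in this merge step than in sustained curation, but the sketch shows why a centralized, keyed format makes field-level questions cheap to answer.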

Relationship to Vipul Naik’s Funding Model


AI Watch likely benefits from the individual funding model established by Vipul Naik, who has funded approximately $80,000 of Issa Rice’s contract work across various projects. This funding arrangement enables creation of public goods that might not receive traditional grant funding due to difficulty demonstrating immediate impact or fitting established funding categories.

Naik’s contract work portal (contractwork.vipulnaik.com) documents payments for timelines, Wikipedia articles, and data infrastructure work. The funding model allows sustained development of comprehensive documentation that would be difficult to maintain through pure volunteer effort.

This patronage approach differs from typical EA funding mechanisms. Rather than large grants from institutions like Open Philanthropy or the Long-Term Future Fund, Naik provides smaller-scale individual funding for knowledge infrastructure projects. The arrangement has enabled projects like Timelines Wiki and the Donations List Website that serve as community resources.

However, this funding model creates sustainability questions. The projects depend on Naik’s personal financial situation and continued interest, without institutional backing or guaranteed continuity. If funding arrangements change, maintaining and updating AI Watch could become difficult.

Open Questions

Several aspects of AI Watch remain unclear from available sources:

Current maintenance status: The extent of active updates and new data entry is not documented.

Specific features and scope: What exactly AI Watch tracks, how data is organized, and what queries or views are available is not detailed in accessible sources.

Usage metrics: How frequently the tool is accessed and by whom is unknown.

Contribution model: Whether AI Watch accepts contributions from others beyond Issa Rice, or remains a single-maintainer project.

Funding details: While likely funded by Vipul Naik based on patterns from other projects, specific funding amounts and arrangements for AI Watch are not documented.

Technical implementation: The platform, database structure, and data formats used by AI Watch are not described in available sources.

The project’s website was not accessible during research for this article, returning only a verification page rather than actual content. This limits what can be confirmed about current functionality and status.

Beyond questions about current status, several broader uncertainties affect understanding of AI Watch’s role and impact:

Comprehensiveness of coverage: What fraction of AI safety organizations, people, funding, and publications are tracked, and what criteria determine inclusion.

Update frequency: How often data is refreshed to reflect new developments, organizational changes, funding announcements, and publications.

Verification methods: How information is fact-checked and what standards of evidence are required for inclusion in the database.

Relationship to other tracking efforts: Whether AI Watch coordinates with or duplicates other projects attempting to track the AI safety field.

Long-term sustainability: Plans for maintaining the project if Issa Rice’s involvement decreases or funding arrangements change.

Counterfactual impact: Whether AI Watch meaningfully improves community coordination, researcher awareness, or funder decision-making, or instead serves mainly as a rarely consulted reference.

Accessibility and usability: How easy it is for community members to find relevant information in AI Watch, and whether the tool is designed for casual browsing or requires familiarity with its structure.

Note: This article is based primarily on user-provided context about AI Watch’s purpose and relationship to Issa Rice’s broader project ecosystem. Direct information about AI Watch is limited in available sources, as the website was not accessible during research. Information about the pattern of Rice’s work comes from documented projects like Timelines Wiki and references in the Vipul Naik page.