Grant 0XqNRP9F1E
1 → partial; dissent: 1 → unverifiable, 1 → confirmed
Our claim
Entire record
- Name
- ARIA TA1.4: Field Building for Better Formal Models of Society
- Currency
- GBP
- Date
- February 2025
- Notes
- [Safeguarded AI TA1.4] Field Building for Better Formal Models of Society. Lead(s): Joe Edelman, Ryan Lowe. Institutions: Meaning Alignment Institute. Status: active.
Source evidence
1 src · 3 checks
- Grantee
- Meaning Alignment Institute
- Focus Area
- TA1.4
- Name
- Field Building for Better Formal Models of Society
- Description
- Meaning Alignment Institute
- Status
- active
Note: Deterministic match: grantee and name matched in the source snapshot (48 rows).
Note: QUA-650 retro-scan: the claim concerns a specific grant (ARIA TA1.4: Field Building for Better Formal Models of Society) to the Meaning Alignment Institute, while the source covers the broader Safeguarded AI programme within ARIA. The source does not mention TA1.4, the specific grant, or the Meaning Alignment Institute as a grantee. Per QUA-648, a programme within an organization is a MISMATCH from the organization itself.
Note: While the source confirms that TA1 exists within the Safeguarded AI programme and discusses expanding TA1's scope, it does not contain specific information about the grant record under verification. The source does not mention the grant name, the grantee organization (Meaning Alignment Institute), the specific date (2025-02), or the funder identifier (XqjV4mbMXQ). The source appears to be a programme overview rather than a detailed grants database, so the absence of this specific grant cannot be treated as a contradiction, only as unverifiable from this source.