Longterm Wiki
grant

Cadenza Labs: AI Safety research group working on own interpretability agenda

Child of Manifund

Metadata

Source Table: grants
Source ID: UWjd7xO7JU
Description: to Cadenza Labs, USD 7810, 2023-11-06
Source URL: manifund.org/projects/cadenza-labs-ai-safety-research-group-working-on-own-interpretability-agenda
Parent: Manifund
Children:
Created: Mar 12, 2026, 4:59 PM
Updated: Mar 14, 2026, 6:22 AM
Synced: Mar 12, 2026, 4:59 PM

Record Data

id: UWjd7xO7JU
organizationId: Manifund (organization)
granteeId: Cadenza Labs
orgEntityId: Manifund (organization)
orgDisplayName:
granteeEntityId:
granteeDisplayName: Cadenza Labs
name: Cadenza Labs: AI Safety research group working on own interpretability agenda
amount: 7810
currency: USD
period:
date: 2023-11-06
status:
source: manifund.org/projects/cadenza-labs-ai-safety-research-group-working-on-own-inter…
notes: [Science & technology, Technical AI safety, Global catastrophic risks] We're a team of SERI-MATS alumni working on interpretability, seeking funding to continue our research after our LTFF grant ended.
programId: 8jnn54YEbQ
dataSourceId:

Source Check Verdicts

partial (85% confidence)

Last checked: 4/9/2026

The record claims a grant date of 2023-11-06, but the source text is a Wayback Machine archive dated January 12, 2026. The amount ($7,810), grantee (Cadenza Labs), funder (Manifund), and project name all match the source. However, the source does not explicitly state the grant date; it only shows the project page as archived in 2026. The amount, grantee, and funder are confirmed, but the date cannot be verified from this source material, making this a partial confirmation.

Debug info

Thing ID: UWjd7xO7JU

Source Table: grants

Source ID: UWjd7xO7JU

Parent Thing ID: sid_fFVOuFZCRf