grant
6-month support for self study and development in ML and AI Safety. Goals include producing an academic paper while working on the "Inducing Human-Like Biases in Moral Reasoning LMs" project run by AI Safety Camp.
Child of Long-Term Future Fund (LTFF)
Metadata
| Source Table | grants |
| Source ID | VcbYEe__xK |
| Description | to Artem Karpov, USD 1739, 2023-04 |
| Source URL | funds.effectivealtruism.org/grants |
| Parent | Long-Term Future Fund (LTFF) |
| Children | — |
| Created | Mar 12, 2026, 5:54 AM |
| Updated | Mar 14, 2026, 6:22 AM |
| Synced | Mar 12, 2026, 4:12 PM |
Record Data
| id | VcbYEe__xK |
| organizationId | Long-Term Future Fund (LTFF) (organization) |
| granteeId | Artem Karpov |
| orgEntityId | Long-Term Future Fund (LTFF) (organization) |
| orgDisplayName | — |
| granteeEntityId | — |
| granteeDisplayName | Artem Karpov |
| name | 6-month support for self study and development in ML and AI Safety. Goals include producing an academic paper while working on the "Inducing Human-Like Biases in Moral Reasoning LMs" project run by AI Safety Camp. |
| amount | 1739 |
| currency | USD |
| period | — |
| date | 2023-04 |
| status | — |
| source | funds.effectivealtruism.org/grants |
| notes | [Long-Term Future Fund] 6-month support for self study and development in ML and AI Safety. Goals include producing an academic paper while working on the "Inducing Human-Like Biases in Moral Reasoning LMs" project run by AI Safety Camp. |
| programId | xng_1vsce_ |
| dataSourceId | — |
Source Check Verdicts
confirmed (95% confidence)
Last checked: 4/9/2026
[deterministic-row-match] Deterministic match: grantee, amount, date, name matched in source snapshot (1628 rows)
Debug info
Thing ID: VcbYEe__xK
Source Table: grants
Source ID: VcbYEe__xK
Parent Thing ID: sid_yA12C1KcjQ