Longterm Wiki

Kyle Fish's role as a full-time AI welfare researcher at Anthropic is reported in Transformer News (https://www.trans...

web

This news article reports on Kyle Fish's position as a full-time AI welfare researcher at Anthropic, highlighting organizational efforts to address AI welfare concerns and demonstrating a commitment to ethical considerations beyond human-focused AI safety.

Metadata

news article · news

Cited by 1 page

| Page | Type | Quality |
|---|---|---|
| Anthropic (Funder) | Analysis | 65.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 11 KB

# Anthropic has hired an 'AI welfare' researcher

### Kyle Fish joined the company last month to explore whether we might have moral obligations to AI systems

[Shakeel Hashim](https://substack.com/@shakeelhashim)

Oct 31, 2024


_Image: [Anthropic](https://www.anthropic.com/news/claude-3-5-sonnet)_

Anthropic has hired its first full-time employee focused on the welfare of artificial intelligence systems, Transformer has learned. It’s the clearest sign yet that AI companies are beginning to grapple with questions about whether future AI systems might deserve moral consideration — and whether that means we might have obligations to care about their welfare.

Kyle Fish, who joined the company's alignment science team in mid-September, told Transformer that he is tasked with investigating “model welfare” and what companies should do about it. The role involves exploring heady philosophical and technical questions, including which capabilities are required for something to be worthy of moral consideration, how we might recognise such capabilities in AIs, and what practical steps companies might take to protect AI systems’ interests — if they turn out to have any.

News of the hire comes as researchers — including Fish — publish a [major new report](https://www.transformernews.ai/p/ai-welfar

... (truncated, 11 KB total)
Resource ID: 55c4fe7285f6e10c | Stable ID: NGU3NDY2MT