What's new at FAR AI — EA Forum
by AdamGleave · Dec 4, 2023 · 6 min read
This is a linkpost for https://far.ai/post/2023-12-far-overview/

Summary
We are FAR AI: an AI safety research incubator and accelerator. Since our inception in July 2022, FAR has grown to a team of 12 full-time staff, produced 13 academic papers, opened the coworking space FAR Labs with 40 active members, and organized field-building events for more than 160 ML researchers.
Our organization consists of three main pillars:
Research. We rapidly explore a range of potential research directions in AI safety, scaling up those that show the greatest promise. Unlike other AI safety labs that bet on a single research direction, FAR pursues a diverse portfolio of projects. Our current focus areas are building a science of robustness (e.g. finding vulnerabilities in superhuman Go AIs), finding more effective approaches to value alignment (e.g. training from language feedback), and model evaluation (e.g. inverse scaling and codebook features).
Coworking Space. We run FAR Labs, an AI safety coworking space in Berkeley. The space currently hosts FAR, AI Impacts, MATS, and several independent researchers. We are building a collaborative community space that fosters great work through excellent office space, a warm and intellectually generative culture, and tailored programs and training for members. Applications are open to new users of the space (individuals and organizations).
Field Building. We run workshops, primarily targeted at ML researchers, to help build the field of AI safety research and governance. We co-organized the International Dialogue for AI Safety, bringing together prominent scientists from around the globe and culminating in a public statement calling for global action on AI safety research and governance. In December we will host the New Orleans Alignment Workshop, where over 140 researchers will learn about AI safety and find collaborators.
We want to expand, so if you're excited by the work we do, consider donating or working for us! We're hiring research engineers, research scientists, and communications specialists.
Incubating & Accelerating AI Safety Research
Our main goal is to explore new AI safety research directions, scaling up those that show the greatest promise. We select agendas that are too large to be pursued by individual academic or independent
...