Back
Announcing the Cambridge Boston Alignment Initiative [Hiring!]
Authors
kuhanj·tlevin·Xander123·Alexandra Bates
Credibility Rating
3/5
Good (3) — Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: EA Forum
Announcement post for a regional AI alignment organization in the Cambridge/Boston area, relevant for those interested in the organizational landscape of the AI safety field or seeking employment opportunities.
Metadata
Importance: 35/100 · news
Summary
Announcement of the Cambridge-Boston Alignment Initiative (CBAI), a new organization focused on AI alignment research in the Cambridge/Boston area. The post introduces the initiative's mission, structure, and open hiring positions for researchers and staff.
Key Points
- CBAI is a new AI alignment research initiative based in the Cambridge/Boston area, leveraging proximity to MIT, Harvard, and other institutions.
- The organization is actively hiring researchers and staff to build out its alignment research capacity.
- The initiative aims to coordinate and grow the local AI safety research community in the Cambridge/Boston ecosystem.
- The post signals growing organizational infrastructure for AI alignment research outside of established hubs like the Bay Area.
- CBAI likely focuses on fostering collaboration between academia and the AI safety community in a research-dense region.
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 2 KB
# Announcing the Cambridge Boston Alignment Initiative [Hiring!]

By kuhanj, tlevin, Xander123, Alexandra Bates

Published: 2022-12-02

**TLDR:** The Cambridge Boston Alignment Initiative ([CBAI](http://cbai.ai)) is a new organization aimed at supporting and accelerating Cambridge and Boston students interested in pursuing careers in AI safety. We're excited about our ongoing work, including running a winter ML bootcamp, and **are hiring for Cambridge-based roles (rolling applications, priority deadline Dec. 14 to work with us next year)**.

* * *

We think that reducing risks from advanced AI systems is one of the most important issues of our time, and that undergraduate and graduate students can quickly start doing valuable work that mitigates these risks. We (Kuhan, Trevor, Xander and Alexandra) formed the Cambridge Boston Alignment Initiative ([CBAI](http://cbai.ai)) to increase the number of talented researchers working to mitigate risks from AI by supporting Boston-area infrastructure, research and outreach related to AI alignment and governance. Our current programming involves working with groups like the [Harvard AI Safety Team (HAIST)](http://haist.ai) and [MIT AI Alignment (MAIA)](http://mitalignment.org), as well as organizing [a winter ML bootcamp](https://www.cbai.ai) based on Redwood Research's MLAB curriculum.

We think that the Boston and Cambridge area is a particularly important place to foster a strong community of AI safety-interested students and researchers. The AI alignment community and infrastructure in the Boston/Cambridge area has also grown rapidly in recent months (see [updates from HAIST and MAIA](https://www.lesswrong.com/posts/LShJtvwDf4AMo992L/update-on-harvard-ai-safety-team-and-mit-ai-alignment) for more context), and has many opportunities for improvement: office spaces, advanced programming, research, community events, and internship/job opportunities, to name a few.
If you'd like to work with us to make this happen, we're hiring for full-time generalist roles in Boston. Depending on personal fit, this work might take the form of co-director, technical director/program lead, operations director, or operations associate. **We will respond to applications submitted by December 14** by the end of the year. For more information, see our [website](http://cbai.ai). For questions, email kuhan@cbai.ai. We'll also be at [EAGxBerkeley](https://www.eaglobal.org/events/eagxberkeley2022/) and are excited to talk to people there.
Resource ID:
82326a529d03f51d