Longterm Wiki

Victoria Krakovna – AI Safety Research (Personal Website)

web
vkrakovna.wordpress.com

Victoria Krakovna is a prominent AI safety researcher at Google DeepMind and a co-founder of the Future of Life Institute; her blog and research outputs are widely cited in the alignment literature, especially her list of specification gaming examples.

Metadata

Importance: 62/100 · blog post · homepage

Summary

Personal website and blog of Victoria Krakovna, research scientist at Google DeepMind focusing on AI alignment. She works on topics including deceptive alignment, specification gaming, goal misgeneralization, dangerous capability evaluations, and side effects. She also co-founded the Future of Life Institute.

Key Points

  • Victoria Krakovna is a research scientist at Google DeepMind working on AI alignment and safety.
  • Research areas include deceptive alignment, specification gaming, goal misgeneralization, and dangerous capability evaluations.
  • Co-founder of the Future of Life Institute, a nonprofit focused on mitigating large-scale technological risks.
  • PhD in statistics and machine learning from Harvard, focused on interpretable models.
  • Blog serves as a hub for her publications, research notes, and resources on AI safety topics.

Cited by 1 page

Page             Type  Quality
Sharp Left Turn  Risk  69.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 2 KB
Victoria Krakovna

I am a research scientist at Google DeepMind focusing on AI alignment: ensuring that advanced AI systems try to do what we want them to do and don’t knowingly act against our interests. I have worked on various topics in AI safety including deceptive alignment, dangerous capability evaluations, specification gaming, goal misgeneralization, and avoiding side effects.

I co-founded the Future of Life Institute, a non-profit organization working to mitigate technological risks to humanity and increase the chances of a positive future.

 (The views expressed on this website are my own and do not represent Google DeepMind or the Future of Life Institute.)

 My PhD in statistics and machine learning at Harvard focused on building interpretable models.

 In my spare time, I enjoy playing with my kids and spending time in nature.

Find me on Twitter, Google Scholar, GitHub, LinkedIn, Alignment Forum.

Recent Posts

  • 2025-26 New Year review (January 19, 2026)
  • Opinionated takes on parenting (December 16, 2025)
  • Retrospective on life tracking and effectiveness systems (July 4, 2025)
  • 2024-25 New Year review (January 15, 2025)
  • Moving on from community living (April 17, 2024)
Categories

  • AI alignment
  • conferences
  • life
  • machine learning
  • opinion
  • parenting
  • rationality

Resource ID: 2fcc58e7ff67cb2f | Stable ID: sid_bl7TCpeQ6A