DeepMind's game theory research
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Google DeepMind
This is DeepMind's general publications listing page; current tags ('causal-model', 'corrigibility', 'shutdown-problem') and title ('game theory research') appear incorrect and should be updated to reflect the page's actual broad scope.
Metadata
Summary
This is DeepMind's public research publications index, listing recent papers across a wide range of AI topics including safety, capabilities, multimodal learning, and more. The page aggregates hundreds of publications but does not specifically focus on game theory or AI safety. Notable safety-relevant entries include work on imitation learning safety, AI personhood, and human-AI alignment.
Key Points
- Broad publications index covering 240+ DeepMind research papers across diverse AI topics
- Includes some AI safety-relevant work such as 'Imitation Learning is Probably Existentially Safe' and 'A Pragmatic View of AI Personhood'
- Contains papers on human-AI alignment in collective reasoning and AI consciousness/simulation distinctions
- Not specifically a game theory or safety-focused resource despite current metadata labels
- Serves as a reference hub for DeepMind's public research output rather than a curated safety resource
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Corrigibility Failure Pathways | Analysis | 62.0 |
Cached Content Preview
Publications — Google DeepMind

Explore a selection of our recent research on some of the most complex and interesting challenges in AI. 240 publications.

- 23 April 2026: Dynamic Reflections: Probing Video Representations with Text Alignment
- 10 March 2026: The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness
- 15 February 2026: Simplicity and Complexity in Combinatorial Optimization
- 5 February 2026: Hybrid neural–cognitive models reveal how memory shapes human reward learning
- 9 January 2026: TRecViT: A Recurrent Video Transformer
- 21 November 2025: Imitation Learning is Probably Existentially Safe
- 4 November 2025: To Mask or to Mirror: Human-AI Alignment in Collective Reasoning
- 30 October 2025: A Pragmatic View of AI Personhood
- 29 September 2025: AI-Generated Video Detection via Perceptual Straightening
- 24 September 2025: Video models are zero-shot learners and reasoners
- 24 September 2025: EmbeddingGemma: Powerful and Lightweight Text Representations
- 4 September 2025: Improving cosmological reach of LIGO using Deep Loop Shaping
- 3 September 2025: RoboBallet: Planning for Multi-Robot Reaching with Graph Neural Networks and Reinforcement Learning
- 8 August 2025: Properties of Algorithmic Information Distance
- 1 August 2025: Visual Intention Grounding for Egocentric Assistants
- 16 July 2025: Dialogues Between Technologists and the Art Worlds
- 13 July 2025: Large Language Models as Rankers, Judges, and Assistants: A Perspective on the Potential Over-Reliance on LLMs in IR
- 13 July 2025: SLIM: One-Shot Quantized Sparse Plus Low-Rank Approximation of LLMs
- 13 July 2025: Long-Form Speech Generation with Spoken Language Models
- 1 July 2025: Rethinking Example Selection in the Era of Million-Token Models
- 26 June 2025: Performance Prediction for Large Systems via Text-to-Text Regression
- 23 June 2025: LIA: Cost-efficient LLM Inference Acceleration with Intel Advanced Matrix Extensions and CXL
- 20 June 2025: AuPair: Golden Example Pairs for Code Repair
- 1 June 2025: Bridging Algorithmic Information Theory and Machine Learning, Part II: Clustering, Density Estimation, Kolmogorov Complexity-Based Kernels, and Kernel Learning in Unsupervised Learning
- 1 May 2025: Proactive Agents for Multi-Turn Text-to-Image Generation Under Uncertainty
- 29 April 2025: Prompting with Phonemes: Enhancing LLM Multilinguality for non-Latin Scripts
- 28 April 2025: Flow-Lenia: Emergent evolutionary dynamics in mass conservative continuous cellular automata
- 26 April 2025: Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA
- 26 April 2025: Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives
- 26 April 2025: Toward Understanding In-context vs. In-weight Learning