Longterm Wiki

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Google DeepMind

This is Google DeepMind's ICML 2024 conference summary blog; useful for tracking the lab's current research directions but not a primary safety-focused resource. Content was unavailable for direct analysis, so metadata is inferred from URL, title, and existing tags.

Metadata

Importance: 38/100 · Tags: blog post, news

Summary

This page covers Google DeepMind's research contributions presented at ICML 2024, spanning advances in AGI frameworks, scaling, and capability evaluation. It highlights the breadth of DeepMind's research agenda across machine learning and AI safety. The page serves as a hub for researchers tracking frontier AI development and safety-relevant work from a leading lab.

Key Points

  • Aggregates Google DeepMind's research presentations and papers at ICML 2024 across multiple domains.
  • Includes work relevant to AGI frameworks, capability evaluation, and scaling behaviors.
  • Reflects DeepMind's institutional research priorities at a flagship ML conference.
  • Useful for tracking frontier AI capabilities research and any safety-adjacent contributions from the lab.
  • Content spans both theoretical advances and applied systems, making it a broad reference point.

Cited by 1 page

Page                  | Type | Quality
Emergent Capabilities | Risk | 61.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 4 KB
Google DeepMind at ICML 2024 — July 19, 2024 · Research

 Exploring AGI, the challenges of scaling and the future of multimodal generative AI

 Next week the artificial intelligence (AI) community will come together for the 2024 International Conference on Machine Learning (ICML). Running from July 21-27 in Vienna, Austria, the conference is an international platform for showcasing the latest advances, exchanging ideas and shaping the future of AI research.

 This year, teams from across Google DeepMind will present more than 80 research papers. At our booth, we’ll also showcase our multimodal on-device model, Gemini Nano, our new family of AI models for education called LearnLM, and we’ll demo TacticAI, an AI assistant that can help with football tactics.

 Here we introduce some of our oral, spotlight and poster presentations:

 Defining the path to AGI

 What is artificial general intelligence (AGI)? The phrase describes an AI system that’s at least as capable as a human at most tasks. As AI models continue to advance, defining what AGI could look like in practice will become increasingly important.

 We’ll present a framework for classifying the capabilities and behaviors of AGI models. Depending on their performance, generality and autonomy, our paper categorizes systems ranging from non-AI calculators to emerging AI models and other novel technologies.
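 The framework above classifies systems along axes of performance, generality and autonomy. As an illustrative sketch only (the level names and fields here are hypothetical placeholders, not the paper's exact taxonomy), such a classification might be modeled as:

 ```python
 # Toy model of a performance x generality x autonomy taxonomy.
 # All names below are illustrative assumptions, not the paper's definitions.
 from dataclasses import dataclass
 from enum import Enum

 class Performance(Enum):
     EMERGING = 1      # comparable to an unskilled human
     COMPETENT = 2     # comparable to a skilled human
     EXPERT = 3        # better than most skilled humans
     SUPERHUMAN = 4    # better than all humans

 class Generality(Enum):
     NARROW = 1        # a single task or domain
     GENERAL = 2       # a wide range of tasks

 @dataclass
 class SystemClassification:
     name: str
     performance: Performance
     generality: Generality
     autonomy: int     # e.g. 0 (pure tool) .. 5 (fully autonomous agent)

 # A calculator is superhuman at arithmetic but narrow and non-autonomous.
 calculator = SystemClassification("calculator", Performance.SUPERHUMAN,
                                   Generality.NARROW, autonomy=0)
 ```

 The point of such a grid is that "AGI" is not a single threshold: a system can be superhuman yet narrow, or general yet merely emerging.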

 We’ll also show that open-endedness is critical to building generalized AI that goes beyond human capabilities. While many recent AI advances were driven by existing Internet-scale data, open-ended systems can generate new discoveries that extend human knowledge.

 At ICML, we’ll be demoing Genie, a model that can generate a range of playable environments based on text prompts, images, photos, or sketches.

 Scaling AI systems efficiently and responsibly

 Developing larger, more capable AI models requires more efficient training methods, closer alignment with human preferences and better privacy safeguards.

 We’ll show how using classification instead of regression techniques makes it easier to scale deep reinforcement learning systems and achieve state-of-the-art performance across different domains. Additionally, we propose a novel approach that predicts the distribution of consequences of a reinforcement learning agent's actions, helping rapidly evaluate new scenarios.
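 A minimal sketch of the classification-instead-of-regression idea, under the assumption that the scalar value target is discretized into fixed bins and trained with cross-entropy rather than mean-squared error (the "two-hot" encoding here is one common construction, not necessarily the paper's exact method):

 ```python
 # Replace an MSE regression target for a value function with a
 # classification target over discrete return bins.
 import numpy as np

 def two_hot(value, bins):
     """Encode a scalar return as a 'two-hot' distribution over fixed bins:
     probability mass is split between the two nearest bin edges so that
     the distribution's expected value equals the original scalar."""
     probs = np.zeros(len(bins))
     idx = np.clip(np.searchsorted(bins, value) - 1, 0, len(bins) - 2)
     lo, hi = bins[idx], bins[idx + 1]
     w = (value - lo) / (hi - lo)          # linear interpolation weight
     probs[idx], probs[idx + 1] = 1.0 - w, w
     return probs

 def cross_entropy(target_probs, logits):
     """Classification loss on the binned return, replacing MSE."""
     log_p = logits - np.log(np.sum(np.exp(logits)))
     return -np.sum(target_probs * log_p)

 bins = np.linspace(-10.0, 10.0, 51)       # support of the return values
 target = two_hot(3.7, bins)               # scalar return -> distribution
 loss = cross_entropy(target, np.zeros(len(bins)))  # vs. uniform logits
 ```

 Because the target distribution preserves the scalar's expectation, the value estimate can still be read out as the probability-weighted mean of the bin centers, while training benefits from the better-conditioned cross-entropy loss.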

 Our researchers will present an alignment-maintaining approach that reduces the need for human oversight, and a new approach to fine-tuning large language models (LLMs), based on game theory, that better aligns an LLM’s output with human preferences.

 We critique the approach of training models on public data and only fine-tuning with "differentially private" training, and argue this approach may not offer the privacy or utility that is o

... (truncated, 4 KB total)
Resource ID: 93afca21d4d8f51c | Stable ID: sid_tAKOvM8LYQ