
[1707.06347] Proximal Policy Optimization Algorithms

paper

Authors

John Schulman·Filip Wolski·Prafulla Dhariwal·Alec Radford·Oleg Klimov

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

PPO is a foundational reinforcement learning algorithm widely used in AI systems and safety research; understanding its mechanics and limitations is important for evaluating alignment properties of RL-based AI agents.

Paper Details

Citations
26,144
4,610 influential
Year
2017

Metadata

arXiv preprint · primary source

Abstract

We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.
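
For reference, the main surrogate the paper proposes is its clipped objective (Equation 7), which removes the incentive to move the probability ratio $r_t(\theta) = \pi_\theta(a_t \mid s_t)/\pi_{\theta_{\text{old}}}(a_t \mid s_t)$ outside the interval $[1-\epsilon,\, 1+\epsilon]$:

$$L^{\text{CLIP}}(\theta) = \hat{\mathbb{E}}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}\!\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right]$$

where $\hat{A}_t$ is an estimator of the advantage at timestep $t$ and $\epsilon$ is a clipping hyperparameter (the paper suggests $\epsilon = 0.2$).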

Summary

This paper introduces Proximal Policy Optimization (PPO), a family of policy gradient methods for reinforcement learning that alternate between collecting data through environment interaction and optimizing a surrogate objective with stochastic gradient ascent. Unlike standard policy gradient methods, which perform one gradient update per data sample, PPO's objective supports multiple epochs of minibatch updates on the same data. The approach retains some of the benefits of Trust Region Policy Optimization (TRPO) while being simpler to implement, more general, and empirically more sample-efficient. Experiments on simulated robotic locomotion and Atari games show that PPO outperforms other online policy gradient methods and strikes a favorable balance between sample efficiency, implementation simplicity, and wall-clock time.
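
A minimal sketch in PyTorch of the update loop this describes: the policy and value networks, the batch layout, and the hyperparameter values are illustrative assumptions (the policy is assumed to return a torch.distributions object), though the clipped loss and the epochs-of-minibatches structure follow the paper's Algorithm 1 and combined objective:

```python
import torch

def ppo_update(policy, value_fn, optimizer, batch,
               clip_eps=0.2, epochs=4, minibatch_size=64,
               vf_coef=0.5, ent_coef=0.01):
    """One PPO iteration over a batch of collected rollouts (illustrative sketch).

    batch: (obs, actions, old_log_probs, returns, advantages) as tensors, with
    old_log_probs recorded under the behavior policy at collection time.
    """
    obs, actions, old_log_probs, returns, advantages = batch
    n = obs.shape[0]
    for _ in range(epochs):  # multiple epochs of minibatch updates per data sample
        for idx in torch.randperm(n).split(minibatch_size):
            dist = policy(obs[idx])  # assumed to return a torch.distributions object
            log_probs = dist.log_prob(actions[idx])
            ratio = torch.exp(log_probs - old_log_probs[idx])  # r_t(theta)
            adv = advantages[idx]
            # Clipped surrogate: pessimistic min of unclipped and clipped terms
            unclipped = ratio * adv
            clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
            policy_loss = -torch.min(unclipped, clipped).mean()
            # Value-function regression and entropy bonus, combined as in the paper
            value_loss = (value_fn(obs[idx]).squeeze(-1) - returns[idx]).pow(2).mean()
            entropy = dist.entropy().mean()
            loss = policy_loss + vf_coef * value_loss - ent_coef * entropy
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

Clipping the ratio (rather than imposing TRPO's explicit KL constraint) is what makes it safe to reuse the same trajectories for several gradient epochs: once the ratio leaves [1 - eps, 1 + eps] in the advantageous direction, the gradient incentive vanishes.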

Cited by 1 page

Page | Type | Quality
Deep Learning Revolution Era | Historical | 44.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 0 KB
(Cached preview contained only the arXiv PDF viewer shell for ppo-min.pdf; no text was extracted.)
Resource ID: 40de426bfa4c85b7 | Stable ID: sid_kDjSYSfi6N