Longterm Wiki

Proximal Policy Optimization Algorithms - ADS

web

Data Status: Not fetched

Cited by 1 page

Page | Type | Quality
Deep Learning Revolution Era | Historical | 44.0

Cached Content Preview

HTTP 200 · Fetched Feb 22, 2026 · 2 KB
Proximal Policy Optimization Algorithms - ADS 
 Proximal Policy Optimization Algorithms
Schulman, John; Wolski, Filip; Dhariwal, Prafulla; Radford, Alec; Klimov, Oleg

 Abstract

 
 We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.
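
The "surrogate" objective the abstract refers to is the clipped probability-ratio loss introduced in the paper (the PPO-clip variant). Below is a minimal PyTorch sketch of that objective; the function name, tensor shapes, and the default clipping coefficient are illustrative assumptions rather than details taken from this record.

```python
# Minimal sketch of PPO's clipped surrogate objective (PPO-clip).
# Assumed interface: per-action log-probabilities under the new and old
# policies, plus advantage estimates, as 1-D tensors of equal length.
import torch

def ppo_clip_loss(new_log_probs: torch.Tensor,
                  old_log_probs: torch.Tensor,
                  advantages: torch.Tensor,
                  clip_eps: float = 0.2) -> torch.Tensor:
    """Negated clipped surrogate objective, suitable for gradient descent."""
    # Probability ratio r_t(theta) = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t),
    # computed in log space for numerical stability.
    ratio = torch.exp(new_log_probs - old_log_probs)
    # Unclipped surrogate and its ratio-clipped counterpart.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Elementwise minimum gives a pessimistic lower bound on the unclipped
    # objective; negate so that minimizing this loss performs stochastic
    # gradient ascent on the surrogate.
    return -torch.min(unclipped, clipped).mean()
```

Because clipping makes the objective a pessimistic bound, the same batch of sampled trajectories can be reused for several passes (the old log-probabilities stay fixed while the new policy changes), which is the "multiple epochs of minibatch updates" the abstract contrasts with the one-gradient-update-per-sample regime of standard policy gradient methods.
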
Publication: arXiv e-prints
Pub Date: July 2017
DOI: 10.48550/arXiv.1707.06347
arXiv: arXiv:1707.06347
Bibcode: 2017arXiv170706347S
Keywords: Computer Science - Machine Learning
Full Text Sources: Preprint
Resource ID: 276e467ae5c56037 | Stable ID: MGI2ODUyN2