Longterm Wiki

Long-Term Planning and Situational Awareness in OpenAI Five - ADS

web

Data Status

Not fetched

Cited by 1 page

Page: Deep Learning Revolution Era | Type: Historical | Quality: 44.0

Cached Content Preview

HTTP 200 | Fetched Feb 22, 2026 | 2 KB
Long-Term Planning and Situational Awareness in OpenAI Five - ADS 
 Long-Term Planning and Situational Awareness in OpenAI Five
 
 
 

 
 
 
 
Raiman, Jonathan; Zhang, Susan; Wolski, Filip

 
 Abstract

 
Understanding how knowledge about the world is represented within model-free deep reinforcement learning methods is a major challenge given the black-box nature of their learning processes within high-dimensional observation and action spaces. AlphaStar and OpenAI Five have shown that agents can be trained without any explicit hierarchical macro-actions to reach superhuman skill in games that require taking thousands of actions before reaching the final goal. Assessing the agent's plans and game understanding becomes challenging given the lack of hierarchy or explicit representations of macro-actions in these models, coupled with the incomprehensible nature of the internal representations. In this paper, we study the distributed representations learned by OpenAI Five to investigate how game knowledge is gradually obtained over the course of training. We also introduce a general technique for learning a model from the agent's hidden states to identify the formation of plans and subgoals. We show that the agent can learn situational similarity across actions, and find evidence of planning towards accomplishing subgoals minutes before they are executed. We perform a qualitative analysis of these predictions during the games against the DotA 2 world champions OG in April 2019.
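The probing technique the abstract describes (fitting a model to the agent's hidden states to recover upcoming subgoals) can be sketched in miniature. The sketch below uses synthetic data and a simple multinomial logistic-regression probe; the hidden-state dimension, subgoal labels, and probe form are all illustrative assumptions, not OpenAI Five's actual architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN_DIM, N_SUBGOALS, N_SAMPLES = 32, 4, 2000

# Synthetic "hidden states": each state is correlated with the subgoal the
# agent will later pursue, via a random per-subgoal direction plus noise.
directions = rng.normal(size=(N_SUBGOALS, HIDDEN_DIM))
labels = rng.integers(0, N_SUBGOALS, size=N_SAMPLES)
states = directions[labels] + 0.5 * rng.normal(size=(N_SAMPLES, HIDDEN_DIM))

def train_probe(X, y, n_classes, lr=0.1, epochs=200):
    """Fit a linear probe W mapping hidden states to subgoal logits
    by gradient descent on the softmax cross-entropy loss."""
    W = np.zeros((X.shape[1], n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (probs - onehot) / len(X)
    return W

W = train_probe(states, labels, N_SUBGOALS)
accuracy = float((np.argmax(states @ W, axis=1) == labels).mean())
print(f"probe accuracy: {accuracy:.3f}")
```

If the probe decodes the future subgoal from the hidden state well above chance, that is evidence the state already encodes the plan before it is executed, which is the spirit of the paper's analysis.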
 

 

 

 

Publication: arXiv e-prints
Pub Date: December 2019
DOI: 10.48550/arXiv.1912.06721
arXiv: arXiv:1912.06721
Bibcode: 2019arXiv191206721R
Keywords: Computer Science - Computation and Language; Computer Science - Machine Learning
Full Text Sources: Preprint
Resource ID: 0fa324567bde555e | Stable ID: Y2VlYjc2Nj