Longterm Wiki

[2205.10330] A Review of Safe Reinforcement Learning: Methods, Theory and Applications

paper

Data Status

Not fetched

Cited by 1 page

Page | Type | Quality
Deep Learning Revolution Era | Historical | 44.0

Cached Content Preview

HTTP 200 | Fetched Feb 22, 2026 | 6 KB
[2205.10330] A Review of Safe Reinforcement Learning: Methods, Theory and Applications
 Computer Science > Artificial Intelligence

 

 
 arXiv:2205.10330 (cs)
 
 
 
 
 
 [Submitted on 20 May 2022 (v1), last revised 24 May 2024 (this version, v5)]
 Title: A Review of Safe Reinforcement Learning: Methods, Theory and Applications

 Authors: Shangding Gu, Long Yang, Yali Du, Guang Chen, Florian Walter, Jun Wang, Alois Knoll

 
 Abstract: Reinforcement Learning (RL) has achieved tremendous success in many complex decision-making tasks. However, safety concerns arise when RL is deployed in real-world applications such as autonomous driving and robotics, leading to a growing demand for safe RL algorithms. While safe control has a long history, the study of safe RL algorithms is still in its early stages. To establish a good foundation for future safe RL research, this paper provides a review of safe RL from the perspectives of methods, theory, and applications. First, we review the progress of safe RL along five dimensions and identify five crucial problems for deploying safe RL in real-world applications, coined "2H3W". Second, we analyze algorithmic and theoretical progress through the lens of the "2H3W" problems. In particular, the sample complexity of safe RL algorithms is reviewed and discussed, followed by an introduction to the applications and benchmarks of safe RL. Finally, we discuss the open challenges in safe RL, hoping to inspire future research on this thread. To advance the study of safe RL algorithms, we release an open-source repository containing implementations of major safe RL algorithms at the link: this https URL .
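The safe RL methods this review surveys are commonly formalized as constrained MDPs: maximize expected return subject to a budget on expected cost. As a minimal illustrative sketch (not code from the paper or its repository; all names are hypothetical), a Lagrangian relaxation handles the constraint by trading return against constraint violation:

```python
# Illustrative constrained-MDP objective common in safe RL (a sketch,
# not the reviewed paper's method): maximize E[return] subject to
# E[cost] <= cost_limit, via the Lagrangian
#   L(pi, lam) = E[return] - lam * (E[cost] - cost_limit)

def lagrangian(expected_return, expected_cost, cost_limit, lam):
    """Scalar Lagrangian value for one policy evaluation."""
    return expected_return - lam * (expected_cost - cost_limit)

def update_multiplier(lam, expected_cost, cost_limit, lr=0.1):
    """Dual ascent on lam: raise it while the constraint is violated,
    shrink it (floored at 0) when the policy is within budget."""
    return max(0.0, lam + lr * (expected_cost - cost_limit))

# Toy trace: a policy whose expected cost (2.0) exceeds the budget (1.0)
# drives lam upward, which penalizes return in the primal objective.
lam = 0.0
for _ in range(3):
    lam = update_multiplier(lam, expected_cost=2.0, cost_limit=1.0)
print(round(lam, 2))  # 0.3
```

The dual-ascent step is the simplest member of the Lagrangian family of safe RL algorithms; the review covers it alongside projection- and correction-based alternatives.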
 

 
 
 
 Subjects: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
 Cite as: arXiv:2205.10330 [cs.AI] (or arXiv:2205.10330v5 [cs.AI] for this version)
 DOI: https://doi.org/10.48550/arXiv.2205.10330 (arXiv-issued DOI via DataCite)
 
 Submission history

 From: Shangding Gu
 [v1] Fri, 20 May 2022 17:42:38 UTC (26,152 KB)
 [v2] Mon, 23 May 2022 08:18:52 UTC (27,097 KB)
 [v3] Sat, 4 Jun 2022 17:03:49 UTC (27,091 KB)
 [v4] Mon, 20 Feb 2023 10:34:26 UTC (27,102 KB)
 [v5] Fri, 24 May 2024 22:33:04 UTC (27,157 KB)

 
 
 
 
 
 Full-text links: View PDF | HTML (experimental) | TeX Source
 
 

... (truncated, 6 KB total)
Resource ID: 1efe2b3ae47b8e1b | Stable ID: MjY0ZmM1Yz