State-Wise Safe Reinforcement Learning: A Survey
Abstract
Despite the tremendous success of Reinforcement Learning (RL) algorithms in simulation environments, applying RL to real-world applications still faces many challenges. A major concern is safety, in other words, constraint satisfaction. State-wise constraints are among the most common constraints in real-world applications and among the most challenging constraints in Safe RL. Enforcing state-wise constraints is necessary and essential to many challenging tasks such as autonomous driving and robot manipulation. This paper provides a comprehensive review of existing approaches that address state-wise constraints in RL. Under the framework of the State-wise Constrained Markov Decision Process (SCMDP), we discuss the connections, differences, and trade-offs of existing approaches in terms of (i) safety guarantee and scalability, (ii) safety and reward performance, and (iii) safety after convergence and during training. We also summarize the limitations of current methods and discuss potential future directions.
Cite
Text
Zhao et al. "State-Wise Safe Reinforcement Learning: A Survey." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/763
Markdown
[Zhao et al. "State-Wise Safe Reinforcement Learning: A Survey." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/zhao2023ijcai-state/) doi:10.24963/IJCAI.2023/763
BibTeX
@inproceedings{zhao2023ijcai-state,
title = {{State-Wise Safe Reinforcement Learning: A Survey}},
author = {Zhao, Weiye and He, Tairan and Chen, Rui and Wei, Tianhao and Liu, Changliu},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2023},
pages = {6814--6822},
doi = {10.24963/IJCAI.2023/763},
url = {https://mlanthology.org/ijcai/2023/zhao2023ijcai-state/}
}