Feasibility Consistent Representation Learning for Safe Reinforcement Learning
Abstract
In the field of safe reinforcement learning (RL), finding a balance between satisfying safety constraints and optimizing reward performance presents a significant challenge. A key obstacle in this endeavor is the estimation of safety constraints, which is typically more difficult than estimating a reward metric due to the sparse nature of the constraint signals. To address this issue, we introduce a novel framework named Feasibility Consistent Safe Reinforcement Learning (FCSRL). This framework combines representation learning with feasibility-oriented objectives to identify and extract safety-related information from the raw state for safe RL. Leveraging self-supervised learning techniques and a more learnable safety metric, our approach enhances policy learning and constraint estimation. Empirical evaluations across a range of vector-state and image-based tasks demonstrate that our method learns a better safety-aware embedding and achieves superior performance compared to previous representation-learning baselines.
Cite
Text
Cen et al. "Feasibility Consistent Representation Learning for Safe Reinforcement Learning." International Conference on Machine Learning, 2024.
Markdown
[Cen et al. "Feasibility Consistent Representation Learning for Safe Reinforcement Learning." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/cen2024icml-feasibility/)
BibTeX
@inproceedings{cen2024icml-feasibility,
title = {{Feasibility Consistent Representation Learning for Safe Reinforcement Learning}},
author = {Cen, Zhepeng and Yao, Yihang and Liu, Zuxin and Zhao, Ding},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {6002--6019},
volume = {235},
url = {https://mlanthology.org/icml/2024/cen2024icml-feasibility/}
}