Improved Off-Policy Reinforcement Learning in Biological Sequence Design
Abstract
Designing biological sequences with desired properties is challenging due to vast search spaces and limited evaluation budgets. Although reinforcement learning methods use proxy models for rapid reward evaluation, insufficient training data can cause proxy misspecification on out-of-distribution inputs. To address this, we propose a novel off-policy search, $\delta$-Conservative Search, that enhances robustness by restricting policy exploration to reliable regions. Starting from high-score offline sequences, we inject noise by randomly masking tokens with probability $\delta$, then denoise them using our policy. We further adapt $\delta$ based on proxy uncertainty on each data point, aligning the level of conservativeness with model confidence. Experimental results show that our conservative search consistently enhances the off-policy training, outperforming existing machine learning methods in discovering high-score sequences across diverse tasks, including DNA, RNA, protein, and peptide design.
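
The masking-and-denoising loop described in the abstract can be sketched as follows. This is an illustrative reading of the described procedure, not the authors' released code; the policy interface (`policy_denoise`), the mask token, and the adaptive rule for $\delta$ are hypothetical stand-ins.

```python
import random

MASK = "<mask>"  # hypothetical mask token

def conservative_search(seed_seq, policy_denoise, delta):
    """One round of delta-Conservative Search (illustrative sketch).

    seed_seq: list of tokens from a high-score offline sequence.
    policy_denoise: callable that fills masked positions with the
        current policy's predictions (assumed interface).
    delta: per-token masking probability; smaller delta keeps the
        candidate closer to the trusted offline sequence.
    """
    # Inject noise: mask each token independently with probability delta.
    noised = [MASK if random.random() < delta else tok for tok in seed_seq]
    # Denoise: let the policy reconstruct the masked positions, producing
    # a candidate that stays near the reliable (in-distribution) region.
    return policy_denoise(noised)

def adaptive_delta(proxy_uncertainty, delta_max=0.5):
    """Shrink delta when the proxy is uncertain about the seed sequence
    (one simple monotone rule; the paper's exact schedule may differ)."""
    return delta_max / (1.0 + proxy_uncertainty)
```

With a trained denoising policy, candidates would be generated by calling `conservative_search(seed, policy_denoise, adaptive_delta(u))` for each high-score offline sequence `seed` and its proxy uncertainty `u`.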
Cite

Text
Kim et al. "Improved Off-Policy Reinforcement Learning in Biological Sequence Design." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown
[Kim et al. "Improved Off-Policy Reinforcement Learning in Biological Sequence Design." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/kim2025icml-improved/)

BibTeX
@inproceedings{kim2025icml-improved,
title = {{Improved Off-Policy Reinforcement Learning in Biological Sequence Design}},
author = {Kim, Hyeonah and Kim, Minsu and Yun, Taeyoung and Choi, Sanghyeok and Bengio, Emmanuel and Hernández-García, Alex and Park, Jinkyoo},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {30290--30315},
volume = {267},
url = {https://mlanthology.org/icml/2025/kim2025icml-improved/}
}