MO2: Model-Based Offline Options
Abstract
The ability to discover useful behaviours from past experience and transfer them to new tasks is considered a core component of natural embodied intelligence. Inspired by neuroscience, discovering behaviours that switch at bottleneck states has long been sought after for inducing plans of minimum description length across tasks. Prior approaches have either supported only online, on-policy bottleneck-state discovery, limiting sample efficiency, or been restricted to discrete state-action domains, limiting applicability. To address this, we introduce Model-Based Offline Options (MO2), an offline hindsight framework that supports sample-efficient bottleneck option discovery over continuous state-action spaces. Once bottleneck options are learnt offline over source domains, they are transferred online to improve exploration and value estimation on the transfer domain. Our experiments show that on complex long-horizon continuous control tasks with sparse, delayed rewards, MO2’s properties are essential and lead to performance exceeding recent option-learning methods. Additional ablations further demonstrate its impact on option predictability and credit assignment.
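
The abstract describes a two-phase recipe: discover options offline from logged source-domain data, then reuse them online on a transfer task. Purely as an illustrative sketch of that pipeline, and not the paper's actual algorithm, the Python below shows the control flow; OptionPolicy, discover_options_offline, transfer_online, and the environment interface are all hypothetical names, and the offline segmentation objective that makes option switches land at bottleneck states is deliberately left abstract.

import numpy as np

class OptionPolicy:
    """Low-level policy pi(a | s, z): one behaviour per discrete option z."""

    def __init__(self, n_options, action_dim):
        self.n_options = n_options
        self.action_dim = action_dim

    def act(self, state, option):
        # Placeholder: a learned network would map (state, option) -> action.
        return np.zeros(self.action_dim)


def discover_options_offline(dataset, n_options):
    """Phase 1 (offline): fit options to logged source-domain trajectories
    in hindsight, encouraging switches to concentrate at bottleneck states.
    The segmentation/termination objective is left abstract here."""
    options = OptionPolicy(n_options, action_dim=dataset["actions"].shape[-1])
    # ... fit option policies and termination conditions to `dataset` ...
    return options


def transfer_online(env, options, n_episodes):
    """Phase 2 (online): a high-level controller chooses among the frozen
    options to drive exploration and value learning on the transfer task."""
    for _ in range(n_episodes):
        state, done = env.reset(), False
        # Stand-in for a learned high-level policy over options.
        option = np.random.randint(options.n_options)
        while not done:
            action = options.act(state, option)
            state, reward, done, _ = env.step(action)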
Cite
Text
Salter et al. "MO2: Model-Based Offline Options." Proceedings of The 1st Conference on Lifelong Learning Agents, 2022.

Markdown
[Salter et al. "MO2: Model-Based Offline Options." Proceedings of The 1st Conference on Lifelong Learning Agents, 2022.](https://mlanthology.org/collas/2022/salter2022collas-mo2/)

BibTeX
@inproceedings{salter2022collas-mo2,
  title     = {{MO2: Model-Based Offline Options}},
  author    = {Salter, Sasha and Wulfmeier, Markus and Tirumala, Dhruva and Heess, Nicolas and Riedmiller, Martin and Hadsell, Raia and Rao, Dushyant},
  booktitle = {Proceedings of The 1st Conference on Lifelong Learning Agents},
  year      = {2022},
  volume    = {199},
  pages     = {902--919},
  url       = {https://mlanthology.org/collas/2022/salter2022collas-mo2/}
}