Markov Balance Satisfaction Improves Performance in Strictly Batch Offline Imitation Learning
Abstract
Imitation learning (IL) is notably effective for robotic tasks where directly programming behaviors or defining optimal control costs is challenging. In this work, we address a scenario where the imitator relies solely on observed behavior and cannot interact with the environment during learning. It has no supplementary datasets beyond the expert's demonstrations, nor any information about the transition dynamics. Unlike state-of-the-art (SOTA) IL methods, this approach tackles the limitations of conventional IL by operating in a more constrained and realistic setting. Our method uses the Markov balance equation and introduces a novel conditional density estimation-based imitation learning framework. It employs conditional normalizing flows to estimate the transition dynamics and aims to satisfy a balance equation for the environment. Through a series of numerical experiments on Classic Control and MuJoCo environments, we demonstrate consistently superior empirical performance compared to many SOTA IL algorithms.
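The conditional density estimation step described above can be illustrated with a toy sketch. The paper uses conditional normalizing flows; the linear-Gaussian model below is a simplified, hypothetical stand-in that fits a conditional density p(s' | s, a) from a fixed batch of transitions, with no environment interaction, mirroring the strictly batch setting:

```python
import numpy as np

# Hypothetical expert batch from a 1-D linear system: s' = A*s + B*a + noise.
# The learner sees only (s, a, s') triples, never the true (A, B) or the env.
rng = np.random.default_rng(0)
A_true, B_true = 0.9, 0.5
s = rng.normal(size=1000)
a = rng.normal(size=1000)
s_next = A_true * s + B_true * a + 0.1 * rng.normal(size=1000)

# Fit the conditional mean of p(s' | s, a) by least squares; the residual
# standard deviation gives the Gaussian scale. (A conditional normalizing
# flow would replace this with a learned, flexible density.)
X = np.column_stack([s, a])
coef, *_ = np.linalg.lstsq(X, s_next, rcond=None)
sigma = np.std(s_next - X @ coef)

def log_density(s_next_q, s_q, a_q):
    """Gaussian log p(s' | s, a) under the fitted batch model."""
    mu = coef[0] * s_q + coef[1] * a_q
    return -0.5 * ((s_next_q - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

print(coef, sigma)  # fitted dynamics close to the generating (A, B) and noise scale
```

A fitted density of this form can then be plugged into a Markov balance residual to check how well a candidate policy's induced state distribution is self-consistent under the estimated dynamics; the exact balance objective is defined in the paper.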
Cite
Text
Agrawal et al. "Markov Balance Satisfaction Improves Performance in Strictly Batch Offline Imitation Learning." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I15.33680
Markdown
[Agrawal et al. "Markov Balance Satisfaction Improves Performance in Strictly Batch Offline Imitation Learning." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/agrawal2025aaai-markov/) doi:10.1609/AAAI.V39I15.33680
BibTeX
@inproceedings{agrawal2025aaai-markov,
title = {{Markov Balance Satisfaction Improves Performance in Strictly Batch Offline Imitation Learning}},
author = {Agrawal, Rishabh and Dahlin, Nathan and Jain, Rahul and Nayyar, Ashutosh},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {15311-15319},
doi = {10.1609/AAAI.V39I15.33680},
url = {https://mlanthology.org/aaai/2025/agrawal2025aaai-markov/}
}