Maximum Total Correlation Reinforcement Learning

Abstract

Simplicity is a powerful inductive bias. In reinforcement learning, regularization is used for simpler policies, data augmentation for simpler representations, and sparse reward functions for simpler objectives, all with the underlying motivation of increasing generalizability and robustness by focusing on the essentials. Complementary to these techniques, we investigate how to promote simple behavior throughout the episode. To that end, we introduce a modification of the reinforcement learning problem that additionally maximizes the total correlation within the induced trajectories. We propose a practical algorithm that optimizes all models, including policy and state representation, based on a lower-bound approximation. In simulated robot environments, our method naturally generates policies that induce periodic and compressible trajectories, and that exhibit superior robustness to noise and changes in dynamics compared to baseline methods, while also improving performance in the original tasks.
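As a rough illustration of the objective described in the abstract (a sketch, not the paper's exact formulation), the total correlation of the states visited in an episode measures how redundant, and hence compressible, the trajectory is; adding it to the expected return gives an augmented objective of roughly the following form, where the trade-off coefficient $\lambda$ and the purely state-based formulation are assumptions for illustration:

```latex
% Total correlation of the state trajectory s_1, ..., s_T:
% sum of marginal entropies minus the joint entropy
% (equivalently, the KL divergence between the joint distribution
%  and the product of its marginals).
\mathcal{C}(s_1, \dots, s_T)
  = \sum_{t=1}^{T} H(s_t) - H(s_1, \dots, s_T)

% Hypothetical augmented RL objective: expected return plus
% a weighted total-correlation term (weight \lambda assumed).
\max_{\pi} \;
  \mathbb{E}_{\pi}\!\left[\sum_{t=1}^{T} r(s_t, a_t)\right]
  + \lambda \, \mathcal{C}(s_1, \dots, s_T)
```

In practice, the paper optimizes a lower-bound approximation of this kind of objective rather than the total correlation itself, since the joint entropy over whole trajectories is generally intractable to compute exactly.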

Cite

Text

You et al. "Maximum Total Correlation Reinforcement Learning." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[You et al. "Maximum Total Correlation Reinforcement Learning." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/you2025icml-maximum/)

BibTeX

@inproceedings{you2025icml-maximum,
  title     = {{Maximum Total Correlation Reinforcement Learning}},
  author    = {You, Bang and Liu, Puze and Liu, Huaping and Peters, Jan and Arenz, Oleg},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {72677--72699},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/you2025icml-maximum/}
}