A Theoretical Justification for Asymmetric Actor-Critic Algorithms

Abstract

In reinforcement learning for partially observable environments, many successful algorithms have been developed within the asymmetric learning paradigm. This paradigm leverages additional state information available at training time for faster learning. Although the proposed learning objectives are usually theoretically sound, these methods still lack a precise theoretical justification for their potential benefits. We propose such a justification for asymmetric actor-critic algorithms with linear function approximators by adapting a finite-time convergence analysis to this setting. The resulting finite-time bound reveals that the asymmetric critic eliminates error terms arising from aliasing in the agent state.
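
To make the setting concrete, the following is a minimal sketch of a single asymmetric actor-critic update with linear function approximators: the critic is a linear function of features of the true state (available only at training time), while the actor is a softmax policy over features of the agent state. All names, feature maps, dimensions, and step sizes below are illustrative assumptions for this sketch, not the paper's notation or algorithm.

# Minimal sketch of an asymmetric actor-critic update with linear function
# approximators. The critic sees true-state features; the actor only sees
# agent-state features. All names and constants here are illustrative.
import numpy as np

n_actions = 3
d_state = 4    # dimension of the true-state features (critic input)
d_agent = 2    # dimension of the agent-state features (actor input)

rng = np.random.default_rng(0)
w = np.zeros(d_state)                    # linear critic: V(s) ~ w @ phi_state(s)
theta = np.zeros((n_actions, d_agent))   # softmax actor over agent-state features

def policy(z_feat):
    """Softmax policy pi(a | z) over agent-state features z_feat."""
    logits = theta @ z_feat
    logits -= logits.max()               # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def update(s_feat, z_feat, a, r, s_next_feat, done,
           gamma=0.99, lr_critic=0.1, lr_actor=0.01):
    """One asymmetric actor-critic step.

    The critic uses the true-state features s_feat (training-time privilege),
    while the actor only uses the agent-state features z_feat.
    """
    global w, theta
    v = w @ s_feat
    v_next = 0.0 if done else w @ s_next_feat
    delta = r + gamma * v_next - v       # TD error from the state-conditioned critic

    # Critic: semi-gradient TD(0) on the true-state features.
    w += lr_critic * delta * s_feat

    # Actor: policy gradient using the asymmetric critic's TD error as the signal.
    p = policy(z_feat)
    grad_log = -np.outer(p, z_feat)      # gradient of log pi(a | z) w.r.t. theta
    grad_log[a] += z_feat
    theta += lr_actor * delta * grad_log

# Example transition with random features, purely for illustration.
s, z, s_next = rng.normal(size=d_state), rng.normal(size=d_agent), rng.normal(size=d_state)
a = rng.choice(n_actions, p=policy(z))
update(s, z, a, r=1.0, s_next_feat=s_next, done=False)

The asymmetry is that the TD error driving both updates comes from a critic conditioned on the true state rather than on the potentially aliased agent state, which is the quantity the abstract refers to when describing the eliminated error terms.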

Cite

Text

Lambrechts et al. "A Theoretical Justification for Asymmetric Actor-Critic Algorithms." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Lambrechts et al. "A Theoretical Justification for Asymmetric Actor-Critic Algorithms." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/lambrechts2025icml-theoretical/)

BibTeX

@inproceedings{lambrechts2025icml-theoretical,
  title     = {{A Theoretical Justification for Asymmetric Actor-Critic Algorithms}},
  author    = {Lambrechts, Gaspard and Ernst, Damien and Mahajan, Aditya},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {32375--32405},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/lambrechts2025icml-theoretical/}
}