Koopman Spectrum Nonlinear Regulators and Efficient Online Learning
Abstract
Most modern reinforcement learning algorithms optimize a cumulative single-step cost along a trajectory. The optimized motions are often ‘unnatural’, representing, for example, behaviors with sudden accelerations that waste energy and lack predictability. In this work, we present a novel paradigm of controlling nonlinear systems via the minimization of the Koopman spectrum cost: a cost over the Koopman operator of the controlled dynamics. This induces a broader class of dynamical behaviors that evolve over stable manifolds such as nonlinear oscillators, closed loops, and smooth movements. We demonstrate that some dynamics characterizations that are not possible with a cumulative cost are feasible in this paradigm, which generalizes the classical eigenstructure and pole assignments to nonlinear decision making. Moreover, we present a sample efficient online learning algorithm for our problem that enjoys a sub-linear regret bound under some structural assumptions.
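To make the core object concrete, here is a minimal sketch, not the paper's algorithm, of estimating a finite-dimensional Koopman operator approximation from trajectory data via Dynamic Mode Decomposition (a standard technique), then evaluating a simple illustrative spectrum cost on its eigenvalues; the cost shown (excess eigenvalue modulus above one, penalizing unstable modes) is an assumed example, not the cost used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stable linear system x_{t+1} = A x_t. For linear dynamics
# with linear observables, the Koopman operator restriction is A itself,
# so the fit below can be checked against ground truth.
theta = 0.1
A_true = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])

x = rng.normal(size=2)
traj = [x]
for _ in range(200):
    x = A_true @ x
    traj.append(x)
traj = np.array(traj)

# DMD: least-squares fit K minimizing ||X' - K X||_F over snapshot pairs.
X, Xp = traj[:-1].T, traj[1:].T
K = Xp @ np.linalg.pinv(X)

# Illustrative spectrum cost (an assumption for this sketch): total excess
# of eigenvalue moduli above 1, i.e. a penalty on unstable modes. It is
# zero here because the simulated dynamics are stable.
eigvals = np.linalg.eigvals(K)
spectrum_cost = float(np.sum(np.maximum(np.abs(eigvals) - 1.0, 0.0)))
```

A cost defined on the operator `K` as a whole, rather than summed step by step along the trajectory, is what distinguishes this paradigm from a cumulative single-step cost.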
Cite

Text:
Ohnishi et al. "Koopman Spectrum Nonlinear Regulators and Efficient Online Learning." Transactions on Machine Learning Research, 2024.

Markdown:
[Ohnishi et al. "Koopman Spectrum Nonlinear Regulators and Efficient Online Learning." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/ohnishi2024tmlr-koopman/)

BibTeX:
@article{ohnishi2024tmlr-koopman,
title = {{Koopman Spectrum Nonlinear Regulators and Efficient Online Learning}},
author = {Ohnishi, Motoya and Ishikawa, Isao and Lowrey, Kendall and Ikeda, Masahiro and Kakade, Sham M. and Kawahara, Yoshinobu},
journal = {Transactions on Machine Learning Research},
year = {2024},
url = {https://mlanthology.org/tmlr/2024/ohnishi2024tmlr-koopman/}
}