Feedback Favors the Generalization of Neural ODEs
Abstract
The well-known generalization problem hinders the application of artificial neural networks to continuous-time prediction tasks with varying latent dynamics. In sharp contrast, biological systems adapt neatly to evolving environments, benefiting from real-time feedback mechanisms. Inspired by this feedback philosophy, we present feedback neural networks, showing that a feedback loop can flexibly correct the learned latent dynamics of neural ordinary differential equations (neural ODEs), leading to a marked improvement in generalization. The feedback neural network is a novel two-degree-of-freedom (two-DOF) neural network that performs robustly in unseen scenarios without losing accuracy on previous tasks. A linear feedback form with a convergence guarantee is first presented to correct the learned latent dynamics. Domain randomization is then utilized to learn a nonlinear neural feedback form. Finally, extensive tests, including trajectory prediction of a real irregular object and model predictive control of a quadrotor with various uncertainties, demonstrate significant improvements over state-of-the-art model-based and learning-based methods.
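The linear-feedback idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the vector fields, gain `K`, and Euler integration below are illustrative assumptions, standing in for a trained neural ODE whose learned dynamics are corrected online by a term proportional to the measured prediction error.

```python
# Illustrative sketch (NOT the paper's code): a learned ODE vector field
# corrected online by linear feedback,
#     dx_hat/dt = f_theta(x_hat) + K * (y - x_hat),
# where y is the measured true state and K is a feedback gain.

def f_theta(x):
    # Stand-in for a trained neural ODE vector field; deliberately a
    # slightly wrong model of the true dynamics dx/dt = -x.
    return -0.8 * x  # learned model underestimates the decay rate

def true_f(x):
    return -1.0 * x  # ground-truth latent dynamics

def simulate(K, x0=1.0, dt=0.01, steps=500):
    """Euler-integrate the true system alongside the (corrected) model,
    returning the worst-case prediction error over the horizon."""
    x_true, x_hat = x0, x0
    err = 0.0
    for _ in range(steps):
        y = x_true                                        # measurement
        x_hat += dt * (f_theta(x_hat) + K * (y - x_hat))  # corrected model
        x_true += dt * true_f(x_true)
        err = max(err, abs(x_true - x_hat))
    return err

# With feedback (K > 0), the worst-case error shrinks relative to the
# open-loop learned model (K = 0), mirroring the correction mechanism
# the abstract describes.
open_loop_err = simulate(K=0.0)
closed_loop_err = simulate(K=5.0)
print(open_loop_err, closed_loop_err)
```

In this toy setting the open-loop model drifts because its decay rate is wrong, while the feedback term continuously pulls the prediction back toward the measurement; the paper's nonlinear neural feedback form plays an analogous role for general latent dynamics.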
Cite
Text
Jia et al. "Feedback Favors the Generalization of Neural ODEs." International Conference on Learning Representations, 2025.
Markdown
[Jia et al. "Feedback Favors the Generalization of Neural ODEs." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/jia2025iclr-feedback/)
BibTeX
@inproceedings{jia2025iclr-feedback,
title = {{Feedback Favors the Generalization of Neural ODEs}},
author = {Jia, Jindou and Yang, Zihan and Wang, Meng and Guo, Kexin and Yang, Jianfei and Yu, Xiang and Guo, Lei},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/jia2025iclr-feedback/}
}