Dissecting Neural ODEs
Abstract
Continuous deep learning architectures have recently re-emerged as Neural Ordinary Differential Equations (Neural ODEs). This infinite-depth approach theoretically bridges the gap between deep learning and dynamical systems, offering a novel perspective. However, deciphering the inner workings of these models remains an open challenge, as most applications treat them as generic black-box modules. In this work we "open the box", further developing the continuous-depth formulation with the aim of clarifying the influence of several design choices on the underlying dynamics.
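To make the continuous-depth idea concrete, below is a minimal sketch (not taken from the paper) of a Neural ODE layer in PyTorch: the hidden state evolves according to dh/dt = f(h, t), with f parameterized by a small network and integrated here by a fixed-step explicit Euler scheme. The class names `ODEFunc` and `NeuralODE`, the layer widths, and the step count are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Vector field f(h, t) defining the continuous-depth dynamics dh/dt = f(h, t)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        # This sketch uses an autonomous field (t is ignored); time-dependent
        # variants are one of the design choices the paper analyzes.
        return self.net(h)

class NeuralODE(nn.Module):
    """Integrates the hidden state from t=0 to t=1 with fixed-step explicit Euler."""
    def __init__(self, dim, n_steps=20):
        super().__init__()
        self.func = ODEFunc(dim)
        self.n_steps = n_steps

    def forward(self, h):
        dt = 1.0 / self.n_steps
        t = 0.0
        for _ in range(self.n_steps):
            h = h + dt * self.func(t, h)  # Euler step: h(t+dt) ≈ h(t) + dt * f(h(t), t)
            t += dt
        return h

model = NeuralODE(dim=2)
x = torch.randn(8, 2)
print(model(x).shape)  # torch.Size([8, 2])
```

In practice an adaptive-step solver (e.g. the `odeint` routine from the `torchdiffeq` package) would replace the Euler loop; the fixed-step version above is kept only to keep the sketch self-contained.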
Cite
Text
Massaroli et al. "Dissecting Neural ODEs." Neural Information Processing Systems, 2020.
Markdown
[Massaroli et al. "Dissecting Neural ODEs." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/massaroli2020neurips-dissecting/)
BibTeX
@inproceedings{massaroli2020neurips-dissecting,
title = {{Dissecting Neural ODEs}},
author = {Massaroli, Stefano and Poli, Michael and Park, Jinkyoo and Yamashita, Atsushi and Asama, Hajime},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/massaroli2020neurips-dissecting/}
}