Time Dependence in Non-Autonomous Neural ODEs
Abstract
Neural Ordinary Differential Equations (ODEs) are elegant reinterpretations of deep networks in which continuous time replaces the discrete notion of depth, ODE solvers perform forward propagation, and the adjoint method enables efficient, constant-memory backpropagation. Neural ODEs are universal approximators only when they are non-autonomous, that is, when the dynamics depend explicitly on time. We propose a novel family of Neural ODEs with time-varying weights, where the time dependence is non-parametric and the smoothness of the weight trajectories can be explicitly controlled, allowing a trade-off between expressiveness and efficiency. Using this enhanced expressiveness, we outperform previous Neural ODE variants in both speed and representational capacity, ultimately outperforming standard ResNet and CNN models on select image classification and video prediction tasks.
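The core idea of the abstract can be illustrated with a minimal sketch: a non-autonomous ODE dh/dt = tanh(W(t) h), where the weight matrix W(t) follows a piecewise-linear trajectory through a set of anchor matrices, so the number of anchors controls the smoothness (and hence expressiveness) of the weight trajectory. This is an illustrative toy, not the authors' implementation; the anchor parameterization, forward-Euler integration, and all function names here are assumptions made for the example.

```python
import numpy as np

def interpolate_weights(anchors, t):
    """Piecewise-linear interpolation of the weight trajectory W(t) on [0, 1].

    `anchors` is a list of (d, d) matrices; W(t) passes through them at
    evenly spaced times. More anchors => a rougher, more expressive
    trajectory; fewer anchors => a smoother, cheaper one.
    """
    n = len(anchors) - 1
    s = min(max(t, 0.0), 1.0) * n       # position along the anchor sequence
    i = min(int(s), n - 1)              # left anchor index
    frac = s - i                        # interpolation fraction in [0, 1]
    return (1.0 - frac) * anchors[i] + frac * anchors[i + 1]

def neural_ode_forward(h0, anchors, n_steps=100):
    """Forward-Euler integration of dh/dt = tanh(W(t) h) from t=0 to t=1.

    The explicit dependence of W on t is what makes the ODE non-autonomous.
    """
    h = h0.copy()
    dt = 1.0 / n_steps
    for k in range(n_steps):
        W = interpolate_weights(anchors, k * dt)
        h = h + dt * np.tanh(W @ h)
    return h

rng = np.random.default_rng(0)
d = 4
# Three anchor matrices define the time-varying weight trajectory.
anchors = [0.5 * rng.standard_normal((d, d)) for _ in range(3)]
h0 = rng.standard_normal(d)
h1 = neural_ode_forward(h0, anchors)
```

In a real model the anchors would be trained by backpropagating through the solver (or via the adjoint method), and an adaptive ODE solver would replace the fixed-step Euler loop.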
Cite
Text
Davis et al. "Time Dependence in Non-Autonomous Neural ODEs." ICLR 2020 Workshops: DeepDiffEq, 2020.
Markdown
[Davis et al. "Time Dependence in Non-Autonomous Neural ODEs." ICLR 2020 Workshops: DeepDiffEq, 2020.](https://mlanthology.org/iclrw/2020/davis2020iclrw-time/)
BibTeX
@inproceedings{davis2020iclrw-time,
title = {{Time Dependence in Non-Autonomous Neural ODEs}},
author = {Davis, Jared Quincy and Choromanski, Krzysztof and Sindhwani, Vikas and Varley, Jake and Lee, Honglak and Slotine, Jean-Jacques and Likhosherstov, Valerii and Weller, Adrian and Makadia, Ameesh},
booktitle = {ICLR 2020 Workshops: DeepDiffEq},
year = {2020},
url = {https://mlanthology.org/iclrw/2020/davis2020iclrw-time/}
}