Tucker Decomposition for Interpretable Neural Ordinary Differential Equations

Abstract

Tensorial and polynomial networks have emerged as effective tools in various fields, particularly for modeling the multilinear relationships among input variables. More recently, polynomial networks factorized using the canonical polyadic tensor decomposition (CPD) have been successfully used in the problem of system dynamics identification, where the relations between variables are often polynomial. This paper introduces TuckerNet, a more general tensorial network that employs Tucker decomposition, thereby providing enhanced flexibility and expressivity in model construction. The study evaluates the performance of TuckerNet, comparing it against CPD-based networks in learning functions and identifying ordinary differential equation dynamics. The findings demonstrate the potential of TuckerNet as a superior alternative for tensorial network construction, particularly when constraining the number of parameters, while also highlighting aspects beyond decomposition that impact learning outcomes.
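To make the contrast concrete, the following is a minimal NumPy sketch of how a tensor is reconstructed from a Tucker model (a small core tensor plus one factor matrix per mode); it illustrates the decomposition the abstract refers to, not the authors' actual TuckerNet implementation, and all function names here are illustrative.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    # Multiply `tensor` by `matrix` along axis `mode`:
    # result[..., i, ...] = sum_r tensor[..., r, ...] * matrix[i, r]
    return np.moveaxis(np.tensordot(tensor, matrix, axes=(mode, 1)), -1, mode)

def tucker_reconstruct(core, factors):
    # Rebuild the full tensor from a Tucker core and its factor matrices
    # by applying a mode-n product for each mode in turn.
    out = core
    for mode, factor in enumerate(factors):
        out = mode_n_product(out, factor, mode)
    return out

# Toy example: a rank-(2, 2, 2) Tucker model of a 4x5x6 tensor.
rng = np.random.default_rng(0)
core = rng.standard_normal((2, 2, 2))
factors = [rng.standard_normal((dim, 2)) for dim in (4, 5, 6)]
full = tucker_reconstruct(core, factors)
print(full.shape)  # (4, 5, 6)
```

CPD is recovered as the special case where the core is superdiagonal (nonzero only on entries with equal indices), which is one way to see why Tucker offers strictly more flexibility at a given rank budget.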

Cite

Text

Halatsis et al. "Tucker Decomposition for Interpretable Neural Ordinary Differential Equations." ICLR 2024 Workshops: AI4DiffEqtnsInSci, 2024.

Markdown

[Halatsis et al. "Tucker Decomposition for Interpretable Neural Ordinary Differential Equations." ICLR 2024 Workshops: AI4DiffEqtnsInSci, 2024.](https://mlanthology.org/iclrw/2024/halatsis2024iclrw-tucker/)

BibTeX

@inproceedings{halatsis2024iclrw-tucker,
  title     = {{Tucker Decomposition for Interpretable Neural Ordinary Differential Equations}},
  author    = {Halatsis, Dimitrios and Chrysos, Grigorios and Pereira, Joao and Alummoottil, Michael},
  booktitle = {ICLR 2024 Workshops: AI4DiffEqtnsInSci},
  year      = {2024},
  url       = {https://mlanthology.org/iclrw/2024/halatsis2024iclrw-tucker/}
}