On the Abilities of Mathematical Extrapolation with Implicit Models

Abstract

Deep neural networks excel on a variety of different tasks, often surpassing human abilities. However, when presented with out-of-distribution data, these models tend to break down even on the simplest tasks. In this paper, we compare the robustness of implicitly-defined and classical deep learning models on a series of mathematical extrapolation tasks, where the models are tested with out-of-distribution samples at inference time. Throughout our experiments, implicit models greatly outperform classical deep learning networks that overfit the training distribution. We showcase implicit models' unique advantages for mathematical extrapolation thanks to their flexible and selective framework. Implicit models, with potentially unlimited depth, not only adapt well to out-of-distribution inputs but also capture the underlying structure of the inputs far better.
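For readers unfamiliar with the term, the "implicitly-defined" models in the abstract refer to the implicit deep learning framework of El Ghaoui et al., in which the hidden state is given by a fixed-point (equilibrium) equation rather than a finite stack of layers; iterating the equation to convergence is what gives these models their "potentially unlimited depth". A sketch of that general formulation (taken from the broader implicit-model literature, not from this paper's abstract) is:

```latex
% Implicit model: hidden state x solves a fixed-point equation
% for input u, with phi a componentwise nonlinearity (e.g. ReLU).
x = \phi(Ax + Bu), \qquad \hat{y}(u) = Cx + Du
```

Here $A, B, C, D$ are trained weight matrices; well-posedness conditions on $A$ (e.g. a norm bound) guarantee that the equilibrium $x$ exists and is unique.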

Cite

Text

Decugis et al. "On the Abilities of Mathematical Extrapolation with Implicit Models." NeurIPS 2022 Workshops: DistShift, 2022.

Markdown

[Decugis et al. "On the Abilities of Mathematical Extrapolation with Implicit Models." NeurIPS 2022 Workshops: DistShift, 2022.](https://mlanthology.org/neuripsw/2022/decugis2022neuripsw-abilities/)

BibTeX

@inproceedings{decugis2022neuripsw-abilities,
  title     = {{On the Abilities of Mathematical Extrapolation with Implicit Models}},
  author    = {Decugis, Juliette and Emerling, Max and Ganesh, Ashwin and Tsai, Alicia Y. and El Ghaoui, Laurent},
  booktitle = {NeurIPS 2022 Workshops: DistShift},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/decugis2022neuripsw-abilities/}
}