Differentiable Spline Approximations

Abstract

The paradigm of differentiable programming has significantly enhanced the scope of machine learning via the judicious use of gradient-based optimization. However, standard differentiable programming methods (such as autodiff) typically require that the machine learning models be differentiable, limiting their applicability. Our goal in this paper is to use a new, principled approach to extend gradient-based optimization to functions well modeled by splines, which encompass a large family of piecewise polynomial models. We derive the form of the (weak) Jacobian of such functions and show that it exhibits a block-sparse structure that can be computed implicitly and efficiently. Overall, we show that leveraging this redesigned Jacobian in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications such as image segmentation, 3D point cloud reconstruction, and finite element analysis. We also open-source the code at https://github.com/idealab-isu/DSA.
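To make the block-sparse Jacobian idea concrete, below is a minimal sketch (not the authors' DSA implementation; their code is in the repository above) for the simplest case: a fixed-knot B-spline curve, whose evaluation is linear in the control points. There, the Jacobian of the sampled curve with respect to the control points is the banded basis matrix itself, so a custom backward pass can apply it implicitly rather than materializing a dense Jacobian. The names bspline_basis and SplineEval are illustrative, not from the paper.

import torch

def bspline_basis(x, knots, degree):
    """Cox-de Boor recursion. Returns an (len(x), n_ctrl) basis matrix in
    which each row has at most degree+1 nonzeros -- the banded structure
    that makes the Jacobian block-sparse."""
    B = ((x[:, None] >= knots[None, :-1]) & (x[:, None] < knots[None, 1:])).to(x.dtype)
    for k in range(1, degree + 1):
        left_den = knots[k:-1] - knots[:-k - 1]
        right_den = knots[k + 1:] - knots[1:-k]
        # Standard convention: a term with a zero denominator is zero; replacing
        # the denominator by 1 is safe because the matching basis function
        # is identically zero there (relevant only for repeated knots).
        left_den = torch.where(left_den > 0, left_den, torch.ones_like(left_den))
        right_den = torch.where(right_den > 0, right_den, torch.ones_like(right_den))
        left = (x[:, None] - knots[None, :-k - 1]) / left_den
        right = (knots[None, k + 1:] - x[:, None]) / right_den
        B = left * B[:, :-1] + right * B[:, 1:]
    return B

class SplineEval(torch.autograd.Function):
    """Evaluate a spline at fixed sample points. The backward pass applies the
    transposed banded basis matrix (the Jacobian) implicitly."""
    @staticmethod
    def forward(ctx, ctrl, basis):
        ctx.save_for_backward(basis)
        return basis @ ctrl
    @staticmethod
    def backward(ctx, grad_out):
        (basis,) = ctx.saved_tensors
        return basis.t() @ grad_out, None

# Usage: fit cubic-spline control points to noisy 1D samples by gradient descent.
degree, n_ctrl = 3, 10
knots = torch.linspace(0.0, 1.0, n_ctrl + degree + 1)
x = torch.linspace(knots[degree] + 1e-4, knots[-degree - 1] - 1e-4, 200)
basis = bspline_basis(x, knots, degree)          # (200, n_ctrl), banded
target = torch.sin(6.0 * x).unsqueeze(1)
ctrl = torch.zeros(n_ctrl, 1, requires_grad=True)
opt = torch.optim.Adam([ctrl], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = ((SplineEval.apply(ctrl, basis) - target) ** 2).mean()
    loss.backward()                              # uses the block-sparse Jacobian
    opt.step()

The paper's contribution goes further than this linear special case (e.g., differentiating through spline approximation itself via the weak Jacobian), but the sketch shows the structural point: each output sample depends on only degree+1 control points, so gradients can be propagated without ever forming a dense Jacobian.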

Cite

Text

Cho et al. "Differentiable Spline Approximations." Neural Information Processing Systems, 2021.

Markdown

[Cho et al. "Differentiable Spline Approximations." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/cho2021neurips-differentiable/)

BibTeX

@inproceedings{cho2021neurips-differentiable,
  title     = {{Differentiable Spline Approximations}},
  author    = {Cho, Minsu and Balu, Aditya and Joshi, Ameya and Prasad, Anjana Deva and Khara, Biswajit and Sarkar, Soumik and Ganapathysubramanian, Baskar and Krishnamurthy, Adarsh and Hegde, Chinmay},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/cho2021neurips-differentiable/}
}