Differentiable Programming for Piecewise Polynomial Functions
Abstract
We introduce a new, principled approach to extend gradient-based optimization to piecewise smooth models, such as k-histograms, splines, and segmentation maps. We derive an accurate form of the weak Jacobian of such functions and show that it exhibits a block-sparse structure that can be computed implicitly and efficiently. We show that using the redesigned Jacobian leads to improved performance in applications such as denoising with piecewise polynomial regression models, data-free generative model training, and image segmentation.
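The paper's own implementation is not reproduced here, but the core idea, a weak Jacobian whose block-sparse structure lets the backward pass be computed implicitly rather than as a dense matrix, can be sketched for the simplest case. The following minimal PyTorch sketch assumes a piecewise-constant (k-histogram) fit with *fixed* segment boundaries (the paper's derivation also handles more general piecewise polynomial models); the class name `PiecewiseConstantFit` and the `breaks` argument are illustrative assumptions, not the authors' API.

```python
import torch

class PiecewiseConstantFit(torch.autograd.Function):
    """y_i = mean of x over the fixed segment containing index i.

    For fixed breakpoints this map is linear, and its (weak) Jacobian is
    block diagonal: each m-sample segment contributes a (1/m) * ones(m, m)
    block. The backward pass exploits this implicitly via segment-wise
    averaging, never materializing the n-by-n matrix.
    """

    @staticmethod
    def forward(ctx, x, breaks):
        # breaks: sorted list of segment start indices, e.g. [0, 3, 7]
        ctx.breaks, ctx.n = breaks, x.shape[-1]
        y = x.clone()
        for a, b in zip(breaks, list(breaks[1:]) + [ctx.n]):
            y[..., a:b] = x[..., a:b].mean(dim=-1, keepdim=True)
        return y

    @staticmethod
    def backward(ctx, grad_out):
        # J is symmetric and block diagonal, so J^T g reduces to
        # averaging the incoming gradient within each segment.
        g = grad_out.clone()
        for a, b in zip(ctx.breaks, list(ctx.breaks[1:]) + [ctx.n]):
            g[..., a:b] = grad_out[..., a:b].mean(dim=-1, keepdim=True)
        return g, None  # no gradient for the integer breakpoints

x = torch.randn(10, requires_grad=True)
y = PiecewiseConstantFit.apply(x, [0, 3, 7])
y.sum().backward()  # x.grad obtained without forming the dense Jacobian
```

Because each segment only touches its own block, both the forward and backward passes run in O(n) time and memory, which is the practical payoff of the block-sparse structure the abstract describes.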
Cite
Text
Cho et al. "Differentiable Programming for Piecewise Polynomial Functions." NeurIPS 2020 Workshops: LMCA, 2020.
Markdown
[Cho et al. "Differentiable Programming for Piecewise Polynomial Functions." NeurIPS 2020 Workshops: LMCA, 2020.](https://mlanthology.org/neuripsw/2020/cho2020neuripsw-differentiable/)
BibTeX
@inproceedings{cho2020neuripsw-differentiable,
  title     = {{Differentiable Programming for Piecewise Polynomial Functions}},
  author    = {Cho, Minsu and Joshi, Ameya and Lee, Xian Yeow and Balu, Aditya and Krishnamurthy, Adarsh and Ganapathysubramanian, Baskar and Sarkar, Soumik and Hegde, Chinmay},
  booktitle = {NeurIPS 2020 Workshops: LMCA},
  year      = {2020},
  url       = {https://mlanthology.org/neuripsw/2020/cho2020neuripsw-differentiable/}
}