Nuclear Norm Regularization for Deep Learning

Abstract

Penalizing the nuclear norm of a function's Jacobian encourages it to locally behave like a low-rank linear map. Such functions vary locally along only a handful of directions, making the Jacobian nuclear norm a natural regularizer for machine learning problems. However, this regularizer is intractable for high-dimensional problems, as it requires computing a large Jacobian matrix and taking its SVD. We show how to efficiently penalize the Jacobian nuclear norm using techniques tailor-made for deep learning. We prove that for functions parametrized as compositions $f = g \circ h$, one may equivalently penalize the average of the squared Frobenius norms of $Jg$ and $Jh$. We then propose a denoising-style approximation that avoids the Jacobian computations altogether. Our method is simple, efficient, and accurate, enabling Jacobian nuclear norm regularization to scale to high-dimensional deep learning problems. We complement our theory with an empirical study of our regularizer's performance and investigate applications to denoising and representation learning.
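
The equivalence rests on the variational characterization of the nuclear norm together with the chain rule. For any matrix $M$,

$$
\|M\|_* \;=\; \min_{A, B \,:\, AB = M} \tfrac{1}{2}\left(\|A\|_F^2 + \|B\|_F^2\right),
\qquad\text{and}\qquad
Jf(x) = Jg(h(x))\, Jh(x),
$$

so $\tfrac{1}{2}\left(\|Jg(h(x))\|_F^2 + \|Jh(x)\|_F^2\right) \ge \|Jf(x)\|_*$ for every factorization, with equality when the two factors are optimally balanced; the paper's theorem makes this equivalence precise for the regularized learning problem.

Below is a minimal sketch, not the authors' implementation, of how such a penalty could be computed in PyTorch without forming any Jacobian explicitly, using the identity $\mathbb{E}_{v \sim \mathcal{N}(0, I)}\|J v\|^2 = \|J\|_F^2$ so that a single Jacobian-vector product per term suffices. The names `g`, `h`, `frobenius_sq`, and `nuclear_norm_surrogate` are illustrative, not from the paper.

```python
import torch
from torch.func import jvp  # forward-mode Jacobian-vector products


def frobenius_sq(fn, x):
    """One-sample unbiased estimate of ||J fn(x)||_F^2 per batch element.

    For v ~ N(0, I), E_v ||J v||^2 = trace(J^T J) = ||J||_F^2, so a single
    JVP replaces the full Jacobian. Assumes x and fn(x) carry a leading
    batch dimension.
    """
    v = torch.randn_like(x)
    _, jv = jvp(fn, (x,), (v,))
    return jv.flatten(start_dim=1).pow(2).sum(dim=1).mean()


def nuclear_norm_surrogate(g, h, x):
    """Average of the squared Frobenius norms of Jh at x and Jg at h(x).

    By the variational characterization above and the chain rule, this
    upper-bounds the nuclear norm of the Jacobian of f = g o h at x.
    """
    z = h(x).detach()  # simplification: Jg is evaluated at fixed features
    return 0.5 * (frobenius_sq(h, x) + frobenius_sq(g, z))


# Usage inside a training step (lam is a regularization weight):
# loss = task_loss(g(h(x)), y) + lam * nuclear_norm_surrogate(g, h, x)
```

The denoising-style approximation mentioned in the abstract avoids even these JVPs; one natural finite-difference analogue would replace $\|Jh(x)v\|^2$ by $\|h(x + \sigma v) - h(x)\|^2 / \sigma^2$ for small $\sigma$, though the paper should be consulted for its exact construction.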

Cite

Text

Scarvelis and Solomon. "Nuclear Norm Regularization for Deep Learning." Neural Information Processing Systems, 2024. doi:10.52202/079017-3691

Markdown

[Scarvelis and Solomon. "Nuclear Norm Regularization for Deep Learning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/scarvelis2024neurips-nuclear/) doi:10.52202/079017-3691

BibTeX

@inproceedings{scarvelis2024neurips-nuclear,
  title     = {{Nuclear Norm Regularization for Deep Learning}},
  author    = {Scarvelis, Christopher and Solomon, Justin},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-3691},
  url       = {https://mlanthology.org/neurips/2024/scarvelis2024neurips-nuclear/}
}