Learning with Hyperspherical Uniformity

Abstract

Due to their over-parameterized nature, neural networks are a powerful tool for nonlinear function approximation. To achieve good generalization on unseen data, a suitable inductive bias is of great importance for neural networks. One of the most straightforward ways to impose such a bias is to regularize the network with additional objectives. L2 regularization serves as a standard regularization for neural networks. Despite its popularity, it essentially regularizes only one dimension of each individual neuron (its magnitude), which is not strong enough to control the capacity of highly over-parameterized neural networks. Motivated by this, hyperspherical uniformity is proposed as a novel family of relational regularizations that shape the interaction among neurons. We consider several geometrically distinct ways to achieve hyperspherical uniformity. The effectiveness of hyperspherical uniformity is justified by theoretical insights and empirical evaluations.
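One way such a relational regularizer can be realized is as a pairwise repulsive energy on weight vectors projected onto the unit hypersphere, which is minimized when the neurons spread out uniformly. The sketch below is a minimal NumPy illustration of this idea using a Riesz-style kernel; the function name and kernel choice are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def hyperspherical_energy(W, s=1.0, eps=1e-8):
    """Pairwise repulsive energy of neurons on the unit hypersphere.

    W: (n_neurons, dim) weight matrix. Lower energy corresponds to the
    normalized weight vectors being spread more uniformly. Illustrative
    sketch only; not the paper's exact formulation.
    """
    # Project each neuron's weight vector onto the unit hypersphere,
    # discarding its magnitude (the part L2 regularization acts on).
    U = W / (np.linalg.norm(W, axis=1, keepdims=True) + eps)
    # Pairwise Euclidean distances between the normalized neurons.
    diff = U[:, None, :] - U[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # Sum a Riesz-style kernel 1/d^s over distinct pairs: the energy is
    # large when neurons point in similar directions, small when spread.
    iu = np.triu_indices(len(W), k=1)
    return np.sum(1.0 / (dist[iu] ** s + eps))

# Nearly collinear neurons incur much higher energy than well-spread ones,
# so adding this term to the training loss pushes neurons apart.
clustered = np.array([[1.0, 0.01], [1.0, -0.01], [0.99, 0.0]])
spread = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
```

In practice such a term would be added to the task loss with a small weight, complementing rather than replacing magnitude-based penalties like L2.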

Cite

Text

Liu et al. "Learning with Hyperspherical Uniformity." Artificial Intelligence and Statistics, 2021.

Markdown

[Liu et al. "Learning with Hyperspherical Uniformity." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/liu2021aistats-learning/)

BibTeX

@inproceedings{liu2021aistats-learning,
  title     = {{Learning with Hyperspherical Uniformity}},
  author    = {Liu, Weiyang and Lin, Rongmei and Liu, Zhen and Xiong, Li and Schölkopf, Bernhard and Weller, Adrian},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2021},
  pages     = {1180--1188},
  volume    = {130},
  url       = {https://mlanthology.org/aistats/2021/liu2021aistats-learning/}
}