Topologically Densified Distributions
Abstract
We study regularization in the context of small sample-size learning with over-parametrized neural networks. Specifically, we shift focus from architectural properties, such as norms on the network weights, to properties of the internal representations before a linear classifier. In particular, we impose a topological constraint on samples drawn from the probability measure induced in that space. This provably leads to mass concentration effects around the representations of training instances, i.e., a property beneficial for generalization. By leveraging previous work on imposing topological constraints in a neural network setting, we provide empirical evidence (across various vision benchmarks) to support our claim of better generalization.
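To make the constraint concrete, below is a minimal sketch of a topological regularizer in the spirit of this paper: 0-dimensional persistence death times of a Vietoris-Rips filtration on a point cloud coincide with the edge lengths of a Euclidean minimum spanning tree, so they can be computed with Prim's algorithm and pushed toward a target scale. The function names, the target scale `beta`, the per-class batching, and the L1 penalty form are illustrative assumptions here, not the authors' released implementation.

```python
# Sketch of a 0-dim persistent-homology regularizer on latent representations.
# Assumption: each within-class batch should form a single beta-connected blob;
# beta and the |d - beta| penalty are our illustrative choices.

import torch


def zero_dim_death_times(z: torch.Tensor) -> torch.Tensor:
    """0-dim death times of a Vietoris-Rips filtration on the points in z,
    i.e., the edge lengths of a Euclidean minimum spanning tree, computed
    with Prim's algorithm. Differentiable w.r.t. z, since we only ever
    index into the pairwise-distance matrix.

    z: (n, d) batch of latent representations (all from one class).
    returns: (n - 1,) tensor of MST edge lengths (the death times).
    """
    n = z.shape[0]
    dist = torch.cdist(z, z)                       # (n, n) pairwise distances
    in_tree = torch.zeros(n, dtype=torch.bool, device=z.device)
    in_tree[0] = True
    best = dist[0].clone()                         # distance of each vertex to the tree
    deaths = []
    for _ in range(n - 1):
        best_masked = best.masked_fill(in_tree, float("inf"))
        j = torch.argmin(best_masked)              # nearest vertex outside the tree
        deaths.append(best_masked[j])              # its attaching edge = a death time
        in_tree[j] = True
        best = torch.minimum(best, dist[j])        # relax distances via new vertex
    return torch.stack(deaths)


def topo_regularizer(z: torch.Tensor, y: torch.Tensor, beta: float) -> torch.Tensor:
    """Penalize deviation of within-class death times from the target scale beta."""
    loss = z.new_zeros(())
    for c in torch.unique(y):
        zc = z[y == c]
        if zc.shape[0] < 2:                        # need at least one MST edge
            continue
        loss = loss + (zero_dim_death_times(zc) - beta).abs().sum()
    return loss
```

Added to a standard cross-entropy objective with a weighting hyperparameter, such a term drives the latent codes of each class toward a single beta-connected component, which is the mass-concentration effect the abstract refers to.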
BibTeX
@inproceedings{hofer2020icml-topologically,
title = {{Topologically Densified Distributions}},
author = {Hofer, Christoph and Graf, Florian and Niethammer, Marc and Kwitt, Roland},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {4304--4313},
volume = {119},
url = {https://mlanthology.org/icml/2020/hofer2020icml-topologically/}
}