Self-Orthogonality Module: A Network Architecture Plug-in for Learning Orthogonal Filters
Abstract
In this paper, we investigate the empirical impact of orthogonality regularization (OR) in deep learning, either used alone or in combination with other techniques. Recent works on OR have reported promising accuracy improvements. In our ablation study, however, we do not observe such significant improvement from existing OR techniques compared with conventional training based on weight decay, dropout, and batch normalization. To identify the real gain from OR, inspired by locality sensitive hashing (LSH) in angle estimation, we propose to introduce an implicit self-regularization into OR that pushes the mean and variance of filter angles in a network towards 90° and 0° simultaneously to achieve (near) orthogonality among the filters, without using any other explicit regularization. Our regularization can be implemented as an architectural plug-in and integrated with an arbitrary network. We reveal that OR helps stabilize the training process and leads to faster convergence and better generalization.
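The abstract does not spell out the regularizer's form, and the paper realizes it implicitly as an architectural plug-in. Purely as an illustration of the stated objective, the PyTorch sketch below writes it as an explicit penalty that drives the mean of the pairwise filter angles toward 90° and their variance toward 0°; the function name filter_angle_penalty is hypothetical and not from the paper.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def filter_angle_penalty(conv: nn.Conv2d) -> torch.Tensor:
    """Illustrative explicit penalty: mean pairwise filter angle -> 90 degrees,
    variance of the angles -> 0. The paper itself achieves this implicitly
    via an architectural plug-in, not via an added loss term like this one."""
    # Flatten each output filter into a row vector and normalize to unit length.
    w = conv.weight.flatten(1)          # (out_channels, in_channels * kh * kw)
    w = F.normalize(w, dim=1)
    # Pairwise cosines between filters; keep only distinct pairs (upper triangle).
    cos = w @ w.t()
    idx = torch.triu_indices(cos.size(0), cos.size(0), offset=1)
    pair_cos = cos[idx[0], idx[1]].clamp(-1 + 1e-7, 1 - 1e-7)
    angles = torch.acos(pair_cos)       # pairwise angles in radians
    # Penalize deviation of the mean from pi/2 (90 degrees) and nonzero variance.
    return (angles.mean() - math.pi / 2) ** 2 + angles.var()

Used this way, the penalty would be weighted and added to the task loss, e.g. loss = task_loss + lam * filter_angle_penalty(conv) for some assumed coefficient lam; again, this differs from the paper's implicit scheme and is only meant to make the 90°/0° objective concrete.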
Cite
Text
Zhang et al. "Self-Orthogonality Module: A Network Architecture Plug-in for Learning Orthogonal Filters." Winter Conference on Applications of Computer Vision, 2020.
Markdown
[Zhang et al. "Self-Orthogonality Module: A Network Architecture Plug-in for Learning Orthogonal Filters." Winter Conference on Applications of Computer Vision, 2020.](https://mlanthology.org/wacv/2020/zhang2020wacv-selforthogonality/)
BibTeX
@inproceedings{zhang2020wacv-selforthogonality,
title = {{Self-Orthogonality Module: A Network Architecture Plug-in for Learning Orthogonal Filters}},
author = {Zhang, Ziming and Ma, Wenchi and Wu, Yuanwei and Wang, Guanghui},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2020},
url = {https://mlanthology.org/wacv/2020/zhang2020wacv-selforthogonality/}
}