Batch-Efficient EigenDecomposition for Small and Medium Matrices
Abstract
EigenDecomposition (ED) is at the heart of many computer vision algorithms and applications. One crucial bottleneck limiting its usage is the expensive computation cost, particularly for a mini-batch of matrices in deep neural networks. In this paper, we propose a QR-based ED method dedicated to the application scenarios of computer vision. Our proposed method performs the ED entirely by batched matrix/vector multiplication, which processes all the matrices simultaneously and thus fully utilizes the power of GPUs. Our technique is based on explicit QR iterations using Givens rotations with double Wilkinson shifts. With several acceleration techniques, the time complexity of the QR iterations is reduced from $O(n^5)$ to $O(n^3)$. The numerical test shows that for small and medium batched matrices (\emph{e.g.,} $\mathrm{dim}<32$) our method can be much faster than the PyTorch SVD function. Experimental results on visual recognition and image generation demonstrate that our method also achieves competitive performance.
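As a rough illustration of the batched idea described in the abstract (not the authors' implementation), the sketch below applies a single Givens rotation to a whole mini-batch of symmetric matrices using only batched matrix multiplication in PyTorch. The function name `batched_givens` and all parameters are hypothetical placeholders; the full method additionally involves shifted QR sweeps and further acceleration techniques not shown here.

```python
import torch

def batched_givens(A, i, j):
    # A: (B, n, n) mini-batch of symmetric matrices.
    # Build one Givens rotation per matrix so that the (j, i) entry of
    # G @ A is annihilated for every matrix in the batch at once.
    a = A[:, i, i]                     # (B,) pivot entries
    b = A[:, j, i]                     # (B,) entries to annihilate
    r = torch.sqrt(a * a + b * b).clamp_min(1e-12)
    c, s = a / r, b / r

    B, n, _ = A.shape
    G = torch.eye(n, dtype=A.dtype, device=A.device).expand(B, n, n).clone()
    G[:, i, i] = c
    G[:, j, j] = c
    G[:, i, j] = s
    G[:, j, i] = -s

    # Orthogonal similarity transform G A G^T, executed as two batched
    # matmuls; on a GPU all B matrices are rotated simultaneously.
    return G @ A @ G.transpose(-1, -2)

# Toy usage: a mini-batch of 256 small covariance matrices (16 x 16).
A = torch.randn(256, 16, 16)
A = A @ A.transpose(-1, -2)
A = batched_givens(A, 0, 1)   # rotate rows/columns 0 and 1 of every matrix
```

The point of the sketch is that the rotation is never applied matrix by matrix: constructing the batch of rotation matrices and multiplying them with `@` lets a single GPU kernel handle the entire mini-batch, which is the source of the speedup over looping a per-matrix solver.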
Cite
Text
Song et al. "Batch-Efficient EigenDecomposition for Small and Medium Matrices." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-20050-2_34

Markdown

[Song et al. "Batch-Efficient EigenDecomposition for Small and Medium Matrices." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/song2022eccv-batchefficient/) doi:10.1007/978-3-031-20050-2_34

BibTeX
@inproceedings{song2022eccv-batchefficient,
title = {{Batch-Efficient EigenDecomposition for Small and Medium Matrices}},
author = {Song, Yue and Sebe, Nicu and Wang, Wei},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-20050-2_34},
url = {https://mlanthology.org/eccv/2022/song2022eccv-batchefficient/}
}