Convolutional Auto-Encoder with Tensor-Train Factorization

Abstract

Convolutional auto-encoders (CAEs) are widely used for general-purpose feature extraction, image reconstruction, image denoising, and other machine learning tasks. Despite their many successes, CAEs, like other convolutional networks, often suffer from over-parameterization when trained on small or moderate-sized datasets. In such cases, CAEs incur excess computational and memory overhead as well as degraded performance due to parameter over-fitting. In this work, we introduce CAE-TT: a CAE with a tunable tensor-train (TT) structure imposed on its convolution and transpose-convolution filters. By tuning the TT-ranks, CAE-TT can adjust the number of its learnable parameters without changing the network architecture. In our numerical studies, we demonstrate the performance of the proposed method and compare it with alternatives in both batch and online learning settings.
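
To make the TT parameterization concrete, below is a minimal sketch (PyTorch) of a convolution layer whose kernel is stored as tensor-train cores and contracted back into a full kernel at forward time. The channel factorization, core shapes, and the class name TTConv2d are illustrative assumptions for this sketch, not the exact parameterization used in the paper.

# Minimal sketch: a Conv2d whose kernel is stored as TT cores.
# Assumptions (not from the paper): C_in = i1*i2, C_out = o1*o2, two cores,
# and a single shared TT-rank r.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TTConv2d(nn.Module):
    def __init__(self, in_factors=(4, 8), out_factors=(8, 8),
                 kernel_size=3, rank=4):
        super().__init__()
        i1, i2 = in_factors
        o1, o2 = out_factors
        self.in_channels = i1 * i2
        self.out_channels = o1 * o2
        self.kernel_size = kernel_size
        # Core 1 couples the spatial window with the first factor pair (i1, o1).
        self.core1 = nn.Parameter(
            torch.randn(kernel_size * kernel_size, i1, o1, rank) * 0.02)
        # Core 2 couples the TT-rank with the second factor pair (i2, o2).
        self.core2 = nn.Parameter(torch.randn(rank, i2, o2) * 0.02)

    def forward(self, x):
        # Contract the cores into a full kernel:
        # indices s=spatial, a=i1, c=o1, r=rank, b=i2, d=o2.
        kernel = torch.einsum('sacr,rbd->scdab', self.core1, self.core2)
        k = self.kernel_size
        kernel = kernel.reshape(k, k, self.out_channels, self.in_channels)
        kernel = kernel.permute(2, 3, 0, 1).contiguous()  # (C_out, C_in, k, k)
        return F.conv2d(x, kernel, padding=k // 2)


# Quick check: 32 -> 64 channels; with rank=4 the layer stores
# 9*4*8*4 + 4*8*8 = 1408 parameters versus 3*3*32*64 = 18432 for a full kernel.
layer = TTConv2d()
y = layer(torch.randn(1, 32, 28, 28))  # -> shape (1, 64, 28, 28)

Note that raising or lowering rank changes only the parameter count; the input and output shapes, and hence the surrounding architecture, stay fixed, mirroring the tunability described in the abstract.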

Cite

Text

Sharma et al. "Convolutional Auto-Encoder with Tensor-Train Factorization." IEEE/CVF International Conference on Computer Vision Workshops, 2021. doi:10.1109/ICCVW54120.2021.00027

Markdown

[Sharma et al. "Convolutional Auto-Encoder with Tensor-Train Factorization." IEEE/CVF International Conference on Computer Vision Workshops, 2021.](https://mlanthology.org/iccvw/2021/sharma2021iccvw-convolutional/) doi:10.1109/ICCVW54120.2021.00027

BibTeX

@inproceedings{sharma2021iccvw-convolutional,
  title     = {{Convolutional Auto-Encoder with Tensor-Train Factorization}},
  author    = {Sharma, Manish and Markopoulos, Panos P. and Saber, Eli and Asif, M. Salman and Prater-Bennette, Ashley},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2021},
  pages     = {198--206},
  doi       = {10.1109/ICCVW54120.2021.00027},
  url       = {https://mlanthology.org/iccvw/2021/sharma2021iccvw-convolutional/}
}