MAGMA: Manifold Regularization for MAEs

Abstract

Masked Autoencoders (MAEs) represent an important direction in self-supervised learning (SSL) because, unlike contrastive frameworks, they do not depend on augmentation techniques to generate positive (and/or negative) pairs. Their masking-and-reconstruction strategy also aligns closely with SSL approaches in natural language processing. However, most MAEs are built on Transformer-based architectures in which visual features are not regularized, in contrast to their convolutional neural network (CNN) based counterparts, which can potentially hinder their performance. To address this, we introduce MAGMA, a novel batch-wide, layer-wise regularization loss applied to the representations of different Transformer layers. We demonstrate that plugging in the proposed regularization loss significantly improves the performance of MAE-based models. We further demonstrate the impact of the proposed loss on other generic SSL approaches (such as VICReg and SimCLR), broadening the impact of the proposed approach. Our code base can be found at https://github.com/adondera/magma.
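
The exact loss is defined in the paper and the linked repository. Purely as an illustration of what a batch-wide, layer-wise manifold regularizer can look like, the sketch below compares the pairwise-distance structure of a batch of representations taken from two Transformer layers and penalizes their mismatch. The function names, the token mean-pooling, and the scale normalization are assumptions made for this sketch, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def pairwise_distances(x: torch.Tensor) -> torch.Tensor:
    """Batch-wide pairwise Euclidean distances between sample representations."""
    x = x.flatten(start_dim=1)      # (B, D)
    return torch.cdist(x, x, p=2)   # (B, B)


def manifold_reg_loss(feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
    """Illustrative regularizer: match the batch-wise distance structure (the
    'manifold') of representations from two different Transformer layers."""
    d_a = pairwise_distances(feats_a)
    d_b = pairwise_distances(feats_b)
    # Normalize each distance matrix so the penalty is scale-invariant across layers
    # (an assumption of this sketch, not necessarily the paper's formulation).
    d_a = d_a / (d_a.mean() + 1e-8)
    d_b = d_b / (d_b.mean() + 1e-8)
    return F.mse_loss(d_a, d_b)


# Toy usage: ViT-style token representations for a batch of 8 samples at two layers.
layer_i = torch.randn(8, 197, 768)
layer_j = torch.randn(8, 197, 768)
# Pool tokens to one vector per sample, then compare distance structures.
reg = manifold_reg_loss(layer_i.mean(dim=1), layer_j.mean(dim=1))
print(reg.item())
```

In practice such a term would be added, with a weighting coefficient, to the standard MAE reconstruction loss; see the repository above for the actual training code.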

Cite

Text

Dondera et al. "MAGMA: Manifold Regularization for MAEs." Winter Conference on Applications of Computer Vision, 2025.

Markdown

[Dondera et al. "MAGMA: Manifold Regularization for MAEs." Winter Conference on Applications of Computer Vision, 2025.](https://mlanthology.org/wacv/2025/dondera2025wacv-magma/)

BibTeX

@inproceedings{dondera2025wacv-magma,
  title     = {{MAGMA: Manifold Regularization for MAEs}},
  author    = {Dondera, Alin-Eugen and Singh, Anuj R and Jamali-Rad, Hadi},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2025},
  pages     = {6890--6899},
  url       = {https://mlanthology.org/wacv/2025/dondera2025wacv-magma/}
}