Unveiling Multiple Descents in Unsupervised Autoencoders

Abstract

The phenomenon of double descent has challenged the traditional bias-variance trade-off in supervised learning, yet it remains largely unexplored in unsupervised learning, with some studies even arguing for its absence. In this study, we first demonstrate analytically that double descent does not occur in linear unsupervised autoencoders (AEs). In contrast, we show for the first time that both double and triple descent can be observed in nonlinear AEs across various data models and architectural designs. We examine the effects of partial sample and feature noise and highlight the critical role of bottleneck size in shaping the double descent curve. Through extensive experiments on both synthetic and real datasets, we uncover model-wise, epoch-wise, and sample-wise double descent across several data types and architectures. Our findings indicate that over-parameterized models not only improve reconstruction but also enhance performance in downstream tasks such as anomaly detection and domain adaptation, highlighting their practical value in complex real-world scenarios.
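The abstract does not include code, but the experimental setup it alludes to (sweeping the width of a nonlinear AE with a fixed bottleneck, trained on partially noisy data, and tracking test reconstruction error per width, the curve on which model-wise double descent would appear) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation; the architecture, data model, noise level, and training hyperparameters are all assumptions.

```python
# Minimal sketch (not the authors' code): train nonlinear autoencoders of
# increasing hidden width with a fixed bottleneck and record test
# reconstruction error. The low-rank-plus-noise data model, the noise
# fraction, and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


def make_ae(in_dim, hidden, bottleneck):
    # Symmetric nonlinear encoder/decoder with a fixed bottleneck size.
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, bottleneck), nn.ReLU(),
        nn.Linear(bottleneck, hidden), nn.ReLU(),
        nn.Linear(hidden, in_dim),
    )


def noisy_low_rank_data(n, in_dim, rank, noise_std, noisy_frac):
    # Low-rank signal plus Gaussian noise on a fraction of the samples.
    signal = torch.randn(n, rank) @ torch.randn(rank, in_dim)
    mask = (torch.rand(n, 1) < noisy_frac).float()
    return signal + mask * noise_std * torch.randn(n, in_dim)


def test_error_for_width(hidden, in_dim=32, bottleneck=4, n_train=256,
                         n_test=1024, epochs=500, lr=1e-3):
    x_train = noisy_low_rank_data(n_train, in_dim, rank=4,
                                  noise_std=1.0, noisy_frac=0.2)
    x_test = noisy_low_rank_data(n_test, in_dim, rank=4,
                                 noise_std=0.0, noisy_frac=0.0)
    model = make_ae(in_dim, hidden, bottleneck)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x_train), x_train).backward()
        opt.step()
    with torch.no_grad():
        return loss_fn(model(x_test), x_test).item()


if __name__ == "__main__":
    # Sweep hidden width to trace a (possible) model-wise descent curve.
    for hidden in [2, 4, 8, 16, 32, 64, 128, 256, 512]:
        print(hidden, test_error_for_width(hidden))
```

In such a sweep, the bottleneck size is held fixed while the hidden width varies, mirroring the abstract's point that the bottleneck, rather than raw parameter count alone, shapes the descent curve.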

Cite

Text

Rahimi et al. "Unveiling Multiple Descents in Unsupervised Autoencoders." Transactions on Machine Learning Research, 2025.

Markdown

[Rahimi et al. "Unveiling Multiple Descents in Unsupervised Autoencoders." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/rahimi2025tmlr-unveiling/)

BibTeX

@article{rahimi2025tmlr-unveiling,
  title     = {{Unveiling Multiple Descents in Unsupervised Autoencoders}},
  author    = {Rahimi, Kobi and Refael, Yehonathan and Tirer, Tom and Lindenbaum, Ofir},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/rahimi2025tmlr-unveiling/}
}