Engineering the Neural Collapse Geometry of Supervised-Contrastive Loss (Student Abstract)

Abstract

Supervised-contrastive loss (SCL) is an alternative to cross-entropy (CE) for classification tasks that exploits similarities in the embedding space to learn richer representations. Previous works have used trainable prototypes to improve the test accuracy of SCL when training under class imbalance. In this work, we propose using fixed prototypes to engineer the feature geometry when training with SCL. We gain further insights by considering a limiting scenario in which the number of prototypes far exceeds the original batch size. Through this, we establish a connection to CE loss with a fixed classifier and normalized embeddings. We validate our findings through a series of experiments with deep neural networks on benchmark vision datasets.
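The idea of fixed prototypes can be illustrated with a minimal sketch: append one fixed unit-norm prototype per class to each batch and compute the standard supervised-contrastive loss over the augmented batch. The code below is an assumption-laden illustration, not the authors' implementation; it uses a simplex equiangular tight frame (ETF) for the prototypes, the geometry associated with neural collapse, and numpy in place of a deep-learning framework. Function names (`simplex_etf`, `scl_with_prototypes`) and the temperature default are hypothetical choices for the sketch.

```python
import numpy as np

def simplex_etf(num_classes: int) -> np.ndarray:
    """Rows are C unit-norm prototypes forming a simplex ETF in R^C:
    pairwise inner product -1/(C-1), the neural-collapse geometry."""
    C = num_classes
    return np.sqrt(C / (C - 1)) * (np.eye(C) - np.ones((C, C)) / C)

def scl_with_prototypes(feats, labels, protos, proto_labels, temperature=0.1):
    """Supervised-contrastive loss over a batch augmented with fixed
    class prototypes, which act as extra anchors and positives."""
    z = np.concatenate([feats, protos])        # augment batch with prototypes
    y = np.concatenate([labels, proto_labels])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # normalize embeddings
    sim = z @ z.T / temperature
    n = len(y)
    self_mask = np.eye(n, dtype=bool)
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    exp = np.exp(logits)
    exp[self_mask] = 0.0                               # exclude self from denominator
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))
    pos = (y[:, None] == y[None, :]) & ~self_mask      # same-class pairs
    per_anchor = -(log_prob * pos).sum(axis=1) / pos.sum(axis=1)
    return per_anchor.mean()
```

Because every sample shares its class with a fixed prototype, each anchor is guaranteed at least one positive, even for classes with a single sample in the batch, which is one practical benefit of prototype augmentation under imbalance.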

Cite

Text

Gill et al. "Engineering the Neural Collapse Geometry of Supervised-Contrastive Loss (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I21.30447

Markdown

[Gill et al. "Engineering the Neural Collapse Geometry of Supervised-Contrastive Loss (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/gill2024aaai-engineering/) doi:10.1609/AAAI.V38I21.30447

BibTeX

@inproceedings{gill2024aaai-engineering,
  title     = {{Engineering the Neural Collapse Geometry of Supervised-Contrastive Loss (Student Abstract)}},
  author    = {Gill, Jaidev and Vakilian, Vala and Thrampoulidis, Christos},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {23503--23505},
  doi       = {10.1609/AAAI.V38I21.30447},
  url       = {https://mlanthology.org/aaai/2024/gill2024aaai-engineering/}
}