Variable Discretization for Self-Supervised Learning
Abstract
In this study, we propose Variable Discretization (VD) for self-supervised image representation learning. VD discretizes every variable in the embedding space so that its probability distribution becomes estimable, allowing the learning process to be directly guided by information measures. Specifically, a loss function is defined to maximize the joint entropy among the discrete variables. Our theoretical analysis guarantees that entropy-maximized VD learns transform-invariant, non-trivial, redundancy-minimized, and discriminative features. Extensive experiments demonstrate the superiority of VD on various downstream tasks in terms of both accuracy and training efficiency. Moreover, VD-based information-theoretic optimization can be adapted to other learning paradigms and to multimodal representation learning.
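The abstract does not include an implementation, so below is a minimal PyTorch sketch of what an entropy-maximizing discretization loss of this flavor could look like. The soft binning via a softmax over fixed bin centers, the tanh squashing, the temperature tau, and the per-variable (rather than joint) entropy estimate are illustrative assumptions for exposition, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def vd_entropy_loss(z, num_bins=16, tau=0.5, eps=1e-8):
    """Hypothetical sketch of a VD-style entropy-maximization loss.

    z: (batch, dim) embeddings. Each variable (dimension) is softly
    discretized into num_bins bins; bin probabilities are estimated by
    averaging the soft assignments over the batch, and the loss rewards
    high entropy per variable (a stand-in for the joint-entropy
    objective described in the abstract).
    """
    # Evenly spaced bin centers in [-1, 1] (an assumption; the paper's
    # discretization scheme may differ).
    centers = torch.linspace(-1.0, 1.0, num_bins, device=z.device)
    z = torch.tanh(z)  # squash each variable into the bin range
    # Soft assignment of every variable to every bin: (batch, dim, bins)
    logits = -((z.unsqueeze(-1) - centers) ** 2) / tau
    assign = F.softmax(logits, dim=-1)
    # Estimate each variable's bin distribution from the batch: (dim, bins)
    p = assign.mean(dim=0).clamp_min(eps)
    p = p / p.sum(dim=-1, keepdim=True)
    # Maximizing entropy = minimizing negative entropy, averaged over variables
    entropy = -(p * p.log()).sum(dim=-1)
    return -entropy.mean()

In a typical self-supervised pipeline, a loss of this kind would be applied to the embeddings of augmented views, alongside a transform-invariance term; consult the paper for the actual objective and discretization scheme.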
Cite
Text
Niu et al. "Variable Discretization for Self-Supervised Learning." ICLR 2023 Workshops: ME-FoMo, 2023.
Markdown
[Niu et al. "Variable Discretization for Self-Supervised Learning." ICLR 2023 Workshops: ME-FoMo, 2023.](https://mlanthology.org/iclrw/2023/niu2023iclrw-variable/)
BibTeX
@inproceedings{niu2023iclrw-variable,
  title     = {{Variable Discretization for Self-Supervised Learning}},
  author    = {Niu, Chuang and Xia, Wenjun and Wang, Ge},
  booktitle = {ICLR 2023 Workshops: ME-FoMo},
  year      = {2023},
  url       = {https://mlanthology.org/iclrw/2023/niu2023iclrw-variable/}
}