Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers
Abstract
We consider the problem of learning a neural network classifier. Under the information bottleneck (IB) principle, we associate with this classification problem a representation learning problem, which we call “IB learning”. We show that IB learning is, in fact, equivalent to a special class of the quantization problem. The classical results in rate-distortion theory then suggest that IB learning can benefit from a “vector quantization” approach, namely, simultaneously learning the representations of multiple input objects. Such an approach, assisted by variational techniques, results in a novel learning framework, “Aggregated Learning”, for classification with neural network models. In this framework, several objects are jointly classified by a single neural network. The effectiveness of this framework is verified through extensive experiments on standard image recognition and text classification tasks.
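For intuition, the following is a minimal PyTorch sketch of the abstract's central idea: m input objects are aggregated and classified jointly by a single network. It illustrates only the joint-classification aspect, not the paper's vector-quantization codebook or variational objective; the concatenation-based aggregation, per-slot prediction heads, layer sizes, and all module names are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class AggregatedClassifier(nn.Module):
    """Sketch: one network jointly classifies m objects at a time."""
    def __init__(self, in_dim, hidden_dim, num_classes, m):
        super().__init__()
        self.m = m  # number of objects classified jointly (assumed hyperparameter)
        # A shared encoder maps the concatenated block of m inputs to a
        # single aggregated representation.
        self.encoder = nn.Sequential(
            nn.Linear(m * in_dim, hidden_dim),
            nn.ReLU(),
        )
        # One head per slot predicts the label of the corresponding object.
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, num_classes) for _ in range(m)
        )

    def forward(self, x):
        # x: (batch, m, in_dim) -> joint representation -> (batch, m, num_classes)
        z = self.encoder(x.flatten(1))
        return torch.stack([head(z) for head in self.heads], dim=1)

model = AggregatedClassifier(in_dim=784, hidden_dim=256, num_classes=10, m=4)
x = torch.randn(8, 4, 784)                      # 8 groups of m=4 objects
logits = model(x)                               # (8, 4, 10)
labels = torch.randint(0, 10, (8 * 4,))         # dummy labels for illustration
loss = nn.CrossEntropyLoss()(logits.flatten(0, 1), labels)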
Cite
Text
Soflaei et al. "Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I04.6038
Markdown
[Soflaei et al. "Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/soflaei2020aaai-aggregated/) doi:10.1609/AAAI.V34I04.6038
BibTeX
@inproceedings{soflaei2020aaai-aggregated,
title = {{Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers}},
author = {Soflaei, Masoumeh and Guo, Hongyu and Al-Bashabsheh, Ali and Mao, Yongyi and Zhang, Richong},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {5810--5817},
doi = {10.1609/AAAI.V34I04.6038},
url = {https://mlanthology.org/aaai/2020/soflaei2020aaai-aggregated/}
}