Semi-Supervised Learning of Compact Document Representations with Deep Networks
Abstract
Finding a good representation of text documents is crucial in document retrieval and classification systems. Nowadays, the most popular representation is simply a vector of counts storing the number of occurrences of each word in the document. This representation falls short in describing the dependence between similar words, and it cannot capture phenomena like synonymy and polysemy. In this paper, we propose an algorithm to learn text document representations based on recent advances in training deep networks. This technique can efficiently produce a very compact and informative representation of a document. Our experiments show that this algorithm compares favorably against similar algorithms that produce sparse and binary representations. Unlike other models, this method is trained by taking into account both an unsupervised and a supervised objective. We show that it is very advantageous to exploit even a few labeled samples during training, and that we can learn extremely compact representations by using deep and non-linear models.
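The semi-supervised idea summarized above, combining an unsupervised reconstruction objective with a supervised classification objective on the same compact code, can be illustrated with a minimal sketch. This is not the authors' model: the single-layer encoder, the synthetic word-count data, the loss weighting `lam`, and all sizes are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's architecture): an encoder maps
# a word-count vector to a compact code; training minimizes a reconstruction
# loss (unsupervised) plus a cross-entropy loss (supervised) on the same code.
rng = np.random.default_rng(0)
vocab, code_dim, n_classes, n_docs = 50, 5, 2, 40

# Synthetic "documents": count vectors whose dominant words depend on the class.
labels = rng.integers(0, n_classes, n_docs)
counts = rng.poisson(1.0, (n_docs, vocab)).astype(float)
counts[labels == 0, :10] += rng.poisson(3.0, (int(np.sum(labels == 0)), 10))
counts[labels == 1, 10:20] += rng.poisson(3.0, (int(np.sum(labels == 1)), 10))
x = np.log1p(counts)  # compress the dynamic range of raw counts

W_enc = rng.normal(0, 0.1, (vocab, code_dim))      # encoder
W_dec = rng.normal(0, 0.1, (code_dim, vocab))      # decoder (unsupervised branch)
W_cls = rng.normal(0, 0.1, (code_dim, n_classes))  # classifier (supervised branch)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr, lam = 0.05, 1.0  # lam weighs the supervised term against reconstruction
for step in range(300):
    code = np.tanh(x @ W_enc)      # compact representation
    recon = code @ W_dec           # reconstruction of the input
    probs = softmax(code @ W_cls)  # class probabilities from the code

    # Gradients of 0.5*||recon - x||^2 + lam * cross-entropy, averaged over docs.
    d_recon = (recon - x) / n_docs
    d_probs = probs.copy()
    d_probs[np.arange(n_docs), labels] -= 1.0
    d_probs *= lam / n_docs

    d_code = d_recon @ W_dec.T + d_probs @ W_cls.T
    d_pre = d_code * (1 - code**2)  # derivative of tanh

    W_dec -= lr * code.T @ d_recon
    W_cls -= lr * code.T @ d_probs
    W_enc -= lr * x.T @ d_pre

code = np.tanh(x @ W_enc)
acc = np.mean(np.argmax(softmax(code @ W_cls), axis=1) == labels)
print("code shape:", code.shape, "train accuracy:", acc)
```

Even in this toy setting, the 5-dimensional code (much smaller than the 50-word vocabulary) supports classification because the labeled examples shape the code during training, which is the advantage the abstract attributes to using even a few labeled samples.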
Cite
Text
Ranzato and Szummer. "Semi-Supervised Learning of Compact Document Representations with Deep Networks." International Conference on Machine Learning, 2008. doi:10.1145/1390156.1390256
Markdown
[Ranzato and Szummer. "Semi-Supervised Learning of Compact Document Representations with Deep Networks." International Conference on Machine Learning, 2008.](https://mlanthology.org/icml/2008/ranzato2008icml-semi/) doi:10.1145/1390156.1390256
BibTeX
@inproceedings{ranzato2008icml-semi,
title = {{Semi-Supervised Learning of Compact Document Representations with Deep Networks}},
author = {Ranzato, Marc'Aurelio and Szummer, Martin},
booktitle = {International Conference on Machine Learning},
year = {2008},
pages = {792--799},
doi = {10.1145/1390156.1390256},
url = {https://mlanthology.org/icml/2008/ranzato2008icml-semi/}
}