Deep Representation Learning with Target Coding
Abstract
We consider the problem of learning deep representations when target labels are available. In this paper, we show that an intrinsic relationship exists between target coding and feature representation learning in deep networks. Specifically, we find that a distributed binary code with error-correcting capability is more effective at encouraging discriminative features than the 1-of-K coding typically used in supervised deep learning. This finding reveals an additional benefit of using error-correcting codes for deep model learning, apart from their well-known error-correcting property. Extensive experiments are conducted on popular visual benchmark datasets.
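To make the contrast concrete, here is a minimal sketch of the two target-coding schemes the abstract compares: conventional 1-of-K (one-hot) coding versus a distributed binary code derived from a Hadamard matrix, one classical source of error-correcting codes. This is an illustrative construction, not necessarily the exact code used in the paper; the function names and the choice of code length are our own.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Hadamard matrix (n a power of two).
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def one_hot_targets(num_classes):
    # Conventional 1-of-K coding: each class is a standard basis vector.
    # Any two distinct codewords differ in exactly 2 positions.
    return np.eye(num_classes, dtype=int)

def hadamard_targets(num_classes, code_length=16):
    # Distributed binary target code: rows of a Hadamard matrix, with the
    # all-ones first row and column dropped, mapped from {-1, 1} to {0, 1}.
    # Distinct codewords differ in code_length // 2 positions, which gives
    # the code its error-correcting capability.
    assert num_classes < code_length
    H = hadamard(code_length)
    return (H[1:num_classes + 1, 1:] + 1) // 2
```

For 10 classes with `code_length=16`, each target becomes a 15-bit codeword and any two codewords differ in 8 bits, versus a Hamming distance of only 2 between one-hot vectors; the larger inter-class distance is what the paper links to more discriminative learned features.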
Cite
Text
Yang et al. "Deep Representation Learning with Target Coding." AAAI Conference on Artificial Intelligence, 2015. doi:10.1609/AAAI.V29I1.9796

Markdown

[Yang et al. "Deep Representation Learning with Target Coding." AAAI Conference on Artificial Intelligence, 2015.](https://mlanthology.org/aaai/2015/yang2015aaai-deep/) doi:10.1609/AAAI.V29I1.9796

BibTeX
@inproceedings{yang2015aaai-deep,
title = {{Deep Representation Learning with Target Coding}},
author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Shum, Kenneth W. and Tang, Xiaoou},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2015},
  pages = {3848--3854},
doi = {10.1609/AAAI.V29I1.9796},
url = {https://mlanthology.org/aaai/2015/yang2015aaai-deep/}
}