Learning K-Way D-Dimensional Discrete Codes for Compact Embedding Representations

Abstract

Conventional embedding methods directly associate each symbol with a continuous embedding vector, which is equivalent to applying a linear transformation based on a “one-hot” encoding of the discrete symbols. Despite its simplicity, such an approach has a number of parameters that grows linearly with the vocabulary size and can lead to overfitting. In this work, we propose a much more compact K-way D-dimensional discrete encoding scheme to replace the “one-hot” encoding. In the proposed “KD encoding”, each symbol is represented by a $D$-dimensional code with a cardinality of $K$, and the final symbol embedding vector is generated by composing the code embedding vectors. To learn semantically meaningful codes end-to-end, we derive a relaxed discrete optimization approach based on stochastic gradient descent, which can be applied to any differentiable computational graph with an embedding layer. In our experiments with various applications, from natural language processing to graph convolutional networks, the total size of the embedding layer can be reduced by up to 98% while achieving similar or better performance.
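As a rough illustration of the composition step described in the abstract (not the authors' implementation), the sketch below builds a symbol embedding from its K-way D-dimensional code by summing one learned code-embedding vector per code dimension. The sizes `K`, `D`, `d`, the name `code_embeddings`, and the summation composition are illustrative assumptions; the paper also considers learned composition functions.

```python
import numpy as np

# Illustrative sketch of KD-code embedding composition (assumed, not the paper's code).
# K = 16 code values, D = 8 code dimensions, d = 64 final embedding size.
K, D, d = 16, 8, 64
rng = np.random.default_rng(0)

# One (K, d) embedding table per code dimension: D*K*d parameters in total,
# independent of the vocabulary size (unlike a one-hot embedding matrix).
code_embeddings = rng.normal(size=(D, K, d))

def embed(code):
    """Compose a symbol embedding from its D-dimensional code (values in 0..K-1)."""
    assert len(code) == D and all(0 <= c < K for c in code)
    # Look up one code-embedding vector per dimension and sum them.
    return sum(code_embeddings[j, c] for j, c in enumerate(code))

# Example: the symbol hypothetically assigned the code (3, 0, 15, 7, 1, 9, 2, 4).
vector = embed([3, 0, 15, 7, 1, 9, 2, 4])
print(vector.shape)  # (64,)
```

With these assumed sizes, the table holds D·K·d = 8,192 parameters regardless of vocabulary size, whereas a conventional embedding layer would need |V|·d parameters, which is the source of the compression reported in the abstract.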

Cite

Text

Chen et al. "Learning K-Way D-Dimensional Discrete Codes for Compact Embedding Representations." International Conference on Machine Learning, 2018.

Markdown

[Chen et al. "Learning K-Way D-Dimensional Discrete Codes for Compact Embedding Representations." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/chen2018icml-learning/)

BibTeX

@inproceedings{chen2018icml-learning,
  title     = {{Learning K-Way D-Dimensional Discrete Codes for Compact Embedding Representations}},
  author    = {Chen, Ting and Min, Martin Renqiang and Sun, Yizhou},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {854--863},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/chen2018icml-learning/}
}