DeepVQ: A Deep Network Architecture for Vector Quantization
Abstract
Vector quantization (VQ) is a classic problem in signal processing, source coding, and information theory. Leveraging recent advances in deep neural networks (DNNs), this paper bridges the gap between a classic quantization problem and DNNs. We introduce -- for the first time -- a deep network architecture for vector quantization (DeepVQ). Applying recent binary optimization theory, we propose a training algorithm to tackle the binary constraints. Notably, our network outputs binary codes directly. As a result, DeepVQ can quantize vectors with a simple forward pass, overcoming the exponential complexity of previous VQ approaches. Experiments show that our network achieves encouraging results and outperforms recent deep learning-based clustering approaches modified for VQ. Importantly, our network serves as a generic framework that can be applied to other networks in which binary constraints are required.
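The abstract contrasts classic nearest-codeword VQ, whose joint search over codebooks grows exponentially, with a network that emits a binary code in a single forward pass. A minimal sketch of the two encoding styles (the linear-plus-sign encoder and all names here are illustrative assumptions, not the paper's actual DeepVQ architecture):

```python
import numpy as np

def vq_encode(x, codebook):
    """Classic VQ: map x to the index of its nearest codeword (exhaustive search)."""
    dists = np.linalg.norm(codebook - x, axis=1)
    return int(np.argmin(dists))

def forward_pass_encode(x, W, b):
    """Hypothetical DNN-style encoder: one linear layer binarized by sign,
    producing a binary code in a single forward pass (no codebook search)."""
    return (W @ x + b > 0).astype(np.uint8)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))           # 8 codewords of dimension 4
x = codebook[3] + 0.01 * rng.normal(size=4)  # query near codeword 3
idx = vq_encode(x, codebook)                 # exhaustive nearest-codeword search
W, b = rng.normal(size=(5, 4)), rng.normal(size=5)
code = forward_pass_encode(x, W, b)          # 5-bit binary code from one pass
```

The sketch only illustrates the complexity contrast the abstract draws: `vq_encode` costs time linear in the codebook size (and exponential when multiple codebooks are searched jointly), while `forward_pass_encode` is a fixed-cost matrix product followed by thresholding.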
Cite
Text
Le Tan et al. "DeepVQ: A Deep Network Architecture for Vector Quantization." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018.
Markdown
[Le Tan et al. "DeepVQ: A Deep Network Architecture for Vector Quantization." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018.](https://mlanthology.org/cvprw/2018/tan2018cvprw-deepvq/)
BibTeX
@inproceedings{tan2018cvprw-deepvq,
title = {{DeepVQ: A Deep Network Architecture for Vector Quantization}},
author = {Le Tan, Dang-Khoa and Le, Huu and Hoang, Tuan and Do, Thanh-Toan and Cheung, Ngai-Man},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2018},
pages = {2579-2582},
url = {https://mlanthology.org/cvprw/2018/tan2018cvprw-deepvq/}
}