Revisiting Additive Quantization

Abstract

We revisit Additive Quantization (AQ), an approach to vector quantization that uses multiple, full-dimensional, and non-orthogonal codebooks. Despite its elegant and simple formulation, AQ has failed to achieve state-of-the-art performance on standard retrieval benchmarks, because the encoding problem, which amounts to MAP inference in multiple fully-connected Markov Random Fields (MRFs), has proven to be hard to solve. We demonstrate that the performance of AQ can be improved to surpass the state of the art by leveraging iterated local search, a stochastic local search approach known to work well for a range of NP-hard combinatorial problems. We further show a direct application of our approach to a recent formulation of vector quantization that enforces sparsity of the codebooks. Unlike previous work, which required specialized optimization techniques, our formulation can be plugged directly into state-of-the-art lasso optimizers. This results in a conceptually simple, easily implemented method that outperforms the previous state of the art in solving sparse vector quantization. Our implementation is publicly available ( https://github.com/jltmtz/local-search-quantization ).
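The encoding step the abstract refers to can be sketched concretely: each vector is approximated as a sum of one entry from each of several full-dimensional codebooks, and iterated local search alternates greedy ICM passes with random perturbations of the codes. The following is a minimal illustrative sketch under toy assumptions (random codebooks, a single vector); function names and parameters are ours, not the paper's implementation.

```python
import numpy as np

def encode_ils(x, codebooks, n_restarts=8, icm_iters=4, rng=None):
    """Encode x as a sum of one entry per codebook (additive quantization)
    via iterated local search: ICM passes plus random perturbations,
    keeping the best solution found. Hypothetical minimal sketch.

    x: (d,) array; codebooks: list of (K, d) arrays.
    """
    rng = np.random.default_rng(rng)
    M = len(codebooks)
    K = codebooks[0].shape[0]

    def cost(codes):
        approx = sum(codebooks[m][codes[m]] for m in range(M))
        return float(np.sum((x - approx) ** 2))

    def icm(codes):
        # Iterated conditional modes: re-pick one code at a time,
        # holding the other M-1 codes fixed.
        for _ in range(icm_iters):
            for m in range(M):
                residual = x - sum(codebooks[j][codes[j]]
                                   for j in range(M) if j != m)
                errs = np.sum((codebooks[m] - residual) ** 2, axis=1)
                codes[m] = int(np.argmin(errs))
        return codes

    best = icm([int(c) for c in rng.integers(0, K, size=M)])
    best_cost = cost(best)
    for _ in range(n_restarts):
        # Perturb a subset of the codes, then descend again with ICM;
        # accept the candidate only if it improves the objective.
        cand = best.copy()
        for m in rng.choice(M, size=max(1, M // 4), replace=False):
            cand[m] = int(rng.integers(0, K))
        cand = icm(cand)
        c = cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost
```

The perturb-then-descend loop is what distinguishes iterated local search from plain ICM: ICM alone gets stuck in local minima of the fully-connected MRF objective, while the random restarts let the search escape them while never accepting a worse solution.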

Cite

Text

Martinez et al. "Revisiting Additive Quantization." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46475-6_9

Markdown

[Martinez et al. "Revisiting Additive Quantization." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/martinez2016eccv-revisiting/) doi:10.1007/978-3-319-46475-6_9

BibTeX

@inproceedings{martinez2016eccv-revisiting,
  title     = {{Revisiting Additive Quantization}},
  author    = {Martinez, Julieta and Clement, Joris and Hoos, Holger H. and Little, James J.},
  booktitle = {European Conference on Computer Vision},
  year      = {2016},
  pages     = {137--153},
  doi       = {10.1007/978-3-319-46475-6_9},
  url       = {https://mlanthology.org/eccv/2016/martinez2016eccv-revisiting/}
}