Learning Relative Similarity by Stochastic Dual Coordinate Ascent

Abstract

Learning relative similarity from pairwise instances is an important problem in machine learning with a wide range of applications. Despite years of study, existing methods based on Stochastic Gradient Descent (SGD) techniques generally suffer from slow convergence. In this paper, we investigate applying the Stochastic Dual Coordinate Ascent (SDCA) technique to the optimization task of relative similarity learning, extending it from vector to matrix parameters. Theoretically, we prove an optimal linear convergence rate for the proposed SDCA algorithm, beating the well-known sublinear convergence rate of the previous best metric learning algorithms. Empirically, we conduct extensive experiments on both standard and large-scale data sets to validate the effectiveness of the proposed algorithm for retrieval tasks.
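To make the SGD-vs-SDCA contrast concrete: instead of taking noisy gradient steps in the primal, SDCA maximizes the dual objective one coordinate at a time with a closed-form update, which yields the linear convergence rate the paper builds on. The sketch below is a minimal, hypothetical implementation of vanilla SDCA for an L2-regularized hinge-loss linear SVM (the classic vector-parameter setting), not the paper's matrix-parameter similarity learner; variable names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def sdca_svm(X, y, lam=0.1, epochs=50, seed=0):
    """Minimal SDCA sketch for an L2-regularized hinge-loss SVM.

    Each dual variable alpha_i lies in [0, 1], and the primal weight
    vector is maintained as w = (1 / (lam * n)) * sum_i alpha_i y_i x_i.
    Each inner step maximizes the dual over a single alpha_i in closed form.
    """
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    alpha = np.zeros(n)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(n):
            xi, yi = X[i], y[i]
            # Closed-form coordinate update, clipped so alpha_i stays in [0, 1].
            grad = 1.0 - yi * xi.dot(w)
            delta = max(-alpha[i],
                        min(1.0 - alpha[i],
                            grad * lam * n / (xi.dot(xi) + 1e-12)))
            alpha[i] += delta
            # Keep w consistent with the dual variables.
            w += (delta * yi / (lam * n)) * xi
    return w

# Toy linearly separable data to exercise the solver.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = sdca_svm(X, y)
```

The key design point mirrored from the paper's setting is that each coordinate step is an exact maximization, not a fixed-step gradient move, so no learning-rate schedule is needed.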

Cite

Text

Wu et al. "Learning Relative Similarity by Stochastic Dual Coordinate Ascent." AAAI Conference on Artificial Intelligence, 2014. doi:10.1609/AAAI.V28I1.9002

Markdown

[Wu et al. "Learning Relative Similarity by Stochastic Dual Coordinate Ascent." AAAI Conference on Artificial Intelligence, 2014.](https://mlanthology.org/aaai/2014/wu2014aaai-learning/) doi:10.1609/AAAI.V28I1.9002

BibTeX

@inproceedings{wu2014aaai-learning,
  title     = {{Learning Relative Similarity by Stochastic Dual Coordinate Ascent}},
  author    = {Wu, Pengcheng and Ding, Yi and Zhao, Peilin and Miao, Chunyan and Hoi, Steven C. H.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2014},
  pages     = {2142--2148},
  doi       = {10.1609/AAAI.V28I1.9002},
  url       = {https://mlanthology.org/aaai/2014/wu2014aaai-learning/}
}