Robust Metric Learning by Smooth Optimization

Abstract

Most existing distance metric learning methods assume perfect side information, usually given in the form of pairwise or triplet constraints. In many real-world applications, however, the constraints are derived from side information such as users' implicit feedback and citations among articles, and are therefore noisy and contain many mistakes. In this work, we aim to learn a distance metric from noisy constraints by robust optimization in a worst-case scenario, an approach we refer to as robust metric learning. We formulate the learning task initially as a combinatorial optimization problem and show that it can be elegantly transformed into a convex programming problem. We present an efficient learning algorithm based on smooth optimization [7]. It has a worst-case convergence rate of O(1/√ε) for smooth optimization problems, where ε is the desired error of the approximate solution. Finally, our empirical study with UCI data sets demonstrates the effectiveness of the proposed method in comparison to state-of-the-art methods.
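
As background on the rate quoted above: for a convex objective f with an L-Lipschitz-continuous gradient, Nesterov's accelerated scheme satisfies the standard bound below. This is a general fact about smooth optimization, not a statement of this paper's specific objective or algorithm.

f(x_k) - f(x^\star) \le \frac{2L\,\lVert x_0 - x^\star \rVert_2^2}{(k+1)^2},
\qquad\text{so}\qquad
k = O\!\bigl(\sqrt{L/\varepsilon}\bigr) = O(1/\sqrt{\varepsilon})

iterations suffice to reach an ε-accurate solution, which is the O(1/√ε) rate stated in the abstract.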

Cite

Text

Huang et al. "Robust Metric Learning by Smooth Optimization." Conference on Uncertainty in Artificial Intelligence, 2010.

Markdown

[Huang et al. "Robust Metric Learning by Smooth Optimization." Conference on Uncertainty in Artificial Intelligence, 2010.](https://mlanthology.org/uai/2010/huang2010uai-robust/)

BibTeX

@inproceedings{huang2010uai-robust,
  title     = {{Robust Metric Learning by Smooth Optimization}},
  author    = {Huang, Kaizhu and Jin, Rong and Xu, Zenglin and Liu, Cheng-Lin},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2010},
  pages     = {244--251},
  url       = {https://mlanthology.org/uai/2010/huang2010uai-robust/}
}