Entropy-Penalized Semidefinite Programming

Abstract

Low-rank methods for semidefinite programming (SDP) have recently attracted considerable interest, especially in machine learning applications. Their analysis often involves determinant-based or Schatten-norm penalties, which are difficult to implement in practice due to their high computational cost. In this paper, we propose Entropy-Penalized Semidefinite Programming (EP-SDP), which provides a unified framework for a broad class of penalty functions used in practice to promote a low-rank solution. We show that EP-SDP problems admit an efficient numerical algorithm with (almost) linear time complexity for gradient computation; this makes the approach useful for many machine learning and optimization problems. We illustrate the practical efficiency of our approach on several combinatorial optimization and machine learning problems.
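
As a rough illustration of the idea (a sketch only; the paper's exact formulation, penalty function, and normalization may differ), an entropy penalty can promote low rank because the Shannon entropy of the normalized spectrum of a positive semidefinite matrix vanishes exactly when the matrix has rank one and grows as the eigenvalues spread out:

    \min_{X \succeq 0} \; \langle C, X \rangle + \gamma\, H(X)
    \quad \text{subject to} \quad \langle A_i, X \rangle = b_i, \; i = 1, \dots, m,

    H(X) = -\sum_{j=1}^{n} \frac{\lambda_j(X)}{\operatorname{tr}(X)} \log \frac{\lambda_j(X)}{\operatorname{tr}(X)},

where \lambda_j(X) are the eigenvalues of X. Since H(X) is zero precisely for rank-one X and maximal for a uniform spectrum, a penalty weight \gamma > 0 steers the solution toward low rank.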

Cite

Text

Krechetov et al. "Entropy-Penalized Semidefinite Programming." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/157

Markdown

[Krechetov et al. "Entropy-Penalized Semidefinite Programming." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/krechetov2019ijcai-entropy/) doi:10.24963/IJCAI.2019/157

BibTeX

@inproceedings{krechetov2019ijcai-entropy,
  title     = {{Entropy-Penalized Semidefinite Programming}},
  author    = {Krechetov, Mikhail and Marecek, Jakub and Maximov, Yury and Takáč, Martin},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {1123--1129},
  doi       = {10.24963/IJCAI.2019/157},
  url       = {https://mlanthology.org/ijcai/2019/krechetov2019ijcai-entropy/}
}