Greedy Learning of Generalized Low-Rank Models

Abstract

Learning of low-rank matrices is fundamental to many machine learning applications. A state-of-the-art algorithm is the rank-one matrix pursuit (R1MP). However, it can only be used in matrix completion problems with the square loss. In this paper, we develop a more flexible greedy algorithm for generalized low-rank models whose optimization objective can be smooth or nonsmooth, general convex or strongly convex. The proposed algorithm has low per-iteration time complexity and fast convergence rate. Experimental results show that it is much faster than the state-of-the-art, with comparable or even better prediction performance.
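The rank-one matrix pursuit scheme that the paper generalizes can be sketched for its original setting, matrix completion with the square loss: at each iteration, take the top singular vector pair of the residual on the observed entries as a new rank-one basis, then refit all basis weights by least squares. The sketch below is illustrative only; the function name, parameters, and the synthetic demo are assumptions, not the authors' implementation.

```python
import numpy as np

def r1mp(Y, mask, n_bases=30):
    """Illustrative sketch of rank-one matrix pursuit (R1MP) for
    matrix completion with the square loss.

    Y    : (m, n) matrix with observed values (unobserved entries ignored)
    mask : (m, n) 0/1 matrix marking observed entries
    """
    m, n = Y.shape
    bases = []              # rank-one bases u v^T, added greedily
    X = np.zeros((m, n))    # current low-rank estimate
    for _ in range(n_bases):
        # residual on the observed entries only
        R = mask * (Y - X)
        # greedy step: top singular vector pair of the residual
        U, s, Vt = np.linalg.svd(R)
        bases.append(np.outer(U[:, 0], Vt[0]))
        # refit all basis weights jointly by least squares on Omega
        A = np.stack([(mask * B).ravel() for B in bases], axis=1)
        theta, *_ = np.linalg.lstsq(A, (mask * Y).ravel(), rcond=None)
        X = sum(t * B for t, B in zip(theta, bases))
    return X

# tiny synthetic demo: a rank-2 matrix with ~70% of entries observed
rng = np.random.default_rng(0)
Y = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 10))
mask = (rng.random((8, 10)) < 0.7).astype(float)
X = r1mp(Y, mask)
```

The square loss is what makes both greedy steps cheap here: the best rank-one update of the residual is exactly its top singular pair, and the weight refit is a linear least-squares problem. The paper's contribution is precisely to extend this greedy template beyond the square loss to smooth or nonsmooth, convex or strongly convex objectives.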

Cite

Text

Yao and Kwok. "Greedy Learning of Generalized Low-Rank Models." International Joint Conference on Artificial Intelligence, 2016.

Markdown

[Yao and Kwok. "Greedy Learning of Generalized Low-Rank Models." International Joint Conference on Artificial Intelligence, 2016.](https://mlanthology.org/ijcai/2016/yao2016ijcai-greedy/)

BibTeX

@inproceedings{yao2016ijcai-greedy,
  title     = {{Greedy Learning of Generalized Low-Rank Models}},
  author    = {Yao, Quanming and Kwok, James T.},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {2294--2300},
  url       = {https://mlanthology.org/ijcai/2016/yao2016ijcai-greedy/}
}