Optimization Learning: Perspective, Method, and Applications

Abstract

Numerous tasks at the core of statistics, learning, and vision are specific cases of ill-posed inverse problems. Recently, learning-based (e.g., deep) iterative methods have been empirically shown to be useful for these problems. Nevertheless, integrating learnable structures into iterations is still a laborious process that can only be guided by intuition or empirical insight. Moreover, there is a lack of rigorous analysis of the convergence behaviors of these reimplemented iterations, so the significance of such methods remains somewhat unclear. We move beyond these limits and propose a theoretically guaranteed optimization learning paradigm, a generic and provable paradigm for nonconvex inverse problems, and develop a series of convergent deep models. Our theoretical analysis reveals that the proposed optimization learning paradigm allows us to generate globally convergent trajectories for learning-based iterative methods. Thanks to the strengths of our framework, we achieve state-of-the-art performance on a range of real-world applications.

Cite

Text

Liu. "Optimization Learning: Perspective, Method, and Applications." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/728

Markdown

[Liu. "Optimization Learning: Perspective, Method, and Applications." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/liu2020ijcai-optimization/) doi:10.24963/IJCAI.2020/728

BibTeX

@inproceedings{liu2020ijcai-optimization,
  title     = {{Optimization Learning: Perspective, Method, and Applications}},
  author    = {Liu, Risheng},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {5164--5168},
  doi       = {10.24963/IJCAI.2020/728},
  url       = {https://mlanthology.org/ijcai/2020/liu2020ijcai-optimization/}
}