Finite-Time Convergence in Continuous-Time Optimization

Abstract

In this paper, we investigate a Lyapunov-like differential inequality that allows us to establish finite-time stability of a continuous-time state-space dynamical system represented via a multivariate ordinary differential equation or differential inclusion. Equipped with this condition, we successfully synthesize first- and second-order dynamical systems that achieve finite-time convergence to the minima of a given sufficiently regular cost function. As a byproduct, we show that the p-rescaled gradient flow (p-RGF) proposed by Wibisono et al. (2016) is indeed finite-time convergent, provided the cost function is gradient dominated of order q ∈ (1, p). Thus, we effectively bridge a gap between the p-RGF and the normalized gradient flow (NGF) (p = ∞) proposed by Cortes (2006) in his seminal paper in the context of multi-agent systems. We discuss strategies for discretizing our proposed flows and conclude with numerical experiments that illustrate our results.
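To make the flows mentioned in the abstract concrete, the sketch below shows a plain forward-Euler discretization in Python. The flow form ẋ = -∇f(x)/‖∇f(x)‖^((p-2)/(p-1)) follows the rescaled gradient flow of Wibisono et al. (2016), whose p → ∞ limit recovers the normalized gradient flow of Cortes (2006). The step size h, the tolerance eps, the function names rgf_step and ngf_step, and the use of simple Euler steps are illustrative assumptions, not the discretization strategies proposed in the paper.

import numpy as np

def rgf_step(x, grad_f, p=4, h=1e-2, eps=1e-12):
    """One forward-Euler step of a p-rescaled gradient flow (illustrative sketch).

    Assumed flow: xdot = -grad f(x) / ||grad f(x)||^((p-2)/(p-1)),
    in the spirit of Wibisono et al. (2016). As p -> infinity the exponent
    tends to 1, recovering the normalized gradient flow of Cortes (2006).
    """
    g = grad_f(x)
    norm = np.linalg.norm(g)
    if norm < eps:  # numerically at a critical point; stop moving
        return x
    return x - h * g / norm ** ((p - 2) / (p - 1))

def ngf_step(x, grad_f, h=1e-2, eps=1e-12):
    """One forward-Euler step of the normalized gradient flow xdot = -grad f(x) / ||grad f(x)||."""
    g = grad_f(x)
    norm = np.linalg.norm(g)
    if norm < eps:
        return x
    return x - h * g / norm

if __name__ == "__main__":
    # Toy usage: minimize f(x) = ||x||^2, whose gradient is 2x.
    grad = lambda x: 2 * x
    x = np.array([1.0, -2.0])
    for _ in range(500):
        x = rgf_step(x, grad, p=4, h=1e-2)
    print(x)  # should be close to the minimizer at the origin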

Cite

Text

Romero and Benosman. "Finite-Time Convergence in Continuous-Time Optimization." International Conference on Machine Learning, 2020.

Markdown

[Romero and Benosman. "Finite-Time Convergence in Continuous-Time Optimization." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/romero2020icml-finitetime/)

BibTeX

@inproceedings{romero2020icml-finitetime,
  title     = {{Finite-Time Convergence in Continuous-Time Optimization}},
  author    = {Romero, Orlando and Benosman, Mouhacine},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {8200--8209},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/romero2020icml-finitetime/}
}