Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization

Abstract

This paper introduces a new method for minimizing matrix-smooth non-convex objectives through the use of novel Compressed Gradient Descent (CGD) algorithms enhanced with a matrix-valued stepsize. The proposed algorithms are theoretically analyzed first in the single-node setting and subsequently in the distributed setting. Our theoretical results reveal that the matrix stepsize in CGD can capture the objective's structure and lead to faster convergence compared to a scalar stepsize. As a byproduct of our general results, we emphasize the importance of selecting the compression mechanism and the matrix stepsize in a layer-wise manner, taking advantage of model structure. Moreover, we provide theoretical guarantees showing that compression can be obtained for free, by designing specific layer-wise compressors for non-convex matrix-smooth objectives. Our findings are supported by empirical evidence.
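To make the setup concrete, here is a minimal sketch of compressed gradient descent with a matrix stepsize, in the spirit of the update described above: the gradient is passed through an unbiased compressor and then preconditioned by a matrix stepsize D. This is an illustrative reading, not the authors' implementation; the Rand-k compressor, the diagonal quadratic, and the choice D proportional to the inverse curvature are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_k(g, k):
    """Unbiased Rand-k sparsifier: keep k random coordinates, rescale by d/k
    so that E[rand_k(g)] = g (an assumption; the paper studies general
    layer-wise compressors)."""
    d = g.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(g)
    out[idx] = (d / k) * g[idx]
    return out

def compressed_gd(grad_f, x0, D, k, n_iters=2000):
    """CGD with a matrix stepsize D: x_{t+1} = x_t - D @ C(grad f(x_t))."""
    x = x0.copy()
    for _ in range(n_iters):
        x = x - D @ rand_k(grad_f(x), k)
    return x

# Toy ill-conditioned quadratic f(x) = 0.5 * x^T A x.
# A matrix stepsize aligned with A^{-1} adapts to the curvature in a way
# a single scalar stepsize cannot.
A = np.diag([100.0, 1.0])
grad_f = lambda x: A @ x
D = 0.5 * np.linalg.inv(A)  # illustrative matrix stepsize
x_final = compressed_gd(grad_f, np.array([1.0, 1.0]), D, k=1)
print(x_final)  # approaches the minimizer at the origin
```

On this toy problem, a scalar stepsize would be limited by the largest curvature (100), while the matrix stepsize scales each coordinate by its own curvature, which is the structural advantage the abstract alludes to.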

Cite

Text

Li et al. "Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization." NeurIPS 2023 Workshops: OPT, 2023.

Markdown

[Li et al. "Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization." NeurIPS 2023 Workshops: OPT, 2023.](https://mlanthology.org/neuripsw/2023/li2023neuripsw-detcgd/)

BibTeX

@inproceedings{li2023neuripsw-detcgd,
  title     = {{Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization}},
  author    = {Li, Hanmin and Karagulyan, Avetik and Richtárik, Peter},
  booktitle = {NeurIPS 2023 Workshops: OPT},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/li2023neuripsw-detcgd/}
}