Fast Iterative Hard Thresholding Methods with Pruning Gradient Computations

Abstract

We accelerate the iterative hard thresholding (IHT) method, which finds the $k$ most important elements of a parameter vector in a linear regression model. The plain IHT repeatedly updates the parameter vector during optimization, and computing gradients is the main bottleneck. Our method safely prunes unnecessary gradient computations to reduce the processing time. The main idea is to efficiently construct, at each iteration, a candidate set that contains the $k$ important elements of the parameter vector. Specifically, before computing the gradients, we prune elements of the parameter vector that cannot enter the candidate set by utilizing upper bounds on the absolute values of the parameters. Because the pruning is safe, our method guarantees the same optimization results as the plain IHT. Experiments show that our method is up to 73 times faster than the plain IHT without degrading accuracy.
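For context, a minimal NumPy sketch of the plain IHT baseline for $k$-sparse linear regression is shown below. This is a textbook version of IHT, not the paper's pruned variant; the names `iht`, `X`, `y`, and `step` are illustrative choices of ours. The comment marks the full-gradient step that the paper avoids by restricting computation to a candidate set built from upper bounds on the parameter magnitudes.

```python
import numpy as np

def iht(X, y, k, step=None, n_iters=100):
    """Plain iterative hard thresholding (illustrative sketch, not the paper's method).

    Approximately minimizes 0.5 * ||y - X w||^2 subject to ||w||_0 <= k by
    alternating a gradient step with hard thresholding (keep the k
    largest-magnitude entries, zero the rest).
    """
    n, d = X.shape
    if step is None:
        # Conservative step size from the spectral norm of X (Lipschitz constant of the gradient).
        step = 1.0 / np.linalg.norm(X, ord=2) ** 2
    w = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y)   # full gradient: the bottleneck the paper prunes
        w = w - step * grad        # gradient step
        # Hard thresholding: zero all but the k largest-magnitude entries.
        idx = np.argsort(np.abs(w))[:-k]
        w[idx] = 0.0
    return w
```

In the paper's method, the candidate set is built before the gradient step, so only the gradient entries that can still reach the top $k$ are computed; the abstract does not give the bound construction, so it is not reproduced here.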

Cite

Text

Ida et al. "Fast Iterative Hard Thresholding Methods with Pruning Gradient Computations." Neural Information Processing Systems, 2024. doi:10.52202/079017-1674

Markdown

[Ida et al. "Fast Iterative Hard Thresholding Methods with Pruning Gradient Computations." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/ida2024neurips-fast/) doi:10.52202/079017-1674

BibTeX

@inproceedings{ida2024neurips-fast,
  title     = {{Fast Iterative Hard Thresholding Methods with Pruning Gradient Computations}},
  author    = {Ida, Yasutoshi and Kanai, Sekitoshi and Kumagai, Atsutoshi and Iwata, Tomoharu and Fujiwara, Yasuhiro},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1674},
  url       = {https://mlanthology.org/neurips/2024/ida2024neurips-fast/}
}