A Study on Trust Region Update Rules in Newton Methods for Large-Scale Linear Classification
Abstract
The main task in training a linear classifier is to solve an unconstrained minimization problem. To apply an optimization method, we typically iterate between finding a good direction and deciding a suitable step size. Past work on extending optimization methods to large-scale linear classification has focused on finding the direction, while little attention has been paid to adjusting the step size. In this work, we explain that inappropriate step-size adjustment may lead to seriously slow convergence. Of the two major approaches to step-size selection, line search and trust region, we focus on investigating trust region methods. After presenting a detailed analysis, we develop novel and effective techniques to adjust the trust-region size. Experiments indicate that our new settings significantly outperform existing implementations for large-scale linear classification.
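For orientation, the step-size adjustment the abstract refers to is the classic ratio-based trust-region update: compare the actual objective reduction against the reduction predicted by the quadratic model, then expand or shrink the trust-region radius accordingly. The sketch below illustrates that standard rule only; the threshold constants, the boundary-clipped Newton step, and all names are illustrative assumptions, not the paper's proposed update rules.

```python
import numpy as np

def trust_region_newton(f, grad, hess, x0, delta0=1.0, max_iter=50, tol=1e-6):
    """Minimal trust-region Newton loop with the classic ratio-based
    radius update. Constants below are common illustrative choices."""
    x, delta = x0.astype(float), delta0
    eta0, eta1, eta2 = 1e-4, 0.25, 0.75   # acceptance / agreement thresholds
    sigma_shrink, sigma_expand = 0.25, 4.0  # radius adjustment factors
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        # Newton direction, clipped to the trust-region boundary.
        s = np.linalg.solve(H, -g)
        if np.linalg.norm(s) > delta:
            s *= delta / np.linalg.norm(s)
        # Predicted reduction from the quadratic model q(s) = g's + 0.5 s'Hs.
        pred = -(g @ s + 0.5 * s @ H @ s)
        actual = f(x) - f(x + s)
        rho = actual / pred  # agreement between model and true objective
        if rho > eta0:       # sufficient decrease: accept the step
            x = x + s
        # Adjust the trust-region size based on rho.
        if rho < eta1:
            delta *= sigma_shrink   # poor agreement: shrink
        elif rho > eta2:
            delta *= sigma_expand   # good agreement: expand
    return x

# Usage on a small convex quadratic (hypothetical example problem).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x_star = trust_region_newton(lambda x: 0.5 * x @ A @ x - b @ x,
                             lambda x: A @ x - b,
                             lambda x: A, np.zeros(2))
```

The key design point, and the lever the paper studies, is how aggressively the radius reacts to rho; overly conservative shrinking or expansion of delta is what can cause the slow convergence described above.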
Cite
Text
Hsia et al. "A Study on Trust Region Update Rules in Newton Methods for Large-Scale Linear Classification." Proceedings of the Ninth Asian Conference on Machine Learning, 2017.
Markdown
[Hsia et al. "A Study on Trust Region Update Rules in Newton Methods for Large-Scale Linear Classification." Proceedings of the Ninth Asian Conference on Machine Learning, 2017.](https://mlanthology.org/acml/2017/hsia2017acml-study/)
BibTeX
@inproceedings{hsia2017acml-study,
title = {{A Study on Trust Region Update Rules in Newton Methods for Large-Scale Linear Classification}},
author = {Hsia, Chih-Yang and Zhu, Ya and Lin, Chih-Jen},
booktitle = {Proceedings of the Ninth Asian Conference on Machine Learning},
year = {2017},
pages = {33--48},
volume = {77},
url = {https://mlanthology.org/acml/2017/hsia2017acml-study/}
}