Descent Methods for Tuning Parameter Refinement

Abstract

This paper addresses multidimensional tuning parameter selection in the context of “train-validate-test” and $K$-fold cross validation. A coarse grid search over tuning parameter space is used to initialize a descent method which then jointly optimizes over variables and tuning parameters. We study four regularized regression methods and develop the update equations for the corresponding descent algorithms. Experiments on both simulated and real-world datasets show that the method results in significant tuning parameter refinement.
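As a rough illustration of the grid-then-descent idea described in the abstract (the paper itself derives exact update equations for four regularized regression methods), the sketch below refines a single ridge penalty by descending on the validation error after a coarse grid search. Ridge regression, the train/validation split, the log-space parameterization, the finite-difference gradient, and all variable names are illustrative assumptions, not the authors' algorithm.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.5 * rng.normal(size=120)

X_tr, y_tr = X[:80], y[:80]    # training split
X_va, y_va = X[80:], y[80:]    # validation split

def ridge_fit(lam):
    # Closed-form ridge solution on the training split (hypothetical stand-in
    # for one of the paper's regularized regression methods).
    d = X_tr.shape[1]
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

def val_error(lam):
    # Validation MSE of the ridge fit at penalty lam.
    r = X_va @ ridge_fit(lam) - y_va
    return r @ r / len(y_va)

# Step 1: a coarse grid search initializes the tuning parameter.
grid = np.logspace(-3, 3, 7)
lam0 = min(grid, key=val_error)

# Step 2: descent refinement of the tuning parameter in log-space, here with
# a finite-difference gradient of the validation error (the paper instead
# develops analytic update equations).
log_lam, step, eps = np.log(lam0), 0.1, 1e-4
for _ in range(200):
    g = (val_error(np.exp(log_lam + eps)) - val_error(np.exp(log_lam - eps))) / (2 * eps)
    log_lam -= step * g

print("grid choice:", lam0, "refined:", float(np.exp(log_lam)))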

Cite

Text

Lorbert and Ramadge. "Descent Methods for Tuning Parameter Refinement." Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.

Markdown

[Lorbert and Ramadge. "Descent Methods for Tuning Parameter Refinement." Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.](https://mlanthology.org/aistats/2010/lorbert2010aistats-descent/)

BibTeX

@inproceedings{lorbert2010aistats-descent,
  title     = {{Descent Methods for Tuning Parameter Refinement}},
  author    = {Lorbert, Alexander and Ramadge, Peter},
  booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
  year      = {2010},
  pages     = {469--476},
  volume    = {9},
  url       = {https://mlanthology.org/aistats/2010/lorbert2010aistats-descent/}
}