Derivative-Free Optimization via Classification

Abstract

Many randomized heuristic derivative-free optimization methods share a framework that iteratively learns a model of promising search areas and samples solutions from that model. This paper studies a particular setting of such a framework, where the model is implemented by a classification model that discriminates good solutions from bad ones. This setting admits a general theoretical characterization, in which factors critical to the optimization are identified. We also prove that optimization problems with local Lipschitz continuity can be solved in polynomial time by proper configurations of this framework. Guided by these critical factors, we propose the randomized coordinate shrinking classification algorithm to learn the model, yielding the RACOS algorithm for optimization in continuous and discrete domains. Experiments on test functions as well as on machine learning tasks, including spectral clustering and classification with the ramp loss, demonstrate the effectiveness of RACOS.
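For intuition, below is a minimal Python sketch of the classification-based optimization loop the abstract describes: current samples are labeled good or bad, an axis-aligned box plays the role of the classifier, and randomized coordinate shrinking learns a box that covers a good solution while excluding the bad ones. This is an illustration distilled from the abstract, not the authors' exact RACOS procedure (see the paper for the precise algorithm and its discrete-domain variant); the function name, the parameters sample_size, positive_size, and explore_prob, and all defaults are assumptions made for the sketch.

import random

def racos_style_minimize(f, dim, lo, hi, budget=500, sample_size=10,
                         positive_size=2, explore_prob=0.1):
    """Illustrative classification-based derivative-free minimizer.

    Each iteration labels the current samples as good/bad, learns an
    axis-aligned box covering a good solution but no bad ones via
    randomized coordinate shrinking, and samples mostly from that box.
    (A sketch based on the paper's abstract, not the exact RACOS.)
    """
    # Initial uniformly random population, stored as (value, solution) pairs.
    pop = []
    for _ in range(sample_size):
        x = [random.uniform(lo, hi) for _ in range(dim)]
        pop.append((f(x), x))
    pop.sort(key=lambda p: p[0])
    evals = sample_size
    while evals < budget:
        good = [x for _, x in pop[:positive_size]]   # "positive" class
        bad = [x for _, x in pop[positive_size:]]    # "negative" class
        anchor = random.choice(good)   # the box must keep covering this point
        box = [[lo, hi] for _ in range(dim)]         # start from the full box
        for b in bad:
            # Shrink random coordinates until this bad point falls outside.
            while all(box[i][0] <= b[i] <= box[i][1] for i in range(dim)):
                i = random.randrange(dim)
                if anchor[i] == b[i]:
                    continue           # this coordinate cannot separate them
                r = random.uniform(min(anchor[i], b[i]), max(anchor[i], b[i]))
                if b[i] > anchor[i]:
                    box[i][1] = min(box[i][1], r)    # lower the upper bound
                else:
                    box[i][0] = max(box[i][0], r)    # raise the lower bound
        if random.random() < explore_prob:
            x = [random.uniform(lo, hi) for _ in range(dim)]  # explore globally
        else:
            x = [random.uniform(l, u) for l, u in box]        # exploit the box
        pop.append((f(x), x))
        evals += 1
        pop.sort(key=lambda p: p[0])
        pop.pop()                      # keep the population size fixed
    return pop[0]

# Example: the 5-dimensional sphere function; the result should be near 0.
best_value, best_x = racos_style_minimize(lambda x: sum(v * v for v in x),
                                          dim=5, lo=-1.0, hi=1.0)
print(best_value)

Sampling occasionally from the whole search box preserves global exploration, while sampling from the learned box exploits the positive region identified by the classifier; the abstract's "critical factors" concern exactly how such learned regions must behave for polynomial-time convergence.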

Cite

Text

Yu et al. "Derivative-Free Optimization via Classification." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10289

Markdown

[Yu et al. "Derivative-Free Optimization via Classification." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/yu2016aaai-derivative/) doi:10.1609/AAAI.V30I1.10289

BibTeX

@inproceedings{yu2016aaai-derivative,
  title     = {{Derivative-Free Optimization via Classification}},
  author    = {Yu, Yang and Qian, Hong and Hu, Yi-Qi},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {2286--2292},
  doi       = {10.1609/AAAI.V30I1.10289},
  url       = {https://mlanthology.org/aaai/2016/yu2016aaai-derivative/}
}