Derivative-Free Optimization of High-Dimensional Non-Convex Functions by Sequential Random Embeddings
Abstract
Derivative-free optimization methods are suitable for sophisticated optimization problems, but are hard to scale to high dimensionality (e.g., larger than 1,000). Previously, the random embedding technique has been shown to be successful for solving high-dimensional problems with low effective dimensions. However, it is unrealistic to assume a low effective dimension in many applications. This paper instead studies high-dimensional problems with low optimal epsilon-effective dimensions, which allow all dimensions to be effective but require many of them to have only a small, bounded effect. We characterize the properties of random embedding for this class of problems, and propose sequential random embeddings (SRE) to reduce the embedding gap while running optimization algorithms in the low-dimensional spaces. We apply SRE to several state-of-the-art derivative-free optimization methods, and conduct experiments on synthetic functions as well as non-convex classification tasks with up to 100,000 variables. Experimental results verify the effectiveness of SRE.
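To make the idea concrete, below is a minimal, hedged sketch of sequential random embeddings: at each stage a fresh random matrix A in R^(D x d) is drawn, a derivative-free optimizer searches the low-dimensional space over (alpha, y), and the high-dimensional candidate alpha * x_prev + A y carries the previous stage's solution forward. The objective, the plain random-search optimizer, and all parameter values here are illustrative assumptions, not the paper's exact algorithm or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def shifted_sphere(x):
    # Toy high-dimensional objective (an assumption for illustration):
    # every dimension is effective, but each contributes only a small amount.
    return float(np.sum((x - 1.0) ** 2))

def random_search(f, dim, budget, scale=1.0):
    # Minimal derivative-free optimizer: uniform random sampling,
    # standing in for the more capable solvers SRE is applied to.
    best_z, best_v = np.zeros(dim), f(np.zeros(dim))
    for _ in range(budget):
        z = rng.uniform(-scale, scale, dim)
        v = f(z)
        if v < best_v:
            best_z, best_v = z, v
    return best_z, best_v

def sre_minimize(f, D, d=10, stages=5, budget=300):
    # Sequential random embeddings (sketch): each stage optimizes in a
    # (d+1)-dimensional space; z[0] scales the previous solution and
    # z[1:] is mapped up through a fresh random embedding matrix A.
    x, fx = np.zeros(D), f(np.zeros(D))
    for _ in range(stages):
        A = rng.normal(size=(D, d)) / np.sqrt(d)
        def g(z):
            return f(z[0] * x + A @ z[1:])
        z, v = random_search(g, d + 1, budget)
        if v < fx:  # keep the stage result only if it improves
            x, fx = z[0] * x + A @ z[1:], v
    return x, fx

x, fx = sre_minimize(shifted_sphere, D=1000)
```

Because each stage falls back to the previous solution when no improvement is found, the objective value is non-increasing across stages; the sequential composition is what lets later embeddings reduce the gap left by earlier ones.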
Cite
Text
Qian et al. "Derivative-Free Optimization of High-Dimensional Non-Convex Functions by Sequential Random Embeddings." International Joint Conference on Artificial Intelligence, 2016.
Markdown
[Qian et al. "Derivative-Free Optimization of High-Dimensional Non-Convex Functions by Sequential Random Embeddings." International Joint Conference on Artificial Intelligence, 2016.](https://mlanthology.org/ijcai/2016/qian2016ijcai-derivative/)
BibTeX
@inproceedings{qian2016ijcai-derivative,
title = {{Derivative-Free Optimization of High-Dimensional Non-Convex Functions by Sequential Random Embeddings}},
author = {Qian, Hong and Hu, Yi-Qi and Yu, Yang},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2016},
  pages = {1946--1952},
url = {https://mlanthology.org/ijcai/2016/qian2016ijcai-derivative/}
}