Learning to Optimize in Swarms
Abstract
Learning to optimize has emerged as a powerful framework for various optimization and machine learning tasks. Existing meta-optimizers typically learn in the space of continuous optimization algorithms that are point-based and uncertainty-unaware. To overcome these limitations, we propose a meta-optimizer that learns in the algorithmic space of both point-based and population-based optimization algorithms. The meta-optimizer targets a meta-loss function consisting of both cumulative regret and entropy. Specifically, we learn and interpret the update formula through a population of LSTMs embedded with sample- and feature-level attentions. Meanwhile, we estimate the posterior directly over the global optimum and use an uncertainty measure to help guide the learning process. Empirical results on non-convex test functions and the protein-docking application demonstrate that this new meta-optimizer outperforms existing competitors. The code is publicly available at: https://github.com/Shen-Lab/LOIS
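The meta-loss described above combines cumulative regret over the optimization trajectory with an entropy term that rewards exploration. As a minimal illustrative sketch (not the paper's exact formulation; the regret baseline, entropy estimates, and the weight `lam` are all assumptions for illustration):

```python
import numpy as np

def meta_loss(f, trajectory, entropies, lam=0.1):
    """Illustrative meta-loss: cumulative regret along the trajectory
    plus an entropy bonus. `lam` (entropy weight) is an assumed
    hyperparameter, not taken from the paper."""
    # Cumulative regret: objective values accumulated relative to the
    # best value seen so far (a stand-in for the unknown optimum f*).
    best = np.inf
    regret = 0.0
    for x in trajectory:
        val = f(x)
        best = min(best, val)
        regret += val - best
    # Entropy term: higher average entropy of the sampling distribution
    # lowers the loss, encouraging the learned optimizer to explore.
    return regret - lam * float(np.mean(entropies))
```

In the paper's setting the trajectory would come from the population of LSTM-driven samplers and the entropy from the estimated posterior over the global optimum; here both are simply passed in as arrays.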
Cite
Text
Cao et al. "Learning to Optimize in Swarms." Neural Information Processing Systems, 2019.
Markdown
[Cao et al. "Learning to Optimize in Swarms." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/cao2019neurips-learning-a/)
BibTeX
@inproceedings{cao2019neurips-learning-a,
title = {{Learning to Optimize in Swarms}},
author = {Cao, Yue and Chen, Tianlong and Wang, Zhangyang and Shen, Yang},
booktitle = {Neural Information Processing Systems},
year = {2019},
pages = {15044--15054},
url = {https://mlanthology.org/neurips/2019/cao2019neurips-learning-a/}
}