An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-Driven Molecule Optimization
Abstract
Molecule optimization is an important problem in chemical discovery and has been approached with many techniques, including generative modeling, reinforcement learning, and genetic algorithms, among others. Recent work has also applied zeroth-order (ZO) optimization, a subset of gradient-free optimization that solves problems in a manner similar to gradient-based methods, to optimize latent vector representations from an autoencoder. In this paper, we study the effectiveness of various ZO optimization methods for optimizing molecular objectives, which are characterized by variable smoothness, infrequent optima, and other challenges. We provide insights into the robustness of various ZO optimizers in this setting, show the advantages of ZO sign-based gradient descent (ZO-signGD), discuss how ZO optimization can be used practically in realistic discovery tasks, and demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
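For readers unfamiliar with the approach, the sketch below (not the paper's implementation) illustrates the general idea: a black-box molecular score f evaluated on an autoencoder latent vector z is optimized with a randomized finite-difference gradient estimate followed by a sign-based update, in the spirit of ZO-signGD. The toy objective, latent dimension, step size, and query budget are all illustrative assumptions.

import numpy as np

def zo_gradient_estimate(f, z, mu=0.01, q=10):
    # Standard randomized finite-difference ZO gradient estimator:
    # average q directional difference quotients along random directions u.
    d = z.shape[0]
    grad = np.zeros(d)
    for _ in range(q):
        u = np.random.randn(d)                      # random perturbation direction
        grad += (f(z + mu * u) - f(z)) / mu * u
    return grad / q

def zo_sign_gd(f, z0, lr=0.1, steps=100, mu=0.01, q=10):
    # ZO-signGD-style ascent: step along the sign of the estimated gradient
    # (ascent, since molecular objectives are typically maximized).
    z = z0.copy()
    for _ in range(steps):
        g = zo_gradient_estimate(f, z, mu, q)
        z += lr * np.sign(g)
    return z

if __name__ == "__main__":
    # Toy stand-in for a molecular score over a 32-dimensional latent vector.
    f = lambda z: -np.sum((z - 1.0) ** 2)
    z_opt = zo_sign_gd(f, np.zeros(32))
    print(f(z_opt))                                 # should approach 0 (the maximum)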
Cite
Text
Lo and Chen. "An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-Driven Molecule Optimization." NeurIPS 2022 Workshops: AI4Science, 2022.

Markdown

[Lo and Chen. "An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-Driven Molecule Optimization." NeurIPS 2022 Workshops: AI4Science, 2022.](https://mlanthology.org/neuripsw/2022/lo2022neuripsw-empirical/)

BibTeX
@inproceedings{lo2022neuripsw-empirical,
title = {{An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-Driven Molecule Optimization}},
author = {Lo, Elvin and Chen, Pin-Yu},
booktitle = {NeurIPS 2022 Workshops: AI4Science},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/lo2022neuripsw-empirical/}
}