Memory-Based Stochastic Optimization
Abstract
In this paper we introduce new algorithms for optimizing noisy plants in which each experiment is very expensive. The algorithms build a global non-linear model of the expected output at the same time as using Bayesian linear regression analysis of locally weighted polynomial models. The local model answers queries about confidence, noise, gradients and Hessians, and uses them to make automated decisions similar to those made by a practitioner of Response Surface Methodology. The global and local models are combined naturally as a locally weighted regression. We examine the question of whether the global model can really help optimization, and we extend it to the case of time-varying functions. We compare the new algorithms with a highly tuned higher-order stochastic optimization algorithm on randomly-generated functions and a simulated manufacturing task. We note significant improvements in total regret, time to converge, and final solution quality.
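The locally weighted polynomial models that the abstract describes can be illustrated with a minimal sketch of locally weighted linear regression: every stored experiment is weighted by a Gaussian kernel of its distance to the query point, and a weighted least-squares fit yields a local prediction (with `beta[1:]` estimating the local gradient). This is an illustrative building block only, not the paper's full Bayesian treatment; the function name, the fixed bandwidth, and the test function are assumptions for the example.

```python
import numpy as np

def lwr_predict(X, y, x_q, bandwidth=0.1):
    """Locally weighted linear regression prediction at query point x_q.

    Illustrative sketch; the bandwidth is a fixed assumption here,
    not the adaptive, Bayesian scheme of the paper.
    """
    # Gaussian kernel weight for each stored data point, centred on the query
    w = np.exp(-np.sum((X - x_q) ** 2, axis=1) / (2.0 * bandwidth ** 2))
    # Design matrix with an intercept column for the local linear model
    A = np.hstack([np.ones((len(X), 1)), X])
    # Weighted least squares, implemented by rescaling rows with sqrt-weights
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    # beta[0] is the local offset; beta[1:] estimates the local gradient
    return np.concatenate(([1.0], x_q)) @ beta

# Noiseless samples of f(x) = x^2; the local fit should recover ~0.25 at x = 0.5
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = X[:, 0] ** 2
print(lwr_predict(X, y, np.array([0.5])))  # close to 0.25
```

Higher-order local polynomials (and a noise model on `y`) extend this same weighted fit to the confidence, noise, and Hessian queries the abstract mentions.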
Moore and Schneider. "Memory-Based Stochastic Optimization." Neural Information Processing Systems, 1995.
@inproceedings{moore1995neurips-memorybased,
title = {{Memory-Based Stochastic Optimization}},
author = {Moore, Andrew W. and Schneider, Jeff G.},
booktitle = {Neural Information Processing Systems},
year = {1995},
pages = {1066--1072},
url = {https://mlanthology.org/neurips/1995/moore1995neurips-memorybased/}
}