Knowledge Compilation to Speed up Numerical Optimization
Abstract
Many important application problems can be formalized as constrained non-linear optimization tasks. However, numerical methods for solving such problems are brittle and do not scale well. This paper describes a method to speed up and increase the reliability of numerical optimization by (a) optimizing the computation of the objective function, and (b) splitting the objective function into special cases that possess differentiable closed forms. This allows us to replace a single inefficient non-gradient-based optimization by a set of efficient numerical gradient-directed optimizations that can be performed in parallel. In the domain of 2-dimensional structural design, this procedure yields a 95% speedup over traditional optimization methods and decreases the dependence of the numerical methods on having a good starting point.
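The core idea of the abstract can be sketched in a few lines. The following toy example is illustrative and not from the paper: a piecewise objective `min(f1, f2)` is non-smooth overall, but each case has a differentiable closed form, so we run an independent gradient descent per case (these runs could execute in parallel) and keep the best local optimum. The specific functions, step size, and starting point are all assumptions made for the sketch.

```python
# Hedged sketch of the case-splitting strategy: optimize each smooth
# special case separately with a gradient method, then take the best.

def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Plain fixed-step gradient descent on a smooth 1-D function."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Two hypothetical "special cases" of the objective, each with a
# differentiable closed form (f, gradient-of-f) -- illustrative only.
cases = [
    (lambda x: (x - 2.0) ** 2 + 1.0, lambda x: 2.0 * (x - 2.0)),
    (lambda x: (x + 3.0) ** 2 + 0.5, lambda x: 2.0 * (x + 3.0)),
]

# Solve each case independently, then select the best local optimum;
# this replaces one non-smooth search with several smooth ones.
results = []
for f, grad in cases:
    x_star = gradient_descent(grad, x0=0.0)
    results.append((f(x_star), x_star))

best_f, best_x = min(results)
print(best_f, best_x)  # best case wins: minimum 0.5 near x = -3
```

In this sketch the second case dominates, so the combined answer is its optimum; the gradient runs share no state, which is what makes the parallel dispatch in the abstract possible.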
Cite
Text
Cerbone and Dietterich. "Knowledge Compilation to Speed up Numerical Optimization." International Conference on Machine Learning, 1991. doi:10.1016/B978-1-55860-200-7.50122-7
Markdown
[Cerbone and Dietterich. "Knowledge Compilation to Speed up Numerical Optimization." International Conference on Machine Learning, 1991.](https://mlanthology.org/icml/1991/cerbone1991icml-knowledge/) doi:10.1016/B978-1-55860-200-7.50122-7
BibTeX
@inproceedings{cerbone1991icml-knowledge,
title = {{Knowledge Compilation to Speed up Numerical Optimization}},
author = {Cerbone, Giuseppe and Dietterich, Thomas G.},
booktitle = {International Conference on Machine Learning},
year = {1991},
pages = {600-604},
doi = {10.1016/B978-1-55860-200-7.50122-7},
url = {https://mlanthology.org/icml/1991/cerbone1991icml-knowledge/}
}