Open Problem: Black-Box Reductions and Adaptive Gradient Methods for Nonconvex Optimization
Abstract
We describe an open problem: reduce offline nonconvex stochastic optimization to regret minimization in online convex optimization. The conjectured reduction aims to make progress on explaining the success of adaptive gradient methods for deep learning. A prize of 500 dollars is offered to the winner.
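To make the conjectured reduction concrete, here is a minimal illustrative sketch in the spirit of known online-to-nonconvex conversions: an online convex optimization learner plays small displacements against linear losses built from stochastic gradients, and low regret forces windowed average gradients to be small. Everything below (the toy objective, oracle, and all parameter choices) is an assumption for illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_oracle(x):
    """Stochastic gradient of a toy nonconvex objective (illustrative):
    f(x) = sum_i (x_i^2 - 1)^2 / 4, plus small Gaussian noise."""
    return x * (x**2 - 1) + 0.01 * rng.standard_normal(x.shape)

def online_to_nonconvex(x0, T=5000, window=10, eta=0.05, radius=0.05):
    """Sketch of an online-to-offline reduction: an OCO learner (here,
    online gradient descent on the linear losses l_t(d) = <g_t, d>)
    proposes displacements d_t, and the offline iterate moves by
    x_{t+1} = x_t + d_t.  Low regret of the learner makes the windowed
    average gradients small, yielding an approximate stationary point."""
    x, d = x0.copy(), np.zeros_like(x0)
    g_avg, anchor = np.zeros_like(x0), x0.copy()
    best_norm, best_x = np.inf, x0.copy()
    for t in range(1, T + 1):
        g = grad_oracle(x)               # stochastic gradient at current point
        g_avg += g / window              # running average over the window
        x = x + d                        # environment applies the learner's play
        d = d - eta * g                  # OGD step on the linear loss <g, .>
        n = np.linalg.norm(d)
        if n > radius:                   # project back onto the ball ||d|| <= radius
            d *= radius / n
        if t % window == 0:              # end of window: record averaged gradient
            if np.linalg.norm(g_avg) < best_norm:
                best_norm, best_x = np.linalg.norm(g_avg), anchor.copy()
            g_avg, anchor = np.zeros_like(x0), x.copy()
    return best_x, best_norm

x_hat, gnorm = online_to_nonconvex(rng.standard_normal(10))
print(f"approx. stationary point found, windowed grad norm ~ {gnorm:.3f}")
```

The open problem asks whether a fully black-box reduction of this flavor can explain adaptive gradient methods: instantiating the inner learner with an adaptive OCO algorithm would, under the conjecture, transfer its regret guarantees to the nonconvex setting.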
Cite
Text
Chen and Hazan. "Open Problem: Black-Box Reductions and Adaptive Gradient Methods for Nonconvex Optimization." Conference on Learning Theory, 2024.

Markdown

[Chen and Hazan. "Open Problem: Black-Box Reductions and Adaptive Gradient Methods for Nonconvex Optimization." Conference on Learning Theory, 2024.](https://mlanthology.org/colt/2024/chen2024colt-open/)

BibTeX
@inproceedings{chen2024colt-open,
  title     = {{Open Problem: Black-Box Reductions and Adaptive Gradient Methods for Nonconvex Optimization}},
  author    = {Chen, Xinyi and Hazan, Elad},
  booktitle = {Conference on Learning Theory},
  year      = {2024},
  pages     = {5317--5324},
  volume    = {247},
  url       = {https://mlanthology.org/colt/2024/chen2024colt-open/}
}