Explicit and Implicit Graduated Optimization in Deep Neural Networks
Abstract
Graduated optimization is a global optimization technique for minimizing a multimodal nonconvex function: the objective is smoothed with noise, and the solution is gradually refined as the smoothing is reduced. This paper experimentally evaluates the performance of the explicit graduated optimization algorithm with the optimal noise schedule derived in a previous study and discusses its limitations. The evaluation uses traditional benchmark functions and empirical loss functions for modern neural network architectures. In addition, this paper extends the implicit graduated optimization algorithm, which is based on the fact that stochastic noise in the optimization process of SGD implicitly smooths the objective function, to SGD with momentum, analyzes its convergence, and demonstrates its effectiveness through experiments on image classification tasks with ResNet architectures.
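The explicit variant described above can be sketched as follows: descend on a sequence of Gaussian-smoothed surrogates of the objective with a decreasing smoothing width, then polish on the unsmoothed objective. This is a minimal illustration, not the paper's algorithm; the noise schedule, the antithetic Monte Carlo gradient estimator, and the toy objective are all assumptions chosen for the sketch.

```python
import math
import random

def graduated_minimize(f, x0, sigmas=(3.0, 1.0, 0.3), lr=0.1,
                       steps=300, samples=64, seed=0):
    """Sketch of explicit graduated optimization in 1-D: gradient descent
    on Gaussian-smoothed surrogates f_sigma(x) = E[f(x + sigma*u)] with a
    shrinking sigma, followed by a polish on the original f."""
    rng = random.Random(seed)
    x = x0
    for sigma in sigmas:
        for _ in range(steps):
            # Antithetic Monte Carlo estimate of grad f_sigma(x), u ~ N(0, 1).
            g = 0.0
            for _ in range(samples):
                u = rng.gauss(0.0, 1.0)
                g += (f(x + sigma * u) - f(x - sigma * u)) * u / (2.0 * sigma)
            x -= lr * g / samples
    eps = 1e-5
    for _ in range(steps):  # final polish on the unsmoothed objective
        x -= lr * (f(x + eps) - f(x - eps)) / (2.0 * eps)
    return x

# Toy multimodal objective (an assumption for this sketch): global minimum
# near x = -1.43, with a spurious local minimum near x = -7.07.
f = lambda x: x * x / 20.0 + math.sin(x)
x_star = graduated_minimize(f, x0=-7.0)
```

Started inside the basin of the spurious local minimum, plain gradient descent would stall there; the large initial smoothing washes out the oscillations, so the iterate first follows the quadratic trend and then settles into the global basin as the smoothing shrinks.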
Cite
Text
Sato and Iiduka. "Explicit and Implicit Graduated Optimization in Deep Neural Networks." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I19.34234
Markdown
[Sato and Iiduka. "Explicit and Implicit Graduated Optimization in Deep Neural Networks." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/sato2025aaai-explicit/) doi:10.1609/AAAI.V39I19.34234
BibTeX
@inproceedings{sato2025aaai-explicit,
title = {{Explicit and Implicit Graduated Optimization in Deep Neural Networks}},
author = {Sato, Naoki and Iiduka, Hideaki},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {20283--20291},
doi = {10.1609/AAAI.V39I19.34234},
url = {https://mlanthology.org/aaai/2025/sato2025aaai-explicit/}
}