ADAM Optimization with Adaptive Batch Selection

Abstract

Adam is a widely used optimizer in neural network training due to its adaptive learning rates. However, because different data samples influence model updates to varying degrees, treating them equally can lead to inefficient convergence. To address this, prior work proposed adapting the sampling distribution using a bandit framework to select samples adaptively. While promising, this bandit-based variant of Adam offers only limited theoretical guarantees. In this paper, we introduce Adam with Combinatorial Bandit Sampling (AdamCB), which integrates combinatorial bandit techniques into Adam to resolve these issues. AdamCB fully utilizes feedback from multiple samples at once, enhancing both theoretical guarantees and practical performance. Our regret analysis shows that AdamCB achieves faster convergence than Adam-based methods, including the previous bandit-based variant. Numerical experiments demonstrate that AdamCB consistently outperforms existing methods.
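
To make the abstract's idea concrete, the sketch below shows one way the described ingredients can fit together: an EXP3-style combinatorial (semi-)bandit keeps a selection weight per training example, a whole mini-batch is drawn from the induced distribution, every drawn sample returns its own feedback (here, its gradient norm) that updates its weight, and the importance-weighted batch gradient feeds a standard Adam step. This is an illustrative sketch only; the toy regression problem, the reward definition, and all hyperparameters are assumptions for the demo, not the actual update rules of AdamCB.

```python
# Minimal illustration (NOT the authors' exact algorithm) of pairing Adam with an
# EXP3-style combinatorial bandit sampler over training examples.
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic regression data (assumption: toy problem for illustration) ---
n_samples, dim = 200, 10
X = rng.normal(size=(n_samples, dim))
w_true = rng.normal(size=dim)
y = X @ w_true + 0.1 * rng.normal(size=n_samples)

# --- Adam state ---
theta = np.zeros(dim)
m = np.zeros(dim)
v = np.zeros(dim)
beta1, beta2, lr, eps = 0.9, 0.999, 0.05, 1e-8

# --- bandit sampler state: one selection weight per training example ---
log_w = np.zeros(n_samples)   # log-weights for numerical stability
gamma = 0.1                   # uniform exploration mixing rate
eta = 0.01                    # bandit learning rate
batch_size = 16

def per_sample_grads(theta, Xb, yb):
    """Gradient of the squared error for each sample in the batch, shape (B, dim)."""
    residual = Xb @ theta - yb           # (B,)
    return 2.0 * residual[:, None] * Xb  # (B, dim)

for t in range(1, 301):
    # Selection distribution: softmax of weights mixed with uniform exploration.
    w = np.exp(log_w - log_w.max())
    p = (1 - gamma) * w / w.sum() + gamma / n_samples

    # Combinatorial action: draw a batch of distinct samples from p.
    batch = rng.choice(n_samples, size=batch_size, replace=False, p=p)
    grads = per_sample_grads(theta, X[batch], y[batch])

    # Importance-weighted gradient estimate so the update stays (roughly) unbiased.
    iw = 1.0 / (n_samples * p[batch])           # (B,)
    g = (iw[:, None] * grads).mean(axis=0)

    # Standard Adam update with bias correction.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    theta -= lr * m_hat / (np.sqrt(v_hat) + eps)

    # Semi-bandit feedback: every selected sample reveals its own "reward"
    # (here, its gradient norm), raising the weight of informative samples.
    reward = np.linalg.norm(grads, axis=1)
    reward = reward / (reward.max() + 1e-12)    # keep rewards in [0, 1]
    log_w[batch] += eta * reward / p[batch]     # EXP3-style importance weighting

print("final parameter error:", np.linalg.norm(theta - w_true))
```

The semi-bandit structure is the key design point this sketch tries to convey: because every element of the selected batch reveals its own feedback, the sampler can update many weights per step rather than one, which is the sense in which feedback from multiple samples is used at once.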

Cite

Text

Kim and Oh. "ADAM Optimization with Adaptive Batch Selection." International Conference on Learning Representations, 2025.

Markdown

[Kim and Oh. "ADAM Optimization with Adaptive Batch Selection." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/kim2025iclr-adam/)

BibTeX

@inproceedings{kim2025iclr-adam,
  title     = {{ADAM Optimization with Adaptive Batch Selection}},
  author    = {Kim, Gyu Yeol and Oh, Min-hwan},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/kim2025iclr-adam/}
}