Learning to Sample in Stochastic Optimization
Abstract
We consider a PAC-Bayes analysis of stochastic optimization algorithms, and devise a new SGDA algorithm inspired by our bounds. Our algorithm learns a data-dependent sampling scheme along with the model parameters, which may be seen as assigning a probability to each training point. We demonstrate that learning the sampling scheme increases robustness against misleading training points, as our algorithm learns to avoid bad examples during training. We conduct experiments on both standard and adversarial learning problems over several benchmark datasets, and demonstrate various applications, including interpretability upon visual inspection and robustness to the ill effects of bad training points. We also extend our analysis to pairwise SGD to demonstrate the generalizability of our methodology.
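The idea of jointly learning per-example sampling probabilities alongside model parameters can be illustrated with a minimal sketch. This is an illustrative interpretation, not the authors' exact algorithm: it keeps a logit per training point, samples minibatches from the softmax of those logits, and down-weights examples that incur high loss, so mislabeled points are sampled less often over time. All names and update rules below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification with some mislabeled ("bad") points.
n, d = 200, 5
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)
bad = rng.choice(n, size=20, replace=False)
y[bad] = 1.0 - y[bad]          # flip labels to simulate misleading points

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)                # model parameters
s = np.zeros(n)                # per-example sampling logits (hypothetical scheme)
lr_w, lr_s, batch = 0.1, 0.5, 16

for step in range(2000):
    p = np.exp(s - s.max()); p /= p.sum()        # softmax sampling distribution
    idx = rng.choice(n, size=batch, p=p)
    xb, yb = X[idx], y[idx]
    pred = sigmoid(xb @ w)
    # SGD step on the model using the sampled minibatch.
    w -= lr_w * xb.T @ (pred - yb) / batch
    # Decrease logits of examples with above-average loss, so persistently
    # hard (e.g. mislabeled) points are sampled with lower probability.
    losses = -(yb * np.log(pred + 1e-12) + (1 - yb) * np.log(1 - pred + 1e-12))
    s[idx] -= lr_s * (losses - losses.mean())

p = np.exp(s - s.max()); p /= p.sum()
print("mean sampling prob (bad points):  ", p[bad].mean())
print("mean sampling prob (clean points):", np.delete(p, bad).mean())
```

In this toy run the label-flipped points accumulate high loss and end up with a lower average sampling probability than the clean points, mirroring the paper's claim that a learned sampling scheme can avoid bad examples during training.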
Cite
Text
Zhou et al. "Learning to Sample in Stochastic Optimization." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.
Markdown
[Zhou et al. "Learning to Sample in Stochastic Optimization." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.](https://mlanthology.org/uai/2025/zhou2025uai-learning/)
BibTeX
@inproceedings{zhou2025uai-learning,
title = {{Learning to Sample in Stochastic Optimization}},
author = {Zhou, Sijia and Lei, Yunwen and Kaban, Ata},
booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
year = {2025},
pages = {5099--5115},
volume = {286},
url = {https://mlanthology.org/uai/2025/zhou2025uai-learning/}
}