Pairwise Learning with Adaptive Online Gradient Descent
Abstract
In this paper, we propose an adaptive online gradient descent method with momentum for pairwise learning, in which the stepsize is determined by historical information. Due to the pairwise structure, the sample pairs depend on the model parameters, which complicates the convergence analysis. To this end, we develop novel techniques for analyzing the convergence of the proposed algorithm and show that it outputs the desired solution in the strongly convex, convex, and nonconvex cases. Furthermore, we present theoretical explanations for why our proposed algorithm accelerates previous workhorses for online pairwise learning. All assumptions used in the theoretical analysis are mild and common, so our results apply to a wide range of pairwise learning problems. To demonstrate the efficiency of our algorithm, we compare the proposed adaptive method with its non-adaptive counterpart on the benchmark online AUC maximization problem.
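To make the setting concrete, below is a minimal Python sketch of adaptive online gradient descent with momentum for online pairwise learning (here instantiated for AUC maximization). The AdaGrad-style stepsize built from accumulated gradient norms, the pairwise squared hinge surrogate, and the function names are illustrative assumptions for this sketch, not necessarily the paper's exact update rule.

```python
import numpy as np

def pairwise_grad(w, z_new, z_prev):
    """Gradient of a pairwise squared hinge surrogate for AUC on one pair.
    z = (x, y) with y in {+1, -1}; only opposite-label pairs contribute."""
    (x_new, y_new), (x_old, y_old) = z_new, z_prev
    if y_new == y_old:
        return np.zeros_like(w)
    # Orient the pair so `pos` is the positive example.
    pos, neg = (x_new, x_old) if y_new > y_old else (x_old, x_new)
    margin = 1.0 - w @ (pos - neg)
    return -2.0 * margin * (pos - neg) if margin > 0 else np.zeros_like(w)

def adaptive_ogd_momentum(stream, dim, eta=1.0, beta=0.9, eps=1e-8):
    """Illustrative adaptive OGD with momentum for online pairwise learning.

    The stepsize is scaled by accumulated squared gradient norms
    ("historical information"); `beta` weights the momentum buffer.
    """
    w = np.zeros(dim)
    m = np.zeros(dim)          # momentum buffer
    g_hist = 0.0               # accumulated squared gradient norms
    buffer = []                # previously seen examples
    for z_t in stream:
        # Average pairwise gradient of the new example against the buffer;
        # the pairs therefore depend on the current parameters w.
        g = np.zeros(dim)
        if buffer:
            for z_i in buffer:
                g += pairwise_grad(w, z_t, z_i)
            g /= len(buffer)
        g_hist += g @ g
        step = eta / np.sqrt(g_hist + eps)   # adaptive stepsize from history
        m = beta * m + (1.0 - beta) * g      # momentum update
        w = w - step * m
        buffer.append(z_t)
    return w
```

As a usage example, `adaptive_ogd_momentum(((x, y) for x, y in zip(X, labels)), dim=X.shape[1])` runs the sketch over a feature matrix `X` with labels in {+1, -1}; setting `eta` to a fixed constant and dropping the `g_hist` scaling recovers the non-adaptive baseline referenced in the abstract's comparison.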
Cite
Text
Sun et al. "Pairwise Learning with Adaptive Online Gradient Descent." Transactions on Machine Learning Research, 2023.
Markdown
[Sun et al. "Pairwise Learning with Adaptive Online Gradient Descent." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/sun2023tmlr-pairwise/)
BibTeX
@article{sun2023tmlr-pairwise,
title = {{Pairwise Learning with Adaptive Online Gradient Descent}},
author = {Sun, Tao and Wang, Qingsong and Lei, Yunwen and Li, Dongsheng and Wang, Bao},
journal = {Transactions on Machine Learning Research},
year = {2023},
url = {https://mlanthology.org/tmlr/2023/sun2023tmlr-pairwise/}
}