Online Learning with Non-Convex Losses and Non-Stationary Regret
Abstract
In this paper, we consider online learning with non-convex loss functions. Similar to Besbes et al. [2015], we adopt non-stationary regret as the performance metric. In particular, we study the regret bounds under different assumptions on the information available regarding the loss functions. When the gradient of the loss function at the decision point is available, we propose an online normalized gradient descent algorithm (ONGD) to solve the online learning problem. When only the value of the loss function is available, we propose a bandit online normalized gradient descent algorithm (BONGD). Under a condition we call weak pseudo-convexity (WPC), we show that both algorithms achieve a cumulative regret bound of $O(\sqrt{T + V_T T})$, where $V_T$ is the total temporal variation of the loss functions, thus establishing a sublinear regret bound for online learning with non-convex loss functions and a non-stationary regret measure.
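To make the full-information setting concrete, the sketch below implements a generic online normalized gradient descent loop: at each round the learner moves along the normalized gradient of the current loss and projects back onto the feasible set. The step size, the ball-shaped feasible set, and the drifting quadratic test losses are illustrative assumptions for this sketch, not the paper's exact choices; the bandit variant (BONGD), which replaces the gradient with a loss-value-based estimate, is not shown.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto a ball of the given radius (assumed feasible set).
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def ongd(grad_fns, x0, eta=0.1, radius=1.0):
    """Online normalized gradient descent (illustrative sketch).

    grad_fns: sequence of callables; grad_fns[t](x) returns the gradient of the
              round-t loss at the current decision x (full-information feedback).
    """
    x = np.asarray(x0, dtype=float)
    decisions = []
    for grad in grad_fns:
        decisions.append(x.copy())
        g = grad(x)
        g_norm = np.linalg.norm(g)
        if g_norm > 0:
            # Step along the *normalized* gradient, then project back onto the set.
            x = project_ball(x - eta * g / g_norm, radius)
    return decisions

# Tiny usage example with drifting quadratic losses f_t(x) = ||x - c_t||^2
# (hypothetical losses used only to exercise the loop).
if __name__ == "__main__":
    T = 50
    centers = [np.array([np.cos(0.1 * t), np.sin(0.1 * t)]) for t in range(T)]
    grads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
    xs = ongd(grads, x0=np.zeros(2), eta=0.2)
    print("final decision:", xs[-1])
```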
Cite
Text
Gao et al. "Online Learning with Non-Convex Losses and Non-Stationary Regret." International Conference on Artificial Intelligence and Statistics, 2018.Markdown
[Gao et al. "Online Learning with Non-Convex Losses and Non-Stationary Regret." International Conference on Artificial Intelligence and Statistics, 2018.](https://mlanthology.org/aistats/2018/gao2018aistats-online/)BibTeX
@inproceedings{gao2018aistats-online,
title = {{Online Learning with Non-Convex Losses and Non-Stationary Regret}},
author = {Gao, Xiang and Li, Xiaobo and Zhang, Shuzhong},
booktitle = {International Conference on Artificial Intelligence and Statistics},
year = {2018},
pages = {235-243},
url = {https://mlanthology.org/aistats/2018/gao2018aistats-online/}
}