Robust Multi-Objective Learning with Mentor Feedback
Abstract
We study decision making when each action is described by a set of objectives, all of which are to be maximized. During the training phase, we have access to the actions of an outside agent (“mentor”). In the test phase, our goal is to maximally improve upon the mentor’s (unobserved) actions across all objectives. We present an algorithm with a vanishing regret compared with the optimal possible improvement, and show that our regret bound is the best possible. The bound is independent of the number of actions, and scales only as the logarithm of the number of objectives.
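For readers who want the guarantee stated concretely, the display below sketches one natural formalization of the setting described in the abstract. The notation (per-objective payoff vectors u_t(a) in [0,1]^d, learner actions a_t, mentor actions m_t, comparator class Pi, horizon T) is ours rather than the paper's, and the paper's exact definitions may differ.

% A hedged LaTeX sketch of the max-min improvement objective (our notation,
% not necessarily the paper's). Each action a at round t yields a payoff
% vector u_t(a) in [0,1]^d, one coordinate per objective; m_t is the
% mentor's (unobserved) action and a_t the learner's. Improvement is
% measured in the worst objective, and regret compares the learner's
% improvement against the best improvement achievable by a comparator
% policy pi in some class Pi, which we leave abstract here:
\[
  \mathrm{Regret}(T)
  \;=\;
  \max_{\pi \in \Pi} \, \min_{1 \le j \le d}
    \frac{1}{T} \sum_{t=1}^{T} \bigl( u_t(\pi_t)_j - u_t(m_t)_j \bigr)
  \;-\;
  \min_{1 \le j \le d}
    \frac{1}{T} \sum_{t=1}^{T} \bigl( u_t(a_t)_j - u_t(m_t)_j \bigr).
\]
% Read in this notation, the abstract's claim is that Regret(T) vanishes as
% T grows, at a rate independent of the number of actions and scaling only
% logarithmically in the number of objectives d.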
Cite
Text
Agarwal et al. "Robust Multi-Objective Learning with Mentor Feedback." Annual Conference on Computational Learning Theory, 2014.

Markdown

[Agarwal et al. "Robust Multi-Objective Learning with Mentor Feedback." Annual Conference on Computational Learning Theory, 2014.](https://mlanthology.org/colt/2014/agarwal2014colt-robust/)

BibTeX
@inproceedings{agarwal2014colt-robust,
title = {{Robust Multi-Objective Learning with Mentor Feedback}},
author = {Agarwal, Alekh and Badanidiyuru, Ashwinkumar and Dudík, Miroslav and Schapire, Robert E. and Slivkins, Aleksandrs},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2014},
pages = {726--741},
url = {https://mlanthology.org/colt/2014/agarwal2014colt-robust/}
}