Contextual Markov Decision Processes Using Generalized Linear Models
Abstract
We consider the recently proposed reinforcement learning (RL) framework of Contextual Markov Decision Processes (CMDP), where the agent has a sequence of episodic interactions with tabular environments chosen from a possibly infinite set. The parameters of these environments depend on a context vector that is available to the agent at the start of each episode. In this paper, we propose a no-regret online RL algorithm for the setting where the MDP parameters are obtained from the context using generalized linear models (GLMs). The proposed algorithm, GL-ORL, relies on efficient online updates and is also memory efficient. Our analysis of the algorithm gives new results in the logit link case and improves previous bounds in the linear case. Our work is theoretical and primarily focuses on regret bounds, but we also aim to highlight the ubiquitous sequential decision-making problem of learning generalizable policies for a population of individuals.
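To make the setting concrete, the sketch below shows one way "MDP parameters obtained from the context using GLMs" can be realized: a multinomial-logit (softmax) link mapping each episode's context vector to tabular transition probabilities. This is an illustrative assumption for the logit link case mentioned in the abstract, not the paper's code; the names (`theta`, `transition_probs`, the toy dimensions) are hypothetical.

```python
import numpy as np

def transition_probs(theta, context):
    """Multinomial-logit (softmax) link from context to transitions.

    theta:   array of shape (S, A, S, d), one weight vector per (s, a, s').
    context: array of shape (d,), observed at the start of an episode.
    Returns P with P[s, a, :] a probability distribution over next states.
    """
    logits = theta @ context                      # shape (S, A, S)
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=-1, keepdims=True)

# Toy usage: 3 states, 2 actions, 4-dimensional context.
rng = np.random.default_rng(0)
theta = rng.normal(size=(3, 2, 3, 4))
context = rng.normal(size=4)
P = transition_probs(theta, context)
assert np.allclose(P.sum(axis=-1), 1.0)
```

Under this kind of link, an online algorithm such as GL-ORL would maintain estimates of the per-(s, a) weight vectors across episodes and plan optimistically in the induced MDP for each observed context.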
Cite
Text
Modi and Tewari. "Contextual Markov Decision Processes Using Generalized Linear Models." ICML 2019 Workshops: RL4RealLife, 2019.
Markdown
[Modi and Tewari. "Contextual Markov Decision Processes Using Generalized Linear Models." ICML 2019 Workshops: RL4RealLife, 2019.](https://mlanthology.org/icmlw/2019/modi2019icmlw-contextual/)
BibTeX
@inproceedings{modi2019icmlw-contextual,
title = {{Contextual Markov Decision Processes Using Generalized Linear Models}},
author = {Modi, Aditya and Tewari, Ambuj},
booktitle = {ICML 2019 Workshops: RL4RealLife},
year = {2019},
url = {https://mlanthology.org/icmlw/2019/modi2019icmlw-contextual/}
}