RoME: A Robust Mixed-Effects Bandit Algorithm for Optimizing Mobile Health Interventions

Abstract

Mobile health leverages personalized and contextually tailored interventions optimized through bandit and reinforcement learning algorithms. In practice, however, challenges such as participant heterogeneity, nonstationarity, and nonlinear relationships hinder algorithm performance. We propose RoME, a Robust Mixed-Effects contextual bandit algorithm that simultaneously addresses these challenges via (1) modeling the differential reward with user- and time-specific random effects, (2) network cohesion penalties, and (3) debiased machine learning for flexible estimation of baseline rewards. We establish a high-probability regret bound that depends solely on the dimension of the differential-reward model, enabling us to achieve robust regret bounds even when the baseline reward is highly complex. We demonstrate the superior performance of the RoME algorithm in a simulation study and two off-policy evaluation studies.
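The core idea of combining a shared (population) effect with user-specific random effects in a contextual bandit can be illustrated with a minimal sketch. This is not the authors' RoME algorithm (it omits time-specific effects, network cohesion penalties, and debiased baseline estimation); it is a hypothetical LinUCB-style learner in which each user's per-arm ridge estimate is shrunk toward a pooled population estimate, a crude stand-in for a mixed-effects model. All class and variable names are illustrative assumptions.

```python
import numpy as np


class MixedEffectsLinUCB:
    """Illustrative sketch (NOT the RoME algorithm): per-user LinUCB
    whose coefficient estimates are shrunk toward a pooled population
    estimate, mimicking a shared effect plus user-level random effects."""

    def __init__(self, d, n_users, lam=1.0, alpha=1.0):
        self.d, self.lam, self.alpha = d, lam, alpha
        # Per-user regularized Gram matrices and response vectors.
        self.A = [lam * np.eye(d) for _ in range(n_users)]
        self.b = [np.zeros(d) for _ in range(n_users)]
        # Pooled statistics across all users (the "shared effect").
        self.A_pop = lam * np.eye(d)
        self.b_pop = np.zeros(d)

    def _theta(self, u):
        # Population-level ridge estimate.
        theta_pop = np.linalg.solve(self.A_pop, self.b_pop)
        # User-level ridge estimate with its prior mean set to the
        # population estimate, i.e. shrinkage toward the shared effect.
        return np.linalg.solve(self.A[u], self.b[u] + self.lam * theta_pop)

    def choose(self, u, contexts):
        """Pick the arm with the highest UCB score for user u."""
        theta = self._theta(u)
        A_inv = np.linalg.inv(self.A[u])
        scores = [x @ theta + self.alpha * np.sqrt(x @ A_inv @ x)
                  for x in contexts]
        return int(np.argmax(scores))

    def update(self, u, x, reward):
        """Update both user-level and pooled sufficient statistics."""
        self.A[u] += np.outer(x, x)
        self.b[u] += reward * x
        self.A_pop += np.outer(x, x)
        self.b_pop += reward * x
```

After many observations for a user, the user-specific estimate dominates; with few observations, the pooled estimate does, which is the usual partial-pooling behavior of a mixed-effects model.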

Cite

Text

Huch et al. "RoME: A Robust Mixed-Effects Bandit Algorithm for Optimizing Mobile Health Interventions." Neural Information Processing Systems, 2024. doi:10.52202/079017-4074

Markdown

[Huch et al. "RoME: A Robust Mixed-Effects Bandit Algorithm for Optimizing Mobile Health Interventions." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/huch2024neurips-rome/) doi:10.52202/079017-4074

BibTeX

@inproceedings{huch2024neurips-rome,
  title     = {{RoME: A Robust Mixed-Effects Bandit Algorithm for Optimizing Mobile Health Interventions}},
  author    = {Huch, Easton and Shi, Jieru and Abbott, Madeline R. and Golbus, Jessica R. and Moreno, Alexander and Dempsey, Walter H.},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-4074},
  url       = {https://mlanthology.org/neurips/2024/huch2024neurips-rome/}
}