Planning and Learning with Adaptive Lookahead

Abstract

Some of the most powerful reinforcement learning frameworks use planning for action selection. Interestingly, their planning horizon is either fixed or determined arbitrarily by the state visitation history. Here, we expand beyond the naive fixed horizon and propose a theoretically justified strategy for adaptive selection of the planning horizon as a function of the state-dependent value estimate. We propose two variants for lookahead selection and analyze the trade-off between iteration count and computational complexity per iteration. We then devise a corresponding deep Q-network algorithm with an adaptive tree search horizon. We separate the value estimation per depth to compensate for the off-policy discrepancy between depths. Lastly, we demonstrate the efficacy of our adaptive lookahead method in a maze environment and in Atari.
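
The abstract describes choosing the lookahead per state from the current value estimate, trading per-iteration computation against the number of iterations. Below is a minimal, hedged Python sketch of that idea; the contraction-based threshold rule and every name in it (`GAMMA`, `MAX_DEPTH`, `EPS`, `select_depth`, `value_gap`) are illustrative assumptions for this page, not the paper's actual selection criterion.

```python
# Hedged sketch (an assumption, not the authors' exact rule): pick a per-state
# lookahead depth from a state-dependent value-error estimate. All names here
# (GAMMA, MAX_DEPTH, EPS, select_depth, value_gap) are illustrative.

GAMMA = 0.99      # discount factor
MAX_DEPTH = 5     # cap on the tree-search depth (deeper = costlier per step)
EPS = 0.05        # target accuracy for the backed-up value at this state

def select_depth(value_gap: float) -> int:
    """Return the smallest depth h such that GAMMA**h * value_gap <= EPS.

    `value_gap` stands in for a state-dependent estimate of how far the
    current value estimate is from optimal. Well-estimated states get a
    shallow (cheap) lookahead; poorly estimated states get a deeper search,
    trading per-step computation for fewer iterations overall.
    """
    for h in range(1, MAX_DEPTH + 1):
        if (GAMMA ** h) * value_gap <= EPS:
            return h
    return MAX_DEPTH

# Example: a state whose value estimate is already tight gets depth 1, while
# a poorly estimated state gets the maximum depth.
print(select_depth(0.04))   # -> 1
print(select_depth(10.0))   # -> 5
```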

Cite

Text

Rosenberg et al. "Planning and Learning with Adaptive Lookahead." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I8.26149

Markdown

[Rosenberg et al. "Planning and Learning with Adaptive Lookahead." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/rosenberg2023aaai-planning/) doi:10.1609/AAAI.V37I8.26149

BibTeX

@inproceedings{rosenberg2023aaai-planning,
  title     = {{Planning and Learning with Adaptive Lookahead}},
  author    = {Rosenberg, Aviv and Hallak, Assaf and Mannor, Shie and Chechik, Gal and Dalal, Gal},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {9606--9613},
  doi       = {10.1609/AAAI.V37I8.26149},
  url       = {https://mlanthology.org/aaai/2023/rosenberg2023aaai-planning/}
}