Amortila, Philip

11 publications

NeurIPS 2025. Model Selection for Off-Policy Evaluation: New Algorithms and Experimental Protocol. Pai Liu, Lingfeng Zhao, Shivangi Agarwal, Jinghan Liu, Audrey Huang, Philip Amortila, Nan Jiang.
ICLR 2024. Harnessing Density Ratios for Online Reinforcement Learning. Philip Amortila, Dylan J. Foster, Nan Jiang, Ayush Sekhari, Tengyang Xie.
COLT 2024. Mitigating Covariate Shift in Misspecified Regression with Applications to Reinforcement Learning. Philip Amortila, Tongyi Cao, Akshay Krishnamurthy.
NeurIPS 2024. Reinforcement Learning Under Latent Dynamics: Toward Statistical and Algorithmic Modularity. Philip Amortila, Dylan J. Foster, Nan Jiang, Akshay Krishnamurthy, Zakaria Mhammedi.
ICML 2024. Scalable Online Exploration via Coverability. Philip Amortila, Dylan J. Foster, Akshay Krishnamurthy.
ICML 2023. The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation. Philip Amortila, Nan Jiang, Csaba Szepesvári.
NeurIPS 2022. A Few Expert Queries Suffices for Sample-Efficient RL with Resets and Linear Value Approximation. Philip Amortila, Nan Jiang, Dhruv Madeka, Dean P. Foster.
ALT 2021. Exponential Lower Bounds for Planning in MDPs with Linearly-Realizable Optimal Action-Value Functions. Gellért Weisz, Philip Amortila, Csaba Szepesvári.
COLT 2021. On Query-Efficient Planning in MDPs Under Linear Realizability of the Optimal State-Value Function. Gellért Weisz, Philip Amortila, Barnabás Janzer, Yasin Abbasi-Yadkori, Nan Jiang, Csaba Szepesvári.
AISTATS 2020. A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms. Philip Amortila, Doina Precup, Prakash Panangaden, Marc G. Bellemare.
ICML 2020. Constrained Markov Decision Processes via Backward Value Functions. Harsh Satija, Philip Amortila, Joelle Pineau.