Ball, Philip J.

11 publications

TMLR 2023. Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations. Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh.

ICML 2023. Efficient Online Reinforcement Learning with Offline Data. Philip J. Ball, Laura Smith, Ilya Kostrikov, Sergey Levine.

ICLRW 2023. Synthetic Experience Replay. Cong Lu, Philip J. Ball, Jack Parker-Holder.

ICMLW 2023. Synthetic Experience Replay. Cong Lu, Philip J. Ball, Yee Whye Teh, Jack Parker-Holder.

AutoML 2022. Bayesian Generational Population-Based Training. Xingchen Wan, Cong Lu, Jack Parker-Holder, Philip J. Ball, Vu Nguyen, Binxin Ru, Michael A. Osborne.

ICLRW 2022. Bayesian Generational Population-Based Training. Xingchen Wan, Cong Lu, Jack Parker-Holder, Philip J. Ball, Vu Nguyen, Binxin Ru, Michael A. Osborne.

ICMLW 2022. Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations. Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh.

AAAI 2022. Same State, Different Task: Continual Reinforcement Learning Without Interference. Samuel Kessler, Jack Parker-Holder, Philip J. Ball, Stefan Zohren, Stephen J. Roberts.

ICML 2022. Stabilizing Off-Policy Deep Reinforcement Learning from Pixels. Edoardo Cetin, Philip J. Ball, Stephen Roberts, Oya Celiktutan.

ICML 2021. Augmented World Models Facilitate Zero-Shot Dynamics Generalization from a Single Offline Environment. Philip J. Ball, Cong Lu, Jack Parker-Holder, Stephen Roberts.

UAI 2019. The Sensitivity of Counterfactual Fairness to Unmeasured Confounding. Niki Kilbertus, Philip J. Ball, Matt J. Kusner, Adrian Weller, Ricardo Silva.