Skalse, Joar Max Viktor

11 publications

AAAI 2025. Partial Identifiability in Inverse Reinforcement Learning for Agents with Non-Exponential Discounting. Joar Max Viktor Skalse, Alessandro Abate.
ICML 2025. The Perils of Optimizing Learned Reward Functions: Low Training Error Does Not Guarantee Low Regret. Lukas Fluri, Leon Lang, Alessandro Abate, Patrick Forré, David Krueger, Joar Max Viktor Skalse.
ICLR 2024. Goodhart's Law in Reinforcement Learning. Jacek Karwowski, Oliver Hayman, Xingjian Bai, Klaus Kiendlhofer, Charlie Griffin, Joar Max Viktor Skalse.
ICLR 2024. On the Expressivity of Objective-Specification Formalisms in Reinforcement Learning. Rohan Subramani, Marcus Williams, Max Heitmann, Halfdan Holm, Charlie Griffin, Joar Max Viktor Skalse.
ICLR 2024. Quantifying the Sensitivity of Inverse Reinforcement Learning to Misspecification. Joar Max Viktor Skalse, Alessandro Abate.
ICLR 2024. STARC: A General Framework for Quantifying Differences Between Reward Functions. Joar Max Viktor Skalse, Lucy Farnik, Sumeet Ramesh Motwani, Erik Jenner, Adam Gleave, Alessandro Abate.
ICML 2023. Invariance in Policy Optimisation and Partial Identifiability in Reward Learning. Joar Max Viktor Skalse, Matthew Farrugia-Roberts, Stuart Russell, Alessandro Abate, Adam Gleave.
NeurIPSW 2022. A General Framework for Reward Function Distances. Erik Jenner, Joar Max Viktor Skalse, Adam Gleave.
NeurIPSW 2022. All’s Well That Ends Well: Avoiding Side Effects with Distance-Impact Penalties. Charlie Griffin, Joar Max Viktor Skalse, Lewis Hammond, Alessandro Abate.
NeurIPSW 2022. Misspecification in Inverse Reinforcement Learning. Joar Max Viktor Skalse, Alessandro Abate.
NeurIPSW 2022. The Reward Hypothesis Is False. Joar Max Viktor Skalse, Alessandro Abate.