Rothkopf, Constantin A.

12 publications

ICML 2025 · Bongard in Wonderland: Visual Puzzles That Still Make AI Go Mad? · Antonia Wüst, Tim Tobiasch, Lukas Helff, Inga Ibs, Wolfgang Stammer, Devendra Singh Dhami, Constantin A. Rothkopf, Kristian Kersting
ICLR 2025 · Inverse Decision-Making Using Neural Amortized Bayesian Actors · Dominik Straub, Tobias F. Niehues, Jan Peters, Constantin A. Rothkopf
NeurIPS 2025 · What Do You Know? Bayesian Knowledge Inference for Navigating Agents · Matthias Schultheis, Jana-Sophie Schönfeld, Constantin A. Rothkopf, Heinz Koeppl
NeurIPSW 2024 · Bongard in Wonderland: Visual Puzzles That Still Make AI Go Mad? · Antonia Wüst, Tim Tobiasch, Lukas Helff, Devendra Singh Dhami, Constantin A. Rothkopf, Kristian Kersting
WACV 2023 · Improving Saliency Models' Predictions of the Next Fixation with Humans' Intrinsic Cost of Gaze Shifts · Florian Kadner, Tobias Thomas, David Hoppe, Constantin A. Rothkopf
NeurIPS 2023 · Probabilistic Inverse Optimal Control for Non-Linear Partially Observable Systems Disentangles Perceptual Uncertainty and Behavioral Costs · Dominik Straub, Matthias Schultheis, Heinz Koeppl, Constantin A. Rothkopf
NeurIPS 2022 · Reinforcement Learning with Non-Exponential Discounting · Matthias Schultheis, Constantin A. Rothkopf, Heinz Koeppl
NeurIPS 2021 · Inverse Optimal Control Adapted to the Noise Characteristics of the Human Sensorimotor System · Matthias Schultheis, Dominik Straub, Constantin A. Rothkopf
AAAI 2017 · I See What You See: Inferring Sensor and Policy Models of Human Real-World Motor Behavior · Felix Schmitt, Hans-Joachim Bieg, Michael Herman, Constantin A. Rothkopf
WACV 2017 · Model-Driven Simulations for Computer Vision · V. S. R. Veeravasarapu, Constantin A. Rothkopf, Visvanathan Ramesh
NeurIPS 2016 · Catching Heuristics Are Optimal Control Policies · Boris Belousov, Gerhard Neumann, Constantin A. Rothkopf, Jan R. Peters
ECML-PKDD 2011 · Preference Elicitation and Inverse Reinforcement Learning · Constantin A. Rothkopf, Christos Dimitrakakis