L4DC 2024
139 papers
A Framework for Evaluating Human Driver Models Using Neuroimaging
Christopher Strong, Kaylene Stocking, Jingqi Li, Tianjiao Zhang, Jack Gallant, Claire Tomlin
Bounded Robustness in Reinforcement Learning via Lexicographic Objectives
Daniel Jarne Ornia, Licio Romao, Lewis Hammond, Manuel Mazo Jr, Alessandro Abate
CACTO-SL: Using Sobolev Learning to Improve Continuous Actor-Critic with Trajectory Optimization
Elisa Alboni, Gianluigi Grandesso, Gastone Pietro Rosati Papini, Justin Carpentier, Andrea Del Prete
DC4L: Distribution Shift Recovery via Data-Driven Control for Deep Learning Models
Vivian Lin, Kuk Jin Jang, Souradeep Dutta, Michele Caprio, Oleg Sokolsky, Insup Lee
Deep Hankel Matrices with Random Elements
Nathan Lawrence, Philip Loewen, Shuyuan Wang, Michael Forbes, Bhushan Gopaluni
Dynamics Harmonic Analysis of Robotic Systems: Application in Data-Driven Koopman Modelling
Daniel Ordoñez-Apraez, Vladimir Kostic, Giulio Turrisi, Pietro Novelli, Carlos Mastalli, Claudio Semini, Massimiliano Pontil
Efficient Imitation Learning with Conservative World Models
Victor Kolev, Rafael Rafailov, Kyle Hatch, Jiajun Wu, Chelsea Finn
Expert with Clustering: Hierarchical Online Preference Learning Framework
Tianyue Zhou, Jung-Hoon Cho, Babak Rahimi Ardabili, Hamed Tabkhi, Cathy Wu
Gradient Shaping for Multi-Constraint Safe Reinforcement Learning
Yihang Yao, Zuxin Liu, Zhepeng Cen, Peide Huang, Tingnan Zhang, Wenhao Yu, Ding Zhao
Hamiltonian GAN
Christine Allen-Blanchette
In Vivo Learning-Based Control of Microbial Populations Density in Bioreactors
Sara Maria Brancato, Davide Salzano, Francesco De Lellis, Davide Fiore, Giovanni Russo, Mario di Bernardo
Inverse Optimal Control as an Errors-in-Variables Problem
Rahel Rickenbach, Anna Scampicchio, Melanie N. Zeilinger
Learning Flow Functions of Spiking Systems
Miguel Aguiar, Amritam Das, Karl H. Johansson
Learning for CasADi: Data-Driven Models in Numerical Optimization
Tim Salzmann, Jon Arrizabalaga, Joel Andersson, Marco Pavone, Markus Ryll
Learning-Based Rigid Tube Model Predictive Control
Yulong Gao, Shuhao Yan, Jian Zhou, Mark Cannon, Alessandro Abate, Karl Henrik Johansson
Multi-Agent Assignment via State Augmented Reinforcement Learning
Leopoldo Agorio, Sean Van Alen, Miguel Calvo-Fullana, Santiago Paternain, Juan Andrés Bazerque
Multi-Modal Conformal Prediction Regions by Optimizing Convex Shape Templates
Renukanandan Tumu, Matthew Cleaveland, Rahul Mangharam, George Pappas, Lars Lindemann
PlanNetX: Learning an Efficient Neural Network Planner from MPC for Longitudinal Control
Jasper Hoffmann, Diego Fernandez Clausen, Julien Brosseit, Julian Bernhard, Klemens Esterle, Moritz Werling, Michael Karg, Joschka Bödecker
Probabilistic ODE Solvers for Integration Error-Aware Numerical Optimal Control
Amon Lahr, Filip Tronarp, Nathanael Bosch, Jonathan Schmidt, Philipp Hennig, Melanie N. Zeilinger
Real-World Fluid Directed Rigid Body Control via Deep Reinforcement Learning
Mohak Bhardwaj, Thomas Lampe, Michael Neunert, Francesco Romano, Abbas Abdolmaleki, Arunkumar Byravan, Markus Wulfmeier, Martin Riedmiller, Jonas Buchli
Reinforcement Learning-Driven Parametric Curve Fitting for Snake Robot Gait Design
Jack Naish, Jacob Rodriguez, Jenny Zhang, Bryson Jones, Guglielmo Daddi, Andrew Orekhov, Rob Royce, Michael Paton, Howie Choset, Masahiro Ono, Rohan Thakker
State-Wise Safe Reinforcement Learning with Pixel Observations
Sinong Zhan, Yixuan Wang, Qingyuan Wu, Ruochen Jiao, Chao Huang, Qi Zhu
Towards Safe Multi-Task Bayesian Optimization
Jannis Lübsen, Christian Hespe, Annika Eichler