L4DC 2025
120 papers
A Pontryagin Perspective on Reinforcement Learning
Onno Eberhard, Claire Vernade, Michael Muehlebach

Accelerating Proximal Policy Optimization Learning Using Task Prediction for Solving Environments with Delayed Rewards
Ahmad Ahmad, Mehdi Kermanshah, Kevin Leahy, Zachary Serlin, Ho Chit Siu, Makai Mann, Cristian-Ioan Vasile, Roberto Tron, Calin Belta

Anytime Safe Reinforcement Learning
Pol Mestres, Arnau Marzabal, Jorge Cortes

Automating the Loop in Traffic Incident Management on Highway
Matteo Cercola, Nicola Gatti, Pedro Huertas Leyva, Benedetto Carambia, Simone Formentin

Back to Base: Towards Hands-Off Learning via Safe Resets with Reach-Avoid Safety Filters
Azra Begzadic, Nikhil Shinde, Sander Tonkens, Dylan Hirsch, Kaleb Ugalde, Michael Yip, Jorge Cortes, Sylvia Herbert

BIGE: Biomechanics-Informed GenAI for Exercise Science
Shubh Maheshwari, Anwesh Mohanty, Yadi Cao, Swithin Razu, Andrew McCulloch, Rose Yu

Dense Dynamics-Aware Reward Synthesis: Integrating Prior Experience with Demonstrations
Cevahir Koprulu, Po-Han Li, Tianyu Qiu, Ruihan Zhao, Tyler Westenbroek, David Fridovich-Keil, Sandeep Chinchali, Ufuk Topcu

Diffusion Predictive Control with Constraints
Ralf Römer, Alexander von Rohr, Angela Schoellig

Flow Matching for Stochastic Linear Control Systems
Yuhang Mei, Mohammad Al-Jarrah, Amirhossein Taghvaei, Yongxin Chen

Formation Shape Control Using the Gromov-Wasserstein Metric
Haruto Nakashima, Siddhartha Ganguly, Kohei Morimoto, Kenji Kashima

Hybrid Modeling of Heterogeneous Human Teams for Collaborative Decision Processes
Amirhossein Ravari, Seyede Fatemeh Ghoreishi, Tian Lan, Nathaniel D. Bastian, Mahdi Imani

HydroGym: A Reinforcement Learning Platform for Fluid Dynamics
Christian Lagemann, Ludger Paehler, Jared Callaham, Sajeda Mokbel, Samuel Ahnert, Kai Lagemann, Esther Lagemann, Nikolaus Adams, Steven Brunton

Interacting Particle Systems for Fast Linear Quadratic RL
Anant A. Joshi, Heng-Sheng Chang, Amirhossein Taghvaei, Prashant G. Mehta, Sean P. Meyn

Kernel-Based Optimal Control: An Infinitesimal Generator Approach
Petar Bevanda, Nicolas Hoischen, Tobias Wittmann, Jan Brudigam, Sandra Hirche, Boris Houska

Koopman Based Trajectory Optimization with Mixed Boundaries
Mohamed Abou-Taleb, Maximilian Raff, Kathrin Flaßkamp, C. David Remy

Learning and Steering Game Dynamics Towards Desirable Outcomes
Ilayda Canyakmaz, Iosif Sakos, Wayne Lin, Antonios Varvitsiotis, Georgios Piliouras

Learning Biomolecular Models Using Signal Temporal Logic
Hanna Krasowski, Eric Palanques-Tost, Calin Belta, Murat Arcak

Logarithmic Regret for Nonlinear Control
James Wang, Bruce Lee, Ingvar Ziemann, Nikolai Matni

Lyapunov Perception Contracts for Operating Design Domains
Yangge Li, Chenxi Ji, Jai Anchalia, Yixuan Jia, Benjamin C Yang, Daniel Zhuang, Sayan Mitra

Multi-Agent Stochastic Bandits Robust to Adversarial Corruptions
Fatemeh Ghaffari, Xuchuang Wang, Jinhang Zuo, Mohammad Hajiesmaili

Nonconvex Linear System Identification with Minimal State Representation
Uday Kiran Reddy Tadipatri, Benjamin D. Haeffele, Joshua Agterberg, Ingvar Ziemann, Rene Vidal

Orthogonal Projection-Based Regularization for Efficient Model Augmentation
Bendeguz Mate Györök, Jan H. Hoekstra, Johan Kon, Tamas Peni, Maarten Schoukens, Roland Toth

Predictive Monitoring of Black-Box Dynamical Systems
Thomas A. Henzinger, Fabian Kresse, Kaushik Mallik, Emily Yu, Đorđe Žikelić

Realizable Continuous-Space Shields for Safe Reinforcement Learning
Kyungmin Kim, Davide Corsi, Andoni Rodríguez, Jb Lanier, Benjami Parellada, Pierre Baldi, César Sánchez, Roy Fox

Reinforcement Learning from Multi-Level and Episodic Human Feedback
Muhammad Qasim Elahi, Somtochukwu Oguchienti, Maheed H. Ahmed, Mahsa Ghasemi

STLGame: Signal Temporal Logic Games in Adversarial Multi-Agent Systems
Shuo Yang, Hongrui Zheng, Cristian-Ioan Vasile, George Pappas, Rahul Mangharam

Symmetries-Enhanced Multi-Agent Reinforcement Learning
Nikolaos Bousias, Stefanos Pertigkiozoglou, Kostas Daniilidis, George Pappas

TAB-Fields: A Maximum Entropy Framework for Mission-Aware Adversarial Planning
Gokul Puthumanaillam, Jae Hyuk Song, Nurzhan Yesmagambet, Shinkyu Park, Melkior Ornik

TamedPUMA: Safe and Stable Imitation Learning with Geometric Fabrics
Saray Bakker, Rodrigo Perez-Dattari, Cosimo Della Santina, Wendelin Böhmer, Javier Alonso-Mora

Toward Near-Globally Optimal Nonlinear Model Predictive Control via Diffusion Models
Tzu-Yuan Huang, Armin Lederer, Nicolas Hoischen, Jan Brudigam, Xuehua Xiao, Stefan Sosnowski, Sandra Hirche