L4DC 2020
99 papers
A Theoretical Analysis of Deep Q-Learning
Jianqing Fan, Zhaoran Wang, Yuchen Xie, Zhuoran Yang

Actively Learning Gaussian Process Dynamics
Mona Buisson-Fenet, Friedrich Solowjow, Sebastian Trimpe

Counterfactual Programming for Optimal Control
Luiz F. O. Chamon, Santiago Paternain, Alejandro Ribeiro

Encoding Physical Constraints in Differentiable Newton-Euler Algorithm
Giovanni Sutanto, Austin Wang, Yixin Lin, Mustafa Mukadam, Gaurav Sukhatme, Akshara Rai, Franziska Meier

Feed-Forward Neural Networks with Trainable Delay
Xunbi A. Ji, Tamás G. Molnár, Sergei S. Avedisov, Gábor Orosz

Fitting a Linear Control Policy to Demonstrations with a Kalman Constraint
Malayandi Palan, Shane Barratt, Alex McCauley, Dorsa Sadigh, Vikas Sindhwani, Stephen Boyd

Improving Input-Output Linearizing Controllers for Bipedal Robots via Reinforcement Learning
Fernando Castañeda, Mathias Wulfman, Ayush Agrawal, Tyler Westenbroek, Shankar Sastry, Claire Tomlin, Koushil Sreenath

Information Theoretic Model Predictive Q-Learning
Mohak Bhardwaj, Ankur Handa, Dieter Fox, Byron Boots

Keyframing the Future: Keyframe Discovery for Visual Prediction and Planning
Karl Pertsch, Oleh Rybkin, Jingyun Yang, Shenghao Zhou, Konstantinos Derpanis, Kostas Daniilidis, Joseph Lim, Andrew Jaegle

L1-GP: L1 Adaptive Control with Bayesian Learning
Aditya Gahlawat, Pan Zhao, Andrew Patterson, Naira Hovakimyan, Evangelos Theodorou

Learning Convex Optimization Control Policies
Akshay Agrawal, Shane Barratt, Stephen Boyd, Bartolomeo Stellato

Learning to Correspond Dynamical Systems
Nam Hee Kim, Zhaoming Xie, Michiel van de Panne

Learning to Plan via Deep Optimistic Value Exploration
Tim Seyde, Wilko Schwarting, Sertac Karaman, Daniela Rus

Linear Antisymmetric Recurrent Neural Networks
Signe Moe, Filippo Remonato, Esten Ingar Grøtli, Jan Tommy Gravdahl

Localized Active Learning of Gaussian Process State Space Models
Alexandre Capone, Gerrit Noske, Jonas Umlauft, Thomas Beckers, Armin Lederer, Sandra Hirche

Lyceum: An Efficient and Scalable Ecosystem for Robot Learning
Colin Summers, Kendall Lowrey, Aravind Rajeswaran, Siddhartha Srinivasa, Emanuel Todorov

Objective Mismatch in Model-Based Reinforcement Learning
Nathan Lambert, Brandon Amos, Omry Yadan, Roberto Calandra

On the Robustness of Data-Driven Controllers for Linear Systems
Rajasekhar Anguluri, Abed Alrahman Al Makdah, Vaibhav Katewa, Fabio Pasqualetti

Online Data Poisoning Attacks
Xuezhou Zhang, Xiaojin Zhu, Laurent Lessard

Periodic Q-Learning
Donghwan Lee, Niao He

Plan2Vec: Unsupervised Representation Learning by Latent Plans
Ge Yang, Amy Zhang, Ari Morcos, Joelle Pineau, Pieter Abbeel, Roberto Calandra

Planning from Images with Deep Latent Gaussian Process Dynamics
Nathanael Bosch, Jan Achterhold, Laura Leal-Taixé, Jörg Stückler

Robust Guarantees for Perception-Based Control
Sarah Dean, Nikolai Matni, Benjamin Recht, Vickie Ye

Robust Regression for Safe Exploration in Control
Anqi Liu, Guanya Shi, Soon-Jo Chung, Anima Anandkumar, Yisong Yue

Smart Forgetting for Safe Online Learning with Gaussian Processes
Jonas Umlauft, Thomas Beckers, Alexandre Capone, Armin Lederer, Sandra Hirche

Sparse and Low-Bias Estimation of High Dimensional Vector Autoregressive Models
Trevor Ruiz, Sharmodeep Bhattacharyya, Mahesh Balasubramanian, Kristofer Bouchard

Structured Mechanical Models for Robot Learning and Control
Jayesh K. Gupta, Kunal Menda, Zachary Manchester, Mykel Kochenderfer