CoRL 2019
110 papers
A Learnable Safety Measure
Steve Heim, Alexander Rohr, Sebastian Trimpe, Alexander Badri-Spröwitz

Active Domain Randomization
Bhairav Mehta, Manfred Diaz, Florian Golemo, Christopher J. Pal, Liam Paull

Asking Easy Questions: A User-Friendly Approach to Active Reward Learning
Erdem Bıyık, Malayandi Palan, Nicholas C. Landolfi, Dylan P. Losey, Dorsa Sadigh

Conditional Driving from Natural Language Instructions
Junha Roh, Chris Paxton, Andrzej Pronobis, Ali Farhadi, Dieter Fox

Contextual Imagined Goals for Self-Supervised Robotic Learning
Ashvin Nair, Shikhar Bahl, Alexander Khazatsky, Vitchyr Pong, Glen Berseth, Sergey Levine

Continuous-Discrete Reinforcement Learning for Hybrid Control in Robotics
Michael Neunert, Abbas Abdolmaleki, Markus Wulfmeier, Thomas Lampe, Tobias Springenberg, Roland Hafner, Francesco Romano, Jonas Buchli, Nicolas Heess, Martin Riedmiller

Counter-Example Guided Learning of Bounds on Environment Behavior
Yuxiao Chen, Sumanth Dathathri, Tung Phan-Minh, Richard M. Murray

Curious iLQR: Resolving Uncertainty in Model-Based RL
Sarah Bechtle, Yixin Lin, Akshara Rai, Ludovic Righetti, Franziska Meier

Data Efficient Reinforcement Learning for Legged Robots
Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Tingnan Zhang, Jie Tan, Vikas Sindhwani

Deep Dynamics Models for Learning Dexterous Manipulation
Anusha Nagabandi, Kurt Konolige, Sergey Levine, Vikash Kumar

Deep Value Model Predictive Control
David Hoeller, Farbod Farshidian, Marco Hutter

Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction
Ajay Jain, Sergio Casas, Renjie Liao, Yuwen Xiong, Song Feng, Sean Segal, Raquel Urtasun

Disentangled Relational Representations for Explaining and Learning from Demonstration
Yordan Hristov, Daniel Angelov, Michael Burke, Alex Lascarides, Subramanian Ramamoorthy

End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds
Yin Zhou, Pei Sun, Yu Zhang, Dragomir Anguelov, Jiyang Gao, Tom Ouyang, James Guo, Jiquan Ngiam, Vijay Vasudevan

Entity Abstraction in Visual Model-Based Reinforcement Learning
Rishi Veerapaneni, John D. Co-Reyes, Michael Chang, Michael Janner, Chelsea Finn, Jiajun Wu, Joshua Tenenbaum, Sergey Levine

Experience-Embedded Visual Foresight
Lin Yen-Chen, Maria Bauza, Phillip Isola

Graph Policy Gradients for Large Scale Robot Control
Arbaaz Khan, Ekaterina Tolstaya, Alejandro Ribeiro, Vijay Kumar

Graph-Structured Visual Imitation
Maximilian Sieb, Zhou Xian, Audrey Huang, Oliver Kroemer, Katerina Fragkiadaki

Identifying Unknown Instances for Autonomous Driving
Kelvin Wong, Shenlong Wang, Mengye Ren, Ming Liang, Raquel Urtasun

Imagined Value Gradients: Model-Based Policy Optimization with Transferable Latent Dynamics Models
Arunkumar Byravan, Jost Tobias Springenberg, Abbas Abdolmaleki, Roland Hafner, Michael Neunert, Thomas Lampe, Noah Siegel, Nicolas Heess, Martin Riedmiller

Learning by Cheating
Dian Chen, Brady Zhou, Vladlen Koltun, Philipp Krähenbühl

Learning Decentralized Controllers for Robot Swarms with Graph Neural Networks
Ekaterina Tolstaya, Fernando Gama, James Paulos, George Pappas, Vijay Kumar, Alejandro Ribeiro

Learning Latent Plans from Play
Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, Pierre Sermanet

Learning Locomotion Skills for Cassie: Iterative Design and Sim-to-Real
Zhaoming Xie, Patrick Clary, Jeremy Dao, Pedro Morais, Jonathan Hurst, Michiel van de Panne

Learning Reactive Motion Policies in Multiple Task Spaces from Human Demonstrations
M. Asif Rana, Anqi Li, Harish Ravichandar, Mustafa Mukadam, Sonia Chernova, Dieter Fox, Byron Boots, Nathan Ratliff

Learning to Navigate Using Mid-Level Visual Priors
Alexander Sax, Jeffrey O. Zhang, Bradley Emi, Amir Zamir, Silvio Savarese, Leonidas Guibas, Jitendra Malik

Leveraging Exploration in Off-Policy Algorithms via Normalizing Flows
Bogdan Mazoure, Thang Doan, Audrey Durand, Joelle Pineau, R Devon Hjelm

Locally Weighted Regression Pseudo-Rehearsal for Adaptive Model Predictive Control
Grady R. Williams, Brian Goldfain, Keuntaek Lee, Jason Gibson, James M. Rehg, Evangelos A. Theodorou

MAME: Model-Agnostic Meta-Exploration
Swaminathan Gurumurthy, Sumit Kumar, Katia Sycara

Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning
Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, Sergey Levine

Navigation Agents for the Visually Impaired: A Sidewalk Simulator and Experiments
Martin Weiss, Simon Chamorro, Roger Girgis, Margaux Luck, Samira E. Kahou, Joseph P. Cohen, Derek Nowrouzezahrai, Doina Precup, Florian Golemo, Chris Pal

Nonverbal Robot Feedback for Human Teachers
Sandy H. Huang, Isabella Huang, Ravi Pandya, Anca D. Dragan

On-Policy Robot Imitation Learning from a Converging Supervisor
Ashwin Balakrishna, Brijen Thananjeyan, Jonathan Lee, Felix Li, Arsh Zahed, Joseph E. Gonzalez, Ken Goldberg

Perceptual Attention-Based Predictive Control
Keuntaek Lee, Gabriel Nakajima An, Viacheslav Zakharov, Evangelos A. Theodorou

Provably Robust Blackbox Optimization for Reinforcement Learning
Krzysztof Choromanski, Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang, Deepali Jain, Yuxiang Yang, Atil Iscen, Jasmine Hsu, Vikas Sindhwani

Quasi-Newton Trust Region Policy Optimization
Devesh K. Jha, Arvind U. Raghunathan, Diego Romeres

Receding Horizon Curiosity
Matthias Schultheis, Boris Belousov, Hany Abdulsamad, Jan Peters

ROBEL: Robotics Benchmarks for Learning with Low-Cost Robots
Michael Ahn, Henry Zhu, Kristian Hartikainen, Hugo Ponte, Abhishek Gupta, Sergey Levine, Vikash Kumar

RoboNet: Large-Scale Multi-Robot Learning
Sudeep Dasari, Frederik Ebert, Stephen Tian, Suraj Nair, Bernadette Bucher, Karl Schmeckpeper, Siddharth Singh, Sergey Levine, Chelsea Finn

Self-Paced Contextual Reinforcement Learning
Pascal Klink, Hany Abdulsamad, Boris Belousov, Jan Peters

Two Stream Networks for Self-Supervised Ego-Motion Estimation
Rares Ambrus, Vitor Guizilini, Jie Li, Sudeep Pillai, Adrien Gaidon

Understanding Teacher Gaze Patterns for Robot Learning
Akanksha Saran, Elaine Schaertl Short, Andrea Thomaz, Scott Niekum

Vision-and-Dialog Navigation
Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer

Worst Cases Policy Gradients
Yichuan Charlie Tang, Jian Zhang, Ruslan Salakhutdinov