CoRL 2020
165 papers
3D-OES: Viewpoint-Invariant Object-Factorized Environment Simulators
Hsiao-Yu Tung, Zhou Xian, Mihir Prabhudesai, Shamit Lal, Katerina Fragkiadaki

A Long Horizon Planning Framework for Manipulating Rigid Pointcloud Objects
Anthony Simeonov, Yilun Du, Beomjoon Kim, Francois Hogan, Joshua Tenenbaum, Pulkit Agrawal, Alberto Rodriguez

A User’s Guide to Calibrating Robotic Simulators
Bhairav Mehta, Ankur Handa, Dieter Fox, Fabio Ramos

Action-Conditional Recurrent Kalman Networks for Forward and Inverse Dynamics Learning
Vaisakh Shaj, Philipp Becker, Dieter Büchler, Harit Pandya, Niels van Duijkeren, C. James Taylor, Marc Hanheide, Gerhard Neumann

Amodal 3D Reconstruction for Robotic Manipulation via Stability and Connectivity
William Agnew, Christopher Xie, Aaron Walsman, Octavian Murad, Yubo Wang, Pedro Domingos, Siddhartha Srinivasa

Attention-Privileged Reinforcement Learning
Sasha Salter, Dushyant Rao, Markus Wulfmeier, Raia Hadsell, Ingmar Posner

Chaining Behaviors from Data with Model-Free Reinforcement Learning
Avi Singh, Albert Yu, Jonathan Yang, Jesse Zhang, Aviral Kumar, Sergey Levine

Deep Latent Competition: Learning to Race Using Visual Control Policies in Latent Space
Wilko Schwarting, Tim Seyde, Igor Gilitschenski, Lucas Liebenwein, Ryan Sander, Sertac Karaman, Daniela Rus

Deep Reactive Planning in Dynamic Environments
Kei Ota, Devesh Jha, Tadashi Onishi, Asako Kanezaki, Yusuke Yoshiyasu, Yoko Sasaki, Toshisada Mariyama, Daniel Nikovski

DeepMPCVS: Deep Model Predictive Control for Visual Servoing
Pushkal Katara, Harish Yvs, Harit Pandya, Abhinav Gupta, AadilMehdi Sanchawala, Gourav Kumar, Brojeshwar Bhowmick, Madhava Krishna

Differentiable Logic Layer for Rule Guided Trajectory Prediction
Xiao Li, Guy Rosman, Igor Gilitschenski, Jonathan DeCastro, Cristian-Ioan Vasile, Sertac Karaman, Daniela Rus

f-IRL: Inverse Reinforcement Learning via State Marginal Matching
Tianwei Ni, Harshit Sikchi, Yufei Wang, Tejus Gupta, Lisa Lee, Ben Eysenbach

Flightmare: A Flexible Quadrotor Simulator
Yunlong Song, Selim Naji, Elia Kaufmann, Antonio Loquercio, Davide Scaramuzza

GDN: A Coarse-to-Fine (C2F) Representation for End-to-End 6-DoF Grasp Detection
Kuang-Yu Jeng, Yueh-Cheng Liu, Zhe Yu Liu, Jen-Wei Wang, Ya-Liang Chang, Hung-Ting Su, Winston Hsu

Integrating Egocentric Localization for More Realistic Point-Goal Navigation Agents
Samyak Datta, Oleksandr Maksymets, Judy Hoffman, Stefan Lee, Dhruv Batra, Devi Parikh

Iterative Semi-Parametric Dynamics Model Learning for Autonomous Racing
Ignat Georgiev, Christoforos Chatzikomis, Timo Voelkl, Joshua Smith, Michael Mistry

Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion
Xingye Da, Zhaoming Xie, David Hoeller, Byron Boots, Anima Anandkumar, Yuke Zhu, Buck Babich, Animesh Garg

Learning a Decision Module by Imitating Driver’s Control Behaviors
Junning Huang, Sirui Xie, Jiankai Sun, Qiurui Ma, Chunxiao Liu, Dahua Lin, Bolei Zhou

Learning Arbitrary-Goal Fabric Folding with One Hour of Real Robot Experience
Robert Lee, Daniel Ward, Vibhavari Dasagi, Akansel Cosgun, Juxi Leitner, Peter Corke

Learning Dexterous Manipulation from Suboptimal Experts
Rae Jeong, Jost Tobias Springenberg, Jackie Kay, Dan Zheng, Alexandre Galashov, Nicolas Heess, Francesco Nori

Learning Equality Constraints for Motion Planning on Manifolds
Giovanni Sutanto, Isabel Rayas Fernández, Peter Englert, Ragesh Kumar Ramachandran, Gaurav Sukhatme

Learning from Demonstrations Using Signal Temporal Logic
Aniruddh Puranic, Jyotirmoy Deshmukh, Stefanos Nikolaidis

Learning Hierarchical Task Networks with Preferences from Unannotated Demonstrations
Kevin Chen, Nithin Shrivatsav Srikanth, David Kent, Harish Ravichandar, Sonia Chernova

Learning Hybrid Control Barrier Functions from Data
Lars Lindemann, Haimin Hu, Alexander Robey, Hanwen Zhang, Dimos Dimarogonas, Stephen Tu, Nikolai Matni

Learning Obstacle Representations for Neural Motion Planning
Robin Strudel, Ricardo Garcia Pinel, Justin Carpentier, Jean-Paul Laumond, Ivan Laptev, Cordelia Schmid

Learning Rich Touch Representations Through Cross-Modal Self-Supervision
Martina Zambelli, Yusuf Aytar, Francesco Visin, Yuxiang Zhou, Raia Hadsell

Learning Stability Certificates from Data
Nicholas Boffi, Stephen Tu, Nikolai Matni, Jean-Jacques Slotine, Vikas Sindhwani

Learning to Communicate and Correct Pose Errors
Nicholas Vadivelu, Mengye Ren, James Tu, Jingkang Wang, Raquel Urtasun

Learning to Improve Multi-Robot Hallway Navigation
Jin Soo Park, Brian Tsang, Harel Yedidsion, Garrett Warnell, Daehyun Kyoung, Peter Stone

Learning Trajectories for Visual-Inertial System Calibration via Model-Based Heuristic Deep Reinforcement Learning
Le Chen, Yunke Ao, Florian Tschopp, Andrei Cramariuc, Michel Breyer, Jen Jen Chung, Roland Siegwart, Cesar Cadena

Learning Vision-Based Reactive Policies for Obstacle Avoidance
Elie Aljalbout, Ji Chen, Konstantin Ritt, Maximilian Ulmer, Sami Haddadin

LiRaNet: End-to-End Trajectory Prediction Using Spatio-Temporal Radar Fusion
Meet Shah, Zhiling Huang, Ankit Laddha, Matthew Langford, Blake Barber, Sida Zhang, Carlos Vallespi-Gonzalez, Raquel Urtasun

Map-Adaptive Goal-Based Trajectory Prediction
Lingyao Zhang, Po-Hsun Su, Jerrick Hoang, Galen Clark Haynes, Micol Marchetti-Bowick

MELD: Meta-Reinforcement Learning from Images via Latent State Models
Zihao Zhao, Anusha Nagabandi, Kate Rakelly, Chelsea Finn, Sergey Levine

Model-Based Inverse Reinforcement Learning from Visual Demonstrations
Neha Das, Sarah Bechtle, Todor Davchev, Dinesh Jayaraman, Akshara Rai, Franziska Meier

Model-Based Reinforcement Learning for Decentralized Multiagent Rendezvous
Rose Wang, J. Chase Kew, Dennis Lee, Tsang-Wei Lee, Tingnan Zhang, Brian Ichter, Jie Tan, Aleksandra Faust

Motion Planner Augmented Reinforcement Learning for Robot Manipulation in Obstructed Environments
Jun Yamada, Youngwoon Lee, Gautam Salhotra, Karl Pertsch, Max Pflueger, Gaurav Sukhatme, Joseph Lim, Peter Englert

Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments
Tianchen Ji, Sri Theja Vuppala, Girish Chowdhary, Katherine Driggs-Campbell

MultiPoint: Cross-Spectral Registration of Thermal and Optical Aerial Imagery
Florian Achermann, Andrey Kolobov, Debadeepta Dey, Timo Hinzmann, Jen Jen Chung, Roland Siegwart, Nicholas Lawrance

Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning
Ryan Julian, Benjamin Swanson, Gaurav Sukhatme, Sergey Levine, Chelsea Finn, Karol Hausman

One Thousand and One Hours: Self-Driving Motion Prediction Dataset
John Houston, Guido Zuidhof, Luca Bergamini, Yawei Ye, Long Chen, Ashesh Jain, Sammy Omari, Vladimir Iglovikov, Peter Ondruska

Policy Learning in SE(3) Action Spaces
Dian Wang, Colin Kohler, Robert Platt

Range Conditioned Dilated Convolutions for Scale Invariant 3D Object Detection
Alex Bewley, Pei Sun, Thomas Mensink, Dragomir Anguelov, Cristian Sminchisescu

Reactive Motion Planning with Probabilistic Safety Guarantees
Yuxiao Chen, Ugo Rosolia, Chuchu Fan, Aaron Ames, Richard Murray

Recovering and Simulating Pedestrians in the Wild
Ze Yang, Sivabalan Manivasagam, Ming Liang, Bin Yang, Wei-Chiu Ma, Raquel Urtasun

Robust Policies via Mid-Level Visual Representations: An Experimental Study in Manipulation and Navigation
Bryan Chen, Alexander Sax, Francis Lewis, Iro Armeni, Silvio Savarese, Amir Zamir, Jitendra Malik, Lerrel Pinto

Robust Quadrupedal Locomotion on Sloped Terrains: A Linear Policy Approach
Kartik Paigwar, Lokesh Krishna, Sashank Tirumala, Naman Khetan, Aditya Varma, Ashish Joglekar, Shalabh Bhatnagar, Ashitava Ghosal, Bharadwaj Amrutur, Shishir Kolathaya

S3K: Self-Supervised Semantic Keypoints for Robotic Manipulation via Multi-View Consistency
Mel Vecerik, Jean-Baptiste Regli, Oleg Sushkov, David Barker, Rugile Pevceviciute, Thomas Rothörl, Raia Hadsell, Lourdes Agapito, Jonathan Scholz

Safe Policy Learning for Continuous Control
Yinlam Chow, Ofir Nachum, Aleksandra Faust, Edgar Dueñez-Guzman, Mohammad Ghavamzadeh

SAM: Squeeze-and-Mimic Networks for Conditional Visual Driving Policy Learning
Albert Zhao, Tong He, Yitao Liang, Haibin Huang, Guy Van den Broeck, Stefano Soatto

Sample-Efficient Cross-Entropy Method for Real-Time Planning
Cristina Pinneri, Shambhuraj Sawant, Sebastian Blaes, Jan Achterhold, Joerg Stueckler, Michal Rolinek, Georg Martius

Self-Supervised 3D Keypoint Learning for Ego-Motion Estimation
Jiexiong Tang, Rares Ambrus, Vitor Guizilini, Sudeep Pillai, Hanme Kim, Patric Jensfelt, Adrien Gaidon

Self-Supervised Object-in-Gripper Segmentation from Robotic Motions
Wout Boerdijk, Martin Sundermeyer, Maximilian Durner, Rudolph Triebel

SelfVoxeLO: Self-Supervised LiDAR Odometry with Voxel-Based Deep Neural Networks
Yan Xu, Zhaoyang Huang, Kwan-Yee Lin, Xinge Zhu, Jianping Shi, Hujun Bao, Guofeng Zhang, Hongsheng Li

Sim-to-Real Transfer for Vision-and-Language Navigation
Peter Anderson, Ayush Shrivastava, Joanne Truong, Arjun Majumdar, Devi Parikh, Dhruv Batra, Stefan Lee

SMARTS: An Open-Source Scalable Multi-Agent RL Training School for Autonomous Driving
Ming Zhou, Jun Luo, Julian Villella, Yaodong Yang, David Rusu, Jiayu Miao, Weinan Zhang, Montgomery Alban, Iman Fadakar, Zheng Chen, Chongxi Huang, Ying Wen, Kimia Hassanzadeh, Daniel Graves, Zhengbang Zhu, Yihan Ni, Nhat Nguyen, Mohamed Elsayed, Haitham Ammar, Alexander Cowen-Rivers, Sanjeevan Ahilan, Zheng Tian, Daniel Palenicek, Kasra Rezaee, Peyman Yadmellat, Kun Shao, Dong Chen, Baokuan Zhang, Hongbo Zhang, Jianye Hao, Wulong Liu, Jun Wang

Soft Multicopter Control Using Neural Dynamics Identification
Yitong Deng, Yaorui Zhang, Xingzhe He, Shuqi Yang, Yunjin Tong, Michael Zhang, Daniel DiPietro, Bo Zhu

Stein Variational Model Predictive Control
Alexander Lambert, Fabio Ramos, Byron Boots, Dieter Fox, Adam Fishman

STReSSD: Sim-to-Real from Sound for Stochastic Dynamics
Carolyn Matl, Yashraj Narang, Dieter Fox, Ruzena Bajcsy, Fabio Ramos

StrObe: Streaming Object Detection from LiDAR Packets
Davi Frossard, Shun Da Suo, Sergio Casas, James Tu, Raquel Urtasun

Task-Relevant Adversarial Imitation Learning
Konrad Zolna, Scott Reed, Alexander Novikov, Sergio Gómez Colmenarejo, David Budden, Serkan Cabi, Misha Denil, Nando de Freitas, Ziyu Wang

The EMPATHIC Framework for Task Learning from Implicit Human Feedback
Yuchen Cui, Qiping Zhang, Brad Knox, Alessandro Allievi, Peter Stone, Scott Niekum

TNT: Target-Driven Trajectory Prediction
Hang Zhao, Jiyang Gao, Tian Lan, Chen Sun, Ben Sapp, Balakrishnan Varadarajan, Yue Shen, Yi Shen, Yuning Chai, Cordelia Schmid, Congcong Li, Dragomir Anguelov

Towards General and Autonomous Learning of Core Skills: A Case Study in Locomotion
Roland Hafner, Tim Hertweck, Philipp Kloeppner, Michael Bloesch, Michael Neunert, Markus Wulfmeier, Saran Tunyasuvunakool, Nicolas Heess, Martin Riedmiller

Transporter Networks: Rearranging the Visual World for Robotic Manipulation
Andy Zeng, Pete Florence, Jonathan Tompson, Stefan Welker, Jonathan Chien, Maria Attarian, Travis Armstrong, Ivan Krasin, Dan Duong, Vikas Sindhwani, Johnny Lee

TriFinger: An Open-Source Robot for Learning Dexterity
Manuel Wuthrich, Felix Widmaier, Felix Grimminger, Shruti Joshi, Vaibhav Agrawal, Bilal Hammoud, Majid Khadiv, Miroslav Bogdanovic, Vincent Berenz, Julian Viereck, Maximilien Naveau, Ludovic Righetti, Bernhard Schölkopf, Stefan Bauer

Universal Embeddings for Spatio-Temporal Tagging of Self-Driving Logs
Sean Segal, Eric Kee, Wenjie Luo, Abbas Sadat, Ersin Yumer, Raquel Urtasun

Unsupervised Monocular Depth Learning in Dynamic Scenes
Hanhan Li, Ariel Gordon, Hang Zhao, Vincent Casser, Anelia Angelova

Untangling Dense Knots by Learning Task-Relevant Keypoints
Jennifer Grannen, Priya Sundaresan, Brijen Thananjeyan, Jeffrey Ichnowski, Ashwin Balakrishna, Vainavi Viswanath, Michael Laskey, Joseph Gonzalez, Ken Goldberg

Visual Imitation Made Easy
Sarah Young, Dhiraj Gandhi, Shubham Tulsiani, Abhinav Gupta, Pieter Abbeel, Lerrel Pinto

Visual Localization and Mapping with Hybrid SFA
Muhammad Haris, Mathias Franzius, Ute Bauer-Wersing, Sai Krishna Kaushik Karanam