ICMLW 2019
92 papers
A Functional Extension of Multi-Output Learning
Alex Lambert, Romain Brault, Zoltan Szabo, Florence d'Alche-Buc

A Meta Understanding of Meta-Learning
Wei-Lun Chao, Han-Jia Ye, De-Chuan Zhan, Mark Campbell, Kilian Q. Weinberger

A Modern Take on the Bias-Variance Tradeoff in Neural Networks
Brady Neal, Sarthak Mittal, Aristide Baratin, Vinayak Tantia, Matthew Scicluna, Simon Lacoste-Julien, Ioannis Mitliagkas

A Systematic Framework for Natural Perturbations from Videos
Vaishaal Shankar, Achal Dave, Rebecca Roelofs, Deva Ramanan, Benjamin Recht, Ludwig Schmidt

Adversarial Training Can Hurt Generalization
Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, Percy Liang

Angular Visual Hardness
Beidi Chen, Weiyang Liu, Animesh Garg, Zhiding Yu, Anshumali Shrivastava, Anima Anandkumar

Are All Layers Created Equal?
Chiyuan Zhang, Samy Bengio, Yoram Singer

Bad Global Minima Exist and SGD Can Reach Them
Shengchao Liu, Dimitris Papailiopoulos, Dimitris Achlioptas

Batch Normalization Is a Cause of Adversarial Vulnerability
Angus Galloway, Anna Golubeva, Thomas Tanay, Medhat Moussa, Graham W. Taylor

Challenges of Real-World Reinforcement Learning
Gabriel Dulac-Arnold, Daniel Mankowitz, Todd Hester

Connections Between Optimization in Machine Learning and Adaptive Control
Joseph E. Gaudio, Travis E. Gibson, Anuradha M. Annaswamy, Michael A. Bolender, Eugene Lavretsky

Curious iLQR: Resolving Uncertainty in Model-Based RL
Sarah Bechtle, Akshara Rai, Yixin Lin, Ludovic Righetti, Franziska Meier

Deep Reinforcement Learning for Industrial Insertion Tasks with Visual Inputs and Natural Reward Signals
Gerrit Schoettler, Ashvin Nair, Jianlan Luo, Shikhar Bahl, Juan Aparicio Ojea, Eugen Solowjow, Sergey Levine

Federated Optimization for Heterogeneous Networks
Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith

Goal-Conditioned Imitation Learning
Yiming Ding, Carlos Florensa, Mariano Phielipp, Pieter Abbeel

Horizon: Facebook's Open Source Applied Reinforcement Learning Platform
Jason Gauci, Edoardo Conti, Yitao Liang, Kittipat Virochsiri, Yuchen He, Zachary Kaden, Vivek Narayanan, Xiaohui Ye, Zhengxing Chen

Improving Relevance Prediction with Transfer Learning in Large-Scale Retrieval Systems
Ruoxi Wang, Zhe Zhao, Xinyang Yi, Ji Yang, Derek Zhiyuan Cheng, Lichan Hong, Steve Tjoa, Jieqi Kang, Evan Ettinger, Ed Chi

Learning to Learn to Communicate
Ryan Lowe, Abhinav Gupta, Jakob Foerster, Douwe Kiela, Joelle Pineau

Lessons from Contextual Bandit Learning in a Customer Support Bot
Nikos Karampatziakis, Sebastian Kochman, Jade Huang, Paul Mineiro, Kathy Osborne, Weizhu Chen

Line Attractor Dynamics in Recurrent Networks for Sentiment Classification
Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, David Sussillo

Lyapunov-Based Safe Policy Optimization for Continuous Control
Yinlam Chow, Ofir Nachum, Aleksandra Faust, Edgar Duenez-Guzman, Mohammad Ghavamzadeh

Memorization in Overparameterized Autoencoders
Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler

Meta-Reinforcement Learning for Adaptive Autonomous Driving
Yesmina Jaafra, Jean Luc Laurent, Aline Deruyver, Mohamed Saber Naceur

Multi-Task Learning via Task Multi-Clustering
Andy Yan, Xin Wang, Ion Stoica, Joseph Gonzalez, Roy Fox

Off-Policy Evaluation of Generalization for Deep Q-Learning in Binary Reward Tasks
Alex Irpan, Kanishka Rao, Konstantinos Bousmalis, Chris Harris, Julian Ibarz, Sergey Levine

Optimizing 3D Structure of H2O Molecule Using DDPG
Soo Kyung Kim, Peggy Li, Joanne Taery Kim, Piyush Karande, Yong Han

ORL: Reinforcement Learning Benchmarks for Online Stochastic Optimization Problems
Bharathan Balaji, Jordan Bell-Masterson, Enes Bilgin, Andreas Damianou, Pablo Moreno Garcia, Arpit Jain, Anna Luo, Alvaro Maggiar, Balakrishnan Narayanaswamy, Chun Ye

P3O: Policy-on Policy-off Policy Optimization
Rasool Fakoor, Pratik Chaudhari, Alexander J. Smola

Park: An Open Platform for Learning Augmented Computer Systems
Hongzi Mao, Parimarjan Negi, Akshay Narayan, Hanrui Wang, Jiacheng Yang, Haonan Wang, Ryan Marcus, Ravichandra Addanki, Mehrdad Khani, Songtao He, Vikram Nathan, Frank Cangialosi, Shaileshh Bojja Venkatakrishnan, Wei-Hung Weng, Song Han, Tim Kraska, Mohammad Alizadeh

Personalized Student Stress Prediction with Deep Multi-Task Network
Abhinav Shaw, Natcha Simsiri, Iman Dezbani, Madalina Fiterau, Tauhidur Rahman

Progressive Memory Banks for Incremental Domain Adaptation
Nabiha Asghar, Lili Mou, Kira A. Selby, Kevin D. Pantasdo, Pascal Poupart, Xin Jiang

Q-Learning for Continuous Actions with Cross-Entropy Guided Policies
Riley Simmons-Edler, Ben Eisner, Eric Mitchell, Sebastian Seung, Daniel Lee

Real-World Autonomous Vehicle Control Trained Entirely Within Data-Driven Simulation
Alexander Amini, Igor Gilitschenski, Jacob Phillips, Julia Moseyko, Sertac Karaman, Daniela Rus

Real-World Video Adaptation with Reinforcement Learning
Hongzi Mao, Shannon Chen, Drew Dimmery, Shaun Singh, Drew Blaisdell, Yuandong Tian, Mohammad Alizadeh, Eytan Bakshy

SmartChoices: Hybridizing Programming and Machine Learning
Victor Carbune, Thierry Coppey, Alexander Daryin, Thomas Deselaers, Nikhil Sarda, Jay Yagnik

The Difficulty of Training Sparse Neural Networks
Utku Evci, Fabian Pedregosa, Aidan Gomez, Erich Elsen