Levine, Sergey

460 publications

CoRL 2025 π0.5: A Vision-Language-Action Model with Open-World Generalization Kevin Black, Noah Brown, James Darpinian, Karan Dhabalia, Danny Driess, Adnan Esmail, Michael Robert Equi, Chelsea Finn, Niccolo Fusai, Manuel Y. Galliker, Dibya Ghosh, Lachy Groom, Karol Hausman, Brian Ichter, Szymon Jakubczak, Tim Jones, Liyiming Ke, Devin LeBlanc, Sergey Levine, Adrian Li-Bell, Mohith Mothukuri, Suraj Nair, Karl Pertsch, Allen Z. Ren, Lucy Xiaoyang Shi, Laura Smith, Jost Tobias Springenberg, Kyle Stachowicz, James Tanner, Quan Vuong, Homer Walke, Anna Walling, Haohuan Wang, Lili Yu, Ury Zhilinsky
NeurIPS 2025 A Stable Whitening Optimizer for Efficient Neural Network Training Kevin Frans, Sergey Levine, Pieter Abbeel
ICLR 2025 Adding Conditional Control to Diffusion Models with Reinforcement Learning Yulai Zhao, Masatoshi Uehara, Gabriele Scalia, Sunyuan Kung, Tommaso Biancalani, Sergey Levine, Ehsan Hajiramezanali
CoRL 2025 AutoEval: Autonomous Evaluation of Generalist Robot Manipulation Policies in the Real World Zhiyuan Zhou, Pranav Atreya, You Liang Tan, Karl Pertsch, Sergey Levine
ICLRW 2025 AutoEval: Autonomous Evaluation of Generalist Robot Manipulation Policies in the Real World Zhiyuan Zhou, Pranav Atreya, You Liang Tan, Karl Pertsch, Sergey Levine
ICML 2025 Behavioral Exploration: Learning to Explore via In-Context Adaptation Andrew Wagenmaker, Zhiyuan Zhou, Sergey Levine
NeurIPS 2025 Compute-Optimal Scaling for Value-Based Deep RL Preston Fu, Oleh Rybkin, Zhiyuan Zhou, Michal Nauman, Pieter Abbeel, Sergey Levine, Aviral Kumar
NeurIPS 2025 Consistently Simulating Human Personas with Multi-Turn Reinforcement Learning Marwa Abdulhai, Ryan Cheng, Donovan Clay, Tim Althoff, Sergey Levine, Natasha Jaques
NeurIPS 2025 Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding Xiner Li, Yulai Zhao, Chenyu Wang, Gabriele Scalia, Gökcen Eraslan, Surag Nair, Tommaso Biancalani, Shuiwang Ji, Aviv Regev, Sergey Levine, Masatoshi Uehara
ICLR 2025 Digi-Q: Learning VLM Q-Value Functions for Training Device-Control Agents Hao Bai, Yifei Zhou, Li Erran Li, Sergey Levine, Aviral Kumar
ICLR 2025 Efficient Online Reinforcement Learning Fine-Tuning Need Not Retain Offline Data Zhiyuan Zhou, Andy Peng, Qiyang Li, Sergey Levine, Aviral Kumar
ICLR 2025 Fine-Tuning Discrete Diffusion Models via Reward Optimization with Applications to DNA and Protein Design Chenyu Wang, Masatoshi Uehara, Yichun He, Amy Wang, Avantika Lal, Tommi Jaakkola, Sergey Levine, Aviv Regev, Hanchen, Tommaso Biancalani
ICML 2025 Flow Q-Learning Seohong Park, Qiyang Li, Sergey Levine
ICML 2025 Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models Lucy Xiaoyang Shi, Brian Ichter, Michael Robert Equi, Liyiming Ke, Karl Pertsch, Quan Vuong, James Tanner, Anna Walling, Haohuan Wang, Niccolo Fusai, Adrian Li-Bell, Danny Driess, Lachy Groom, Sergey Levine, Chelsea Finn
NeurIPS 2025 Horizon Reduction Makes RL Scalable Seohong Park, Kevin Frans, Deepinder Mann, Benjamin Eysenbach, Aviral Kumar, Sergey Levine
NeurIPS 2025 Knowledge Insulating Vision-Language-Action Models: Train Fast, Run Fast, Generalize Better Danny Driess, Jost Tobias Springenberg, Brian Ichter, Lili Yu, Adrian Li-Bell, Karl Pertsch, Allen Z. Ren, Homer Walke, Quan Vuong, Lucy Xiaoyang Shi, Sergey Levine
ICML 2025 LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models Marwa Abdulhai, Isadora White, Charlie Victor Snell, Charles Sun, Joey Hong, Yuexiang Zhai, Kelvin Xu, Sergey Levine
ICLR 2025 Language Guided Skill Discovery Seungeun Rho, Laura Smith, Tianyu Li, Sergey Levine, Xue Bin Peng, Sehoon Ha
ICML 2025 Leveraging Skills from Unlabeled Prior Data for Efficient Online Exploration Max Wilcoxson, Qiyang Li, Kevin Frans, Sergey Levine
ICLR 2025 OGBench: Benchmarking Offline Goal-Conditioned RL Seohong Park, Kevin Frans, Benjamin Eysenbach, Sergey Levine
NeurIPS 2025 Offline Goal-Conditioned Reinforcement Learning with Quasimetric Representations Vivek Myers, Bill Zheng, Benjamin Eysenbach, Sergey Levine
ICLR 2025 One Step Diffusion via Shortcut Models Kevin Frans, Danijar Hafner, Sergey Levine, Pieter Abbeel
NeurIPS 2025 Planning Without Search: Refining Frontier LLMs with Offline Goal-Conditioned RL Joey Hong, Anca Dragan, Sergey Levine
ICLR 2025 Prioritized Generative Replay Renhao Wang, Kevin Frans, Pieter Abbeel, Sergey Levine, Alexei A Efros
ICML 2025 Proposer-Agent-Evaluator (PAE): Autonomous Skill Discovery for Foundation Model Internet Agents Yifei Zhou, Qianlan Yang, Kaixiang Lin, Min Bai, Xiong Zhou, Yu-Xiong Wang, Sergey Levine, Li Erran Li
ICLR 2025 Q-SFT: Q-Learning for Language Models via Supervised Fine-Tuning Joey Hong, Anca Dragan, Sergey Levine
NeurIPS 2025 Real-Time Execution of Action Chunking Flow Policies Kevin Black, Manuel Y Galliker, Sergey Levine
CoRL 2025 Reflective Planning: Vision-Language Models for Multi-Stage Long-Horizon Robotic Manipulation Yunhai Feng, Jiaming Han, Zhuoran Yang, Xiangyu Yue, Sergey Levine, Jianlan Luo
NeurIPS 2025 Reinforcement Learning with Action Chunking Qiyang Li, Zhiyuan Zhou, Sergey Levine
ICML 2025 Reward-Guided Iterative Refinement in Diffusion Models at Test-Time with Applications to Protein and DNA Design Masatoshi Uehara, Xingyu Su, Yulai Zhao, Xiner Li, Aviv Regev, Shuiwang Ji, Sergey Levine, Tommaso Biancalani
CoRL 2025 RoboArena: Distributed Real-World Evaluation of Generalist Robot Policies Pranav Atreya, Karl Pertsch, Tony Lee, Moo Jin Kim, Arhan Jain, Artur Kuramshin, Cyrus Neary, Edward S. Hu, Kanav Arora, Kirsty Ellis, Luca Macesanu, Matthew Leonard, Meedeum Cho, Ozgur Aslan, Shivin Dass, Tony Wang, Xingfang Yuan, Abhishek Gupta, Dinesh Jayaraman, Glen Berseth, Kostas Daniilidis, Roberto Martín-Martín, Youngwoon Lee, Percy Liang, Chelsea Finn, Sergey Levine
ICML 2025 SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-Training Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V Le, Sergey Levine, Yi Ma
ICML 2025 Scaling Test-Time Compute Without Verification or RL Is Suboptimal Amrith Setlur, Nived Rajaraman, Sergey Levine, Aviral Kumar
ICLRW 2025 Scaling Test-Time Compute Without Verification or RL Is Suboptimal Amrith Setlur, Nived Rajaraman, Sergey Levine, Aviral Kumar
NeurIPS 2025 Self-Challenging Language Model Agents Yifei Zhou, Sergey Levine, Jason E Weston, Xian Li, Sainbayar Sukhbaatar
CoRL 2025 Steering Your Diffusion Policy with Latent Space Reinforcement Learning Andrew Wagenmaker, Yunchu Zhang, Mitsuhiko Nakamoto, Seohong Park, Waleed Yagoub, Anusha Nagabandi, Abhishek Gupta, Sergey Levine
NeurIPS 2025 Temporal Representation Alignment: Successor Features Enable Emergent Compositionality in Robot Instruction Following Vivek Myers, Bill Zheng, Anca Dragan, Kuan Fang, Sergey Levine
CoRL 2025 Training Strategies for Efficient Embodied Reasoning William Chen, Suneel Belkhale, Suvir Mirchandani, Karl Pertsch, Danny Driess, Oier Mees, Sergey Levine
ICML 2025 Value-Based Deep RL Scales Predictably Oleh Rybkin, Michal Nauman, Preston Fu, Charlie Victor Snell, Pieter Abbeel, Sergey Levine, Aviral Kumar
ICLRW 2025 Value-Based Deep RL Scales Predictably Oleh Rybkin, Michal Nauman, Preston Fu, Charlie Victor Snell, Pieter Abbeel, Sergey Levine, Aviral Kumar
TMLR 2025 Vision-Language Models Provide Promptable Representations for Reinforcement Learning William Chen, Oier Mees, Aviral Kumar, Sergey Levine
ICML 2025 What Do Learning Dynamics Reveal About Generalization in LLM Mathematical Reasoning? Katie Kang, Amrith Setlur, Dibya Ghosh, Jacob Steinhardt, Claire Tomlin, Sergey Levine, Aviral Kumar
ICML 2024 ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL Yifei Zhou, Andrea Zanette, Jiayi Pan, Sergey Levine, Aviral Kumar
ICLRW 2024 ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL Yifei Zhou, Andrea Zanette, Jiayi Pan, Aviral Kumar, Sergey Levine
CoRL 2024 Autonomous Improvement of Instruction Following Skills via Foundation Models Zhiyuan Zhou, Pranav Atreya, Abraham Lee, Homer Rich Walke, Oier Mees, Sergey Levine
NeurIPS 2024 Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models Masatoshi Uehara, Yulai Zhao, Ehsan Hajiramezanali, Gabriele Scalia, Gokcen Eraslan, Avantika Lal, Sergey Levine, Tommaso Biancalani
ICML 2024 Chain of Code: Reasoning with a Language Model-Augmented Code Emulator Chengshu Li, Jacky Liang, Andy Zeng, Xinyun Chen, Karol Hausman, Dorsa Sadigh, Sergey Levine, Li Fei-Fei, Fei Xia, Brian Ichter
ICLR 2024 Deep Neural Networks Tend to Extrapolate Predictably Katie Kang, Amrith Setlur, Claire Tomlin, Sergey Levine
NeurIPSW 2024 Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding Xiner Li, Yulai Zhao, Chenyu Wang, Gabriele Scalia, Gökcen Eraslan, Surag Nair, Tommaso Biancalani, Shuiwang Ji, Aviv Regev, Sergey Levine, Masatoshi Uehara
NeurIPS 2024 Designing Cell-Type-Specific Promoter Sequences Using Conservative Model-Based Optimization Aniketh Janardhan Reddy, Xinyang Geng, Michael H. Herschl, Sathvik Kolli, Aviral Kumar, Patrick D. Hsu, Sergey Levine, Nilah M. Ioannidis
NeurIPS 2024 DigiRL: Training In-the-Wild Device-Control Agents with Autonomous Reinforcement Learning Hao Bai, Yifei Zhou, Mert Cemri, Jiayi Pan, Alane Suhr, Sergey Levine, Aviral Kumar
ICMLW 2024 DigiRL: Training In-the-Wild Device-Control Agents with Autonomous Reinforcement Learning Yifei Zhou, Hao Bai, Mert Cemri, Jiayi Pan, Alane Suhr, Sergey Levine, Aviral Kumar
ICMLW 2024 DigiRL: Training In-the-Wild Device-Control Agents with Autonomous Reinforcement Learning Hao Bai, Yifei Zhou, Mert Cemri, Jiayi Pan, Alane Suhr, Sergey Levine, Aviral Kumar
CoRL 2024 Evaluating Real-World Robot Manipulation Policies in Simulation Xuanlin Li, Kyle Hsu, Jiayuan Gu, Oier Mees, Karl Pertsch, Homer Rich Walke, Chuyuan Fu, Ishikaa Lunawat, Isabel Sieh, Sean Kirmani, Sergey Levine, Jiajun Wu, Chelsea Finn, Hao Su, Quan Vuong, Ted Xiao
ICML 2024 Feedback Efficient Online Fine-Tuning of Diffusion Models Masatoshi Uehara, Yulai Zhao, Kevin Black, Ehsan Hajiramezanali, Gabriele Scalia, Nathaniel Lee Diamant, Alex M Tseng, Sergey Levine, Tommaso Biancalani
NeurIPSW 2024 Fine-Tuning Discrete Diffusion Models via Reward Optimization with Applications to DNA and Protein Design Chenyu Wang, Masatoshi Uehara, Yichun He, Amy Wang, Tommaso Biancalani, Avantika Lal, Tommi Jaakkola, Sergey Levine, Hanchen, Aviv Regev
NeurIPS 2024 Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning Yuexiang Zhai, Hao Bai, Zipeng Lin, Jiayi Pan, Shengbang Tong, Yifei Zhou, Alane Suhr, Saining Xie, Yann LeCun, Yi Ma, Sergey Levine
ICML 2024 Foundation Policies with Hilbert Representations Seohong Park, Tobias Kreiman, Sergey Levine
AISTATS 2024 Functional Graphical Models: Structure Enables Offline Data-Driven Optimization Kuba Grudzien, Masatoshi Uehara, Sergey Levine, Pieter Abbeel
NeurIPS 2024 Inference via Interpolation: Contrastive Representations Provably Enable Planning and Inference Benjamin Eysenbach, Vivek Myers, Ruslan Salakhutdinov, Sergey Levine
NeurIPS 2024 Is Value Learning Really the Main Bottleneck in Offline RL? Seohong Park, Kevin Frans, Sergey Levine, Aviral Kumar
ICMLW 2024 Is Value Learning Really the Main Bottleneck in Offline RL? Seohong Park, Kevin Frans, Sergey Levine, Aviral Kumar
CoRL 2024 LeLaN: Learning a Language-Conditioned Navigation Policy from In-the-Wild Video Noriaki Hirose, Catherine Glossop, Ajay Sridhar, Oier Mees, Sergey Levine
ICML 2024 Learning Temporal Distances: Contrastive Successor Features Can Provide a Metric Structure for Decision-Making Vivek Myers, Chongyi Zheng, Anca Dragan, Sergey Levine, Benjamin Eysenbach
NeurIPS 2024 Learning to Assist Humans Without Inferring Rewards Vivek Myers, Evan Ellis, Sergey Levine, Benjamin Eysenbach, Anca Dragan
ICMLW 2024 Learning to Assist Humans Without Inferring Rewards Vivek Myers, Evan Ellis, Benjamin Eysenbach, Sergey Levine, Anca Dragan
ICML 2024 Learning to Explore in POMDPs with Informational Rewards Annie Xie, Logan Mondal Bhamidipaty, Evan Zheran Liu, Joey Hong, Sergey Levine, Chelsea Finn
CoRL 2024 Lifelong Autonomous Improvement of Navigation Foundation Models in the Wild Kyle Stachowicz, Lydia Ignatova, Sergey Levine
ICLR 2024 METRA: Scalable Unsupervised RL with Metric-Aware Abstraction Seohong Park, Oleh Rybkin, Sergey Levine
CoRL 2024 Mobility VLA: Multimodal Instruction Navigation with Long-Context VLMs and Topological Graphs Zhuo Xu, Hao-Tien Lewis Chiang, Zipeng Fu, Mithun George Jacob, Tingnan Zhang, Tsang-Wei Edward Lee, Wenhao Yu, Connor Schenck, David Rendleman, Dhruv Shah, Fei Xia, Jasmine Hsu, Jonathan Hoech, Pete Florence, Sean Kirmani, Sumeet Singh, Vikas Sindhwani, Carolina Parada, Chelsea Finn, Peng Xu, Sergey Levine, Jie Tan
ICLR 2024 Offline RL with Observation Histories: Analyzing and Improving Sample Complexity Joey Hong, Anca Dragan, Sergey Levine
CoRL 2024 OpenVLA: An Open-Source Vision-Language-Action Model Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan P Foster, Pannag R Sanketi, Quan Vuong, Thomas Kollar, Benjamin Burchfiel, Russ Tedrake, Dorsa Sadigh, Sergey Levine, Percy Liang, Chelsea Finn
ICML 2024 PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs Soroush Nasiriany, Fei Xia, Wenhao Yu, Ted Xiao, Jacky Liang, Ishita Dasgupta, Annie Xie, Danny Driess, Ayzaan Wahid, Zhuo Xu, Quan Vuong, Tingnan Zhang, Tsang-Wei Edward Lee, Kuang-Huei Lee, Peng Xu, Sean Kirmani, Yuke Zhu, Andy Zeng, Karol Hausman, Nicolas Heess, Chelsea Finn, Sergey Levine, Brian Ichter
CoRL 2024 Policy Adaptation via Language Optimization: Decomposing Tasks for Few-Shot Imitation Vivek Myers, Chunyuan Zheng, Oier Mees, Kuan Fang, Sergey Levine
ICLR 2024 Project and Probe: Sample-Efficient Adaptation by Interpolating Orthogonal Features Annie S Chen, Yoonho Lee, Amrith Setlur, Sergey Levine, Chelsea Finn
ICML 2024 Prompting Is a Double-Edged Sword: Improving Worst-Group Robustness of Foundation Models Amrith Setlur, Saurabh Garg, Virginia Smith, Sergey Levine
ICLRW 2024 Prompting for Robustness: Extracting Robust Classifiers from Foundation Models Amrith Setlur, Saurabh Garg, Virginia Smith, Sergey Levine
ICLR 2024 RLIF: Interactive Imitation Learning as Reinforcement Learning Jianlan Luo, Perry Dong, Yuexiang Zhai, Yi Ma, Sergey Levine
CoRL 2024 Robotic Control via Embodied Chain-of-Thought Reasoning Michał Zawalski, William Chen, Karl Pertsch, Oier Mees, Chelsea Finn, Sergey Levine
CoRL 2024 SELFI: Autonomous Self-Improvement with RL for Vision-Based Navigation Around People Noriaki Hirose, Dhruv Shah, Kyle Stachowicz, Ajay Sridhar, Sergey Levine
CoRL 2024 Scaling Cross-Embodied Learning: One Policy for Manipulation, Navigation, Locomotion and Aviation Ria Doshi, Homer Rich Walke, Oier Mees, Sudeep Dasari, Sergey Levine
ICLR 2024 Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data Chongyi Zheng, Benjamin Eysenbach, Homer Rich Walke, Patrick Yin, Kuan Fang, Ruslan Salakhutdinov, Sergey Levine
CoRL 2024 Steering Your Generalists: Improving Robotic Foundation Models via Value Guidance Mitsuhiko Nakamoto, Oier Mees, Aviral Kumar, Sergey Levine
ICML 2024 Stop Regressing: Training Value Functions via Classification for Scalable Deep RL Jesse Farebrother, Jordi Orbay, Quan Vuong, Adrien Ali Taiga, Yevgen Chebotar, Ted Xiao, Alex Irpan, Sergey Levine, Pablo Samuel Castro, Aleksandra Faust, Aviral Kumar, Rishabh Agarwal
ICLR 2024 The False Promise of Imitating Proprietary Language Models Arnav Gudibande, Eric Wallace, Charlie Victor Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, Dawn Song
ICLR 2024 Training Diffusion Models with Reinforcement Learning Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, Sergey Levine
ICMLW 2024 Unfamiliar Finetuning Examples Control How Language Models Hallucinate Katie Kang, Eric Wallace, Claire Tomlin, Aviral Kumar, Sergey Levine
ICML 2024 Unsupervised Zero-Shot Reinforcement Learning via Functional Reward Encodings Kevin Frans, Seohong Park, Pieter Abbeel, Sergey Levine
ICMLW 2024 Vision-Language Models Provide Promptable Representations for Reinforcement Learning William Chen, Oier Mees, Aviral Kumar, Sergey Levine
ICLR 2024 Zero-Shot Robotic Manipulation with Pre-Trained Image-Editing Diffusion Models Kevin Black, Mitsuhiko Nakamoto, Pranav Atreya, Homer Rich Walke, Chelsea Finn, Aviral Kumar, Sergey Levine
ICML 2023 A Connection Between One-Step RL and Critic Regularization in Reinforcement Learning Benjamin Eysenbach, Matthieu Geist, Sergey Levine, Ruslan Salakhutdinov
NeurIPS 2023 Accelerating Exploration with Unlabeled Prior Data Qiyang Li, Jason Zhang, Dibya Ghosh, Amy Zhang, Sergey Levine
CoRL 2023 Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning Jianlan Luo, Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng, Sergey Levine
ICML 2023 Adversarial Policies Beat Superhuman Go AIs Tony Tong Wang, Adam Gleave, Tom Tseng, Kellin Pelrine, Nora Belrose, Joseph Miller, Michael D Dennis, Yawen Duan, Viktor Pogrebniak, Sergey Levine, Stuart Russell
ICLR 2023 Bitrate-Constrained DRO: Beyond Worst Case Robustness to Unknown Group Shifts Amrith Setlur, Don Dennis, Benjamin Eysenbach, Aditi Raghunathan, Chelsea Finn, Virginia Smith, Sergey Levine
CoRL 2023 BridgeData V2: A Dataset for Robot Learning at Scale Homer Rich Walke, Kevin Black, Tony Z. Zhao, Quan Vuong, Chongyi Zheng, Philippe Hansen-Estruch, Andre Wang He, Vivek Myers, Moo Jin Kim, Max Du, Abraham Lee, Kuan Fang, Chelsea Finn, Sergey Levine
NeurIPS 2023 Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning Mitsuhiko Nakamoto, Simon Zhai, Anikait Singh, Max Sobol Mark, Yi Ma, Chelsea Finn, Aviral Kumar, Sergey Levine
ICLRW 2023 Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning Mitsuhiko Nakamoto, Yuexiang Zhai, Anikait Singh, Yi Ma, Chelsea Finn, Aviral Kumar, Sergey Levine
ICMLW 2023 Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning Mitsuhiko Nakamoto, Yuexiang Zhai, Anikait Singh, Max Sobol Mark, Yi Ma, Chelsea Finn, Aviral Kumar, Sergey Levine
NeurIPSW 2023 Chain of Code: Reasoning with a Language Model-Augmented Code Emulator Chengshu Li, Jacky Liang, Andy Zeng, Xinyun Chen, Karol Hausman, Dorsa Sadigh, Sergey Levine, Li Fei-Fei, Fei Xia, Brian Ichter
NeurIPSW 2023 Confidence-Based Model Selection: When to Take Shortcuts in Spurious Settings Annie S Chen, Yoonho Lee, Amrith Setlur, Sergey Levine, Chelsea Finn
ICLR 2023 Confidence-Conditioned Value Functions for Offline Reinforcement Learning Joey Hong, Aviral Kumar, Sergey Levine
L4DC 2023 Contrastive Example-Based Control Kyle Beltran Hatch, Benjamin Eysenbach, Rafael Rafailov, Tianhe Yu, Ruslan Salakhutdinov, Sergey Levine, Chelsea Finn
NeurIPSW 2023 Contrastive Representations Make Planning Easy Benjamin Eysenbach, Vivek Myers, Sergey Levine, Ruslan Salakhutdinov
ICLR 2023 Efficient Deep Reinforcement Learning Requires Regulating Overfitting Qiyang Li, Aviral Kumar, Ilya Kostrikov, Sergey Levine
ICML 2023 Efficient Online Reinforcement Learning with Offline Data Philip J. Ball, Laura Smith, Ilya Kostrikov, Sergey Levine
CoRL 2023 FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing Kyle Stachowicz, Dhruv Shah, Arjun Bhorkar, Ilya Kostrikov, Sergey Levine
CoRL 2023 Goal Representations for Instruction Following: A Semi-Supervised Language Interface to Control Vivek Myers, Andre Wang He, Kuan Fang, Homer Rich Walke, Philippe Hansen-Estruch, Ching-An Cheng, Mihai Jalobeanu, Andrey Kolobov, Anca Dragan, Sergey Levine
NeurIPS 2023 Grounded Decoding: Guiding Text Generation with Grounded Models for Embodied Agents Wenlong Huang, Fei Xia, Dhruv Shah, Danny Driess, Andy Zeng, Yao Lu, Pete Florence, Igor Mordatch, Sergey Levine, Karol Hausman, Brian Ichter
NeurIPS 2023 HIQL: Offline Goal-Conditioned RL with Latent States as Actions Seohong Park, Dibya Ghosh, Benjamin Eysenbach, Sergey Levine
ICLR 2023 Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement Michael Chang, Alyssa Li Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine, Amy Zhang
NeurIPS 2023 Ignorance Is Bliss: Robust Control via Information Gating Manan Tomar, Riashat Islam, Matthew Taylor, Sergey Levine, Philip Bachman
TMLR 2023 Improving Generalization with Approximate Factored Value Functions Shagun Sodhani, Sergey Levine, Amy Zhang
ICML 2023 Jump-Start Reinforcement Learning Ikechukwu Uchendu, Ted Xiao, Yao Lu, Banghua Zhu, Mengyuan Yan, Joséphine Simon, Matthew Bennice, Chuyuan Fu, Cong Ma, Jiantao Jiao, Sergey Levine, Karol Hausman
NeurIPSW 2023 Latent Conservative Objective Models for Data-Driven Crystal Structure Prediction Han Qi, Stefano Rando, Xinyang Geng, Iku Ohama, Aviral Kumar, Sergey Levine
ICLRW 2023 Latent Conservative Objective Models for Offline Data-Driven Crystal Structure Prediction Han Qi, Stefano Rando, Xinyang Geng, Iku Ohama, Aviral Kumar, Sergey Levine
ICMLW 2023 Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware Tony Z. Zhao, Vikash Kumar, Sergey Levine, Chelsea Finn
NeurIPS 2023 Learning to Influence Human Behavior with Offline Reinforcement Learning Joey Hong, Sergey Levine, Anca Dragan
NeurIPSW 2023 METRA: Scalable Unsupervised RL with Metric-Aware Abstraction Seohong Park, Oleh Rybkin, Sergey Levine
L4DC 2023 Multi-Task Imitation Learning for Linear Dynamical Systems Thomas T. Zhang, Katie Kang, Bruce D Lee, Claire Tomlin, Sergey Levine, Stephen Tu, Nikolai Matni
CoRL 2023 Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning Dhruv Shah, Michael Robert Equi, Błażej Osiński, Fei Xia, Brian Ichter, Sergey Levine
NeurIPSW 2023 NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration Ajay Sridhar, Dhruv Shah, Catherine Glossop, Sergey Levine
ICMLW 2023 Offline Goal-Conditioned RL with Latent States as Actions Seohong Park, Dibya Ghosh, Benjamin Eysenbach, Sergey Levine
ICLR 2023 Offline Q-Learning on Diverse Multi-Task Data Both Scales and Generalizes Aviral Kumar, Rishabh Agarwal, Xinyang Geng, George Tucker, Sergey Levine
ICLR 2023 Offline RL for Natural Language Generation with Implicit Language Q Learning Charlie Victor Snell, Ilya Kostrikov, Yi Su, Sherry Yang, Sergey Levine
ICML 2023 PaLM-E: An Embodied Multimodal Language Model Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, Pete Florence
ICML 2023 Predictable MDP Abstraction for Unsupervised Model-Based RL Seohong Park, Sergey Levine
ICLRW 2023 Project with Source, Probe with Target: Extracting Useful Features for Adaptation to Distribution Shifts Annie S Chen, Yoonho Lee, Amrith Setlur, Sergey Levine, Chelsea Finn
CoRL 2023 Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions Yevgen Chebotar, Quan Vuong, Karol Hausman, Fei Xia, Yao Lu, Alex Irpan, Aviral Kumar, Tianhe Yu, Alexander Herzog, Karl Pertsch, Keerthana Gopalakrishnan, Julian Ibarz, Ofir Nachum, Sumedh Anand Sontakke, Grecia Salazar, Huong T. Tran, Jodilyn Peralta, Clayton Tan, Deeksha Manjunath, Jaspiar Singh, Brianna Zitkovich, Tomas Jackson, Kanishka Rao, Chelsea Finn, Sergey Levine
CoRL 2023 REBOOT: Reuse Data for Bootstrapping Efficient Real-World Dexterous Manipulation Zheyuan Hu, Aaron Rovinsky, Jianlan Luo, Vikash Kumar, Abhishek Gupta, Sergey Levine
CoRL 2023 RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control Brianna Zitkovich, Tianhe Yu, Sichun Xu, Peng Xu, Ted Xiao, Fei Xia, Jialin Wu, Paul Wohlhart, Stefan Welker, Ayzaan Wahid, Quan Vuong, Vincent Vanhoucke, Huong Tran, Radu Soricut, Anikait Singh, Jaspiar Singh, Pierre Sermanet, Pannag R. Sanketi, Grecia Salazar, Michael S. Ryoo, Krista Reymann, Kanishka Rao, Karl Pertsch, Igor Mordatch, Henryk Michalewski, Yao Lu, Sergey Levine, Lisa Lee, Tsang-Wei Edward Lee, Isabel Leal, Yuheng Kuang, Dmitry Kalashnikov, Ryan Julian, Nikhil J. Joshi, Alex Irpan, Brian Ichter, Jasmine Hsu, Alexander Herzog, Karol Hausman, Keerthana Gopalakrishnan, Chuyuan Fu, Pete Florence, Chelsea Finn, Kumar Avinava Dubey, Danny Driess, Tianli Ding, Krzysztof Marcin Choromanski, Xi Chen, Yevgen Chebotar, Justice Carbajal, Noah Brown, Anthony Brohan, Montserrat Gonzalez Arenas, Kehang Han
NeurIPS 2023 ReDS: Offline RL with Heteroskedastic Datasets via Support Constraints Anikait Singh, Aviral Kumar, Quan Vuong, Yevgen Chebotar, Sergey Levine
ICML 2023 Reinforcement Learning from Passive Data via Latent Intentions Dibya Ghosh, Chethan Anand Bhateja, Sergey Levine
NeurIPSW 2023 Robotic Offline RL from Internet Videos via Value-Function Pre-Training Chethan Anand Bhateja, Derek Guo, Dibya Ghosh, Anikait Singh, Manan Tomar, Quan Vuong, Yevgen Chebotar, Sergey Levine, Aviral Kumar
ICLR 2023 Simplifying Model-Based RL: Learning Representations, Latent-Space Models, and Policies with One Objective Raj Ghugare, Homanga Bharadhwaj, Benjamin Eysenbach, Sergey Levine, Russ Salakhutdinov
NeurIPSW 2023 Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data Chongyi Zheng, Benjamin Eysenbach, Homer Walke, Patrick Yin, Kuan Fang, Ruslan Salakhutdinov, Sergey Levine
ICMLW 2023 Training Diffusion Models with Reinforcement Learning Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, Sergey Levine
ICML 2023 Understanding the Complexity Gains of Single-Task RL with a Curriculum Qiyang Li, Yuexiang Zhai, Yi Ma, Sergey Levine
CoRL 2023 ViNT: A Foundation Model for Visual Navigation Dhruv Shah, Ajay Sridhar, Nitish Dashora, Kyle Stachowicz, Kevin Black, Noriaki Hirose, Sergey Levine
ICMLW 2023 Video-Guided Skill Discovery Manan Tomar, Dibya Ghosh, Vivek Myers, Anca Dragan, Matthew E. Taylor, Philip Bachman, Sergey Levine
NeurIPSW 2023 Vision-Language Models Provide Promptable Representations for Reinforcement Learning William Chen, Oier Mees, Aviral Kumar, Sergey Levine
NeurIPSW 2023 Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations Joey Hong, Sergey Levine, Anca Dragan
NeurIPSW 2023 Zero-Shot Robotic Manipulation with Pre-Trained Image-Editing Diffusion Models Kevin Black, Mitsuhiko Nakamoto, Pranav Atreya, Homer Walke, Chelsea Finn, Aviral Kumar, Sergey Levine
NeurIPSW 2022 A Connection Between One-Step Regularization and Critic Regularization in Reinforcement Learning Benjamin Eysenbach, Matthieu Geist, Ruslan Salakhutdinov, Sergey Levine
NeurIPSW 2022 A Connection Between One-Step Regularization and Critic Regularization in Reinforcement Learning Benjamin Eysenbach, Matthieu Geist, Sergey Levine, Ruslan Salakhutdinov
NeurIPSW 2022 Adversarial Policies Beat Professional-Level Go AIs Tony Tong Wang, Adam Gleave, Nora Belrose, Tom Tseng, Michael D Dennis, Yawen Duan, Viktor Pogrebniak, Joseph Miller, Sergey Levine, Stuart Russell
NeurIPSW 2022 Adversarial Policies Beat Professional-Level Go AIs Tony Tong Wang, Adam Gleave, Nora Belrose, Tom Tseng, Michael D Dennis, Yawen Duan, Viktor Pogrebniak, Sergey Levine, Stuart Russell
NeurIPS 2022 Adversarial Unlearning: Reducing Confidence Along Adversarial Directions Amrith Setlur, Benjamin Eysenbach, Virginia Smith, Sergey Levine
ICLR 2022 Autonomous Reinforcement Learning: Formalism and Benchmarking Archit Sharma, Kelvin Xu, Nikhil Sardana, Abhishek Gupta, Karol Hausman, Sergey Levine, Chelsea Finn
ICML 2022 Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning Philippe Hansen-Estruch, Amy Zhang, Ashvin Nair, Patrick Yin, Sergey Levine
NeurIPSW 2022 Bitrate-Constrained DRO: Beyond Worst Case Robustness to Unknown Group Shifts Amrith Setlur, Don Dennis, Benjamin Eysenbach, Aditi Raghunathan, Chelsea Finn, Virginia Smith, Sergey Levine
ICLR 2022 C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks Tianjun Zhang, Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine, Joseph E. Gonzalez
ICLR 2022 CoMPS: Continual Meta Policy Search Glen Berseth, Zhiwei Zhang, Grace Zhang, Chelsea Finn, Sergey Levine
NeurIPSW 2022 Confidence-Conditioned Value Functions for Offline Reinforcement Learning Joey Hong, Aviral Kumar, Sergey Levine
NeurIPSW 2022 Contrastive Example-Based Control Kyle Beltran Hatch, Sarthak J Shetty, Benjamin Eysenbach, Tianhe Yu, Rafael Rafailov, Ruslan Salakhutdinov, Sergey Levine, Chelsea Finn
NeurIPS 2022 Contrastive Learning as Goal-Conditioned Reinforcement Learning Benjamin Eysenbach, Tianjun Zhang, Sergey Levine, Ruslan Salakhutdinov
NeurIPS 2022 DASCO: Dual-Generator Adversarial Support Constrained Offline Reinforcement Learning Quan Vuong, Aviral Kumar, Sergey Levine, Yevgen Chebotar
ICMLW 2022 DASCO: Dual-Generator Adversarial Support Constrained Offline Reinforcement Learning Quan Vuong, Aviral Kumar, Sergey Levine, Yevgen Chebotar
ICLR 2022 DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, Sergey Levine
NeurIPS 2022 Data-Driven Offline Decision-Making via Invariant Representation Learning Han Qi, Yi Su, Aviral Kumar, Sergey Levine
ICLR 2022 Data-Driven Offline Optimization for Architecting Hardware Accelerators Aviral Kumar, Amir Yazdanbakhsh, Milad Hashemi, Kevin Swersky, Sergey Levine
ICLRW 2022 Data-Driven Optimization for Protein Design: Workflows, Algorithms and Metrics Sathvik Kolli, Amy X. Lu, Xinyang Geng, Aviral Kumar, Sergey Levine
ICML 2022 Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization Brandon Trabucco, Xinyang Geng, Aviral Kumar, Sergey Levine
NeurIPS 2022 Distributionally Adaptive Meta Reinforcement Learning Anurag Ajay, Abhishek Gupta, Dibya Ghosh, Sergey Levine, Pulkit Agrawal
ICMLW 2022 Distributionally Adaptive Meta Reinforcement Learning Anurag Ajay, Dibya Ghosh, Sergey Levine, Pulkit Agrawal, Abhishek Gupta
CoRL 2022 Do as I Can, Not as I Say: Grounding Language in Robotic Affordances Brian Ichter, Anthony Brohan, Yevgen Chebotar, Chelsea Finn, Karol Hausman, Alexander Herzog, Daniel Ho, Julian Ibarz, Alex Irpan, Eric Jang, Ryan Julian, Dmitry Kalashnikov, Sergey Levine, Yao Lu, Carolina Parada, Kanishka Rao, Pierre Sermanet, Alexander T Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Mengyuan Yan, Noah Brown, Michael Ahn, Omar Cortes, Nicolas Sievers, Clayton Tan, Sichun Xu, Diego Reyes, Jarek Rettinghouse, Jornell Quiambao, Peter Pastor, Linda Luu, Kuang-Huei Lee, Yuheng Kuang, Sally Jesmonth, Nikhil J. Joshi, Kyle Jeffrey, Rosario Jauregui Ruano, Jasmine Hsu, Keerthana Gopalakrishnan, Byron David, Andy Zeng, Chuyuan Kelly Fu
CoRL 2022 Don’t Start from Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning Homer Rich Walke, Jonathan Heewon Yang, Albert Yu, Aviral Kumar, Jędrzej Orbik, Avi Singh, Sergey Levine
ICMLW 2022 Effective Offline RL Needs Going Beyond Pessimism: Representations and Distributional Shift Xinyang Geng, Kevin Li, Abhishek Gupta, Aviral Kumar, Sergey Levine
NeurIPSW 2022 Efficient Deep Reinforcement Learning Requires Regulating Statistical Overfitting Qiyang Li, Aviral Kumar, Ilya Kostrikov, Sergey Levine
ICLR 2022 Extending the WILDS Benchmark for Unsupervised Adaptation Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, Percy Liang
NeurIPS 2022 First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual Information Maximization Siddharth Reddy, Sergey Levine, Anca Dragan
NeurIPSW 2022 GNM: A General Navigation Model to Drive Any Robot Dhruv Shah, Ajay Sridhar, Arjun Bhorkar, Noriaki Hirose, Sergey Levine
CoRL 2022 GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots Gilbert Feng, Hongbo Zhang, Zhongyu Li, Xue Bin Peng, Bhuvan Basireddy, Linzhu Yue, Zhitao Song, Lizhi Yang, Yunhui Liu, Koushil Sreenath, Sergey Levine
CoRL 2022 Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks Kuan Fang, Patrick Yin, Ashvin Nair, Homer Rich Walke, Gengchen Yan, Sergey Levine
NeurIPSW 2022 Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement Michael Chang, Alyssa Li Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine, Amy Zhang
ICML 2022 How to Leverage Unlabeled Data in Offline Reinforcement Learning Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Chelsea Finn, Sergey Levine
NeurIPS 2022 Imitating past Successes Can Be Very Suboptimal Benjamin Eysenbach, Soumith Udatha, Ruslan Salakhutdinov, Sergey Levine
ICLRW 2022 Improving Generalization with Approximate Factored Value Functions Shagun Sodhani, Sergey Levine, Amy Zhang
ICLR 2022 Information Prioritization Through Empowerment in Visual Model-Based RL Homanga Bharadhwaj, Mohammad Babaeizadeh, Dumitru Erhan, Sergey Levine
CoRL 2022 Inner Monologue: Embodied Reasoning Through Planning with Language Models Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Tomas Jackson, Noah Brown, Linda Luu, Sergey Levine, Karol Hausman, Brian Ichter
CoRL 2022 Is Anyone There? Learning a Planner Contingent on Perceptual Uncertainty Charles Packer, Nicholas Rhinehart, Rowan Thomas McAllister, Matthew A. Wright, Xin Wang, Jeff He, Sergey Levine, Joseph E. Gonzalez
CoRL 2022 LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action Dhruv Shah, Błażej Osiński, Brian Ichter, Sergey Levine
ICML 2022 Lyapunov Density Models: Constraining Distribution Shift in Learning-Based Control Katie Kang, Paula Gradu, Jason J Choi, Michael Janner, Claire Tomlin, Sergey Levine
NeurIPS 2022 MEMO: Test Time Robustness via Adaptation and Augmentation Marvin Zhang, Sergey Levine, Chelsea Finn
ICLRW 2022 Maximizing Entropy on Adversarial Examples Can Improve Generalization Amrith Setlur, Benjamin Eysenbach, Virginia Smith, Sergey Levine
ICLR 2022 Maximum Entropy RL (Provably) Solves Some Robust RL Problems Benjamin Eysenbach, Sergey Levine
NeurIPS 2022 Mismatched No More: Joint Model-Policy Optimization for Model-Based RL Benjamin Eysenbach, Alexander Khazatsky, Sergey Levine, Ruslan Salakhutdinov
ICMLW 2022 Multimodal Masked Autoencoders Learn Transferable Representations Xinyang Geng, Hao Liu, Lisa Lee, Dale Schuurmans, Sergey Levine, Pieter Abbeel
ICLRW 2022 Object Representations as Equilibria: Training Iterative Inference Algorithms with Implicit Differentiation Michael Chang, Thomas L. Griffiths, Sergey Levine
ICLRW 2022 Object Representations as Fixed Points: Training Iterative Inference Algorithms with Implicit Differentiation Michael Chang, Thomas L. Griffiths, Sergey Levine
NeurIPS 2022 Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation Michael Chang, Tom Griffiths, Sergey Levine
ICLRW 2022 Object-Centric Learning as Nested Optimization Michael Chang, Sergey Levine, Thomas L. Griffiths
ICML 2022 Offline Meta-Reinforcement Learning with Online Self-Supervision Vitchyr H Pong, Ashvin V Nair, Laura M Smith, Catherine Huang, Sergey Levine
NeurIPSW 2022 Offline Q-Learning on Diverse Multi-Task Data Both Scales and Generalizes Aviral Kumar, Rishabh Agarwal, Xinyang Geng, George Tucker, Sergey Levine
ICML 2022 Offline RL Policies Should Be Trained to Be Adaptive Dibya Ghosh, Anurag Ajay, Pulkit Agrawal, Sergey Levine
NeurIPSW 2022 Offline Reinforcement Learning for Customizable Visual Navigation Dhruv Shah, Arjun Bhorkar, Hrishit Leen, Ilya Kostrikov, Nicholas Rhinehart, Sergey Levine
CoRL 2022 Offline Reinforcement Learning for Visual Navigation Dhruv Shah, Arjun Bhorkar, Hrishit Leen, Ilya Kostrikov, Nicholas Rhinehart, Sergey Levine
NeurIPSW 2022 Offline Reinforcement Learning from Heteroskedastic Data via Support Constraints Anikait Singh, Aviral Kumar, Quan Vuong, Yevgen Chebotar, Sergey Levine
ICLR 2022 Offline Reinforcement Learning with Implicit Q-Learning Ilya Kostrikov, Ashvin Nair, Sergey Levine
ICLRW 2022 Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space Kuan Fang, Patrick Yin, Ashvin Nair, Sergey Levine
ICML 2022 Planning with Diffusion for Flexible Behavior Synthesis Michael Janner, Yilun Du, Joshua Tenenbaum, Sergey Levine
NeurIPSW 2022 Pre-Training for Robots: Leveraging Diverse Multitask Data via Offline Reinforcement Learning Anikait Singh, Aviral Kumar, Frederik Ebert, Yanlai Yang, Chelsea Finn, Sergey Levine
ICLR 2022 RvS: What Is Essential for Offline RL via Supervised Learning? Scott Emmons, Benjamin Eysenbach, Ilya Kostrikov, Sergey Levine
ICLR 2022 Should I Run Offline Reinforcement Learning or Behavioral Cloning? Aviral Kumar, Joey Hong, Anikait Singh, Sergey Levine
NeurIPSW 2022 Simplifying Model-Based RL: Learning Representations, Latent-Space Models, and Policies with One Objective Raj Ghugare, Homanga Bharadhwaj, Benjamin Eysenbach, Sergey Levine, Russ Salakhutdinov
NeurIPSW 2022 Skill Acquisition by Instruction Augmentation on Offline Datasets Ted Xiao, Harris Chan, Pierre Sermanet, Ayzaan Wahid, Anthony Brohan, Karol Hausman, Sergey Levine, Jonathan Tompson
ICLR 2022 TRAIL: Near-Optimal Imitation Learning with Suboptimal Data Mengjiao Yang, Sergey Levine, Ofir Nachum
ICLR 2022 The Information Geometry of Unsupervised Reinforcement Learning Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine
NeurIPS 2022 Unpacking Reward Shaping: Understanding the Benefits of Reward Engineering on Sample Complexity Abhishek Gupta, Aldo Pacchiano, Yuexiang Zhai, Sham Kakade, Sergey Levine
ICLR 2022 Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning Dhruv Shah, Peng Xu, Yao Lu, Ted Xiao, Alexander T Toshev, Sergey Levine, Brian Ichter
NeurIPS 2022 You Only Live Once: Single-Life Reinforcement Learning Annie Chen, Archit Sharma, Sergey Levine, Chelsea Finn
ICMLW 2022 You Only Live Once: Single-Life Reinforcement Learning via Learned Reward Shaping Annie S Chen, Archit Sharma, Sergey Levine, Chelsea Finn
CoRL 2021 A Workflow for Offline Model-Free Robotic Reinforcement Learning Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, Sergey Levine
CoRL 2021 AW-Opt: Learning Robotic Skills with Imitation and Reinforcement at Scale Yao Lu, Karol Hausman, Yevgen Chebotar, Mengyuan Yan, Eric Jang, Alexander Herzog, Ted Xiao, Alex Irpan, Mohi Khansari, Dmitry Kalashnikov, Sergey Levine
ICLRW 2021 Accelerating Online Reinforcement Learning via Model-Based Meta-Learning John D Co-Reyes, Sarah Feng, Glen Berseth, Jie Qui, Sergey Levine
ICML 2021 Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jacob Varley, Alex Irpan, Benjamin Eysenbach, Ryan C Julian, Chelsea Finn, Sergey Levine
NeurIPS 2021 Adaptive Risk Minimization: Learning to Adapt to Domain Shift Marvin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, Chelsea Finn
ICML 2021 Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation Aurick Zhou, Sergey Levine
NeurIPS 2021 Autonomous Reinforcement Learning via Subgoal Curricula Archit Sharma, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn
CoRL 2021 BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning Eric Jang, Alex Irpan, Mohi Khansari, Daniel Kappler, Frederik Ebert, Corey Lynch, Sergey Levine, Chelsea Finn
NeurIPS 2021 Bayesian Adaptation for Covariate Shift Aurick Zhou, Sergey Levine
ICLR 2021 Benchmarks for Deep Off-Policy Evaluation Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyu Wang, Alexander Novikov, Mengjiao Yang, Michael R Zhang, Yutian Chen, Aviral Kumar, Cosmin Paduraru, Sergey Levine, Thomas Paine
ICLR 2021 C-Learning: Learning to Achieve Goals via Recursive Classification Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine
NeurIPSW 2021 C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks Tianjun Zhang, Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine, Joseph E. Gonzalez
NeurIPS 2021 COMBO: Conservative Offline Model-Based Policy Optimization Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn
NeurIPSW 2021 CoMPS: Continual Meta Policy Search Glen Berseth, Zhiwei Zhang, Grace Zhang, Chelsea Finn, Sergey Levine
NeurIPS 2021 Conservative Data Sharing for Multi-Task Offline Reinforcement Learning Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn
ICML 2021 Conservative Objective Models for Effective Offline Model-Based Optimization Brandon Trabucco, Aviral Kumar, Xinyang Geng, Sergey Levine
ICLR 2021 Conservative Safety Critics for Exploration Homanga Bharadhwaj, Aviral Kumar, Nicholas Rhinehart, Sergey Levine, Florian Shkurti, Animesh Garg
NeurIPSW 2021 DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, Sergey Levine
NeurIPSW 2021 Data Sharing Without Rewards in Multi-Task Offline Reinforcement Learning Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Chelsea Finn, Sergey Levine, Karol Hausman
ICML 2021 Emergent Social Learning via Multi-Agent Reinforcement Learning Kamal K Ndousse, Douglas Eck, Sergey Levine, Natasha Jaques
ICLR 2021 Evolving Reinforcement Learning Algorithms John D Co-Reyes, Yingjie Miao, Daiyi Peng, Esteban Real, Quoc V Le, Sergey Levine, Honglak Lee, Aleksandra Faust
ICMLW 2021 Explore and Control with Adversarial Surprise Arnaud Fickinger, Natasha Jaques, Samyak Parajuli, Michael Chang, Nicholas Rhinehart, Glen Berseth, Stuart Russell, Sergey Levine
NeurIPSW 2021 Extending the WILDS Benchmark for Unsupervised Adaptation Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, Percy Liang
ICLR 2021 Factorizing Declarative and Procedural Knowledge in Structured, Dynamical Environments Anirudh Goyal, Alex Lamb, Phanideep Gampa, Philippe Beaudoin, Charles Blundell, Sergey Levine, Yoshua Bengio, Michael Curtis Mozer
CoRL 2021 Fully Autonomous Real-World Reinforcement Learning with Applications to Mobile Manipulation Charles Sun, Jędrzej Orbik, Coline Manon Devin, Brian H. Yang, Abhishek Gupta, Glen Berseth, Sergey Levine
CoRL 2021 Hierarchically Integrated Models: Learning to Navigate from Heterogeneous Robots Katie Kang, Gregory Kahn, Sergey Levine
NeurIPSW 2021 Hybrid Imitative Planning with Geometric and Predictive Costs in Offroad Environments Nitish Dashora, Daniel Shin, Dhruv Shah, Henry Leopold, David Fan, Ali Agha, Nicholas Rhinehart, Sergey Levine
ICLR 2021 Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning Aviral Kumar, Rishabh Agarwal, Dibya Ghosh, Sergey Levine
NeurIPS 2021 Information Is Power: Intrinsic Control via Information Capture Nicholas Rhinehart, Jenny Wang, Glen Berseth, John Co-Reyes, Danijar Hafner, Chelsea Finn, Sergey Levine
ICMLW 2021 Intrinsic Control of Variational Beliefs in Dynamic Partially-Observed Visual Environments Nicholas Rhinehart, Jenny Wang, Glen Berseth, John D Co-Reyes, Danijar Hafner, Chelsea Finn, Sergey Levine
ICLR 2021 Learning Invariant Representations for Reinforcement Learning Without Reconstruction Amy Zhang, Rowan Thomas McAllister, Roberto Calandra, Yarin Gal, Sergey Levine
ICLR 2021 Learning to Reach Goals via Iterated Supervised Learning Dibya Ghosh, Abhishek Gupta, Ashwin Reddy, Justin Fu, Coline Manon Devin, Benjamin Eysenbach, Sergey Levine
NeurIPSW 2021 MEMO: Test Time Robustness via Adaptation and Augmentation Marvin Mengxin Zhang, Sergey Levine, Chelsea Finn
ICML 2021 MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning Kevin Li, Abhishek Gupta, Ashwin Reddy, Vitchyr H Pong, Aurick Zhou, Justin Yu, Sergey Levine
NeurIPSW 2021 Mismatched No More: Joint Model-Policy Optimization for Model-Based RL Benjamin Eysenbach, Alexander Khazatsky, Sergey Levine, Ruslan Salakhutdinov
ICML 2021 Model-Based Reinforcement Learning via Latent-Space Collocation Oleh Rybkin, Chuning Zhu, Anusha Nagabandi, Kostas Daniilidis, Igor Mordatch, Sergey Levine
ICLR 2021 Model-Based Visual Planning with Self-Supervised Functional Distances Stephen Tian, Suraj Nair, Frederik Ebert, Sudeep Dasari, Benjamin Eysenbach, Chelsea Finn, Sergey Levine
ICML 2021 Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment Michael Chang, Sid Kaushik, Sergey Levine, Tom Griffiths
ICLRW 2021 Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment Michael Chang, Sidhant Kaushik, Thomas L. Griffiths, Sergey Levine
ICLR 2021 OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, Ofir Nachum
ICLR 2021 Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers Benjamin Eysenbach, Shreyas Chaudhari, Swapnil Asawa, Sergey Levine, Ruslan Salakhutdinov
ICML 2021 Offline Meta-Reinforcement Learning with Advantage Weighting Eric Mitchell, Rafael Rafailov, Xue Bin Peng, Sergey Levine, Chelsea Finn
NeurIPSW 2021 Offline Meta-Reinforcement Learning with Online Self-Supervision Vitchyr H. Pong, Ashvin Nair, Laura Smith, Catherine Huang, Sergey Levine
ICLR 2021 Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation Justin Fu, Sergey Levine
NeurIPS 2021 Offline Reinforcement Learning as One Big Sequence Modeling Problem Michael Janner, Qiyang Li, Sergey Levine
NeurIPSW 2021 Offline Reinforcement Learning with Implicit Q-Learning Ilya Kostrikov, Ashvin Nair, Sergey Levine
NeurIPS 2021 Outcome-Driven Reinforcement Learning via Variational Inference Tim G. J. Rudner, Vitchyr Pong, Rowan McAllister, Yarin Gal, Sergey Levine
ICLR 2021 Parrot: Data-Driven Behavioral Priors for Reinforcement Learning Avi Singh, Huihan Liu, Gaoyue Zhou, Albert Yu, Nicholas Rhinehart, Sergey Levine
ICML 2021 Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, Shixiang Shane Gu
NeurIPS 2021 Pragmatic Image Compression for Human-in-the-Loop Decision-Making Sid Reddy, Anca Dragan, Sergey Levine
ICML 2021 PsiPhi-Learning: Reinforcement Learning with Demonstrations Using Successor Features and Inverse Temporal Difference Learning Angelos Filos, Clare Lyle, Yarin Gal, Sergey Levine, Natasha Jaques, Gregory Farquhar
CoRL 2021 Rapid Exploration for Open-World Navigation with Latent Goal Models Dhruv Shah, Benjamin Eysenbach, Nicholas Rhinehart, Sergey Levine
ICLR 2021 Recurrent Independent Mechanisms Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, Bernhard Schölkopf
ICMLW 2021 Reinforcement Learning as One Big Sequence Modeling Problem Michael Janner, Qiyang Li, Sergey Levine
NeurIPS 2021 Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification Ben Eysenbach, Sergey Levine, Ruslan Salakhutdinov
NeurIPS 2021 Robust Predictable Control Ben Eysenbach, Ruslan Salakhutdinov, Sergey Levine
ICLR 2021 SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments Glen Berseth, Daniel Geng, Coline Manon Devin, Nicholas Rhinehart, Chelsea Finn, Dinesh Jayaraman, Sergey Levine
CoRL 2021 Scaling up Multi-Task Robotic Reinforcement Learning Dmitry Kalashnikov, Jake Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, Karol Hausman
NeurIPSW 2021 Should I Run Offline Reinforcement Learning or Behavioral Cloning? Aviral Kumar, Joey Hong, Anikait Singh, Sergey Levine
ICML 2021 Simple and Effective VAE Training with Calibrated Decoders Oleh Rybkin, Kostas Daniilidis, Sergey Levine
NeurIPSW 2021 The Information Geometry of Unsupervised Reinforcement Learning Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine
CoRL 2021 Understanding the World Through Action Sergey Levine
NeurIPSW 2021 Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning Dhruv Shah, Peng Xu, Yao Lu, Ted Xiao, Alexander T Toshev, Sergey Levine, Brian Ichter
ICML 2021 Variational Empowerment as Representation Learning for Goal-Conditioned Reinforcement Learning Jongwook Choi, Archit Sharma, Honglak Lee, Sergey Levine, Shixiang Shane Gu
NeurIPS 2021 Which Mutual-Information Representation Learning Objectives Are Sufficient for Control? Kate Rakelly, Abhishek Gupta, Carlos Florensa, Sergey Levine
NeurIPS 2021 Why Generalization in RL Is Difficult: Epistemic POMDPs and Implicit Partial Observability Dibya Ghosh, Jad Rahme, Aviral Kumar, Amy Zhang, Ryan P. Adams, Sergey Levine
ICLR 2021 X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback Jensen Gao, Siddharth Reddy, Glen Berseth, Nicholas Hardy, Nikhilesh Natraj, Karunesh Ganguly, Anca Dragan, Sergey Levine
ICLR 2020 Adversarial Policies: Attacking Deep Reinforcement Learning Adam Gleave, Michael Dennis, Cody Wild, Neel Kant, Sergey Levine, Stuart Russell
CoRL 2020 Assisted Perception: Optimizing Observations to Communicate State Siddharth Reddy, Sergey Levine, Anca Dragan
ICML 2020 Can Autonomous Vehicles Identify, Recover from, and Adapt to Distribution Shifts? Angelos Filos, Panagiotis Tigkas, Rowan Mcallister, Nicholas Rhinehart, Sergey Levine, Yarin Gal
ICML 2020 Cautious Adaptation for Reinforcement Learning in Safety-Critical Settings Jesse Zhang, Brian Cheung, Chelsea Finn, Sergey Levine, Dinesh Jayaraman
CoRL 2020 Chaining Behaviors from Data with Model-Free Reinforcement Learning Avi Singh, Albert Yu, Jonathan Yang, Jesse Zhang, Aviral Kumar, Sergey Levine
NeurIPS 2020 Conservative Q-Learning for Offline Reinforcement Learning Aviral Kumar, Aurick Zhou, George Tucker, Sergey Levine
NeurIPS 2020 Continual Learning of Control Primitives: Skill Discovery via Reset-Games Kelvin Xu, Siddharth Verma, Chelsea Finn, Sergey Levine
ICML 2020 Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions Michael Chang, Sid Kaushik, S. Matthew Weinberg, Tom Griffiths, Sergey Levine
ICLR 2020 Deep Imitative Models for Flexible Inference, Planning, and Control Nicholas Rhinehart, Rowan McAllister, Sergey Levine
NeurIPS 2020 DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction Aviral Kumar, Abhishek Gupta, Sergey Levine
ICLR 2020 Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery Kristian Hartikainen, Xinyang Geng, Tuomas Haarnoja, Sergey Levine
ICLR 2020 Dynamics-Aware Unsupervised Skill Discovery Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, Karol Hausman
ICMLW 2020 Efficient Adaptation for End-to-End Vision-Based Robotic Manipulation Ryan Julian, Benjamin Swanson, Gaurav S. Sukhatme, Sergey Levine, Chelsea Finn, Karol Hausman
NeurIPS 2020 Emergent Complexity and Zero-Shot Transfer via Unsupervised Environment Design Michael Dennis, Natasha Jaques, Eugene Vinitsky, Alexandre Bayen, Stuart J. Russell, Andrew Critch, Sergey Levine
NeurIPS 2020 Gamma-Models: Generative Temporal Difference Learning for Infinite-Horizon Prediction Michael Janner, Igor Mordatch, Sergey Levine
NeurIPS 2020 Gradient Surgery for Multi-Task Learning Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn
CoRL 2020 Inverting the Pose Forecasting Pipeline with SPF2: Sequential Pointcloud Forecasting for Sequential Pose Forecasting Xinshuo Weng, Jianren Wang, Sergey Levine, Kris Kitani, Nicholas Rhinehart
ICML 2020 Learning Human Objectives by Evaluating Hypothetical Behavior Siddharth Reddy, Anca Dragan, Sergey Levine, Shane Legg, Jan Leike
ECCV 2020 Learning Predictive Models from Observation and Interaction Karl Schmeckpeper, Annie Xie, Oleh Rybkin, Stephen Tian, Kostas Daniilidis, Sergey Levine, Chelsea Finn
CoRL 2020 Learning to Walk in the Real World with Minimal Human Effort Sehoon Ha, Peng Xu, Zhenyu Tan, Sergey Levine, Jie Tan
NeurIPS 2020 Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors Karl Pertsch, Oleh Rybkin, Frederik Ebert, Shenghao Zhou, Dinesh Jayaraman, Chelsea Finn, Sergey Levine
CoRL 2020 MELD: Meta-Reinforcement Learning from Images via Latent State Models Zihao Zhao, Anusha Nagabandi, Kate Rakelly, Chelsea Finn, Sergey Levine
NeurIPS 2020 MOPO: Model-Based Offline Policy Optimization Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Y Zou, Sergey Levine, Chelsea Finn, Tengyu Ma
ICLR 2020 Meta-Learning Without Memorization Mingzhang Yin, George Tucker, Mingyuan Zhou, Sergey Levine, Chelsea Finn
ICLR 2020 Model Based Reinforcement Learning for Atari Łukasz Kaiser, Mohammad Babaeizadeh, Piotr Miłos, Błażej Osiński, Roy H Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Afroz Mohiuddin, Ryan Sepassi, George Tucker, Henryk Michalewski
NeurIPS 2020 Model Inversion Networks for Model-Based Optimization Aviral Kumar, Sergey Levine
CoRL 2020 Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning Ryan Julian, Benjamin Swanson, Gaurav Sukhatme, Sergey Levine, Chelsea Finn, Karol Hausman
ICMLW 2020 Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers Benjamin Eysenbach, Swapnil Asawa, Shreyas Chaudhari, Ruslan Salakhutdinov, Sergey Levine
NeurIPS 2020 One Solution Is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL Saurabh Kumar, Aviral Kumar, Sergey Levine, Chelsea Finn
ICLR 2020 Reinforcement Learning with Competitive Ensembles of Information-Constrained Primitives Anirudh Goyal, Shagun Sodhani, Jonathan Binas, Xue Bin Peng, Sergey Levine, Yoshua Bengio
CoRL 2020 Reinforcement Learning with Videos: Combining Offline Observations with Interaction Karl Schmeckpeper, Oleh Rybkin, Kostas Daniilidis, Sergey Levine, Chelsea Finn
NeurIPS 2020 Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement Ben Eysenbach, Xinyang Geng, Sergey Levine, Ruslan Salakhutdinov
ICMLW 2020 Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement Benjamin Eysenbach, Xinyang Geng, Sergey Levine, Ruslan Salakhutdinov
ICLR 2020 SQIL: Imitation Learning via Reinforcement Learning with Sparse Rewards Siddharth Reddy, Anca D. Dragan, Sergey Levine
ICML 2020 Skew-Fit: State-Covering Self-Supervised Reinforcement Learning Vitchyr Pong, Murtaza Dalal, Steven Lin, Ashvin Nair, Shikhar Bahl, Sergey Levine
NeurIPS 2020 Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model Alex X. Lee, Anusha Nagabandi, Pieter Abbeel, Sergey Levine
ICLR 2020 The Ingredients of Real World Robotic Reinforcement Learning Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, Sergey Levine
ICLR 2020 The Variational Bandwidth Bottleneck: Stochastic Evaluation on an Information Budget Anirudh Goyal, Yoshua Bengio, Matthew Botvinick, Sergey Levine
ICLR 2020 Thinking While Moving: Deep Reinforcement Learning with Concurrent Control Ted Xiao, Eric Jang, Dmitry Kalashnikov, Sergey Levine, Julian Ibarz, Karol Hausman, Alexander Herzog
ICLR 2020 Unsupervised Meta-Learning for Reinforcement Learning Abhishek Gupta, Benjamin Eysenbach, Chelsea Finn, Sergey Levine
ICLR 2020 VideoFlow: A Conditional Flow-Based Model for Stochastic Video Generation Manoj Kumar, Mohammad Babaeizadeh, Dumitru Erhan, Chelsea Finn, Sergey Levine, Laurent Dinh, Durk Kingma
ICLR 2020 Watch, Try, Learn: Meta-Learning from Demonstrations and Rewards Allan Zhou, Eric Jang, Daniel Kappler, Alex Herzog, Mohi Khansari, Paul Wohlhart, Yunfei Bai, Mrinal Kalakrishnan, Sergey Levine, Chelsea Finn
ICLR 2019 Reasoning About Physical Interactions with Object-Oriented Prediction and Planning Michael Janner, Sergey Levine, William T. Freeman, Joshua B. Tenenbaum, Chelsea Finn, Jiajun Wu
ICLR 2019 Automatically Composing Representation Transformations as a Means for Generalization Michael Chang, Abhishek Gupta, Sergey Levine, Thomas L. Griffiths
NeurIPS 2019 Causal Confusion in Imitation Learning Pim de Haan, Dinesh Jayaraman, Sergey Levine
NeurIPS 2019 Compositional Plan Vectors Coline Devin, Daniel Geng, Pieter Abbeel, Trevor Darrell, Sergey Levine
CoRL 2019 Contextual Imagined Goals for Self-Supervised Robotic Learning Ashvin Nair, Shikhar Bahl, Alexander Khazatsky, Vitchyr Pong, Glen Berseth, Sergey Levine
CoRL 2019 Deep Dynamics Models for Learning Dexterous Manipulation Anusha Nagabandi, Kurt Konolige, Sergey Levine, Vikash Kumar
ICLR 2019 Deep Online Learning via Meta-Learning: Continual Adaptation for Model-Based RL Anusha Nagabandi, Chelsea Finn, Sergey Levine
ICMLW 2019 Deep Reinforcement Learning for Industrial Insertion Tasks with Visual Inputs and Natural Reward Signals Gerrit Schoettler, Ashvin Nair, Jianlan Luo, Shikhar Bahl, Juan Aparicio Ojea, Eugen Solowjow, Sergey Levine
ICML 2019 Diagnosing Bottlenecks in Deep Q-Learning Algorithms Justin Fu, Aviral Kumar, Matthew Soh, Sergey Levine
ICLR 2019 Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning Ilya Kostrikov, Kumar Krishna Agrawal, Debidatta Dwibedi, Sergey Levine, Jonathan Tompson
ICLR 2019 Diversity Is All You Need: Learning Skills Without a Reward Function Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, Sergey Levine
ICML 2019 EMI: Exploration with Mutual Information Hyoungseok Kim, Jaekyeom Kim, Yeonwoo Jeong, Sergey Levine, Hyun Oh Song
ICML 2019 Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables Kate Rakelly, Aurick Zhou, Chelsea Finn, Sergey Levine, Deirdre Quillen
ICLRW 2019 Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables Kate Rakelly, Aurick Zhou, Deirdre Quillen, Chelsea Finn, Sergey Levine
CoRL 2019 Entity Abstraction in Visual Model-Based Reinforcement Learning Rishi Veerapaneni, John D. Co-Reyes, Michael Chang, Michael Janner, Chelsea Finn, Jiajun Wu, Joshua Tenenbaum, Sergey Levine
ICLR 2019 From Language to Goals: Inverse Reinforcement Learning for Vision-Based Instruction Following Justin Fu, Anoop Korattikara, Sergey Levine, Sergio Guadarrama
NeurIPS 2019 Guided Meta-Policy Search Russell Mendonca, Abhishek Gupta, Rosen Kralev, Pieter Abbeel, Sergey Levine, Chelsea Finn
ICLR 2019 Guiding Policies with Language via Meta-Learning John D. Co-Reyes, Abhishek Gupta, Suvansh Sanjeev, Nick Altieri, Jacob Andreas, John DeNero, Pieter Abbeel, Sergey Levine
ICLR 2019 InfoBot: Transfer and Exploration via the Information Bottleneck Anirudh Goyal, Riashat Islam, Dj Strouse, Zafarali Ahmed, Hugo Larochelle, Matthew Botvinick, Yoshua Bengio, Sergey Levine
ICLR 2019 Learning Actionable Representations with Goal Conditioned Policies Dibya Ghosh, Abhishek Gupta, Sergey Levine
CoRL 2019 Learning Latent Plans from Play Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, Pierre Sermanet
ICML 2019 Learning a Prior over Intent via Meta-Inverse Reinforcement Learning Kelvin Xu, Ellis Ratner, Anca Dragan, Sergey Levine, Chelsea Finn
ICLR 2019 Learning to Adapt in Dynamic, Real-World Environments Through Meta-Reinforcement Learning Anusha Nagabandi, Ignasi Clavera, Simin Liu, Ronald S. Fearing, Pieter Abbeel, Sergey Levine, Chelsea Finn
NeurIPS 2019 MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies Xue Bin Peng, Michael Chang, Grace Zhang, Pieter Abbeel, Sergey Levine
NeurIPS 2019 Meta-Learning with Implicit Gradients Aravind Rajeswaran, Chelsea Finn, Sham M. Kakade, Sergey Levine
CoRL 2019 Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, Sergey Levine
ICLR 2019 Near-Optimal Representation Learning for Hierarchical Reinforcement Learning Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine
ICMLW 2019 Off-Policy Evaluation of Generalization for Deep Q-Learning in Binary Reward Tasks Alex Irpan, Kanishka Rao, Konstantinos Bousmalis, Chris Harris, Julian Ibarz, Sergey Levine
NeurIPS 2019 Off-Policy Evaluation via Off-Policy Classification Alexander Irpan, Kanishka Rao, Konstantinos Bousmalis, Chris Harris, Julian Ibarz, Sergey Levine
ICML 2019 Online Meta-Learning Chelsea Finn, Aravind Rajeswaran, Sham Kakade, Sergey Levine
ICLRW 2019 Online Meta-Learning Chelsea Finn, Aravind Rajeswaran, Sham Kakade, Sergey Levine
NeurIPS 2019 Planning with Goal-Conditioned Policies Soroush Nasiriany, Vitchyr Pong, Steven Lin, Sergey Levine
CoRL 2019 ROBEL: Robotics Benchmarks for Learning with Low-Cost Robots Michael Ahn, Henry Zhu, Kristian Hartikainen, Hugo Ponte, Abhishek Gupta, Sergey Levine, Vikash Kumar
ICLR 2019 Recall Traces: Backtracking Models for Efficient Reinforcement Learning Anirudh Goyal, Philemon Brakel, William Fedus, Soumye Singhal, Timothy Lillicrap, Sergey Levine, Hugo Larochelle, Yoshua Bengio
CoRL 2019 Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, Karol Hausman
CoRL 2019 RoboNet: Large-Scale Multi-Robot Learning Sudeep Dasari, Frederik Ebert, Stephen Tian, Suraj Nair, Bernadette Bucher, Karl Schmeckpeper, Siddharth Singh, Sergey Levine, Chelsea Finn
ICML 2019 SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning Marvin Zhang, Sharad Vikram, Laura Smith, Pieter Abbeel, Matthew Johnson, Sergey Levine
NeurIPS 2019 Search on the Replay Buffer: Bridging Planning and Reinforcement Learning Ben Eysenbach, Ruslan Salakhutdinov, Sergey Levine
NeurIPS 2019 Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, Sergey Levine
ICLR 2019 Time-Agnostic Prediction: Predicting Predictable Video Frames Dinesh Jayaraman, Frederik Ebert, Alexei Efros, Sergey Levine
NeurIPS 2019 Unsupervised Curricula for Visual Meta-Reinforcement Learning Allan Jabri, Kyle Hsu, Abhishek Gupta, Ben Eysenbach, Sergey Levine, Chelsea Finn
ICLR 2019 Unsupervised Learning via Meta-Learning Kyle Hsu, Sergey Levine, Chelsea Finn
ICLR 2019 Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow Xue Bin Peng, Angjoo Kanazawa, Sam Toyer, Pieter Abbeel, Sergey Levine
NeurIPS 2019 Wasserstein Dependency Measure for Representation Learning Sherjil Ozair, Corey Lynch, Yoshua Bengio, Aaron van den Oord, Sergey Levine, Pierre Sermanet
NeurIPS 2019 When to Trust Your Model: Model-Based Policy Optimization Michael Janner, Justin Fu, Marvin Zhang, Sergey Levine
ICMLW 2018 Automatically Constructing Compositional and Recursive Learners Michael Chang, Abhishek Gupta, Thomas Griffiths, Sergey Levine
CoRL 2018 Composable Action-Conditioned Predictors: Flexible Off-Policy Learning for Robot Navigation Gregory Kahn, Adam Villaflor, Pieter Abbeel, Sergey Levine
NeurIPS 2018 Data-Efficient Hierarchical Reinforcement Learning Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine
NeurIPS 2018 Deep Reinforcement Learning in a Handful of Trials Using Probabilistic Dynamics Models Kurtland Chua, Roberto Calandra, Rowan McAllister, Sergey Levine
ICLR 2018 Divide-and-Conquer Reinforcement Learning Dibya Ghosh, Avi Singh, Aravind Rajeswaran, Vikash Kumar, Sergey Levine
CoRL 2018 Few-Shot Goal Inference for Visuomotor Learning and Planning Annie Xie, Avi Singh, Sergey Levine, Chelsea Finn
CoRL 2018 Grasp2Vec: Learning Object Representations from Self-Supervised Grasping Eric Jang, Coline Devin, Vincent Vanhoucke, Sergey Levine
ICML 2018 Latent Space Policies for Hierarchical Reinforcement Learning Tuomas Haarnoja, Kristian Hartikainen, Pieter Abbeel, Sergey Levine
CVPRW 2018 Learning Instance Segmentation by Interaction Deepak Pathak, Yide Shentu, Dian Chen, Pulkit Agrawal, Trevor Darrell, Sergey Levine, Jitendra Malik
ICLR 2018 Learning Robust Rewards with Adversarial Inverse Reinforcement Learning Justin Fu, Katie Luo, Sergey Levine
ICLR 2018 Leave No Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine
ICLR 2018 Meta-Learning and Universality: Deep Representations and Gradient Descent Can Approximate Any Learning Algorithm Chelsea Finn, Sergey Levine
NeurIPS 2018 Meta-Reinforcement Learning of Structured Exploration Strategies Abhishek Gupta, Russell Mendonca, YuXuan Liu, Pieter Abbeel, Sergey Levine
NeurIPS 2018 Probabilistic Model-Agnostic Meta-Learning Chelsea Finn, Kelvin Xu, Sergey Levine
ICLR 2018 Recasting Gradient-Based Meta-Learning as Hierarchical Bayes Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, Thomas Griffiths
ICML 2018 Regret Minimization for Partially Observable Deep Reinforcement Learning Peter Jin, Kurt Keutzer, Sergey Levine
CoRL 2018 Robustness via Retrying: Closed-Loop Robotic Manipulation with Self-Supervised Learning Frederik Ebert, Sudeep Dasari, Alex X. Lee, Sergey Levine, Chelsea Finn
CoRL 2018 Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, Sergey Levine
ICML 2018 Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings John Co-Reyes, YuXuan Liu, Abhishek Gupta, Benjamin Eysenbach, Pieter Abbeel, Sergey Levine
ICML 2018 Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, Sergey Levine
ICLR 2018 Stochastic Variational Video Prediction Mohammad Babaeizadeh, Chelsea Finn, Dumitru Erhan, Roy H. Campbell, Sergey Levine
ICLR 2018 Temporal Difference Models: Model-Free Deep RL for Model-Based Control Vitchyr Pong, Shixiang Gu, Murtaza Dalal, Sergey Levine
ICML 2018 The Mirage of Action-Dependent Baselines in Reinforcement Learning George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard Turner, Zoubin Ghahramani, Sergey Levine
ICML 2018 Universal Planning Networks: Learning Generalizable Representations for Visuomotor Control Aravind Srinivas, Allan Jabri, Pieter Abbeel, Sergey Levine, Chelsea Finn
NeurIPS 2018 Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition Justin Fu, Avi Singh, Dibya Ghosh, Larry Yang, Sergey Levine
NeurIPS 2018 Visual Memory for Robust Path Following Ashish Kumar, Saurabh Gupta, David Fouhey, Sergey Levine, Jitendra Malik
NeurIPS 2018 Visual Reinforcement Learning with Imagined Goals Ashvin V Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, Sergey Levine
NeurIPS 2018 Where Do You Think You're Going?: Inferring Beliefs About Dynamics from Behavior Sid Reddy, Anca Dragan, Sergey Levine
CVPR 2017 Cognitive Mapping and Planning for Visual Navigation Saurabh Gupta, James Davidson, Sergey Levine, Rahul Sukthankar, Jitendra Malik
ICML 2017 Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning Yevgen Chebotar, Karol Hausman, Marvin Zhang, Gaurav Sukhatme, Stefan Schaal, Sergey Levine
ICLR 2017 EPOpt: Learning Robust Neural Network Policies Using Model Ensembles Aravind Rajeswaran, Sarvjeet Ghotra, Balaraman Ravindran, Sergey Levine
NeurIPS 2017 EX2: Exploration with Exemplar Models for Deep Reinforcement Learning Justin Fu, John Co-Reyes, Sergey Levine
CoRL 2017 End-to-End Learning of Semantic Grasping Eric Jang, Sudheendra Vijayanarasimhan, Peter Pastor, Julian Ibarz, Sergey Levine
ICCV 2017 GPLAC: Generalizing Vision-Based Robotic Skills Using Weakly Labeled Images Avi Singh, Larry Yang, Sergey Levine
ICLR 2017 Generalizing Skills with Semi-Supervised Reinforcement Learning Chelsea Finn, Tianhe Yu, Justin Fu, Pieter Abbeel, Sergey Levine
NeurIPS 2017 Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning Shixiang Gu, Timothy Lillicrap, Richard E Turner, Zoubin Ghahramani, Bernhard Schölkopf, Sergey Levine
ICLR 2017 Learning Invariant Feature Spaces to Transfer Skills with Reinforcement Learning Abhishek Gupta, Coline Devin, Yuxuan Liu, Pieter Abbeel, Sergey Levine
CoRL 2017 Learning Robotic Manipulation of Granular Media Connor Schenck, Jonathan Tompson, Sergey Levine, Dieter Fox
ICLR 2017 Learning Visual Servoing with Deep Features and Fitted Q-Iteration Alex X. Lee, Sergey Levine, Pieter Abbeel
ICML 2017 Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks Chelsea Finn, Pieter Abbeel, Sergey Levine
ICML 2017 Modular Multitask Reinforcement Learning with Policy Sketches Jacob Andreas, Dan Klein, Sergey Levine
CoRL 2017 One-Shot Visual Imitation Learning via Meta-Learning Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, Sergey Levine
ICLR 2017 Q-Prop: Sample-Efficient Policy Gradient with an Off-Policy Critic Shixiang Gu, Timothy P. Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine
ICML 2017 Reinforcement Learning with Deep Energy-Based Policies Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, Sergey Levine
CoRL 2017 Self-Supervised Visual Planning with Temporal Skip Connections Frederik Ebert, Chelsea Finn, Alex X. Lee, Sergey Levine
CoRL 2017 The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes? Roberto Calandra, Andrew Owens, Manu Upadhyaya, Wenzhen Yuan, Justin Lin, Edward H. Adelson, Sergey Levine
CVPRW 2017 Time-Contrastive Networks: Self-Supervised Learning from Multi-View Observation Pierre Sermanet, Corey Lynch, Jasmine Hsu, Sergey Levine
ICLR 2017 Unsupervised Perceptual Rewards for Imitation Learning Pierre Sermanet, Kelvin Xu, Sergey Levine
IJCAI 2017 Value Iteration Networks Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel
NeurIPS 2016 Backprop KF: Learning Discriminative Deterministic State Estimators Tuomas Haarnoja, Anurag Ajay, Sergey Levine, Pieter Abbeel
ICML 2016 Continuous Deep Q-Learning with Model-Based Acceleration Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, Sergey Levine
JMLR 2016 End-to-End Training of Deep Visuomotor Policies Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel
ICML 2016 Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization Chelsea Finn, Sergey Levine, Pieter Abbeel
NeurIPS 2016 Guided Policy Search via Approximate Mirror Descent William H Montgomery, Sergey Levine
ICLR 2016 High-Dimensional Continuous Control Using Generalized Advantage Estimation John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan, Pieter Abbeel
ICLR 2016 Learning Visual Predictive Models of Physics for Playing Billiards Katerina Fragkiadaki, Pulkit Agrawal, Sergey Levine, Jitendra Malik
NeurIPS 2016 Learning to Poke by Poking: Experiential Learning of Intuitive Physics Pulkit Agrawal, Ashvin V Nair, Pieter Abbeel, Jitendra Malik, Sergey Levine
ICLR 2016 MuProp: Unbiased Backpropagation for Stochastic Neural Networks Shixiang Gu, Sergey Levine, Ilya Sutskever, Andriy Mnih
NeurIPS 2016 Unsupervised Learning for Physical Interaction Through Video Prediction Chelsea Finn, Ian Goodfellow, Sergey Levine
NeurIPS 2016 Value Iteration Networks Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel
ICCV 2015 Recurrent Network Models for Human Dynamics Katerina Fragkiadaki, Sergey Levine, Panna Felsen, Jitendra Malik
ICML 2015 Trust Region Policy Optimization John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, Philipp Moritz
ICML 2014 Learning Complex Neural Network Policies with Trajectory Optimization Sergey Levine, Vladlen Koltun
NeurIPS 2014 Learning Neural Network Policies with Guided Policy Search Under Unknown Dynamics Sergey Levine, Pieter Abbeel
ICML 2013 Guided Policy Search Sergey Levine, Vladlen Koltun
NeurIPS 2013 Variational Policy Search via Trajectory Optimization Sergey Levine, Vladlen Koltun
ICML 2012 Continuous Inverse Optimal Control with Locally Optimal Examples Sergey Levine, Vladlen Koltun
NeurIPS 2011 Nonlinear Inverse Reinforcement Learning with Gaussian Processes Sergey Levine, Zoran Popovic, Vladlen Koltun
NeurIPS 2010 Feature Construction for Inverse Reinforcement Learning Sergey Levine, Zoran Popovic, Vladlen Koltun