Kumar, Aviral

98 publications

NeurIPS 2025 · Bigger, Regularized, Categorical: High-Capacity Value Functions Are Efficient Multi-Task Learners · Michal Nauman, Marek Cygan, Carmelo Sferrazza, Aviral Kumar, Pieter Abbeel
NeurIPS 2025 · Compute-Optimal Scaling for Value-Based Deep RL · Preston Fu, Oleh Rybkin, Zhiyuan Zhou, Michal Nauman, Pieter Abbeel, Sergey Levine, Aviral Kumar
ICLR 2025 · Digi-Q: Learning VLM Q-Value Functions for Training Device-Control Agents · Hao Bai, Yifei Zhou, Li Erran Li, Sergey Levine, Aviral Kumar
ICLR 2025 · Efficient Online Reinforcement Learning Fine-Tuning Need Not Retain Offline Data · Zhiyuan Zhou, Andy Peng, Qiyang Li, Sergey Levine, Aviral Kumar
ICLR 2025 · Generative Verifiers: Reward Modeling as Next-Token Prediction · Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, Rishabh Agarwal
NeurIPS 2025 · Grounded Reinforcement Learning for Visual Reasoning · Gabriel Herbert Sarch, Snigdha Saha, Naitik Khandelwal, Ayush Jain, Michael J. Tarr, Aviral Kumar, Katerina Fragkiadaki
NeurIPS 2025 · Horizon Reduction Makes RL Scalable · Seohong Park, Kevin Frans, Deepinder Mann, Benjamin Eysenbach, Aviral Kumar, Sergey Levine
ICLRW 2025 · Improving Test-Time Search for LLMs with Backtracking Against In-Context Value Verifiers · Anikait Singh, Kushal Arora, Sedrick Keh, Jean Mercat, Tatsunori Hashimoto, Chelsea Finn, Aviral Kumar
ICLR 2025 · Inference-Aware Fine-Tuning for Best-of-N Sampling in Large Language Models · Yinlam Chow, Guy Tennenholtz, Izzeddin Gur, Vincent Zhuang, Bo Dai, Aviral Kumar, Rishabh Agarwal, Sridhar Thiagarajan, Craig Boutilier, Aleksandra Faust
ICML 2025 · Optimizing Test-Time Compute via Meta Reinforcement Finetuning · Yuxiao Qu, Matthew Y. R. Yang, Amrith Setlur, Lewis Tunstall, Edward Emanuel Beeching, Ruslan Salakhutdinov, Aviral Kumar
ICLRW 2025 · Optimizing Test-Time Compute via Meta Reinforcement Finetuning · Yuxiao Qu, Matthew Y. R. Yang, Amrith Setlur, Lewis Tunstall, Edward Emanuel Beeching, Ruslan Salakhutdinov, Aviral Kumar
ICLRW 2025 · Policy-Agnostic RL: Offline RL and Online RL Fine-Tuning of Any Class and Backbone · Max Sobol Mark, Tian Gao, Georgia Gabriela Sampaio, Mohan Kumar Srirama, Archit Sharma, Chelsea Finn, Aviral Kumar
ICLR 2025 · RRM: Robust Reward Model Training Mitigates Reward Hacking · Tianqi Liu, Wei Xiong, Jie Ren, Lichang Chen, Junru Wu, Rishabh Joshi, Yang Gao, Jiaming Shen, Zhen Qin, Tianhe Yu, Daniel Sohn, Anastasia Makarova, Jeremiah Zhe Liu, Yuan Liu, Bilal Piot, Abe Ittycheriah, Aviral Kumar, Mohammad Saleh
NeurIPS 2025 · Reasoning as an Adaptive Defense for Safety · Taeyoun Kim, Fahim Tajwar, Aditi Raghunathan, Aviral Kumar
ICLR 2025 · Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning · Amrith Setlur, Chirag Nagpal, Adam Fisch, Xinyang Geng, Jacob Eisenstein, Rishabh Agarwal, Alekh Agarwal, Jonathan Berant, Aviral Kumar
ICLR 2025 · Scaling LLM Test-Time Compute Optimally Can Be More Effective than Scaling Parameters for Reasoning · Charlie Victor Snell, Jaehoon Lee, Kelvin Xu, Aviral Kumar
ICML 2025 · Scaling Test-Time Compute Without Verification or RL Is Suboptimal · Amrith Setlur, Nived Rajaraman, Sergey Levine, Aviral Kumar
ICLRW 2025 · Scaling Test-Time Compute Without Verification or RL Is Suboptimal · Amrith Setlur, Nived Rajaraman, Sergey Levine, Aviral Kumar
NeurIPS 2025 · Thinking vs. Doing: Improving Agent Reasoning by Scaling Test-Time Interaction · Junhong Shen, Hao Bai, Lunjun Zhang, Yifei Zhou, Amrith Setlur, Shengbang Tong, Diego Caples, Nan Jiang, Tong Zhang, Ameet Talwalkar, Aviral Kumar
ICLR 2025 · Training Language Models to Self-Correct via Reinforcement Learning · Aviral Kumar, Vincent Zhuang, Rishabh Agarwal, Yi Su, John D Co-Reyes, Avi Singh, Kate Baumli, Shariq Iqbal, Colton Bishop, Rebecca Roelofs, Lei M Zhang, Kay McKinney, Disha Shrivastava, Cosmin Paduraru, George Tucker, Doina Precup, Feryal Behbahani, Aleksandra Faust
ICML 2025 · Value-Based Deep RL Scales Predictably · Oleh Rybkin, Michal Nauman, Preston Fu, Charlie Victor Snell, Pieter Abbeel, Sergey Levine, Aviral Kumar
ICLRW 2025 · Value-Based Deep RL Scales Predictably · Oleh Rybkin, Michal Nauman, Preston Fu, Charlie Victor Snell, Pieter Abbeel, Sergey Levine, Aviral Kumar
TMLR 2025 · Vision-Language Models Provide Promptable Representations for Reinforcement Learning · William Chen, Oier Mees, Aviral Kumar, Sergey Levine
ICML 2025 · What Do Learning Dynamics Reveal About Generalization in LLM Mathematical Reasoning? · Katie Kang, Amrith Setlur, Dibya Ghosh, Jacob Steinhardt, Claire Tomlin, Sergey Levine, Aviral Kumar
ICML 2024 · ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL · Yifei Zhou, Andrea Zanette, Jiayi Pan, Sergey Levine, Aviral Kumar
ICLRW 2024 · ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL · Yifei Zhou, Andrea Zanette, Jiayi Pan, Aviral Kumar, Sergey Levine
NeurIPS 2024 · Designing Cell-Type-Specific Promoter Sequences Using Conservative Model-Based Optimization · Aniketh Janardhan Reddy, Xinyang Geng, Michael H. Herschl, Sathvik Kolli, Aviral Kumar, Patrick D. Hsu, Sergey Levine, Nilah M. Ioannidis
NeurIPS 2024 · DigiRL: Training In-the-Wild Device-Control Agents with Autonomous Reinforcement Learning · Hao Bai, Yifei Zhou, Mert Cemri, Jiayi Pan, Alane Suhr, Sergey Levine, Aviral Kumar
ICMLW 2024 · DigiRL: Training In-the-Wild Device-Control Agents with Autonomous Reinforcement Learning · Yifei Zhou, Hao Bai, Mert Cemri, Jiayi Pan, Alane Suhr, Sergey Levine, Aviral Kumar
ICMLW 2024 · DigiRL: Training In-the-Wild Device-Control Agents with Autonomous Reinforcement Learning · Hao Bai, Yifei Zhou, Mert Cemri, Jiayi Pan, Alane Suhr, Sergey Levine, Aviral Kumar
NeurIPSW 2024 · Generative Verifiers: Reward Modeling as Next-Token Prediction · Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, Rishabh Agarwal
NeurIPS 2024 · Is Value Learning Really the Main Bottleneck in Offline RL? · Seohong Park, Kevin Frans, Sergey Levine, Aviral Kumar
ICMLW 2024 · Is Value Learning Really the Main Bottleneck in Offline RL? · Seohong Park, Kevin Frans, Sergey Levine, Aviral Kumar
ICMLW 2024 · Learning to Reason by Failing: Offline RL on Sub-Optimal Rollouts Scales Synthetic Data by 8x · Amrith Setlur, Saurabh Garg, Xinyang Geng, Naman Garg, Virginia Smith, Aviral Kumar
ICML 2024 · Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data · Fahim Tajwar, Anikait Singh, Archit Sharma, Rafael Rafailov, Jeff Schneider, Tengyang Xie, Stefano Ermon, Chelsea Finn, Aviral Kumar
NeurIPS 2024 · RL on Incorrect Synthetic Data Scales the Efficiency of LLM Math Reasoning by Eight-Fold · Amrith Setlur, Saurabh Garg, Xinyang Geng, Naman Garg, Virginia Smith, Aviral Kumar
ICMLW 2024 · Recursive Introspection: Teaching Foundation Model Agents How to Self-Improve · Yuxiao Qu, Tianjun Zhang, Naman Garg, Aviral Kumar
ICMLW 2024 · Recursive Introspection: Teaching LLM Agents How to Self-Improve · Yuxiao Qu, Tianjun Zhang, Naman Garg, Aviral Kumar
NeurIPS 2024 · Recursive Introspection: Teaching Language Model Agents How to Self-Improve · Yuxiao Qu, Tianjun Zhang, Naman Garg, Aviral Kumar
CoRL 2024 · Steering Your Generalists: Improving Robotic Foundation Models via Value Guidance · Mitsuhiko Nakamoto, Oier Mees, Aviral Kumar, Sergey Levine
ICML 2024 · Stop Regressing: Training Value Functions via Classification for Scalable Deep RL · Jesse Farebrother, Jordi Orbay, Quan Vuong, Adrien Ali Taiga, Yevgen Chebotar, Ted Xiao, Alex Irpan, Sergey Levine, Pablo Samuel Castro, Aleksandra Faust, Aviral Kumar, Rishabh Agarwal
ICMLW 2024 · Unfamiliar Finetuning Examples Control How Language Models Hallucinate · Katie Kang, Eric Wallace, Claire Tomlin, Aviral Kumar, Sergey Levine
ICMLW 2024 · Vision-Language Models Provide Promptable Representations for Reinforcement Learning · William Chen, Oier Mees, Aviral Kumar, Sergey Levine
ICLR 2024 · Zero-Shot Robotic Manipulation with Pre-Trained Image-Editing Diffusion Models · Kevin Black, Mitsuhiko Nakamoto, Pranav Atreya, Homer Rich Walke, Chelsea Finn, Aviral Kumar, Sergey Levine
CoRL 2023 · Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning · Jianlan Luo, Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng, Sergey Levine
NeurIPS 2023 · Beyond Uniform Sampling: Offline Reinforcement Learning with Imbalanced Datasets · Zhang-Wei Hong, Aviral Kumar, Sathwik Karnik, Abhishek Bhandwaldar, Akash Srivastava, Joni K. Pajarinen, Romain Laroche, Abhishek Gupta, Pulkit Agrawal
NeurIPS 2023 · Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning · Mitsuhiko Nakamoto, Simon Zhai, Anikait Singh, Max Sobol Mark, Yi Ma, Chelsea Finn, Aviral Kumar, Sergey Levine
ICLRW 2023 · Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning · Mitsuhiko Nakamoto, Yuexiang Zhai, Anikait Singh, Yi Ma, Chelsea Finn, Aviral Kumar, Sergey Levine
ICMLW 2023 · Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning · Mitsuhiko Nakamoto, Yuexiang Zhai, Anikait Singh, Max Sobol Mark, Yi Ma, Chelsea Finn, Aviral Kumar, Sergey Levine
ICLR 2023 · Confidence-Conditioned Value Functions for Offline Reinforcement Learning · Joey Hong, Aviral Kumar, Sergey Levine
ICLR 2023 · Efficient Deep Reinforcement Learning Requires Regulating Overfitting · Qiyang Li, Aviral Kumar, Ilya Kostrikov, Sergey Levine
NeurIPSW 2023 · Latent Conservative Objective Models for Data-Driven Crystal Structure Prediction · Han Qi, Stefano Rando, Xinyang Geng, Iku Ohama, Aviral Kumar, Sergey Levine
ICLRW 2023 · Latent Conservative Objective Models for Offline Data-Driven Crystal Structure Prediction · Han Qi, Stefano Rando, Xinyang Geng, Iku Ohama, Aviral Kumar, Sergey Levine
ICLR 2023 · Offline Q-Learning on Diverse Multi-Task Data Both Scales and Generalizes · Aviral Kumar, Rishabh Agarwal, Xinyang Geng, George Tucker, Sergey Levine
CoRL 2023 · Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions · Yevgen Chebotar, Quan Vuong, Karol Hausman, Fei Xia, Yao Lu, Alex Irpan, Aviral Kumar, Tianhe Yu, Alexander Herzog, Karl Pertsch, Keerthana Gopalakrishnan, Julian Ibarz, Ofir Nachum, Sumedh Anand Sontakke, Grecia Salazar, Huong T. Tran, Jodilyn Peralta, Clayton Tan, Deeksha Manjunath, Jaspiar Singh, Brianna Zitkovich, Tomas Jackson, Kanishka Rao, Chelsea Finn, Sergey Levine
NeurIPS 2023 · ReDS: Offline RL with Heteroskedastic Datasets via Support Constraints · Anikait Singh, Aviral Kumar, Quan Vuong, Yevgen Chebotar, Sergey Levine
NeurIPSW 2023 · Robotic Offline RL from Internet Videos via Value-Function Pre-Training · Chethan Anand Bhateja, Derek Guo, Dibya Ghosh, Anikait Singh, Manan Tomar, Quan Vuong, Yevgen Chebotar, Sergey Levine, Aviral Kumar
NeurIPSW 2023 · Scaling Offline Q-Learning with Vision Transformers · Yingjie Miao, Jordi Orbay, Rishabh Agarwal, Aviral Kumar, George Tucker, Aleksandra Faust
NeurIPSW 2023 · Vision-Language Models Provide Promptable Representations for Reinforcement Learning · William Chen, Oier Mees, Aviral Kumar, Sergey Levine
NeurIPSW 2023 · Zero-Shot Robotic Manipulation with Pre-Trained Image-Editing Diffusion Models · Kevin Black, Mitsuhiko Nakamoto, Pranav Atreya, Homer Walke, Chelsea Finn, Aviral Kumar, Sergey Levine
NeurIPSW 2022 · Confidence-Conditioned Value Functions for Offline Reinforcement Learning · Joey Hong, Aviral Kumar, Sergey Levine
NeurIPS 2022 · DASCO: Dual-Generator Adversarial Support Constrained Offline Reinforcement Learning · Quan Vuong, Aviral Kumar, Sergey Levine, Yevgen Chebotar
ICMLW 2022 · DASCO: Dual-Generator Adversarial Support Constrained Offline Reinforcement Learning · Quan Vuong, Aviral Kumar, Sergey Levine, Yevgen Chebotar
ICLR 2022 · DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization · Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, Sergey Levine
NeurIPS 2022 · Data-Driven Offline Decision-Making via Invariant Representation Learning · Han Qi, Yi Su, Aviral Kumar, Sergey Levine
ICLR 2022 · Data-Driven Offline Optimization for Architecting Hardware Accelerators · Aviral Kumar, Amir Yazdanbakhsh, Milad Hashemi, Kevin Swersky, Sergey Levine
ICLRW 2022 · Data-Driven Optimization for Protein Design: Workflows, Algorithms and Metrics · Sathvik Kolli, Amy X. Lu, Xinyang Geng, Aviral Kumar, Sergey Levine
ICML 2022 · Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization · Brandon Trabucco, Xinyang Geng, Aviral Kumar, Sergey Levine
CoRL 2022 · Don’t Start from Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning · Homer Rich Walke, Jonathan Heewon Yang, Albert Yu, Aviral Kumar, Jędrzej Orbik, Avi Singh, Sergey Levine
ICMLW 2022 · Effective Offline RL Needs Going Beyond Pessimism: Representations and Distributional Shift · Xinyang Geng, Kevin Li, Abhishek Gupta, Aviral Kumar, Sergey Levine
NeurIPSW 2022 · Efficient Deep Reinforcement Learning Requires Regulating Statistical Overfitting · Qiyang Li, Aviral Kumar, Ilya Kostrikov, Sergey Levine
ICML 2022 · How to Leverage Unlabeled Data in Offline Reinforcement Learning · Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Chelsea Finn, Sergey Levine
NeurIPSW 2022 · Offline Q-Learning on Diverse Multi-Task Data Both Scales and Generalizes · Aviral Kumar, Rishabh Agarwal, Xinyang Geng, George Tucker, Sergey Levine
NeurIPSW 2022 · Offline Reinforcement Learning from Heteroskedastic Data via Support Constraints · Anikait Singh, Aviral Kumar, Quan Vuong, Yevgen Chebotar, Sergey Levine
NeurIPSW 2022 · Pre-Training for Robots: Leveraging Diverse Multitask Data via Offline Reinforcement Learning · Anikait Singh, Aviral Kumar, Frederik Ebert, Yanlai Yang, Chelsea Finn, Sergey Levine
NeurIPSW 2022 · Pre-Training for Robots: Leveraging Diverse Multitask Data via Offline Reinforcement Learning · Aviral Kumar, Anikait Singh, Frederik Ebert, Yanlai Yang, Chelsea Finn, Sergey Levine
ICLR 2022 · Should I Run Offline Reinforcement Learning or Behavioral Cloning? · Aviral Kumar, Joey Hong, Anikait Singh, Sergey Levine
CoRL 2021 · A Workflow for Offline Model-Free Robotic Reinforcement Learning · Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, Sergey Levine
ICLR 2021 · Benchmarks for Deep Off-Policy Evaluation · Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyu Wang, Alexander Novikov, Mengjiao Yang, Michael R Zhang, Yutian Chen, Aviral Kumar, Cosmin Paduraru, Sergey Levine, Thomas Paine
NeurIPS 2021 · COMBO: Conservative Offline Model-Based Policy Optimization · Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn
NeurIPS 2021 · Conservative Data Sharing for Multi-Task Offline Reinforcement Learning · Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn
ICML 2021 · Conservative Objective Models for Effective Offline Model-Based Optimization · Brandon Trabucco, Aviral Kumar, Xinyang Geng, Sergey Levine
ICLR 2021 · Conservative Safety Critics for Exploration · Homanga Bharadhwaj, Aviral Kumar, Nicholas Rhinehart, Sergey Levine, Florian Shkurti, Animesh Garg
NeurIPSW 2021 · DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization · Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, Sergey Levine
NeurIPSW 2021 · Data Sharing Without Rewards in Multi-Task Offline Reinforcement Learning · Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Chelsea Finn, Sergey Levine, Karol Hausman
ICLR 2021 · Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning · Aviral Kumar, Rishabh Agarwal, Dibya Ghosh, Sergey Levine
ICLR 2021 · OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning · Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, Ofir Nachum
NeurIPSW 2021 · Should I Run Offline Reinforcement Learning or Behavioral Cloning? · Aviral Kumar, Joey Hong, Anikait Singh, Sergey Levine
NeurIPS 2021 · Why Generalization in RL Is Difficult: Epistemic POMDPs and Implicit Partial Observability · Dibya Ghosh, Jad Rahme, Aviral Kumar, Amy Zhang, Ryan P. Adams, Sergey Levine
CoRL 2020 · Chaining Behaviors from Data with Model-Free Reinforcement Learning · Avi Singh, Albert Yu, Jonathan Yang, Jesse Zhang, Aviral Kumar, Sergey Levine
NeurIPS 2020 · Conservative Q-Learning for Offline Reinforcement Learning · Aviral Kumar, Aurick Zhou, George Tucker, Sergey Levine
NeurIPS 2020 · DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction · Aviral Kumar, Abhishek Gupta, Sergey Levine
NeurIPS 2020 · Model Inversion Networks for Model-Based Optimization · Aviral Kumar, Sergey Levine
NeurIPS 2020 · One Solution Is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL · Saurabh Kumar, Aviral Kumar, Sergey Levine, Chelsea Finn
ICML 2019 · Diagnosing Bottlenecks in Deep Q-Learning Algorithms · Justin Fu, Aviral Kumar, Matthew Soh, Sergey Levine
NeurIPS 2019 · Graph Normalizing Flows · Jenny Liu, Aviral Kumar, Jimmy Ba, Jamie Kiros, Kevin Swersky
NeurIPS 2019 · Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction · Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, Sergey Levine
ICML 2018 · Trainable Calibration Measures for Neural Networks from Kernel Mean Embeddings · Aviral Kumar, Sunita Sarawagi, Ujjwal Jain