Cheng, Ching-An

41 publications

ICLR 2025 Rapidly Adapting Policies to the Real-World via Simulation-Guided Fine-Tuning Patrick Yin, Tyler Westenbroek, Ching-An Cheng, Andrey Kolobov, Abhishek Gupta
NeurIPS 2024 How to Solve Contextual Goal-Oriented Problems with Offline Datasets? Ying Fan, Jingling Li, Adith Swaminathan, Aditya Modi, Ching-An Cheng
ICLR 2024 Improving Offline RL by Blending Heuristics Sinong Geng, Aldo Pacchiano, Andrey Kolobov, Ching-An Cheng
ICLRW 2024 LLF-Bench: Benchmark for Interactive Learning from Language Feedback Ching-An Cheng, Andrey Kolobov, Dipendra Misra, Allen Nie, Adith Swaminathan
ICML 2024 PRISE: LLM-Style Sequence Compression for Learning Temporal Action Abstractions in Control Ruijie Zheng, Ching-An Cheng, Hal Daumé III, Furong Huang, Andrey Kolobov
ICMLW 2024 Trace Is the New AutoDiff — Unlocking Efficient Optimization of Computational Workflows Ching-An Cheng, Allen Nie, Adith Swaminathan
NeurIPS 2024 Trace Is the Next AutoDiff: Generative Optimization with Rich Feedback, Execution Traces, and LLMs Ching-An Cheng, Allen Nie, Adith Swaminathan
NeurIPS 2023 Adversarial Model for Offline Reinforcement Learning Mohak Bhardwaj, Tengyang Xie, Byron Boots, Nan Jiang, Ching-An Cheng
CoRL 2023 Goal Representations for Instruction Following: A Semi-Supervised Language Interface to Control Vivek Myers, Andre Wang He, Kuan Fang, Homer Rich Walke, Philippe Hansen-Estruch, Ching-An Cheng, Mihai Jalobeanu, Andrey Kolobov, Anca Dragan, Sergey Levine
ICML 2023 Hindsight Learning for MDPs with Exogenous Inputs Sean R. Sinclair, Felipe Vieira Frujeri, Ching-An Cheng, Luke Marshall, Hugo De Oliveira Barbalho, Jingling Li, Jennifer Neville, Ishai Menache, Adith Swaminathan
NeurIPSW 2023 Importance of Directional Feedback for LLM-Based Optimizers Allen Nie, Ching-An Cheng, Andrey Kolobov, Adith Swaminathan
ICML 2023 MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations Anqi Li, Byron Boots, Ching-An Cheng
CoRL 2023 PLEX: Making the Most of the Available Data for Robotic Manipulation Pretraining Garrett Thomas, Ching-An Cheng, Ricky Loynd, Felipe Vieira Frujeri, Vibhav Vineet, Mihai Jalobeanu, Andrey Kolobov
ICML 2023 Provable Reset-Free Reinforcement Learning by No-Regret Reduction Hoai-An Nguyen, Ching-An Cheng
ICLR 2023 Provably Efficient Lifelong Reinforcement Learning with Linear Representation Sanae Amani, Lin Yang, Ching-An Cheng
NeurIPSW 2023 Simple Data Sharing for Multi-Tasked Goal-Oriented Problems Ying Fan, Jingling Li, Adith Swaminathan, Aditya Modi, Ching-An Cheng
NeurIPS 2023 Survival Instinct in Offline Reinforcement Learning Anqi Li, Dipendra Misra, Andrey Kolobov, Ching-An Cheng
ICMLW 2023 Survival Instinct in Offline Reinforcement Learning and Implicit Human Bias in Data Anqi Li, Dipendra Misra, Andrey Kolobov, Ching-An Cheng
NeurIPSW 2022 AMORE: A Model-Based Framework for Improving Arbitrary Baseline Policies with Offline Data Tengyang Xie, Mohak Bhardwaj, Nan Jiang, Ching-An Cheng
ICML 2022 Adversarially Trained Actor Critic for Offline Reinforcement Learning Ching-An Cheng, Tengyang Xie, Nan Jiang, Alekh Agarwal
NeurIPS 2022 MoCapAct: A Multi-Task Dataset for Simulated Humanoid Control Nolan Wagener, Andrey Kolobov, Felipe Vieira Frujeri, Ricky Loynd, Ching-An Cheng, Matthew Hausknecht
NeurIPS 2021 Bellman-Consistent Pessimism for Offline Reinforcement Learning Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, Alekh Agarwal
COLT 2021 Cautiously Optimistic Policy Optimization and Exploration with Linear Function Approximation Andrea Zanette, Ching-An Cheng, Alekh Agarwal
UAI 2021 Explaining Fast Improvement in Online Imitation Learning Xinyan Yan, Byron Boots, Ching-An Cheng
NeurIPS 2021 Heuristic-Guided Reinforcement Learning Ching-An Cheng, Andrey Kolobov, Adith Swaminathan
ICML 2021 Safe Reinforcement Learning Using Advantage-Based Intervention Nolan C. Wagener, Byron Boots, Ching-An Cheng
AISTATS 2020 A Reduction from Reinforcement Learning to No-Regret Online Learning Ching-An Cheng, Rémi Tachet des Combes, Byron Boots, Geoff Gordon
NeurIPS 2020 Intra Order-Preserving Functions for Calibration of Multi-Class Neural Networks Amir Rahimi, Amirreza Shaban, Ching-An Cheng, Richard Hartley, Byron Boots
AISTATS 2020 Online Learning with Continuous Variations: Dynamic Regret and Reductions Ching-An Cheng, Jonathan Lee, Ken Goldberg, Byron Boots
NeurIPS 2020 Policy Improvement via Imitation of Multiple Oracles Ching-An Cheng, Andrey Kolobov, Alekh Agarwal
AISTATS 2019 Accelerating Imitation Learning with Predictive Models Ching-An Cheng, Xinyan Yan, Evangelos Theodorou, Byron Boots
ICML 2019 Predictor-Corrector Policy Optimization Ching-An Cheng, Xinyan Yan, Nathan Ratliff, Byron Boots
CoRL 2019 Riemannian Motion Policy Fusion Through Learnable Lyapunov Function Reshaping Mustafa Mukadam, Ching-An Cheng, Dieter Fox, Byron Boots, Nathan Ratliff
CoRL 2019 Trajectory-Wise Control Variates for Variance Reduction in Policy Gradient Methods Ching-An Cheng, Xinyan Yan, Byron Boots
AISTATS 2019 Truncated Back-Propagation for Bilevel Optimization Amirreza Shaban, Ching-An Cheng, Nathan Hatch, Byron Boots
AISTATS 2018 Convergence of Value Aggregation for Imitation Learning Ching-An Cheng, Byron Boots
UAI 2018 Fast Policy Learning Through Imitation and Reinforcement Ching-An Cheng, Xinyan Yan, Nolan Wagener, Byron Boots
NeurIPS 2018 Orthogonally Decoupled Variational Gaussian Processes Hugh Salimbeni, Ching-An Cheng, Byron Boots, Marc Deisenroth
NeurIPS 2017 Variational Inference for Gaussian Process Models with Linear Complexity Ching-An Cheng, Byron Boots
NeurIPS 2016 Incremental Variational Sparse Gaussian Process Regression Ching-An Cheng, Byron Boots