He, Jiafan

26 publications

ICML 2025: Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback. Qiwei Di, Jiafan He, Quanquan Gu.
TMLR 2025: Reinforcement Learning from Human Feedback with Active Queries. Kaixuan Ji, Jiafan He, Quanquan Gu.
NeurIPS 2024: A Nearly Optimal and Low-Switching Algorithm for Reinforcement Learning with General Function Approximation. Heyang Zhao, Jiafan He, Quanquan Gu.
NeurIPS Workshop 2024: Accelerated Preference Optimization for Large Language Model Alignment. Jiafan He, Huizhuo Yuan, Quanquan Gu.
NeurIPS 2024: Achieving Constant Regret in Linear Markov Decision Processes. Weitong Zhang, Zhiyuan Fan, Jiafan He, Quanquan Gu.
ICLR 2024: Horizon-Free Reinforcement Learning in Adversarial Linear Mixture MDPs. Kaixuan Ji, Qingyue Zhao, Jiafan He, Weitong Zhang, Quanquan Gu.
ICLR 2024: Pessimistic Nonlinear Least-Squares Value Iteration for Offline Reinforcement Learning. Qiwei Di, Heyang Zhao, Jiafan He, Quanquan Gu.
ICML 2024: Towards Robust Model-Based Reinforcement Learning Against Adversarial Corruption. Chenlu Ye, Jiafan He, Quanquan Gu, Tong Zhang.
ICML 2023: Cooperative Multi-Agent Reinforcement Learning: Asynchronous Communication and Linear Function Approximation. Yifei Min, Jiafan He, Tianhao Wang, Quanquan Gu.
ICML 2023: Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic Shortest Path. Qiwei Di, Jiafan He, Dongruo Zhou, Quanquan Gu.
ICML 2023: Nearly Minimax Optimal Reinforcement Learning for Linear Markov Decision Processes. Jiafan He, Heyang Zhao, Dongruo Zhou, Quanquan Gu.
ICML 2023: On the Interplay Between Misspecification and Sub-Optimality Gap in Linear Contextual Bandits. Weitong Zhang, Jiafan He, Zhiyuan Fan, Quanquan Gu.
ICML 2023: Optimal Online Generalized Linear Regression with Stochastic Noise and Its Application to Heteroscedastic Bandits. Heyang Zhao, Dongruo Zhou, Jiafan He, Quanquan Gu.
UAI 2023: Uniform-PAC Guarantees for Model-Based RL with Bounded Eluder Dimension. Yue Wu, Jiafan He, Quanquan Gu.
COLT 2023: Variance-Dependent Regret Bounds for Linear Bandits and Reinforcement Learning: Adaptivity and Computational Efficiency. Heyang Zhao, Jiafan He, Dongruo Zhou, Tong Zhang, Quanquan Gu.
AISTATS 2022: Near-Optimal Policy Optimization Algorithms for Learning Adversarial Linear Mixture MDPs. Jiafan He, Dongruo Zhou, Quanquan Gu.
NeurIPS 2022: A Simple and Provably Efficient Algorithm for Asynchronous Federated Contextual Linear Bandits. Jiafan He, Tianhao Wang, Yifei Min, Quanquan Gu.
ICML 2022: Learning Stochastic Shortest Path with Linear Function Approximation. Yifei Min, Jiafan He, Tianhao Wang, Quanquan Gu.
ACML 2022: Locally Differentially Private Reinforcement Learning for Linear Mixture Markov Decision Processes. Chonghua Liao, Jiafan He, Quanquan Gu.
NeurIPS 2022: Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions. Jiafan He, Dongruo Zhou, Tong Zhang, Quanquan Gu.
ICML 2022: On the Sample Complexity of Learning Infinite-Horizon Discounted Linear Kernel MDPs. Yuanzhou Chen, Jiafan He, Quanquan Gu.
ICML 2021: Logarithmic Regret for Reinforcement Learning with Linear Function Approximation. Jiafan He, Dongruo Zhou, Quanquan Gu.
NeurIPS 2021: Nearly Minimax Optimal Reinforcement Learning for Discounted MDPs. Jiafan He, Dongruo Zhou, Quanquan Gu.
ICML 2021: Provably Efficient Reinforcement Learning for Discounted MDPs with Feature Mapping. Dongruo Zhou, Jiafan He, Quanquan Gu.
NeurIPS 2021: Uniform-PAC Bounds for Reinforcement Learning with Linear Function Approximation. Jiafan He, Dongruo Zhou, Quanquan Gu.
IJCAI 2019: Achieving a Fairer Future by Changing the Past. Jiafan He, Ariel D. Procaccia, Alexandros Psomas, David Zeng.