Jiang, Nan

106 publications

NeurIPS 2025 A Snapshot of Influence: A Local Data Attribution Framework for Online Reinforcement Learning Yuzheng Hu, Fan Wu, Haotian Ye, David Forsyth, James Zou, Nan Jiang, Jiaqi W. Ma, Han Zhao
AAAI 2025 Active Symbolic Discovery of Ordinary Differential Equations via Phase Portrait Sketching Nan Jiang, Md. Nasim, Yexiang Xue
ICLR 2025 Commit0: Library Generation from Scratch Wenting Zhao, Nan Jiang, Celine Lee, Justin T Chiu, Claire Cardie, Matthias Gallé, Alexander M Rush
CVPR 2025 Dynamic Motion Blending for Versatile Motion Editing Nan Jiang, Hongjie Li, Ziye Yuan, Zimo He, Yixin Chen, Tengyu Liu, Yixin Zhu, Siyuan Huang
ICLR 2025 GameArena: Evaluating LLM Reasoning Through Live Computer Games Lanxiang Hu, Qiyu Li, Anze Xie, Nan Jiang, Ion Stoica, Haojian Jin, Hao Zhang
NeurIPS 2025 Improving LLM General Preference Alignment via Optimistic Online Mirror Descent Yuheng Zhang, Dian Yu, Tao Ge, Linfeng Song, Zhichen Zeng, Haitao Mi, Nan Jiang, Dong Yu
ICML 2025 Is Best-of-N the Best of Them? Coverage, Scaling, and Optimality in Inference-Time Alignment Audrey Huang, Adam Block, Qinghua Liu, Nan Jiang, Akshay Krishnamurthy, Dylan J Foster
ICLR 2025 Iterative Nash Policy Optimization: Aligning LLMs with General Preferences via No-Regret Learning Yuheng Zhang, Dian Yu, Baolin Peng, Linfeng Song, Ye Tian, Mingyue Huo, Nan Jiang, Haitao Mi, Dong Yu
AAAI 2025 LATTE: Improving LaTeX Recognition for Tables and Formulae with Iterative Refinement Nan Jiang, Shanchao Liang, Chengxiao Wang, Jiannan Wang, Lin Tan
CVPR 2025 MLLM-as-a-Judge for Image Safety Without Human Labeling Zhenting Wang, Shuming Hu, Shiyu Zhao, Xiaowen Lin, Felix Juefei-Xu, Zhuowei Li, Ligong Han, Harihar Subramanyam, Li Chen, Jianfa Chen, Nan Jiang, Lingjuan Lyu, Shiqing Ma, Dimitris N. Metaxas, Ankit Jain
NeurIPS 2025 Model Selection for Off-Policy Evaluation: New Algorithms and Experimental Protocol Pai Liu, Lingfeng Zhao, Shivangi Agarwal, Jinghan Liu, Audrey Huang, Philip Amortila, Nan Jiang
ICLR 2025 Nova: Generative Language Models for Assembly Code with Hierarchical Attention and Contrastive Learning Nan Jiang, Chengxiao Wang, Kevin Liu, Xiangzhe Xu, Lin Tan, Xiangyu Zhang, Petr Babkin
NeurIPS 2025 Optimizing Chain-of-Thought Reasoners via Gradient Variance Minimization in Rejection Sampling and RL Jiarui Yao, Yifan Hao, Hanning Zhang, Hanze Dong, Wei Xiong, Nan Jiang, Tong Zhang
ICML 2025 Solving Satisfiability Modulo Counting Exactly with Probabilistic Circuits Jinzhao Li, Nan Jiang, Yexiang Xue
ICLR 2025 Statistical Tractability of Off-Policy Evaluation of History-Dependent Policies in POMDPs Yuheng Zhang, Nan Jiang
NeurIPS 2025 Thinking vs. Doing: Improving Agent Reasoning by Scaling Test-Time Interaction Junhong Shen, Hao Bai, Lunjun Zhang, Yifei Zhou, Amrith Setlur, Shengbang Tong, Diego Caples, Nan Jiang, Tong Zhang, Ameet Talwalkar, Aviral Kumar
ECCV 2024 F-HOI: Toward Fine-Grained Semantic-Aligned 3D Human-Object Interactions Jie Yang, Xuesong Niu, Nan Jiang, Ruimao Zhang, Siyuan Huang
ICMLW 2024 Get It Cooperating: Enhancing Generative Agent Cooperation with Commitment Devices Feng Yan, Qitian Jason Hu, Nan Jiang, Xinyuan Sun
ICLR 2024 Harnessing Density Ratios for Online Reinforcement Learning Philip Amortila, Dylan J Foster, Nan Jiang, Ayush Sekhari, Tengyang Xie
ICLR 2024 Is Attention Required for ICL? Exploring the Relationship Between Model Architecture and In-Context Learning Ability Ivan Lee, Nan Jiang, Taylor Berg-Kirkpatrick
ICML 2024 Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF Under KL-Constraint Wei Xiong, Hanze Dong, Chenlu Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, Tong Zhang
NeurIPS 2024 LeDex: Training LLMs to Better Self-Debug and Explain Code Nan Jiang, Xiaopeng Li, Shiqi Wang, Qiang Zhou, Soneya Binta Hossain, Baishakhi Ray, Varun Kumar, Xiaofei Ma, Anoop Deoras
ICLRW 2024 MARS: A Benchmark for Multi-LLM Algorithmic Routing System Qitian Jason Hu, Jacob Bieker, Xiuyu Li, Nan Jiang, Benjamin Keigwin, Gaurav Ranganath, Kurt Keutzer, Shriyash Kaustubh Upadhyay
JMLR 2024 Model-Free Representation Learning and Exploration in Low-Rank MDPs Aditya Modi, Jinglin Chen, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal
NeurIPS 2024 Occupancy-Based Policy Gradient: Estimation, Convergence, and Optimality Audrey Huang, Nan Jiang
NeurIPS 2024 On the Curses of Future and History in Future-Dependent Value Functions for Off-Policy Evaluation Yuheng Zhang, Nan Jiang
NeurIPS 2024 Online Iterative Reinforcement Learning from Human Feedback with General Preference Model Chenlu Ye, Wei Xiong, Yuheng Zhang, Hanze Dong, Nan Jiang, Tong Zhang
NeurIPS 2024 PhyRecon: Physically Plausible Neural Scene Reconstruction Junfeng Ni, Yixin Chen, Bohan Jing, Nan Jiang, Bin Wang, Bo Dai, Puhao Li, Yixin Zhu, Song-Chun Zhu, Siyuan Huang
TMLR 2024 RLHF Workflow: From Reward Modeling to Online RLHF Hanze Dong, Wei Xiong, Bo Pang, Haoxiang Wang, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, Tong Zhang
AAAI 2024 Racing Control Variable Genetic Programming for Symbolic Regression Nan Jiang, Yexiang Xue
NeurIPS 2024 Reinforcement Learning Under Latent Dynamics: Toward Statistical and Algorithmic Modularity Philip Amortila, Dylan J. Foster, Nan Jiang, Akshay Krishnamurthy, Zakaria Mhammedi
ICMLW 2024 RouterBench: A Benchmark for Multi-LLM Routing System Qitian Jason Hu, Jacob Bieker, Xiuyu Li, Nan Jiang, Benjamin Keigwin, Gaurav Ranganath, Kurt Keutzer, Shriyash Kaustubh Upadhyay
CVPR 2024 Scaling up Dynamic Human-Scene Interaction Modeling Nan Jiang, Zhiyuan Zhang, Hongjie Li, Xiaoxuan Ma, Zan Wang, Yixin Chen, Tengyu Liu, Yixin Zhu, Siyuan Huang
AAAI 2024 Solving Satisfiability Modulo Counting for Symbolic and Statistical AI Integration with Provable Guarantees Jinzhao Li, Nan Jiang, Yexiang Xue
IJCAI 2024 Vertical Symbolic Regression via Deep Policy Gradient Nan Jiang, Md. Nasim, Yexiang Xue
NeurIPS 2023 Adversarial Model for Offline Reinforcement Learning Mohak Bhardwaj, Tengyang Xie, Byron Boots, Nan Jiang, Ching-An Cheng
ICLR 2023 Explaining RL Decisions with Trajectories Shripad Vilasrao Deshmukh, Arpan Dasgupta, Balaji Krishnamurthy, Nan Jiang, Chirag Agarwal, Georgios Theocharous, Jayakumar Subramanian
ICCV 2023 Full-Body Articulated Human-Object Interaction Nan Jiang, Tengyu Liu, Zhexuan Cao, Jieming Cui, Zhiyuan Zhang, Yixin Chen, He Wang, Yixin Zhu, Siyuan Huang
NeurIPS 2023 Future-Dependent Value-Based Off-Policy Evaluation in POMDPs Masatoshi Uehara, Haruka Kiyohara, Andrew Bennett, Victor Chernozhukov, Nan Jiang, Nathan Kallus, Chengchun Shi, Wen Sun
AAAI 2023 Learning Markov Random Fields for Combinatorial Structures via Sampling Through Lovász Local Lemma Nan Jiang, Yi Gu, Yexiang Xue
CoRL 2023 Marginalized Importance Sampling for Off-Environment Policy Evaluation Pulkit Katdare, Nan Jiang, Katherine Rose Driggs-Campbell
NeurIPSW 2023 Non-Adaptive Online Finetuning for Offline Reinforcement Learning Audrey Huang, Mohammad Ghavamzadeh, Nan Jiang, Marek Petrik
ICML 2023 Offline Learning in Markov Games with General Function Approximation Yuheng Zhang, Yu Bai, Nan Jiang
ICML 2023 Reinforcement Learning in Low-Rank MDPs with Density Features Audrey Huang, Jinglin Chen, Nan Jiang
NeurIPSW 2023 Solving Satisfiability Modulo Counting Problems in Computational Sustainability with Guarantees Jinzhao Li, Nan Jiang, Yexiang Xue
ECML-PKDD 2023 Symbolic Regression via Control Variable Genetic Programming Nan Jiang, Yexiang Xue
ICML 2023 The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation Philip Amortila, Nan Jiang, Csaba Szepesvari
ICLR 2023 The Role of Coverage in Online Reinforcement Learning Tengyang Xie, Dylan J Foster, Yu Bai, Nan Jiang, Sham M. Kakade
AISTATS 2022 On the Convergence Rate of Off-Policy Policy Optimization Methods with Density-Ratio Correction Jiawei Huang, Nan Jiang
NeurIPS 2022 A Few Expert Queries Suffices for Sample-Efficient RL with Resets and Linear Value Approximation Philip Amortila, Nan Jiang, Dhruv Madeka, Dean P. Foster
ICML 2022 A Minimax Learning Approach to Off-Policy Evaluation in Confounded Partially Observable Markov Decision Processes Chengchun Shi, Masatoshi Uehara, Jiawei Huang, Nan Jiang
NeurIPSW 2022 AMORE: A Model-Based Framework for Improving Arbitrary Baseline Policies with Offline Data Tengyang Xie, Mohak Bhardwaj, Nan Jiang, Ching-An Cheng
ICML 2022 Adversarially Trained Actor Critic for Offline Reinforcement Learning Ching-An Cheng, Tengyang Xie, Nan Jiang, Alekh Agarwal
NeurIPS 2022 Beyond the Return: Off-Policy Function Estimation Under User-Specified Error-Measuring Distributions Audrey Huang, Nan Jiang
ICMLW 2022 Beyond the Return: Off-Policy Function Estimation Under User-Specified Error-Measuring Distributions Audrey Huang, Nan Jiang
JMLR 2022 Constraint Reasoning Embedded Structured Prediction Nan Jiang, Maosen Zhang, Willem-Jan van Hoeve, Yexiang Xue
NeurIPS 2022 Interaction-Grounded Learning with Action-Inclusive Feedback Tengyang Xie, Akanksha Saran, Dylan J Foster, Lekan Molu, Ida Momennejad, Nan Jiang, Paul Mineiro, John Langford
UAI 2022 Offline Reinforcement Learning Under Value and Density-Ratio Realizability: The Power of Gaps Jinglin Chen, Nan Jiang
COLT 2022 Offline Reinforcement Learning with Realizability and Single-Policy Concentrability Wenhao Zhan, Baihe Huang, Audrey Huang, Nan Jiang, Jason Lee
NeurIPS 2022 On the Statistical Efficiency of Reward-Free Exploration in Non-Linear RL Jinglin Chen, Aditya Modi, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal
NeurIPS 2022 Tiered Reinforcement Learning: Pessimism in the Face of Uncertainty and Constant Regret Jiawei Huang, Li Zhao, Tao Qin, Wei Chen, Nan Jiang, Tie-Yan Liu
ICLR 2022 Towards Deployment-Efficient Reinforcement Learning: Lower Bound and Optimality Jiawei Huang, Jinglin Chen, Li Zhao, Tao Qin, Nan Jiang, Tie-Yan Liu
NeurIPSW 2022 Trajectory-Based Explainability Framework for Offline RL Shripad Vilasrao Deshmukh, Arpan Dasgupta, Chirag Agarwal, Nan Jiang, Balaji Krishnamurthy, Georgios Theocharous, Jayakumar Subramanian
AISTATS 2021 Minimax Model Learning Cameron Voloshin, Nan Jiang, Yisong Yue
ICML 2021 Batch Value-Function Approximation with Only Realizability Tengyang Xie, Nan Jiang
NeurIPS 2021 Bellman-Consistent Pessimism for Offline Reinforcement Learning Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, Alekh Agarwal
AAAI 2021 Improved Worst-Case Regret Bounds for Randomized Least-Squares Value Iteration Priyank Agrawal, Jinglin Chen, Nan Jiang
COLT 2021 On Query-Efficient Planning in MDPs Under Linear Realizability of the Optimal State-Value Function Gellert Weisz, Philip Amortila, Barnabás Janzer, Yasin Abbasi-Yadkori, Nan Jiang, Csaba Szepesvari
UAI 2021 PALM: Probabilistic Area Loss Minimization for Protein Sequence Alignment Fan Ding, Nan Jiang, Jianzhu Ma, Jian Peng, Jinbo Xu, Yexiang Xue
NeurIPS 2021 Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning Tengyang Xie, Nan Jiang, Huan Wang, Caiming Xiong, Yu Bai
NeurIPS 2021 Towards Hyperparameter-Free Policy Selection for Offline Reinforcement Learning Siyuan Zhang, Nan Jiang
ICML 2020 From Importance Sampling to Doubly Robust Policy Gradient Jiawei Huang, Nan Jiang
NeurIPSW 2020 Language Generation via Combinatorial Constraint Satisfaction: A Tree Search Enhanced Monte-Carlo Approach Maosen Zhang, Nan Jiang, Lei Li, Yexiang Xue
NeurIPS 2020 Minimax Value Interval for Off-Policy Evaluation and Policy Optimization Nan Jiang, Jiawei Huang
ICML 2020 Minimax Weight and Q-Function Learning for Off-Policy Evaluation Masatoshi Uehara, Jiawei Huang, Nan Jiang
UAI 2020 Q* Approximation Schemes for Batch Reinforcement Learning: A Theoretical Comparison Tengyang Xie, Nan Jiang
AAAI 2020 RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning Nan Jiang, Sheng Jin, Zhiyao Duan, Changshui Zhang
AISTATS 2020 Sample Complexity of Reinforcement Learning Using Linearly Combined Model Ensembles Aditya Modi, Nan Jiang, Ambuj Tewari, Satinder Singh
WACV 2020 Scale Match for Tiny Person Detection Xuehui Yu, Yuqi Gong, Nan Jiang, Qixiang Ye, Zhenjun Han
NeurIPS 2020 When Counterpoint Meets Chinese Folk Melodies Nan Jiang, Sheng Jin, Zhiyao Duan, Changshui Zhang
CVPRW 2019 Feature Hourglass Network for Skeleton Detection Nan Jiang, Yifei Zhang, Dezhao Luo, Chang Liu, Yu Zhou, Zhenjun Han
ICML 2019 Information-Theoretic Considerations in Batch Reinforcement Learning Jinglin Chen, Nan Jiang
COLT 2019 Model-Based RL in Contextual Decision Processes: PAC Bounds and Exponential Improvements over Model-Free Approaches Wen Sun, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford
NeurIPS 2019 Provably Efficient Q-Learning with Low Switching Cost Yu Bai, Tengyang Xie, Nan Jiang, Yu-Xiang Wang
ICML 2019 Provably Efficient RL with Rich Observations via Latent State Decoding Simon Du, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal, Miroslav Dudik, John Langford
NeurIPS 2018 Completing State Representations Using Spectral Learning Nan Jiang, Alex Kulesza, Satinder Singh
ICML 2018 Hierarchical Imitation and Reinforcement Learning Hoang Le, Nan Jiang, Alekh Agarwal, Miroslav Dudik, Yisong Yue, Hal Daumé
ALT 2018 Markov Decision Processes with Continuous Side Information Aditya Modi, Nan Jiang, Satinder Singh, Ambuj Tewari
NeurIPS 2018 On Oracle-Efficient PAC RL with Rich Observations Christoph Dann, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, Robert E. Schapire
COLT 2018 Open Problem: The Dependence of Sample Complexity Lower Bounds on Planning Horizon Nan Jiang, Alekh Agarwal
AAAI 2018 PAC Reinforcement Learning with an Imperfect Model Nan Jiang
ICML 2017 Contextual Decision Processes with Low Bellman Rank Are PAC-Learnable Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, Robert E. Schapire
IJCAI 2017 Exploration of Tree-Based Hierarchical SoftMax for Recurrent Language Models Nan Jiang, Wenge Rong, Min Gao, Yikang Shen, Zhang Xiong
NeurIPS 2017 Repeated Inverse Reinforcement Learning Kareem Amin, Nan Jiang, Satinder Singh
AAAI 2017 Word Embedding Based Correlation Model for Question/Answer Matching Yikang Shen, Wenge Rong, Nan Jiang, Baolin Peng, Jie Tang, Zhang Xiong
ICML 2016 Doubly Robust Off-Policy Value Evaluation for Reinforcement Learning Nan Jiang, Lihong Li
AAAI 2016 Improving Predictive State Representations via Gradient Descent Nan Jiang, Alex Kulesza, Satinder Singh
IJCAI 2016 On Structural Properties of MDPs That Bound Loss Due to Shallow Planning Nan Jiang, Satinder Singh, Ambuj Tewari
IJCAI 2016 The Dependence of Effective Planning Horizon on Model Accuracy Nan Jiang, Alex Kulesza, Satinder Singh, Richard L. Lewis
ICML 2015 Abstraction Selection in Model-Based Reinforcement Learning Nan Jiang, Alex Kulesza, Satinder Singh
AISTATS 2015 Low-Rank Spectral Learning with Weighted Loss Functions Alex Kulesza, Nan Jiang, Satinder Singh
AAAI 2015 Spectral Learning of Predictive State Representations with Insufficient Statistics Alex Kulesza, Nan Jiang, Satinder Singh
CVPR 2014 Unifying Spatial and Attribute Selection for Distracter-Resilient Tracking Nan Jiang, Ying Wu
CVPR 2012 Order Determination and Sparsity-Regularized Metric Learning Adaptive Visual Tracking Nan Jiang, Wenyu Liu, Ying Wu
CVPR 2011 Adaptive and Discriminative Metric Differential Tracking Nan Jiang, Wenyu Liu, Ying Wu
CVPR 2011 Tracking Low Resolution Objects by Metric Preservation Nan Jiang, Wenyu Liu, Heng Su, Ying Wu