Chi, Yuejie
68 publications
[AISTATS 2025] Characterizing the Accuracy-Communication-Privacy Trade-Off in Distributed Stochastic Convex Optimization
[NeurIPS 2025] Exploration from a Primal-Dual Lens: Value-Incentivized Actor-Critic Methods for Sample-Efficient Online RL
[ICML 2025] Incentivize Without Bonus: Provably Efficient Model-Based Online Multi-Agent RL for Markov Games
[NeurIPS 2025] Multi-Head Transformers Provably Learn Symbolic Multi-Step Reasoning via Gradient Descent
[AISTATS 2024] Escaping Saddle Points in Heterogeneous Federated Learning via Distributed SGD with Communication Compression
[NeurIPS 2024] Federated Natural Policy Gradient and Actor Critic Methods for Multi-Task Reinforcement Learning
[ICML 2024] Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference
[NeurIPS 2024] In-Context Learning with Representations: Contextual Generalization of Trained Transformers
[ICMLW 2024] In-Context Learning with Representations: Contextual Generalization of Trained Transformers
[NeurIPS 2024] Provably Robust Score-Based Diffusion Posterior Sampling for Plug-and-Play Image Reconstruction
[ICML 2024] Sample-Efficient Robust Multi-Agent Reinforcement Learning in the Face of Environmental Uncertainty
[UAI 2023] A Trajectory Is Worth Three Sentences: Multimodal Transformer for Offline Reinforcement Learning
[NeurIPS 2023] Reward-Agnostic Fine-Tuning: Provable Statistical Benefits of Hybrid Reinforcement Learning
[NeurIPS 2023] The Curious Price of Distributional Robustness in Reinforcement Learning with a Generative Model
[NeurIPS 2022] BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression
[AAAI 2022] Batch Active Learning with Graph Neural Networks via Multi-Agent Deep Reinforcement Learning
[ICML 2022] Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity
[JMLR 2022] Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation from Incomplete Measurements
[NeurIPS 2022] SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression
[NeurIPS 2021] Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free Reinforcement Learning
[NeurIPS 2021] Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting
[NeurIPS 2020] Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model
[AISTATS 2020] Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction