Kallus, Nathan

74 publications

NeurIPS 2025 $Q\sharp$: Provably Optimal Distributional RL for LLM Post-Training Jin Peng Zhou, Kaiwen Wang, Jonathan Daniel Chang, Zhaolin Gao, Nathan Kallus, Kilian Q. Weinberger, Kianté Brantley, Wen Sun
ICML 2025 A Reductions Approach to Risk-Sensitive Reinforcement Learning with Optimized Certainty Equivalents Kaiwen Wang, Dawen Liang, Nathan Kallus, Wen Sun
AISTATS 2025 Anytime-Valid A/B Testing of Counting Processes Michael Lindon, Nathan Kallus
NeurIPS 2025 Efficient Adaptive Experimentation with Noncompliance Miruna Oprescu, Brian M. Cho, Nathan Kallus
NeurIPS 2025 GST-UNet: A Neural Framework for Spatiotemporal Causal Inference with Time-Varying Confounding Miruna Oprescu, David Keetae Park, Xihaier Luo, Shinjae Yoo, Nathan Kallus
ICML 2025 Multi-Armed Bandits with Interference: Bridging Causal Inference and Adversarial Bandits Su Jia, Peter I. Frazier, Nathan Kallus
AISTATS 2025 Reward Maximization for Pure Exploration: Minimax Optimal Good Arm Identification for Nonparametric Multi-Armed Bandits Brian M. Cho, Dominik Meier, Kyra Gan, Nathan Kallus
NeurIPS 2025 Simulation-Based Inference for Adaptive Experiments Brian M. Cho, Aurelien Bibaut, Nathan Kallus
NeurIPS 2025 Value-Guided Search for Efficient Chain-of-Thought Reasoning Kaiwen Wang, Jin Peng Zhou, Jonathan Daniel Chang, Zhaolin Gao, Nathan Kallus, Kianté Brantley, Wen Sun
AISTATS 2025 Variation Due to Regularization Tractably Recovers Bayesian Deep Learning Uncertainty James McInerney, Nathan Kallus
MLJ 2024 Adjusting Regression Models for Conditional Uncertainty Calibration Ruijiang Gao, Mingzhang Yin, James McInerney, Nathan Kallus
NeurIPS 2024 Contextual Linear Optimization with Bandit Feedback Yichun Hu, Nathan Kallus, Xiaojie Mao, Yanchen Wu
NeurIPS 2024 Efficient and Sharp Off-Policy Evaluation in Robust Markov Decision Processes Andrew Bennett, Nathan Kallus, Miruna Oprescu, Wen Sun, Kaiwen Wang
NeurIPS 2024 Estimating Heterogeneous Treatment Effects by Combining Weak Instruments and Observational Data Miruna Oprescu, Nathan Kallus
ICML 2024 Inferring the Long-Term Causal Effects of Long-Term Treatments from Short-Term Experiments Allen Tran, Aurelien Bibaut, Nathan Kallus
JMLR 2024 Localized Debiased Machine Learning: Efficient Inference on Quantile Treatment Effects and Beyond Nathan Kallus, Xiaojie Mao, Masatoshi Uehara
AISTATS 2024 Low-Rank MDPs with Continuous Action Spaces Miruna Oprescu, Andrew Bennett, Nathan Kallus
ICML 2024 More Benefits of Being Distributional: Second-Order Bounds for Reinforcement Learning Kaiwen Wang, Owen Oertell, Alekh Agarwal, Nathan Kallus, Wen Sun
ICML 2024 Peeking with PEAK: Sequential, Nonparametric Composite Hypothesis Tests for Means of Multiple Data Streams Brian M. Cho, Kyra Gan, Nathan Kallus
ICLR 2024 Provable Offline Preference-Based Reinforcement Learning Wenhao Zhan, Masatoshi Uehara, Nathan Kallus, Jason D. Lee, Wen Sun
ICML 2024 Switching the Loss Reduces the Cost in Batch Reinforcement Learning Alex Ayoub, Kaiwen Wang, Vincent Liu, Samuel Robertson, James McInerney, Dawen Liang, Nathan Kallus, Csaba Szepesvári
ICML 2023 B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding Miruna Oprescu, Jacob Dorn, Marah Ghoummaid, Andrew Jesson, Nathan Kallus, Uri Shalit
ICML 2023 Computationally Efficient PAC RL in POMDPs with Latent Determinism and Conditional Embeddings Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, Wen Sun
NeurIPS 2023 Future-Dependent Value-Based Off-Policy Evaluation in POMDPs Masatoshi Uehara, Haruka Kiyohara, Andrew Bennett, Victor Chernozhukov, Nan Jiang, Nathan Kallus, Chengchun Shi, Wen Sun
NeurIPSW 2023 Hessian-Free Laplace in Bayesian Deep Learning James McInerney, Nathan Kallus
COLT 2023 Inference on Strongly Identified Functionals of Weakly Identified Functions Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara
COLT 2023 Minimax Instrumental Variable Regression and $L_2$ Convergence Guarantees Without Identification or Closedness Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara
ICML 2023 Near-Minimax-Optimal Risk-Sensitive Reinforcement Learning with CVaR Kaiwen Wang, Nathan Kallus, Wen Sun
NeurIPS 2023 Offline Minimax Soft-Q-Learning Under Realizability and Partial Coverage Masatoshi Uehara, Nathan Kallus, Jason Lee, Wen Sun
ICMLW 2023 Provable Offline Reinforcement Learning with Human Feedback Wenhao Zhan, Masatoshi Uehara, Nathan Kallus, Jason D. Lee, Wen Sun
AISTATS 2023 Provable Safe Reinforcement Learning with Binary Feedback Andrew Bennett, Dipendra Misra, Nathan Kallus
AISTATS 2023 Robust and Agnostic Learning of Conditional Distributional Treatment Effects Nathan Kallus, Miruna Oprescu
ICML 2023 Smooth Non-Stationary Bandits Su Jia, Qian Xie, Nathan Kallus, Peter I. Frazier
NeurIPS 2023 The Benefits of Being Distributional: Small-Loss Bounds for Reinforcement Learning Kaiwen Wang, Kevin Zhou, Runzhe Wu, Nathan Kallus, Wen Sun
ICML 2022 Doubly Robust Distributionally Robust Off-Policy Evaluation and Learning Nathan Kallus, Xiaojie Mao, Kaiwen Wang, Zhengyuan Zhou
CVPR 2022 Estimating Structural Disparities for Face Models Shervin Ardeshir, Cristina Segalin, Nathan Kallus
JMLR 2022 Generalization Bounds and Representation Learning for Estimation of Potential Outcomes and Causal Effects Fredrik D. Johansson, Uri Shalit, Nathan Kallus, David Sontag
ICML 2022 Learning Bellman Complete Representations for Offline Policy Evaluation Jonathan Chang, Kaiwen Wang, Nathan Kallus, Wen Sun
NeurIPS 2022 Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems Masatoshi Uehara, Ayush Sekhari, Jason Lee, Nathan Kallus, Wen Sun
AISTATS 2022 Stateful Offline Contextual Policy Evaluation and Learning Nathan Kallus, Angela Zhou
NeurIPS 2022 The Implicit Delta Method Nathan Kallus, James McInerney
NeurIPS 2022 What's the Harm? Sharp Bounds on the Fraction Negatively Affected by Treatment Nathan Kallus
NeurIPS 2021 Control Variates for Slate Off-Policy Evaluation Nikos Vlassis, Ashok Chandrashekar, Fernando Amat, Nathan Kallus
COLT 2021 Fast Rates for the Regret of Offline Reinforcement Learning Yichun Hu, Nathan Kallus, Masatoshi Uehara
AISTATS 2021 Off-Policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders Andrew Bennett, Nathan Kallus, Lihong Li, Ali Mousavi
ICML 2021 Optimal Off-Policy Evaluation from Multiple Logging Policies Nathan Kallus, Yuta Saito, Masatoshi Uehara
NeurIPS 2021 Post-Contextual-Bandit Inference Aurelien Bibaut, Maria Dimakopoulou, Nathan Kallus, Antoine Chambaz, Mark van der Laan
NeurIPS 2021 Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning Aurelien Bibaut, Nathan Kallus, Maria Dimakopoulou, Antoine Chambaz, Mark van der Laan
NeurIPS 2020 Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning Nathan Kallus, Angela Zhou
ICML 2020 DeepMatch: Balancing Deep Covariate Representations for Causal Inference Using Adversarial Training Nathan Kallus
JMLR 2020 Double Reinforcement Learning for Efficient Off-Policy Evaluation in Markov Decision Processes Nathan Kallus, Masatoshi Uehara
ICML 2020 Double Reinforcement Learning for Efficient and Robust Off-Policy Evaluation Nathan Kallus, Masatoshi Uehara
NeurIPS 2020 Doubly Robust Off-Policy Value and Gradient Estimation for Deterministic Policies Nathan Kallus, Masatoshi Uehara
ICML 2020 Efficient Policy Learning from Surrogate-Loss Classification Reductions Andrew Bennett, Nathan Kallus
JMLR 2020 Generalized Optimal Matching Methods for Causal Inference Nathan Kallus
COLT 2020 Smooth Contextual Bandits: Bridging the Parametric and Non-Differentiable Regret Regimes Yichun Hu, Nathan Kallus, Xiaojie Mao
ICML 2020 Statistically Efficient Off-Policy Policy Gradients Nathan Kallus, Masatoshi Uehara
NeurIPS 2019 Assessing Disparate Impact of Personalized Interventions: Identifiability and Bounds Nathan Kallus, Angela Zhou
ICML 2019 Classifying Treatment Responders Under Causal Effect Monotonicity Nathan Kallus
NeurIPS 2019 Deep Generalized Method of Moments for Instrumental Variable Analysis Andrew Bennett, Nathan Kallus, Tobias Schnabel
AISTATS 2019 Interval Estimation of Individual-Level Causal Effects Under Unobserved Confounding Nathan Kallus, Xiaojie Mao, Angela Zhou
NeurIPS 2019 Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning Nathan Kallus, Masatoshi Uehara
NeurIPS 2019 Policy Evaluation with Latent Confounders via Optimal Balance Andrew Bennett, Nathan Kallus
NeurIPS 2019 The Fairness of Risk Scores Beyond Classification: Bipartite Ranking and the xAUC Metric Nathan Kallus, Angela Zhou
NeurIPS 2018 Balanced Policy Evaluation and Learning Nathan Kallus
NeurIPS 2018 Causal Inference with Noisy and Missing Covariates via Matrix Factorization Nathan Kallus, Xiaojie Mao, Madeleine Udell
NeurIPS 2018 Confounding-Robust Policy Improvement Nathan Kallus, Angela Zhou
ALT 2018 Instrument-Armed Bandits Nathan Kallus
AISTATS 2018 Policy Evaluation and Optimization with Continuous Treatments Nathan Kallus, Angela Zhou
NeurIPS 2018 Removing Hidden Confounding by Experimental Grounding Nathan Kallus, Aahlad Manas Puli, Uri Shalit
ICML 2018 Residual Unfairness in Fair Machine Learning from Prejudiced Data Nathan Kallus, Angela Zhou
AISTATS 2017 A Framework for Optimal Matching for Causal Inference Nathan Kallus
ICML 2017 Recursive Partitioning for Personalization Using Observational Data Nathan Kallus
UAI 2016 Causal Inference by Minimizing the Dual Norm of Bias: Kernel Matching & Weighting Estimators for Causal Effects Nathan Kallus