Saha, Aadirupa

48 publications

ICML 2025. Dueling Convex Optimization with General Preferences. Aadirupa Saha, Tomer Koren, Yishay Mansour.
NeurIPS 2025. Efficient and Near-Optimal Algorithm for Contextual Dueling Bandits with Offline Regression Oracles. Aadirupa Saha, Robert E. Schapire.
ICLR 2025. Finally Rank-Breaking Conquers MNL Bandits: Optimal and Efficient Algorithms for MNL Assortment. Aadirupa Saha, Pierre Gaillard.
ICLRW 2025. Hybrid Preference Optimization for Alignment: Provably Faster Convergence Rates by Combining Offline Preferences with Online Exploration. Avinandan Bose, Zhihan Xiong, Aadirupa Saha, Simon Shaolei Du, Maryam Fazel.
NeurIPS 2025. Imitation Beyond Expectation Using Pluralistic Stochastic Dominance. Ali Farajzadeh, Danyal Saeed, Syed M Abbas, Rushit N. Shah, Aadirupa Saha, Brian D Ziebart.
ICML 2025. Tracking the Best Expert Privately. Hilal Asi, Vinod Raman, Aadirupa Saha.
UAI 2024. A Graph Theoretic Approach for Preference Learning with Feature Information. Aadirupa Saha, Arun Rajkumar.
ICLR 2024. Bandits Meet Mechanism Design to Combat Clickbait in Online Recommendation. Thomas Kleine Buening, Aadirupa Saha, Christos Dimitrakakis, Haifeng Xu.
ALT 2024. Dueling Optimization with a Monotone Adversary. Avrim Blum, Meghal Gupta, Gene Li, Naren Sarayu Manoj, Aadirupa Saha, Yuanyuan Yang.
NeurIPSW 2024. Dueling in the Dark: An Efficient and Optimal Mirror Descent Approach for Online Optimization with Adversarial Preferences. Aadirupa Saha, Yonathan Efroni, Barry-John Theobald.
ICLRW 2024. Efficient Private Federated Non-Convex Optimization with Shuffled Model. Lingxiao Wang, Xingyu Zhou, Kumar Kshitij Patel, Lawrence Tang, Aadirupa Saha.
AISTATS 2024. Faster Convergence with MultiWay Preferences. Aadirupa Saha, Vitaly Feldman, Yishay Mansour, Tomer Koren.
AISTATS 2024. On the Vulnerability of Fairness Constrained Learning to Malicious Noise. Avrim Blum, Princewill Okoroafor, Aadirupa Saha, Kevin M. Stangl.
ICLR 2024. Only Pay for What Is Uncertain: Variance-Adaptive Thompson Sampling. Aadirupa Saha, Branislav Kveton.
NeurIPS 2024. Strategic Linear Contextual Bandits. Thomas Kleine Buening, Aadirupa Saha, Christos Dimitrakakis, Haifeng Xu.
AISTATS 2024. Think Before You Duel: Understanding Complexities of Preference Learning Under Constrained Resources. Rohan Deb, Aadirupa Saha, Arindam Banerjee.
AISTATS 2023. ANACONDA: An Improved Dynamic Regret Algorithm for Adaptive Non-Stationary Dueling Bandits. Thomas Kleine Buening, Aadirupa Saha.
ICMLW 2023. Bandits Meet Mechanism Design to Combat Clickbait in Online Recommendation. Thomas Kleine Buening, Aadirupa Saha, Christos Dimitrakakis, Haifeng Xu.
NeurIPSW 2023. Dueling Optimization with a Monotone Adversary. Avrim Blum, Meghal Gupta, Gene Li, Naren Sarayu Manoj, Aadirupa Saha, Yuanyuan Yang.
AISTATS 2023. Dueling RL: Reinforcement Learning with Trajectory Preferences. Aadirupa Saha, Aldo Pacchiano, Jonathan Lee.
NeurIPS 2023. Eliciting User Preferences for Personalized Multi-Objective Decision Making Through Comparative Feedback. Han Shao, Lee Cohen, Avrim Blum, Yishay Mansour, Aadirupa Saha, Matthew Walter.
ICML 2023. Federated Online and Bandit Convex Optimization. Kumar Kshitij Patel, Lingxiao Wang, Aadirupa Saha, Nathan Srebro.
AISTATS 2023. One Arrow, Two Kills: A Unified Framework for Achieving Optimal Regret Guarantees in Sleeping Bandits. Pierre Gaillard, Aadirupa Saha, Soham Dan.
NeurIPSW 2022. Distributed Online and Bandit Convex Optimization. Kumar Kshitij Patel, Aadirupa Saha, Lingxiao Wang, Nathan Srebro.
ALT 2022. Efficient and Optimal Algorithms for Contextual Dueling Bandits Under Realizability. Aadirupa Saha, Akshay Krishnamurthy.
AISTATS 2022. Exploiting Correlation to Achieve Faster Learning Rates in Low-Rank Preference Bandits. Aadirupa Saha, Suprovat Ghoshal.
ICML 2022. Optimal and Efficient Dynamic Regret Algorithms for Non-Stationary Dueling Bandits. Aadirupa Saha, Shubham Gupta.
ICML 2022. Stochastic Contextual Dueling Bandits Under Linear Stochastic Transitivity Models. Viktor Bengs, Aadirupa Saha, Eyke Hüllermeier.
ICML 2022. Versatile Dueling Bandits: Best-of-Both World Analyses for Learning from Relative Preferences. Aadirupa Saha, Pierre Gaillard.
ICML 2021. Adversarial Dueling Bandits. Aadirupa Saha, Tomer Koren, Yishay Mansour.
ICML 2021. Confidence-Budget Matching for Sequential Budgeted Learning. Yonathan Efroni, Nadav Merlis, Aadirupa Saha, Shie Mannor.
NeurIPS 2021. Dueling Bandits with Adversarial Sleeping. Aadirupa Saha, Pierre Gaillard.
ICML 2021. Dueling Convex Optimization. Aadirupa Saha, Tomer Koren, Yishay Mansour.
NeurIPS 2021. Optimal Algorithms for Stochastic Contextual Preference Bandits. Aadirupa Saha.
ICML 2021. Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization. Aadirupa Saha, Nagarajan Natarajan, Praneeth Netrapalli, Prateek Jain.
UAI 2021. Strategically Efficient Exploration in Competitive Multi-Agent Reinforcement Learning. Robert Loftin, Aadirupa Saha, Sam Devlin, Katja Hofmann.
AISTATS 2020. Best-Item Learning in Random Utility Models with Subset Choices. Aadirupa Saha, Aditya Gopalan.
ICML 2020. From PAC to Instance-Optimal Sample Complexity in the Plackett-Luce Model. Aadirupa Saha, Aditya Gopalan.
ICML 2020. Improved Sleeping Bandits with Stochastic Action Sets and Adversarial Rewards. Aadirupa Saha, Pierre Gaillard, Michal Valko.
ACML 2020. Polytime Decomposition of Generalized Submodular Base Polytopes with Efficient Sampling. Aadirupa Saha.
AISTATS 2019. Active Ranking with Subset-Wise Preferences. Aadirupa Saha, Aditya Gopalan.
UAI 2019. Be Greedy: How Chromatic Number Meets Regret Minimization in Graph Bandits. Shreyas S, Aadirupa Saha, Chiranjib Bhattacharyya.
NeurIPS 2019. Combinatorial Bandits with Relative Feedback. Aadirupa Saha, Aditya Gopalan.
AAAI 2019. How Many Pairwise Preferences Do We Need to Rank a Graph Consistently? Aadirupa Saha, Rakesh Shivanna, Chiranjib Bhattacharyya.
ALT 2019. PAC Battling Bandits in the Plackett-Luce Model. Aadirupa Saha, Aditya Gopalan.
UAI 2018. Battle of Bandits. Aadirupa Saha, Aditya Gopalan.
AAAI 2018. Online Learning for Structured Loss Spaces. Siddharth Barman, Aditya Gopalan, Aadirupa Saha.
ICML 2015. Consistent Multiclass Algorithms for Complex Performance Measures. Harikrishna Narasimhan, Harish Ramaswamy, Aadirupa Saha, Shivani Agarwal.