Saha, Aadirupa
48 publications
- NeurIPS 2025: Efficient and Near-Optimal Algorithm for Contextual Dueling Bandits with Offline Regression Oracles
- ICLR 2025: Finally Rank-Breaking Conquers MNL Bandits: Optimal and Efficient Algorithms for MNL Assortment
- NeurIPSW 2024: Dueling in the Dark: An Efficient and Optimal Mirror Descent Approach for Online Optimization with Adversarial Preferences
- AISTATS 2024: Think Before You Duel: Understanding Complexities of Preference Learning Under Constrained Resources
- AISTATS 2023: ANACONDA: An Improved Dynamic Regret Algorithm for Adaptive Non-Stationary Dueling Bandits
- NeurIPS 2023: Eliciting User Preferences for Personalized Multi-Objective Decision Making Through Comparative Feedback
- AISTATS 2023: One Arrow, Two Kills: A Unified Framework for Achieving Optimal Regret Guarantees in Sleeping Bandits