Slivkins, Aleksandrs

34 publications

NeurIPS 2025. Greedy Algorithms for Structured Bandits: A Sharp Characterization of Asymptotic Success / Failure. Aleksandrs Slivkins, Yunzong Xu, Shiliang Zuo.
AAAI 2025. Robust Performance Incentivizing Algorithms for Multi-Armed Bandits with Strategic Agents. Seyed A. Esmaeili, Suho Shin, Aleksandrs Slivkins.
COLT 2024. Autobidders with Budget and ROI Constraints: Efficiency, Regret, and Pacing Dynamics. Brendan Lucier, Sarath Pattathil, Aleksandrs Slivkins, Mengxiao Zhang.
NeurIPS 2024. Can Large Language Models Explore In-Context? Akshay Krishnamurthy, Keegan Harris, Dylan J. Foster, Cyril Zhang, Aleksandrs Slivkins.
ICMLW 2024. Can Large Language Models Explore In-Context? Akshay Krishnamurthy, Keegan Harris, Dylan J. Foster, Cyril Zhang, Aleksandrs Slivkins.
AAAI 2024. Content Filtering with Inattentive Information Consumers. Ian Ball, James W. Bono, Justin Grana, Nicole Immorlica, Brendan Lucier, Aleksandrs Slivkins.
JMLR 2024. Contextual Bandits with Packing and Covering Constraints: A Modular Lagrangian Approach via Regression. Aleksandrs Slivkins, Xingyu Zhou, Karthik Abinav Sankararaman, Dylan J. Foster.
ICML 2024. Impact of Decentralized Learning on Player Utilities in Stackelberg Games. Kate Donahue, Nicole Immorlica, Meena Jagadeesan, Brendan Lucier, Aleksandrs Slivkins.
NeurIPS 2023. Bandit Social Learning Under Myopic Behavior. Kiarash Banihashem, MohammadTaghi Hajiaghayi, Suho Shin, Aleksandrs Slivkins.
COLT 2023. Contextual Bandits with Packing and Covering Constraints: A Modular Lagrangian Approach via Regression. Aleksandrs Slivkins, Karthik Abinav Sankararaman, Dylan J. Foster.
NeurIPS 2022. Incentivizing Combinatorial Bandit Exploration. Xinyan Hu, Dung Ngo, Aleksandrs Slivkins, Zhiwei Steven Wu.
NeurIPS 2021. Bandits with Knapsacks Beyond the Worst Case. Karthik Abinav Sankararaman, Aleksandrs Slivkins.
NeurIPS 2020. Constrained Episodic Reinforcement Learning in Concave-Convex and Knapsack Settings. Kianté Brantley, Miroslav Dudík, Thodoris Lykouris, Sobhan Miryoosefi, Max Simchowitz, Aleksandrs Slivkins, Wen Sun.
JMLR 2020. Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting. Akshay Krishnamurthy, John Langford, Aleksandrs Slivkins, Chicheng Zhang.
NeurIPS 2020. Efficient Contextual Bandits with Continuous Actions. Maryam Majzoubi, Chicheng Zhang, Rajan Chari, Akshay Krishnamurthy, John Langford, Aleksandrs Slivkins.
COLT 2019. Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting. Akshay Krishnamurthy, John Langford, Aleksandrs Slivkins, Chicheng Zhang.
FnTML 2019. Introduction to Multi-Armed Bandits. Aleksandrs Slivkins.
AISTATS 2018. Combinatorial Semi-Bandits with Knapsacks. Karthik Abinav Sankararaman, Aleksandrs Slivkins.
COLT 2018. The Externalities of Exploration and How Data Diversity Helps Exploitation. Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, Zhiwei Steven Wu.
JAIR 2016. Adaptive Contract Design for Crowdsourcing Markets: Bandit Algorithms for Repeated Principal-Agent Problems. Chien-Ju Ho, Aleksandrs Slivkins, Jennifer Wortman Vaughan.
COLT 2015. Contextual Dueling Bandits. Miroslav Dudík, Katja Hofmann, Robert E. Schapire, Aleksandrs Slivkins, Masrour Zoghi.
JMLR 2014. Contextual Bandits with Similarity Information. Aleksandrs Slivkins.
ICML 2014. One Practical Algorithm for Both Stochastic and Adversarial Bandits. Yevgeny Seldin, Aleksandrs Slivkins.
COLT 2014. Resourceful Contextual Bandits. Ashwinkumar Badanidiyuru, John Langford, Aleksandrs Slivkins.
COLT 2014. Robust Multi-Objective Learning with Mentor Feedback. Alekh Agarwal, Ashwinkumar Badanidiyuru, Miroslav Dudík, Robert E. Schapire, Aleksandrs Slivkins.
COLT 2013. Adaptive Crowdsourcing Algorithms for the Bandit Survey Problem. Ittai Abraham, Omar Alonso, Vasilis Kandylas, Aleksandrs Slivkins.
JMLR 2013. Ranked Bandits in Metric Spaces: Learning Diverse Rankings over Large Document Collections. Aleksandrs Slivkins, Filip Radlinski, Sreenivas Gollapudi.
COLT 2012. The Best of Both Worlds: Stochastic and Adversarial Bandits. Sébastien Bubeck, Aleksandrs Slivkins.
COLT 2011. Contextual Bandits with Similarity Information. Aleksandrs Slivkins.
COLT 2011. Monotone Multi-Armed Bandit Allocations. Aleksandrs Slivkins.
NeurIPS 2011. Multi-Armed Bandits on Implicit Metric Spaces. Aleksandrs Slivkins.
ICML 2010. Learning Optimally Diverse Rankings over Large Document Collections. Aleksandrs Slivkins, Filip Radlinski, Sreenivas Gollapudi.
NeurIPS 2009. Adapting to the Shifting Intent of Search Queries. Umar Syed, Aleksandrs Slivkins, Nina Mishra.
COLT 2008. Adapting to a Changing Environment: The Brownian Restless Bandits. Aleksandrs Slivkins, Eli Upfal.