ML Anthology
Kaufmann, Emilie
49 publications
AISTATS 2025
Bandit Pareto Set Identification in a Multi-Output Linear Model
Cyrille Kone, Emilie Kaufmann, Laura Richert
AISTATS 2025
Best-Arm Identification in Unimodal Bandits
Riccardo Poiani, Marc Jourdan, Emilie Kaufmann, Rémy Degenne
ICML 2025
Constrained Pareto Set Identification with Bandit Feedback
Cyrille Kone, Emilie Kaufmann, Laura Richert
AISTATS 2025
Pareto Set Identification with Posterior Sampling
Cyrille Kone, Marc Jourdan, Emilie Kaufmann
AISTATS 2024
Bandit Pareto Set Identification: The Fixed Budget Setting
Cyrille Kone, Emilie Kaufmann, Laura Richert
NeurIPS 2024
Finding Good Policies in Average-Reward Markov Decision Processes Without Prior Knowledge
Adrienne Tuynman, Rémy Degenne, Emilie Kaufmann
NeurIPS 2024
Optimal Multi-Fidelity Best-Arm Identification
Riccardo Poiani, Rémy Degenne, Emilie Kaufmann, Alberto Maria Metelli, Marcello Restelli
UAI 2024
Power Mean Estimation in Stochastic Monte-Carlo Tree Search
Tuan Dam, Odalric-Ambrym Maillard, Emilie Kaufmann
ICMLW 2024
Power Mean Estimation in Stochastic Monte-Carlo Tree Search
Tuan Quang Dam, Odalric-Ambrym Maillard, Emilie Kaufmann
COLT 2023
Active Coverage for PAC Reinforcement Learning
Aymen Al-Marjani, Andrea Tirinzoni, Emilie Kaufmann
NeurIPS 2023
Adaptive Algorithms for Relaxed Pareto Set Identification
Cyrille Kone, Emilie Kaufmann, Laura Richert
NeurIPS 2023
An $\varepsilon$-Best-Arm Identification Algorithm for Fixed-Confidence and Beyond
Marc Jourdan, Rémy Degenne, Emilie Kaufmann
ALT 2023
Optimistic PAC Reinforcement Learning: The Instance-Dependent View
Andrea Tirinzoni, Aymen Al-Marjani, Emilie Kaufmann
AISTATS 2022
Efficient Algorithms for Extreme Bandits
Dorian Baudry, Yoan Russac, Emilie Kaufmann
JMLR 2022
Efficient Change-Point Detection for Tackling Piecewise-Stationary Bandits
Lilian Besson, Emilie Kaufmann, Odalric-Ambrym Maillard, Julien Seznec
NeurIPS 2022
Near Instance-Optimal PAC Reinforcement Learning for Deterministic MDPs
Andrea Tirinzoni, Aymen Al Marjani, Emilie Kaufmann
NeurIPS 2022
Near-Optimal Collaborative Learning in Bandits
Clémence Réda, Sattar Vakili, Emilie Kaufmann
NeurIPS 2022
Top Two Algorithms Revisited
Marc Jourdan, Rémy Degenne, Dorian Baudry, Rianne de Heide, Emilie Kaufmann
AISTATS 2021
A Kernel-Based Approach to Non-Stationary Reinforcement Learning in Metric Spaces
Omar Darwiche Domingues, Pierre Ménard, Matteo Pirotta, Emilie Kaufmann, Michal Valko
AISTATS 2021
Top-M Identification for Linear Bandits
Clémence Réda, Emilie Kaufmann, Andrée Delahaye-Duriez
ALT 2021
Adaptive Reward-Free Exploration
Emilie Kaufmann, Pierre Ménard, Omar Darwiche Domingues, Anders Jonsson, Edouard Leurent, Michal Valko
ALT 2021
Episodic Reinforcement Learning in Finite MDPs: Minimax Lower Bounds Revisited
Omar Darwiche Domingues, Pierre Ménard, Emilie Kaufmann, Michal Valko
ICML 2021
Fast Active Learning for Pure Exploration in Reinforcement Learning
Pierre Ménard, Omar Darwiche Domingues, Anders Jonsson, Emilie Kaufmann, Edouard Leurent, Michal Valko
ICML 2021
Kernel-Based Reinforcement Learning: A Finite-Time Analysis
Omar Darwiche Domingues, Pierre Ménard, Matteo Pirotta, Emilie Kaufmann, Michal Valko
JMLR 2021
Mixture Martingales Revisited with Applications to Sequential Tests and Confidence Intervals
Emilie Kaufmann, Wouter M. Koolen
JMLR 2021
On Multi-Armed Bandit Designs for Dose-Finding Trials
Maryam Aziz, Emilie Kaufmann, Marie-Karelle Riviere
ICML 2021
Optimal Thompson Sampling Strategies for Support-Aware CVaR Bandits
Dorian Baudry, Romain Gautron, Emilie Kaufmann, Odalric Maillard
AISTATS 2020
A Practical Algorithm for Multiplayer Bandits When Arm Means Vary Among Players
Abbas Mehrabian, Etienne Boursier, Emilie Kaufmann, Vianney Perchet
AISTATS 2020
Fixed-Confidence Guarantees for Bayesian Best-Arm Identification
Xuedong Shang, Rianne de Heide, Pierre Ménard, Emilie Kaufmann, Michal Valko
NeurIPS 2020
Planning in Markov Decision Processes with Gap-Dependent Sample Complexity
Anders Jonsson, Emilie Kaufmann, Pierre Ménard, Omar Darwiche Domingues, Edouard Leurent, Michal Valko
ALT 2020
Solving Bernoulli Rank-One Bandits with Unimodal Thompson Sampling
Cindy Trinh, Emilie Kaufmann, Claire Vernade, Richard Combes
NeurIPS 2020
Sub-Sampling for Efficient Non-Parametric Bandit Exploration
Dorian Baudry, Emilie Kaufmann, Odalric-Ambrym Maillard
MLJ 2019
Asymptotically Optimal Algorithms for Budgeted Multiple Play Bandits
Alexander Luedtke, Emilie Kaufmann, Antoine Chambaz
ALT 2019
General Parallel Optimization Without a Metric
Xuedong Shang, Emilie Kaufmann, Michal Valko
ALT 2018
Corrupt Bandits for Preserving Local Privacy
Pratik Gajane, Tanguy Urvoy, Emilie Kaufmann
ALT 2018
Multi-Player Bandits Revisited
Lilian Besson, Emilie Kaufmann
ALT 2018
Pure Exploration in Infinitely-Armed Bandit Models with Fixed-Confidence
Maryam Aziz, Jesse Anderton, Emilie Kaufmann, Javed Aslam
NeurIPS 2018
Sequential Test for the Lowest Mean: From Thompson to Murphy Sampling
Emilie Kaufmann, Wouter M. Koolen, Aurélien Garivier
NeurIPS 2017
Monte-Carlo Tree Search by Best Arm Identification
Emilie Kaufmann, Wouter M. Koolen
ALT 2016
A Spectral Algorithm with Additive Clustering for the Recovery of Overlapping Communities in Networks
Emilie Kaufmann, Thomas Bonald, Marc Lelarge
COLT 2016
Maximin Action Identification: A New Bandit Framework for Games
Aurélien Garivier, Emilie Kaufmann, Wouter M. Koolen
NeurIPS 2016
On Explore-Then-Commit Strategies
Aurélien Garivier, Tor Lattimore, Emilie Kaufmann
JMLR 2016
On the Complexity of Best-Arm Identification in Multi-Armed Bandit Models
Emilie Kaufmann, Olivier Cappé, Aurélien Garivier
COLT 2016
Optimal Best Arm Identification with Fixed Confidence
Aurélien Garivier, Emilie Kaufmann
COLT 2014
On the Complexity of A/B Testing
Emilie Kaufmann, Olivier Cappé, Aurélien Garivier
COLT 2013
Information Complexity in Bandit Subset Selection
Emilie Kaufmann, Shivaram Kalyanakrishnan
NeurIPS 2013
Thompson Sampling for 1-Dimensional Exponential Family Bandits
Nathaniel Korda, Emilie Kaufmann, Rémi Munos
AISTATS 2012
On Bayesian Upper Confidence Bounds for Bandit Problems
Emilie Kaufmann, Olivier Cappé, Aurélien Garivier
ALT 2012
Thompson Sampling: An Asymptotically Optimal Finite-Time Analysis
Emilie Kaufmann, Nathaniel Korda, Rémi Munos