Vernade, Claire

28 publications

L4DC 2025. A Pontryagin Perspective on Reinforcement Learning. Onno Eberhard, Claire Vernade, Michael Muehlebach.
NeurIPS 2025. Non-Stationary Lipschitz Bandits. Nicolas Nguyen, Solenne Gaucher, Claire Vernade.
ICML 2025. Partially Observable Reinforcement Learning with Memory Traces. Onno Eberhard, Michael Muehlebach, Claire Vernade.
AISTATS 2025. Prior-Dependent Allocations for Bayesian Fixed-Budget Best-Arm Identification in Structured Bandits. Nicolas Nguyen, Imad Aouali, András György, Claire Vernade.
NeurIPS 2025. Put CASH on Bandits: A Max K-Armed Problem for Automated Machine Learning. Amir Rezaei Balef, Claire Vernade, Katharina Eggensperger.
NeurIPS 2025. Quantization-Free Autoregressive Action Transformer. Ziyad Sheebaelhamd, Michael Tschannen, Michael Muehlebach, Claire Vernade.
ICMLW 2024. A Pontryagin Perspective on Reinforcement Learning. Onno Eberhard, Claire Vernade, Michael Muehlebach.
ALT 2024. Algorithmic Learning Theory 2024: Preface. Claire Vernade, Daniel Hsu.
ICLRW 2024. Towards Bandit-Based Optimization for Automated Machine Learning. Amir Rezaei Balef, Claire Vernade, Katharina Eggensperger.
NeurIPS 2023. Beyond Average Return in Markov Decision Processes. Alexandre Marthe, Aurélien Garivier, Claire Vernade.
TMLR 2023. POMRL: No-Regret Learning-to-Plan with Increasing Horizons. Khimya Khetarpal, Claire Vernade, Brendan O'Donoghue, Satinder Singh, Tom Zahavy.
NeurIPSW 2023. POMRL: No-Regret Learning-to-Plan with Increasing Horizons. Khimya Khetarpal, Claire Vernade, Brendan O'Donoghue, Satinder Singh, Tom Zahavy.
ICLR 2022. EigenGame Unloaded: When Playing Games Is Better than Optimizing. Ian Gemp, Brian McWilliams, Claire Vernade, Thore Graepel.
ICLRW 2022. EigenGame Unloaded: When Playing Games Is Better than Optimizing. Ian Gemp, Brian McWilliams, Claire Vernade, Thore Graepel.
AISTATS 2021. Confident Off-Policy Evaluation and Selection Through Self-Normalized Importance Weighting. Ilja Kuzborskij, Claire Vernade, András György, Csaba Szepesvári.
COLT 2021. Asymptotically Optimal Information-Directed Sampling. Johannes Kirschner, Tor Lattimore, Claire Vernade, Csaba Szepesvári.
ICLR 2021. EigenGame: PCA as a Nash Equilibrium. Ian Gemp, Brian McWilliams, Claire Vernade, Thore Graepel.
ICML 2020. Linear Bandits with Stochastic Delayed Feedback. Claire Vernade, Alexandra Carpentier, Tor Lattimore, Giovanni Zappella, Beyza Ermis, Michael Brückner.
ICML 2020. Non-Stationary Delayed Bandits with Intermediate Observations. Claire Vernade, András György, Timothy Mann.
ALT 2020. Solving Bernoulli Rank-One Bandits with Unimodal Thompson Sampling. Cindy Trinh, Emilie Kaufmann, Claire Vernade, Richard Combes.
ICML 2020. Stochastic Bandits with Arm-Dependent Delays. Anne Gael Manegueu, Claire Vernade, Alexandra Carpentier, Michal Valko.
NeurIPS 2019. Weighted Linear Bandits for Non-Stationary Environments. Yoan Russac, Claire Vernade, Olivier Cappé.
IJCAI 2017. Bernoulli Rank-1 Bandits for Click Feedback. Sumeet Katariya, Branislav Kveton, Csaba Szepesvári, Claire Vernade, Zheng Wen.
ECML-PKDD 2017. Max K-Armed Bandit: On the ExtremeHunter Algorithm and Beyond. Mastane Achab, Stéphan Clémençon, Aurélien Garivier, Anne Sabourin, Claire Vernade.
COLT 2017. Sparse Stochastic Bandits. Joon Kwon, Vianney Perchet, Claire Vernade.
UAI 2017. Stochastic Bandit Models for Delayed Conversions. Claire Vernade, Olivier Cappé, Vianney Perchet.
AISTATS 2017. Stochastic Rank-1 Bandits. Sumeet Katariya, Branislav Kveton, Csaba Szepesvári, Claire Vernade, Zheng Wen.
NeurIPS 2016. Multiple-Play Bandits in the Position-Based Model. Paul Lagrée, Claire Vernade, Olivier Cappé.