Calandriello, Daniele

34 publications

ICLR 2025. Building Math Agents with Multi-Turn Iterative Preference Learning. Wei Xiong, Chengshuai Shi, Jiaming Shen, Aviv Rosenberg, Zhen Qin, Daniele Calandriello, Misha Khalman, Rishabh Joshi, Bilal Piot, Mohammad Saleh, Chi Jin, Tong Zhang, Tianqi Liu.
ICML 2025. On Teacher Hacking in Language Model Distillation. Daniil Tiapkin, Daniele Calandriello, Johan Ferret, Sarah Perrin, Nino Vieillard, Alexandre Rame, Mathieu Blondel.
AISTATS 2024. A General Theoretical Paradigm to Understand Learning from Human Preferences. Mohammad Gheshlaghi Azar, Zhaohan Daniel Guo, Bilal Piot, Remi Munos, Mark Rowland, Michal Valko, Daniele Calandriello.
ICML 2024. Decoding-Time Realignment of Language Models. Tianlin Liu, Shangmin Guo, Leonardo Bianco, Daniele Calandriello, Quentin Berthet, Felipe Llinares-López, Jessica Hoffmann, Lucas Dixon, Michal Valko, Mathieu Blondel.
ICLR 2024. Demonstration-Regularized RL. Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Alexey Naumov, Pierre Perrault, Michal Valko, Pierre Menard.
ICML 2024. Generalized Preference Optimization: A Unified Approach to Offline Alignment. Yunhao Tang, Zhaohan Daniel Guo, Zeyu Zheng, Daniele Calandriello, Remi Munos, Mark Rowland, Pierre Harvey Richemond, Michal Valko, Bernardo Avila Pires, Bilal Piot.
ICML 2024. Human Alignment of Large Language Models Through Online Preference Optimisation. Daniele Calandriello, Zhaohan Daniel Guo, Remi Munos, Mark Rowland, Yunhao Tang, Bernardo Avila Pires, Pierre Harvey Richemond, Charline Le Lan, Michal Valko, Tianqi Liu, Rishabh Joshi, Zeyu Zheng, Bilal Piot.
NeurIPS 2024. Multi-Turn Reinforcement Learning from Preference Human Feedback. Lior Shani, Aviv Rosenberg, Asaf Cassel, Oran Lang, Daniele Calandriello, Avital Zipori, Hila Noga, Orgad Keller, Bilal Piot, Idan Szpektor, Avinatan Hassidim, Yossi Matias, Rémi Munos.
ICML 2024. Nash Learning from Human Feedback. Remi Munos, Michal Valko, Daniele Calandriello, Mohammad Gheshlaghi Azar, Mark Rowland, Zhaohan Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mesnard, Côme Fiegel, Andrea Michi, Marco Selvi, Sertan Girgin, Nikola Momchev, Olivier Bachem, Daniel J Mankowitz, Doina Precup, Bilal Piot.
ICLR 2024. Unlocking the Power of Representations in Long-Term Novelty-Based Exploration. Alaa Saade, Steven Kapturowski, Daniele Calandriello, Charles Blundell, Pablo Sprechmann, Leopoldo Sarra, Oliver Groth, Michal Valko, Bilal Piot.
ICML 2023. Fast Rates for Maximum Entropy Exploration. Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Remi Munos, Alexey Naumov, Pierre Perrault, Yunhao Tang, Michal Valko, Pierre Menard.
NeurIPS 2023. Model-Free Posterior Sampling via Learning Rate Randomization. Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Remi Munos, Alexey Naumov, Pierre Perrault, Michal Valko, Pierre Ménard.
ICML 2023. Understanding Self-Predictive Learning for Reinforcement Learning. Yunhao Tang, Zhaohan Daniel Guo, Pierre Harvey Richemond, Bernardo Avila Pires, Yash Chandak, Remi Munos, Mark Rowland, Mohammad Gheshlaghi Azar, Charline Le Lan, Clare Lyle, András György, Shantanu Thakoor, Will Dabney, Bilal Piot, Daniele Calandriello, Michal Valko.
NeurIPSW 2023. Unlocking the Power of Representations in Long-Term Novelty-Based Exploration. Steven Kapturowski, Alaa Saade, Daniele Calandriello, Charles Blundell, Pablo Sprechmann, Leopoldo Sarra, Oliver Groth, Michal Valko, Bilal Piot.
NeurIPS 2022. BYOL-Explore: Exploration by Bootstrapped Prediction. Zhaohan Guo, Shantanu Thakoor, Miruna Pislar, Bernardo Avila Pires, Florent Altché, Corentin Tallec, Alaa Saade, Daniele Calandriello, Jean-Bastien Grill, Yunhao Tang, Michal Valko, Remi Munos, Mohammad Gheshlaghi Azar, Bilal Piot.
ICLR 2022. Information-Theoretic Online Memory Selection for Continual Learning. Shengyang Sun, Daniele Calandriello, Huiyi Hu, Ang Li, Michalis Titsias.
NeurIPS 2022. Optimistic Posterior Sampling for Reinforcement Learning with Few Samples and Tight Guarantees. Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Remi Munos, Alexey Naumov, Mark Rowland, Michal Valko, Pierre Ménard.
ICML 2022. Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times. Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco.
NeurIPSW 2021. One Pass ImageNet. Huiyi Hu, Ang Li, Daniele Calandriello, Dilan Gorur.
NeurIPS 2021. ParK: Sound and Efficient Kernel Ridge Regression by Feature Space Partitions. Luigi Carratino, Stefano Vigogna, Daniele Calandriello, Lorenzo Rosasco.
JMLR 2021. Safe Policy Iteration: A Monotonically Improving Approximate Policy Iteration Approach. Alberto Maria Metelli, Matteo Pirotta, Daniele Calandriello, Marcello Restelli.
ICML 2020. Near-Linear Time Gaussian Process Optimization with Adaptive Batching and Resparsification. Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco.
NeurIPS 2020. Sampling from a K-DPP Without Looking at All Items. Daniele Calandriello, Michal Derezinski, Michal Valko.
NeurIPS 2019. Exact Sampling of Determinantal Point Processes with Sublinear Time Preprocessing. Michal Derezinski, Daniele Calandriello, Michal Valko.
COLT 2019. Gaussian Process Optimization with Adaptive Sketching: Scalable and No Regret. Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco.
ICML 2018. Improved Large-Scale Graph Learning Through Ridge Spectral Sparsification. Daniele Calandriello, Alessandro Lazaric, Ioannis Koutis, Michal Valko.
NeurIPS 2018. On Fast Leverage Score Sampling and Optimal Learning. Alessandro Rudi, Daniele Calandriello, Luigi Carratino, Lorenzo Rosasco.
NeurIPS 2018. Statistical and Computational Trade-Offs in Kernel K-Means. Daniele Calandriello, Lorenzo Rosasco.
AISTATS 2017. Distributed Adaptive Sampling for Kernel Matrix Approximation. Daniele Calandriello, Alessandro Lazaric, Michal Valko.
NeurIPS 2017. Efficient Second-Order Online Kernel Learning with Adaptive Embedding. Daniele Calandriello, Alessandro Lazaric, Michal Valko.
ICML 2017. Second-Order Kernel Online Convex Optimization with Adaptive Sketching. Daniele Calandriello, Alessandro Lazaric, Michal Valko.
UAI 2016. Analysis of Nyström Method with Sequential Ridge Leverage Scores. Daniele Calandriello, Alessandro Lazaric, Michal Valko.
NeurIPS 2014. Sparse Multi-Task Reinforcement Learning. Daniele Calandriello, Alessandro Lazaric, Marcello Restelli.
ICML 2013. Safe Policy Iteration. Matteo Pirotta, Marcello Restelli, Alessio Pecorino, Daniele Calandriello.