Vieillard, Nino

15 publications

ICLR 2025. BOND: Aligning LLMs with Best-of-N Distillation. Pier Giuseppe Sessa, Robert Dadashi, Léonard Hussenot, Johan Ferret, Nino Vieillard, Alexandre Rame, Bobak Shahriari, Sarah Perrin, Abram L. Friesen, Geoffrey Cideron, Sertan Girgin, Piotr Stanczyk, Andrea Michi, Danila Sinopalnikov, Sabela Ramos Garea, Amélie Héliou, Aliaksei Severyn, Matthew Hoffman, Nikola Momchev, Olivier Bachem
ICML 2025. Loss Functions and Operators Generated by F-Divergences. Vincent Roulet, Tianlin Liu, Nino Vieillard, Michael Eli Sander, Mathieu Blondel
ICML 2025. On Teacher Hacking in Language Model Distillation. Daniil Tiapkin, Daniele Calandriello, Johan Ferret, Sarah Perrin, Nino Vieillard, Alexandre Rame, Mathieu Blondel
NeurIPS 2024. Imitating Language via Scalable Inverse Reinforcement Learning. Markus Wulfmeier, Michael Bloesch, Nino Vieillard, Arun Ahuja, Jörg Bornschein, Sandy Huang, Artem Sokolov, Matt Barnes, Guillaume Desjardins, Alex Bewley, Sarah Maria Elisabeth Bechtle, Jost Tobias Springenberg, Nikola Momchev, Olivier Bachem, Matthieu Geist, Martin Riedmiller
ICLR 2024. On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes. Rishabh Agarwal, Nino Vieillard, Yongchao Zhou, Piotr Stanczyk, Sabela Ramos Garea, Matthieu Geist, Olivier Bachem
ICML 2024. WARM: On the Benefits of Weight Averaged Reward Models. Alexandre Rame, Nino Vieillard, Léonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier Bachem, Johan Ferret
ICML 2023. Regularization and Variance-Weighted Regression Achieves Minimax Optimality in Linear MDPs: Theory and Practice. Toshinori Kitamura, Tadashi Kozuno, Yunhao Tang, Nino Vieillard, Michal Valko, Wenhao Yang, Jincheng Mei, Pierre Menard, Mohammad Gheshlaghi Azar, Remi Munos, Olivier Pietquin, Matthieu Geist, Csaba Szepesvari, Wataru Kumagai, Yutaka Matsuo
AISTATS 2022. Implicitly Regularized RL with Implicit Q-Values. Nino Vieillard, Marcin Andrychowicz, Anton Raichuk, Olivier Pietquin, Matthieu Geist
AAAI 2022. Offline Reinforcement Learning as Anti-Exploration. Shideh Rezaeifar, Robert Dadashi, Nino Vieillard, Léonard Hussenot, Olivier Bachem, Olivier Pietquin, Matthieu Geist
NeurIPS Workshop 2021. Implicitly Regularized RL with Implicit Q-Values. Nino Vieillard, Marcin Andrychowicz, Anton Raichuk, Olivier Pietquin, Matthieu Geist
ICML 2021. Offline Reinforcement Learning with Pseudometric Learning. Robert Dadashi, Shideh Rezaeifar, Nino Vieillard, Léonard Hussenot, Olivier Pietquin, Matthieu Geist
AAAI 2020. Deep Conservative Policy Iteration. Nino Vieillard, Olivier Pietquin, Matthieu Geist
NeurIPS 2020. Leverage the Average: An Analysis of KL Regularization in Reinforcement Learning. Nino Vieillard, Tadashi Kozuno, Bruno Scherrer, Olivier Pietquin, Remi Munos, Matthieu Geist
AISTATS 2020. Momentum in Reinforcement Learning. Nino Vieillard, Bruno Scherrer, Olivier Pietquin, Matthieu Geist
NeurIPS 2020. Munchausen Reinforcement Learning. Nino Vieillard, Olivier Pietquin, Matthieu Geist