Naumov, Alexey

19 publications

ICLR 2025. Nonasymptotic Analysis of Stochastic Gradient Descent with the Richardson–Romberg Extrapolation. Marina Sheshukova, Denis Belomestny, Alain Oliviero Durmus, Eric Moulines, Alexey Naumov, Sergey Samsonov.
NeurIPS 2025. Statistical Inference for Linear Stochastic Approximation with Markovian Noise. Sergey Samsonov, Marina Sheshukova, Eric Moulines, Alexey Naumov.
ICLR 2024. Demonstration-Regularized RL. Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Alexey Naumov, Pierre Perrault, Michal Valko, Pierre Ménard.
NeurIPS 2024. Gaussian Approximation and Multiplier Bootstrap for Polyak-Ruppert Averaged Linear Stochastic Approximation with Applications to TD Learning. Sergey Samsonov, Eric Moulines, Qi-Man Shao, Zhuo-Song Zhang, Alexey Naumov.
AISTATS 2024. Generative Flow Networks as Entropy-Regularized RL. Daniil Tiapkin, Nikita Morozov, Alexey Naumov, Dmitry Vetrov.
NeurIPS 2024. Group and Shuffle: Efficient Structured Orthogonal Parametrization. Mikhail Gorbunov, Nikolay Yudin, Vera Soboleva, Aibek Alanov, Alexey Naumov, Maxim Rakhuba.
COLT 2024. Improved High-Probability Bounds for the Temporal Difference Learning Algorithm via Exponential Stability. Sergey Samsonov, Daniil Tiapkin, Alexey Naumov, Eric Moulines.
ICMLW 2024. Improving GFlowNets with Monte Carlo Tree Search. Nikita Morozov, Daniil Tiapkin, Sergey Samsonov, Alexey Naumov, Dmitry Vetrov.
JMLR 2024. Rates of Convergence for Density Estimation with Generative Adversarial Networks. Nikita Puchkin, Sergey Samsonov, Denis Belomestny, Eric Moulines, Alexey Naumov.
NeurIPS 2024. SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning. Paul Mangold, Sergey Samsonov, Safwan Labbi, Ilya Levin, Reda Alami, Alexey Naumov, Eric Moulines.
ICML 2023. Fast Rates for Maximum Entropy Exploration. Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Remi Munos, Alexey Naumov, Pierre Perrault, Yunhao Tang, Michal Valko, Pierre Ménard.
NeurIPS 2023. First Order Methods with Markovian Noise: From Acceleration to Variational Inequalities. Aleksandr Beznosikov, Sergey Samsonov, Marina Sheshukova, Alexander Gasnikov, Alexey Naumov, Eric Moulines.
NeurIPS 2023. Model-Free Posterior Sampling via Learning Rate Randomization. Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Remi Munos, Alexey Naumov, Pierre Perrault, Michal Valko, Pierre Ménard.
ICML 2022. From Dirichlet to Rubin: Optimistic Exploration in RL Without Bonuses. Daniil Tiapkin, Denis Belomestny, Eric Moulines, Alexey Naumov, Sergey Samsonov, Yunhao Tang, Michal Valko, Pierre Ménard.
NeurIPS 2022. Local-Global MCMC Kernels: The Best of Both Worlds. Sergey Samsonov, Evgeny Lagutin, Marylou Gabrié, Alain Durmus, Alexey Naumov, Eric Moulines.
NeurIPS 2022. Optimistic Posterior Sampling for Reinforcement Learning with Few Samples and Tight Guarantees. Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Remi Munos, Alexey Naumov, Mark Rowland, Michal Valko, Pierre Ménard.
COLT 2021. On the Stability of Random Matrix Product with Markovian Noise: Application to Linear Stochastic Approximation and TD Learning. Alain Durmus, Eric Moulines, Alexey Naumov, Sergey Samsonov, Hoi-To Wai.
NeurIPS 2021. Tight High Probability Bounds for Linear Stochastic Approximation with Fixed Stepsize. Alain Durmus, Eric Moulines, Alexey Naumov, Sergey Samsonov, Kevin Scaman, Hoi-To Wai.
COLT 2020. Finite Time Analysis of Linear Two-Timescale Stochastic Approximation with Markovian Noise. Maxim Kaledin, Eric Moulines, Alexey Naumov, Vladislav Tadic, Hoi-To Wai.