Rebeschini, Patrick

29 publications

AISTATS 2025 Black-Box Uniform Stability for Non-Euclidean Empirical Risk Minimization Simon Vary, David Martínez-Rubio, Patrick Rebeschini
NeurIPS 2025 Does Stochastic Gradient Really Succeed for Bandits? Dorian Baudry, Emmeran Johnson, Simon Vary, Ciara Pike-Burke, Patrick Rebeschini
ICLR 2025 Learning Mirror Maps in Policy Mirror Descent Carlo Alfano, Sebastian Rene Towers, Silvia Sapora, Chris Lu, Patrick Rebeschini
NeurIPS 2025 Meta-Learning Objectives for Preference Optimization Carlo Alfano, Silvia Sapora, Jakob Nicolaus Foerster, Patrick Rebeschini, Yee Whye Teh
NeurIPS 2025 Non-Stationary Bandit Convex Optimization: A Comprehensive Study Xiaoqi Liu, Dorian Baudry, Julian Zimmert, Patrick Rebeschini, Arya Akhavan
NeurIPS 2025 On the Necessity of Adaptive Regularisation: Optimal Anytime Online Learning on $\boldsymbol{\ell_p}$-Balls Emmeran Johnson, David Martínez-Rubio, Ciara Pike-Burke, Patrick Rebeschini
AISTATS 2025 Robust Gradient Descent for Phase Retrieval Alex Buna, Patrick Rebeschini
NeurIPS 2025 Stochastic Shortest Path with Sparse Adversarial Costs Emmeran Johnson, Alberto Rumi, Ciara Pike-Burke, Patrick Rebeschini
ICMLW 2024 Differentiable Cost-Parameterized Monge Map Estimators Samuel Howard, George Deligiannidis, Patrick Rebeschini, James Thornton
JMLR 2024 Exponential Tail Local Rademacher Complexity Risk Bounds Without the Bernstein Condition Varun Kanade, Patrick Rebeschini, Tomas Vaskevicius
AISTATS 2024 Generalization Bounds for Label Noise Stochastic Gradient Descent Jung Eun Huh, Patrick Rebeschini
ICLR 2024 Sample-Efficiency in Multi-Batch Reinforcement Learning: The Need for Dimension-Dependent Adaptivity Emmeran Johnson, Ciara Pike-Burke, Patrick Rebeschini
NeurIPS 2023 A Novel Framework for Policy Mirror Descent with General Parameterization and Linear Convergence Carlo Alfano, Rui Yuan, Patrick Rebeschini
NeurIPS 2023 Optimal Convergence Rate for Exact Policy Mirror Descent in Discounted Markov Decision Processes Emmeran Johnson, Ciara Pike-Burke, Patrick Rebeschini
NeurIPS 2021 Distributed Machine Learning with Sparse Heterogeneous Data Dominic Richards, Sahand Negahban, Patrick Rebeschini
AISTATS 2021 Hadamard Wirtinger Flow for Sparse Phase Retrieval Fan Wu, Patrick Rebeschini
NeurIPS 2021 Implicit Regularization in Matrix Sensing via Mirror Descent Fan Wu, Patrick Rebeschini
NeurIPS 2021 On Optimal Interpolation in Linear Regression Eduard Oravkin, Patrick Rebeschini
NeurIPS 2021 Time-Independent Generalization Bounds for SGLD in Non-Convex Settings Tyler Farghly, Patrick Rebeschini
NeurIPS 2020 A Continuous-Time Mirror Descent Approach to Sparse Phase Retrieval Fan Wu, Patrick Rebeschini
ICML 2020 Decentralised Learning with Random Features and Distributed Gradient Descent Dominic Richards, Patrick Rebeschini, Lorenzo Rosasco
JMLR 2020 Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent Dominic Richards, Patrick Rebeschini
NeurIPS 2020 The Statistical Complexity of Early-Stopped Mirror Descent Tomas Vaskevicius, Varun Kanade, Patrick Rebeschini
JMLR 2019 A New Approach to Laplacian Solvers and Flow Problems Patrick Rebeschini, Sekhar Tatikonda
NeurIPS 2019 Decentralized Cooperative Stochastic Bandits David Martínez-Rubio, Varun Kanade, Patrick Rebeschini
NeurIPS 2019 Implicit Regularization for Optimal Sparse Recovery Tomas Vaskevicius, Varun Kanade, Patrick Rebeschini
NeurIPS 2019 Optimal Statistical Rates for Decentralised Non-Parametric Regression with Linear Speed-up Dominic Richards, Patrick Rebeschini
NeurIPS 2017 Accelerated Consensus via Min-Sum Splitting Patrick Rebeschini, Sekhar C Tatikonda
COLT 2015 Fast Mixing for Discrete Point Processes Patrick Rebeschini, Amin Karbasi