Ryabinin, Max

16 publications

NeurIPS 2025 AutoJudge: Judge Decoding Without Manual Annotation Roman Garipov, Fedor Velikonivtsev, Ivan Ermakov, Ruslan Svirschevski, Vage Egiazarian, Max Ryabinin
ICML 2025 TOPLOC: A Locality Sensitive Hashing Scheme for Trustless Verifiable Inference Jack Min Ong, Matthew Di Ferrante, Aaron Pazdera, Ryan Garner, Sami Jaghouar, Manveer Basra, Max Ryabinin, Johannes Hagemann
NeurIPS 2024 RedPajama: An Open Dataset for Training Large Language Models Maurice Weber, Daniel Y. Fu, Quentin Anthony, Yonatan Oren, Shane Adams, Anton Alexandrov, Xiaozhong Lyu, Huu Nguyen, Xiaozhe Yao, Virginia Adams, Ben Athiwaratkun, Rahul Chalamala, Kezhen Chen, Max Ryabinin, Tri Dao, Percy Liang, Christopher Ré, Irina Rish, Ce Zhang
NeurIPS 2024 Sequoia: Scalable and Robust Speculative Decoding Zhuoming Chen, Avner May, Ruslan Svirschevski, Yuhsun Huang, Max Ryabinin, Zhihao Jia, Beidi Chen
NeurIPS 2024 SpecExec: Massively Parallel Speculative Decoding for Interactive LLM Inference on Consumer Devices Ruslan Svirschevski, Avner May, Zhuoming Chen, Beidi Chen, Zhihao Jia, Max Ryabinin
NeurIPS 2023 Distributed Inference and Fine-Tuning of Large Language Models over the Internet Alexander Borzunov, Max Ryabinin, Artem Chumachenko, Dmitry Baranchuk, Tim Dettmers, Younes Belkada, Pavel Samygin, Colin A Raffel
ICML 2023 FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Beidi Chen, Percy Liang, Christopher Ré, Ion Stoica, Ce Zhang
NeurIPS 2023 Is This Loss Informative? Faster Text-to-Image Customization by Tracking Objective Dynamics Anton Voronov, Mikhail Khoroshikh, Artem Babenko, Max Ryabinin
ICML 2023 SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient Max Ryabinin, Tim Dettmers, Michael Diskin, Alexander Borzunov
NeurIPS 2022 Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees Aleksandr Beznosikov, Peter Richtarik, Michael Diskin, Max Ryabinin, Alexander Gasnikov
NeurIPSW 2022 Petals: Collaborative Inference and Fine-Tuning of Large Models Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, Colin Raffel
ICML 2022 Secure Distributed Training at Scale Eduard Gorbunov, Alexander Borzunov, Michael Diskin, Max Ryabinin
NeurIPS 2021 Distributed Deep Learning in Open Collaborations Michael Diskin, Alexey Bukhtiyarov, Max Ryabinin, Lucile Saulnier, Quentin Lhoest, Anton Sinitsin, Dmitry Popov, Dmitry V. Pyrkin, Maxim Kashirin, Alexander Borzunov, Albert Villanova del Moral, Denis Mazur, Ilia Kobelev, Yacine Jernite, Thomas Wolf, Gennady Pekhimenko
NeurIPS 2021 Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices Max Ryabinin, Eduard Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko
NeurIPS 2021 Scaling Ensemble Distribution Distillation to Many Classes with Proxy Targets Max Ryabinin, Andrey Malinin, Mark Gales
NeurIPS 2020 Towards Crowdsourced Training of Large Neural Networks Using Decentralized Mixture-of-Experts Max Ryabinin, Anton Gusev