Lobacheva, Ekaterina

13 publications

ICMLW 2024. Gradient Dissent in Language Model Training and Saturation. Andrei Mircea, Ekaterina Lobacheva, Irina Rish.
NeurIPSW 2024. How Learning Rates Shape Neural Network Focus: Insights from Example Ranking. Ekaterina Lobacheva, Keller Jordan, Aristide Baratin, Nicolas Le Roux.
NeurIPSW 2024. Language Model Scaling Laws and Zero-Sum Learning. Andrei Mircea, Ekaterina Lobacheva, Supriyo Chakraborty, Nima Chitsazan, Irina Rish.
NeurIPS 2024. Where Do Large Learning Rates Lead Us? Ildus Sadrtdinov, Maxim Kodryan, Eduard Pokonechny, Ekaterina Lobacheva, Dmitry Vetrov.
ICMLW 2024. Where Do Large Learning Rates Lead Us? A Feature Learning Perspective. Ildus Sadrtdinov, Maxim Kodryan, Eduard Pokonechny, Ekaterina Lobacheva, Dmitry Vetrov.
NeurIPSW 2023. Large Learning Rates Improve Generalization: But How Large Are We Talking About? Ekaterina Lobacheva, Eduard Pokonechny, Maxim Kodryan, Dmitry Vetrov.
NeurIPS 2023. To Stay or Not to Stay in the Pre-Train Basin: Insights on Ensembling in Transfer Learning. Ildus Sadrtdinov, Dmitrii Pozdeev, Dmitry P. Vetrov, Ekaterina Lobacheva.
NeurIPS 2022. Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes. Maxim Kodryan, Ekaterina Lobacheva, Maksim Nakhodnov, Dmitry P. Vetrov.
NeurIPS 2021. On the Periodic Behavior of Neural Network Training with Batch Normalization and Weight Decay. Ekaterina Lobacheva, Maxim Kodryan, Nadezhda Chirkova, Andrey Malinin, Dmitry P. Vetrov.
NeurIPS 2020. On Power Laws in Deep Ensembles. Ekaterina Lobacheva, Nadezhda Chirkova, Maxim Kodryan, Dmitry P. Vetrov.
AAAI 2020. Structured Sparsification of Gated Recurrent Neural Networks. Ekaterina Lobacheva, Nadezhda Chirkova, Alexander Markovich, Dmitry P. Vetrov.
ICLR 2017. Semantic Embeddings for Program Behaviour Patterns. Alexander Chistyakov, Ekaterina Lobacheva, Arseny Kuznetsov, Alexey Romanenko.
ICCV 2015. Joint Optimization of Segmentation and Color Clustering. Ekaterina Lobacheva, Olga Veksler, Yuri Boykov.