Vladymyrov, Max

21 publications

ICLR 2025: How New Data Permeates LLM Knowledge and How to Dilute It. Chen Sun, Renat Aksitov, Andrey Zhmoginov, Nolan Andrew Miller, Max Vladymyrov, Ulrich Rueckert, Been Kim, Mark Sandler
TMLR 2024: Continual HyperTransformer: A Meta-Learner for Continual Few-Shot Learning. Max Vladymyrov, Andrey Zhmoginov, Mark Sandler
ICMLW 2024: Efficient Linear System Solver with Transformers. Max Vladymyrov, Johannes von Oswald, Nolan Andrew Miller, Mark Sandler
NeurIPSW 2024: How New Data Pollutes LLM Knowledge and How to Dilute It. Chen Sun, Renat Aksitov, Andrey Zhmoginov, Nolan Andrew Miller, Max Vladymyrov, Ulrich Rueckert, Been Kim, Mark Sandler
ICMLW 2024: Learning Fast and Slow: Representations for In-Context Weight Modulation. Andrey Zhmoginov, Jihwan Lee, Max Vladymyrov, Mark Sandler
ICMLW 2024: Learning and Unlearning of Fabricated Knowledge in Language Models. Chen Sun, Nolan Andrew Miller, Andrey Zhmoginov, Max Vladymyrov, Mark Sandler
NeurIPS 2024: Linear Transformers Are Versatile In-Context Learners. Max Vladymyrov, Johannes von Oswald, Mark Sandler, Rong Ge
ICMLW 2024: Linear Transformers Are Versatile In-Context Learners. Max Vladymyrov, Johannes von Oswald, Mark Sandler, Rong Ge
CVPR 2023: Decentralized Learning with Multi-Headed Distillation. Andrey Zhmoginov, Mark Sandler, Nolan Miller, Gus Kristiansen, Max Vladymyrov
ICML 2023: Transformers Learn In-Context by Gradient Descent. Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, Max Vladymyrov
CVPR 2022: Fine-Tuning Image Transformers Using Learnable Memory. Mark Sandler, Andrey Zhmoginov, Max Vladymyrov, Andrew Jackson
ICLR 2022: GradMax: Growing Neural Networks Using Gradient Information. Utku Evci, Bart van Merrienboer, Thomas Unterthiner, Fabian Pedregosa, Max Vladymyrov
JMLR 2022: Underspecification Presents Challenges for Credibility in Modern Machine Learning. Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yian Ma, Cory McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vladymyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, D. Sculley
ICML 2021: Meta-Learning Bidirectional Update Rules. Mark Sandler, Max Vladymyrov, Andrey Zhmoginov, Nolan Miller, Tom Madams, Andrew Jackson, Blaise Agüera y Arcas
NeurIPS 2019: No Pressure! Addressing the Problem of Local Minima in Manifold Learning Algorithms. Max Vladymyrov
ICML 2016: The Variational Nyström Method for Large-Scale Spectral Problems. Max Vladymyrov, Miguel Á. Carreira-Perpiñán
NeurIPS 2015: A Fast, Universal Algorithm to Learn Parametric Nonlinear Embeddings. Miguel Á. Carreira-Perpiñán, Max Vladymyrov
AISTATS 2014: Linear-Time Training of Nonlinear Low-Dimensional Embeddings. Max Vladymyrov, Miguel Á. Carreira-Perpiñán
ICML 2013: Entropic Affinities: Properties and Efficient Numerical Computation. Max Vladymyrov, Miguel Á. Carreira-Perpiñán
ECML-PKDD 2013: Locally Linear Landmarks for Large-Scale Manifold Learning. Max Vladymyrov, Miguel Á. Carreira-Perpiñán
ICML 2012: Fast Training of Nonlinear Embedding Algorithms. Max Vladymyrov, Miguel Á. Carreira-Perpiñán