Hahn, Michael

9 publications

TMLR 2026. When Does LoRA Reuse Work? Theoretical Limits and Mechanisms for Recycling LoRAs Without Data Access. Mei-Yen Chen, Thi Thu Uyen Hoang, Michael Hahn, M. Saquib Sarfraz.
ICLR 2025. A Formal Framework for Understanding Length Generalization in Transformers. Xinting Huang, Andy Yang, Satwik Bhattamishra, Yash Sarrof, Andreas Krebs, Hattie Zhou, Preetum Nakkiran, Michael Hahn.
NeurIPS 2025. Born a Transformer -- Always a Transformer? On the Effect of Pretraining on Architectural Abilities. Mayank Jobanputra, Yana Veitsman, Yash Sarrof, Aleksandra Bakalova, Vera Demberg, Ellie Pavlick, Michael Hahn.
ICLRW 2025. Emergent Stack Representations in Modeling Counter Languages Using Transformers. Utkarsh Tiwari, Aviral Gupta, Michael Hahn.
ICML 2025. Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention Transformers. Alireza Amiri Bavandpour, Xinting Huang, Mark Rofin, Michael Hahn.
NeurIPS 2024. InversionView: A General-Purpose Method for Reading Information from Neural Activations. Xinting Huang, Madhur Panwar, Navin Goyal, Michael Hahn.
ICMLW 2024. InversionView: A General-Purpose Method for Reading Information from Neural Activations. Xinting Huang, Madhur Panwar, Navin Goyal, Michael Hahn.
NeurIPS 2024. Separations in the Representational Capabilities of Transformers and Recurrent Architectures. Satwik Bhattamishra, Michael Hahn, Phil Blunsom, Varun Kanade.
NeurIPS 2024. The Expressive Capacity of State Space Models: A Formal Language Perspective. Yash Sarrof, Yana Veitsman, Michael Hahn.