Jobanputra, Mayank

2 publications

TMLR 2025. B-Cos LM: Efficiently Transforming Pre-Trained Language Models for Improved Explainability. Yifan Wang, Sukrut Rao, Ji-Ung Lee, Mayank Jobanputra, Vera Demberg
NeurIPS 2025. Born a Transformer -- Always a Transformer? On the Effect of Pretraining on Architectural Abilities. Mayank Jobanputra, Yana Veitsman, Yash Sarrof, Aleksandra Bakalova, Vera Demberg, Ellie Pavlick, Michael Hahn