Arora, Aryaman

5 publications

ICML 2025. AxBench: Steering LLMs? Even Simple Baselines Outperform Sparse Autoencoders. Zhengxuan Wu, Aryaman Arora, Atticus Geiger, Zheng Wang, Jing Huang, Dan Jurafsky, Christopher D. Manning, Christopher Potts.

JMLR 2025. Causal Abstraction: A Theoretical Foundation for Mechanistic Interpretability. Atticus Geiger, Duligur Ibeling, Amir Zur, Maheep Chaudhary, Sonakshi Chauhan, Jing Huang, Aryaman Arora, Zhengxuan Wu, Noah Goodman, Christopher Potts, Thomas Icard.

NeurIPS 2025. Improved Representation Steering for Language Models. Zhengxuan Wu, Qinan Yu, Aryaman Arora, Christopher D. Manning, Christopher Potts.

NeurIPS 2024. ReFT: Representation Finetuning for Language Models. Zhengxuan Wu, Aryaman Arora, Zheng Wang, Atticus Geiger, Dan Jurafsky, Christopher D. Manning, Christopher Potts.

ICCVW 2023. Towards Vision-Language Mechanistic Interpretability: A Causal Tracing Tool for BLIP. Vedant Palit, Rohan Pandey, Aryaman Arora, Paul Pu Liang.