Arora, Simran

18 publications

ICML 2025. KernelBench: Can LLMs Write Efficient GPU Kernels? Anne Ouyang, Simon Guo, Simran Arora, Alex L Zhang, William Hu, Christopher Re, Azalia Mirhoseini.
ICLRW 2025. KernelBench: Can LLMs Write Efficient GPU Kernels? Anne Ouyang, Simon Guo, Simran Arora, Alex L Zhang, William Hu, Christopher Re, Azalia Mirhoseini.
ICLR 2025. LoLCATs: On Low-Rank Linearizing of Large Language Models. Michael Zhang, Simran Arora, Rahul Chalamala, Benjamin Frederick Spector, Alan Wu, Krithik Ramesh, Aaryan Singhal, Christopher Re.
ICLR 2025. ThunderKittens: Simple, Fast, and $\textit{Adorable}$ Kernels. Benjamin Frederick Spector, Simran Arora, Aaryan Singhal, Arjun Parthasarathy, Daniel Y Fu, Christopher Re.
ICLR 2025. Towards Learning High-Precision Least Squares Algorithms with Sequence Models. Jerry Weihong Liu, Jessica Grogan, Owen M Dugan, Ashish Rao, Simran Arora, Atri Rudra, Christopher Re.
ICML 2024. Benchmarking and Building Long-Context Retrieval Models with LoCo and M2-BERT. Jon Saad-Falcon, Daniel Y Fu, Simran Arora, Neel Guha, Christopher Re.
ICMLW 2024. Can Transformers Solve Least Squares to High Precision? Jerry Weihong Liu, Jessica Grogan, Owen M Dugan, Simran Arora, Atri Rudra, Christopher Re.
ICMLW 2024. Just Read Twice: Closing the Recall Gap for Recurrent Language Models. Simran Arora, Aman Timalsina, Aaryan Singhal, Sabri Eyuboglu, Xinyi Zhao, Ashish Rao, Atri Rudra, Christopher Re.
ICMLW 2024. Low-Rank Linearization of Large Language Models. Michael Zhang, Aaryan Singhal, Benjamin Frederick Spector, Simran Arora, Christopher Re.
NeurIPS 2024. Optimistic Verifiable Training by Controlling Hardware Nondeterminism. Megha Srivastava, Simran Arora, Dan Boneh.
ICMLW 2024. Optimistic Verifiable Training by Controlling Hardware Nondeterminism. Megha Srivastava, Simran Arora, Dan Boneh.
ICML 2024. Simple Linear Attention Language Models Balance the Recall-Throughput Tradeoff. Simran Arora, Sabri Eyuboglu, Michael Zhang, Aman Timalsina, Silas Alberti, James Zou, Atri Rudra, Christopher Re.
ICMLW 2024. Simple Linear Attention Language Models Balance the Recall-Throughput Tradeoff. Simran Arora, Sabri Eyuboglu, Michael Zhang, Aman Timalsina, Silas Alberti, Dylan Zinsley, James Zou, Atri Rudra, Christopher Re.
ICMLW 2024. Towards Smaller Language Models via Layer Looping. Sabri Eyuboglu, Dylan Zinsley, Jon Saad-Falcon, Simran Arora, Atri Rudra, James Zou, Christopher Re.
ICLR 2024. Zoology: Measuring and Improving Recall in Efficient Language Models. Simran Arora, Sabri Eyuboglu, Aman Timalsina, Isys Johnson, Michael Poli, James Zou, Atri Rudra, Christopher Re.
ICLR 2023. Ask Me Anything: A Simple Strategy for Prompting Language Models. Simran Arora, Avanika Narayan, Mayee F Chen, Laurel Orr, Neel Guha, Kush Bhatia, Ines Chami, Christopher Re.
NeurIPS 2023. DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models. Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li.
NeurIPS 2023. Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture. Dan Fu, Simran Arora, Jessica Grogan, Isys Johnson, Evan Sabri Eyuboglu, Armin Thomas, Benjamin Spector, Michael Poli, Atri Rudra, Christopher Ré.