Dao, Tri

47 publications

ICLRW 2025 HybriDNA: A Hybrid Transformer-Mamba2 Long-Range DNA Language Model Mingqian Ma, Guoqing Liu, Chuan Cao, Pan Deng, Tri Dao, Albert Gu, Peiran Jin, Zhao Yang, Yingce Xia, Renqian Luo, Pipi Hu, Zun Wang, Yuan-Jyue Chen, Haiguang Liu, Tao Qin
ICML 2025 Ladder-Residual: Parallelism-Aware Architecture for Accelerating Large Model Inference with Communication Overlapping Muru Zhang, Mayank Mishra, Zhongzhu Zhou, William Brandon, Jue Wang, Yoon Kim, Jonathan Ragan-Kelley, Shuaiwen Leon Song, Ben Athiwaratkun, Tri Dao
ICCV 2025 Long-Context State-Space Video World Models Ryan Po, Yotam Nitzan, Richard Zhang, Berlin Chen, Tri Dao, Eli Shechtman, Gordon Wetzstein, Xun Huang
ICLRW 2025 Thinking Slow, Fast: Scaling Inference Compute with Distilled Reasoners Daniele Paliotta, Junxiong Wang, Matteo Pagliardini, Kevin Li, Aviv Bick, Albert Gu, François Fleuret, Tri Dao
NeurIPS 2024 BitDelta: Your Fine-Tune May Only Be Worth One Bit James Liu, Guangxuan Xiao, Kai Li, Jason D. Lee, Song Han, Tri Dao, Tianle Cai
ICML 2024 Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling Yair Schiff, Chia Hsiang Kao, Aaron Gokaslan, Tri Dao, Albert Gu, Volodymyr Kuleshov
ICMLW 2024 Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling Yair Schiff, Chia Hsiang Kao, Aaron Gokaslan, Tri Dao, Albert Gu, Volodymyr Kuleshov
ICLR 2024 FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning Tri Dao
NeurIPS 2024 FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-Precision Jay Shah, Ganesh Bikshandi, Ying Zhang, Vijay Thakkar, Pradeep Ramani, Tri Dao
NeurIPS 2024 Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers Sukjun Hwang, Aakash Lahoti, Ratish Puduppully, Tri Dao, Albert Gu
ICML 2024 Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, Tri Dao
NeurIPS 2024 RedPajama: An Open Dataset for Training Large Language Models Maurice Weber, Daniel Y. Fu, Quentin Anthony, Yonatan Oren, Shane Adams, Anton Alexandrov, Xiaozhong Lyu, Huu Nguyen, Xiaozhe Yao, Virginia Adams, Ben Athiwaratkun, Rahul Chalamala, Kezhen Chen, Max Ryabinin, Tri Dao, Percy Liang, Christopher Ré, Irina Rish, Ce Zhang
NeurIPS 2024 The Mamba in the Llama: Distilling and Accelerating Hybrid Models Junxiong Wang, Daniele Paliotta, Avner May, Alexander M. Rush, Tri Dao
ICMLW 2024 The Mamba in the Llama: Distilling and Accelerating Hybrid Models Junxiong Wang, Daniele Paliotta, Avner May, Alexander M. Rush, Tri Dao
ICML 2024 Transformers Are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality Tri Dao, Albert Gu
ICML 2023 Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Ré, Beidi Chen
ICLR 2023 Effectively Modeling Time Series with Simple Discrete State Spaces Michael Zhang, Khaled Kamal Saab, Michael Poli, Tri Dao, Karan Goel, Christopher Ré
ICLR 2023 Hungry Hungry Hippos: Towards Language Modeling with State Space Models Daniel Y. Fu, Tri Dao, Khaled Kamal Saab, Armin W. Thomas, Atri Rudra, Christopher Ré
ICML 2023 Hyena Hierarchy: Towards Larger Convolutional Language Models Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y. Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, Christopher Ré
ICML 2023 Simple Hardware-Efficient Long Convolutions for Sequence Modeling Daniel Y. Fu, Elliot L. Epstein, Eric Nguyen, Armin W. Thomas, Michael Zhang, Tri Dao, Atri Rudra, Christopher Ré
ICLRW 2023 Simple Hardware-Efficient Long Convolutions for Sequence Modeling Daniel Y. Fu, Elliot L. Epstein, Eric Nguyen, Armin W. Thomas, Michael Zhang, Tri Dao, Atri Rudra, Christopher Ré
TMLR 2023 StarCoder: May the Source Be with You! Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Joel Lamy-Poirier, Joao Monteiro, Nicolas Gontier, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Ben Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy, Jason T. Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Urvashi Bhattacharyya, Wenhao Yu, Sasha Luccioni, Paulo Villegas, Fedor Zhdanov, Tony Lee, Nadav Timor, Jennifer Ding, Claire S. Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de Vries
ICML 2022 ButterflyFlow: Building Invertible Layers with Butterfly Matrices Chenlin Meng, Linqi Zhou, Kristy Choi, Tri Dao, Stefano Ermon
NeurIPS 2022 Decentralized Training of Foundation Models in Heterogeneous Environments Binhang Yuan, Yongjun He, Jared Davis, Tianyi Zhang, Tri Dao, Beidi Chen, Percy Liang, Christopher Ré, Ce Zhang
NeurIPS 2022 Fine-Tuning Language Models over Slow Networks Using Activation Quantization with Guarantees Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Ré, Ce Zhang
NeurIPS 2022 FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, Christopher Ré
ICML 2022 Monarch: Expressive Structured Matrices for Efficient and Accurate Training Tri Dao, Beidi Chen, Nimit S. Sohoni, Arjun Desai, Michael Poli, Jessica Grogan, Alexander Liu, Aniruddh Rao, Atri Rudra, Christopher Ré
ICLR 2022 Pixelated Butterfly: Simple and Efficient Sparse Training for Neural Network Models Beidi Chen, Tri Dao, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, Christopher Ré
NeurIPS 2022 S4ND: Modeling Images and Videos as Multidimensional Signals with State Spaces Eric Nguyen, Karan Goel, Albert Gu, Gordon Downs, Preey Shah, Tri Dao, Stephen Baccus, Christopher Ré
NeurIPS 2022 Transform Once: Efficient Operator Learning in Frequency Domain Michael Poli, Stefano Massaroli, Federico Berto, Jinkyoo Park, Tri Dao, Christopher Ré, Stefano Ermon
ICMLW 2022 Transform Once: Efficient Operator Learning in Frequency Domain Michael Poli, Stefano Massaroli, Federico Berto, Jinkyoo Park, Tri Dao, Christopher Ré, Stefano Ermon
ICML 2021 Catformer: Designing Stable Transformers via Sensitivity Analysis Jared Q. Davis, Albert Gu, Krzysztof Choromanski, Tri Dao, Christopher Ré, Chelsea Finn, Percy Liang
NeurIPS 2021 Combining Recurrent, Convolutional, and Continuous-Time Models with Linear State Space Layers Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, Christopher Ré
ICLR 2021 Knowledge Distillation as Semiparametric Inference Tri Dao, Govinda M Kamath, Vasilis Syrgkanis, Lester Mackey
ICLR 2021 MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training Beidi Chen, Zichang Liu, Binghui Peng, Zhaozhuo Xu, Jonathan Lingjie Li, Tri Dao, Zhao Song, Anshumali Shrivastava, Christopher Ré
NeurIPS 2021 Rethinking Neural Operations for Diverse Tasks Nicholas Roberts, Mikhail Khodak, Tri Dao, Liam Li, Christopher Ré, Ameet Talwalkar
NeurIPS 2021 Scatterbrain: Unifying Sparse and Low-Rank Attention Beidi Chen, Tri Dao, Eric Winsor, Zhao Song, Atri Rudra, Christopher Ré
NeurIPS 2020 HiPPO: Recurrent Memory with Optimal Polynomial Projections Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, Christopher Ré
ICLR 2020 Kaleidoscope: An Efficient, Learnable Representation for All Structured Linear Maps Tri Dao, Nimit Sohoni, Albert Gu, Matthew Eichhorn, Amit Blonder, Megan Leszczynski, Atri Rudra, Christopher Ré
ICML 2019 A Kernel Theory of Modern Data Augmentation Tri Dao, Albert Gu, Alexander Ratner, Virginia Smith, Chris De Sa, Christopher Ré
UAI 2019 Adaptive Hashing for Model Counting Jonathan Kuck, Tri Dao, Shengjia Zhao, Burak Bartan, Ashish Sabharwal, Stefano Ermon
NeurIPS 2019 Approximating the Permanent by Sampling from Adaptive Partitions Jonathan Kuck, Tri Dao, Hamid Rezatofighi, Ashish Sabharwal, Stefano Ermon
ICML 2019 Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations Tri Dao, Albert Gu, Matthew Eichhorn, Atri Rudra, Christopher Ré
AISTATS 2019 Low-Precision Random Fourier Features for Memory-Constrained Kernel Approximation Jian Zhang, Avner May, Tri Dao, Christopher Ré
NeurIPS 2019 On the Downstream Performance of Compressed Word Embeddings Avner May, Jian Zhang, Tri Dao, Christopher Ré
NeurIPS 2018 Learning Compressed Transforms with Low Displacement Rank Anna Thomas, Albert Gu, Tri Dao, Atri Rudra, Christopher Ré
NeurIPS 2017 Gaussian Quadrature for Kernel Features Tri Dao, Christopher M. De Sa, Christopher Ré