Tay, Yi

45 publications

CVPR 2024. On Scaling up a Multilingual Vision and Language Model. Xi Chen, Josip Djolonga, Piotr Padlewski, Basil Mustafa, Soravit Changpinyo, Jialin Wu, Carlos Riquelme Ruiz, Sebastian Goodman, Xiao Wang, Yi Tay, Siamak Shakeri, Mostafa Dehghani, Daniel Salz, Mario Lucic, Michael Tschannen, Arsha Nagrani, Hexiang Hu, Mandar Joshi, Bo Pang, Ceslee Montgomery, Paulina Pietrzyk, Marvin Ritter, AJ Piergiovanni, Matthias Minderer, Filip Pavetic, Austin Waters, Gang Li, Ibrahim Alabdulmohsin, Lucas Beyer, Julien Amelot, Kenton Lee, Andreas Peter Steiner, Yang Li, Daniel Keysers, Anurag Arnab, Yuanzhong Xu, Keran Rong, Alexander Kolesnikov, Mojtaba Seyedhosseini, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut
JMLR 2024. Scaling Instruction-Finetuned Language Models. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei
ICLR 2023. Language Models Are Multilingual Chain-of-Thought Reasoners. Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason Wei
JMLR 2023. PaLM: Scaling Language Modeling with Pathways. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, Noah Fiedel
TMLR 2023. PolyViT: Co-Training Vision Transformers on Images, Videos and Audio. Valerii Likhosherstov, Anurag Arnab, Krzysztof Marcin Choromanski, Mario Lucic, Yi Tay, Mostafa Dehghani
ICLR 2023. Recitation-Augmented Language Models. Zhiqing Sun, Xuezhi Wang, Yi Tay, Yiming Yang, Denny Zhou
NeurIPS 2023. Recommender Systems with Generative Retrieval. Shashank Rajput, Nikhil Mehta, Anima Singh, Raghunandan Hulikal Keshavan, Trung Vu, Lukasz Heldt, Lichan Hong, Yi Tay, Vinh Tran, Jonah Samost, Maciej Kula, Ed Chi, Maheswaran Sathiamoorthy
ICML 2023. Scaling Vision Transformers to 22 Billion Parameters. Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Peter Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, Rodolphe Jenatton, Lucas Beyer, Michael Tschannen, Anurag Arnab, Xiao Wang, Carlos Riquelme Ruiz, Matthias Minderer, Joan Puigcerver, Utku Evci, Manoj Kumar, Sjoerd Van Steenkiste, Gamaleldin Fathy Elsayed, Aravindh Mahendran, Fisher Yu, Avital Oliver, Fantine Huot, Jasmijn Bastings, Mark Collier, Alexey A. Gritsenko, Vighnesh Birodkar, Cristina Nader Vasconcelos, Yi Tay, Thomas Mensink, Alexander Kolesnikov, Filip Pavetic, Dustin Tran, Thomas Kipf, Mario Lucic, Xiaohua Zhai, Daniel Keysers, Jeremiah J. Harmsen, Neil Houlsby
ICLR 2023. Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints. Aran Komatsuzaki, Joan Puigcerver, James Lee-Thorp, Carlos Riquelme Ruiz, Basil Mustafa, Joshua Ainslie, Yi Tay, Mostafa Dehghani, Neil Houlsby
ICML 2023. The FLAN Collection: Designing Data and Methods for Effective Instruction Tuning. Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, Adam Roberts
ICLR 2023. UL2: Unifying Language Learning Paradigms. Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Jason Wei, Xuezhi Wang, Hyung Won Chung, Dara Bahri, Tal Schuster, Steven Zheng, Denny Zhou, Neil Houlsby, Donald Metzler
ICLR 2023. UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining. Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant
ICLR 2022. Charformer: Fast Character Transformers via Gradient-Based Subword Tokenization. Yi Tay, Vinh Q. Tran, Sebastian Ruder, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, Donald Metzler
NeurIPS 2022. Confident Adaptive Language Modeling. Tal Schuster, Adam Fisch, Jai Gupta, Mostafa Dehghani, Dara Bahri, Vinh Tran, Yi Tay, Donald Metzler
TMLR 2022. Emergent Abilities of Large Language Models. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus
ICLR 2022. ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning. Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, Donald Metzler
ICML 2022. HyperPrompt: Prompt-Based Task-Conditioning of Transformers. Yun He, Steven Zheng, Yi Tay, Jai Gupta, Yu Du, Vamsi Aribandi, Zhe Zhao, Yaguang Li, Zhao Chen, Donald Metzler, Heng-Tze Cheng, Ed H. Chi
ICLR 2022. Scale Efficiently: Insights from Pretraining and Finetuning Transformers. Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler
ICLR 2022. Scarf: Self-Supervised Contrastive Learning Using Random Feature Corruption. Dara Bahri, Heinrich Jiang, Yi Tay, Donald Metzler
CVPR 2022. Scenic: A JAX Library for Computer Vision Research and Beyond. Mostafa Dehghani, Alexey Gritsenko, Anurag Arnab, Matthias Minderer, Yi Tay
ICLR 2022. The Efficiency Misnomer. Mostafa Dehghani, Yi Tay, Anurag Arnab, Lucas Beyer, Ashish Vaswani
NeurIPS 2022. Transformer Memory as a Differentiable Search Index. Yi Tay, Vinh Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, Tal Schuster, William W. Cohen, Donald Metzler
ICLR 2021. Are Neural Rankers Still Outperformed by Gradient Boosted Decision Trees? Zhen Qin, Le Yan, Honglei Zhuang, Yi Tay, Rama Kumar Pasumarthi, Xuanhui Wang, Michael Bendersky, Marc Najork
ICLR 2021. Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with $1/n$ Parameters. Aston Zhang, Yi Tay, Shuai Zhang, Alvin Chan, Anh Tuan Luu, Siu Hui, Jie Fu
ICLR 2021. HyperGrid Transformers: Towards a Single Model for Multiple Tasks. Yi Tay, Zhe Zhao, Dara Bahri, Donald Metzler, Da-Cheng Juan
ICLR 2021. Long Range Arena: A Benchmark for Efficient Transformers. Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, Donald Metzler
ICML 2021. OmniNet: Omnidirectional Representations from Transformers. Yi Tay, Mostafa Dehghani, Vamsi Aribandi, Jai Gupta, Philip M. Pham, Zhen Qin, Dara Bahri, Da-Cheng Juan, Donald Metzler
NeurIPS 2021. Self-Instantiated Recurrent Units with Dynamic Soft Recursion. Aston Zhang, Yi Tay, Yikang Shen, Alvin Chan, Shuai Zhang
ICML 2021. Synthesizer: Rethinking Self-Attention for Transformer Models. Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao, Che Zheng
ICLR 2020. Jacobian Adversarially Regularized Networks for Robustness. Alvin Chan, Yi Tay, Yew Soon Ong, Jie Fu
ICLR 2020. Metagross: Meta Gated Recursive Controller Units for Sequence Modeling. Yi Tay, Yikang Shen, Alvin Chan, Yew Soon Ong
AAAI 2020. Multi-Level Head-Wise Match and Aggregation in Transformer for Textual Sequence Matching. Shuohang Wang, Yunshi Lan, Yi Tay, Jing Jiang, Jingjing Liu
ICML 2020. Sparse Sinkhorn Attention. Yi Tay, Dara Bahri, Liu Yang, Donald Metzler, Da-Cheng Juan
NeurIPS 2019. Compositional De-Attention Networks. Yi Tay, Anh Tuan Luu, Aston Zhang, Shuohang Wang, Siu Cheung Hui
IJCAI 2019. DeepRec: An Open-Source Toolkit for Deep Learning Based Recommendation. Shuai Zhang, Yi Tay, Lina Yao, Bin Wu, Aixin Sun
AAAI 2019. Holographic Factorization Machines for Recommendation. Yi Tay, Shuai Zhang, Anh Tuan Luu, Siu Cheung Hui, Lina Yao, Tran Dang Quang Vinh
IJCAI 2019. Quaternion Collaborative Filtering for Recommendation. Shuai Zhang, Lina Yao, Lucas Vinh Tran, Aston Zhang, Yi Tay
NeurIPS 2019. Quaternion Knowledge Graph Embeddings. Shuai Zhang, Yi Tay, Lina Yao, Qi Liu
AAAI 2018. Cross Temporal Recurrent Networks for Ranking Question Answer Pairs. Yi Tay, Luu Anh Tuan, Siu Cheung Hui
NeurIPS 2018. Densely Connected Attention Propagation for Reading Comprehension. Yi Tay, Anh Tuan Luu, Siu Cheung Hui, Jian Su
IJCAI 2018. Hermitian Co-Attention Networks for Text Matching in Asymmetrical Domains. Yi Tay, Anh Tuan Luu, Siu Cheung Hui
AAAI 2018. Learning to Attend via Word-Aspect Associative Fusion for Aspect-Based Sentiment Analysis. Yi Tay, Luu Anh Tuan, Siu Cheung Hui
NeurIPS 2018. Recurrently Controlled Recurrent Networks. Yi Tay, Anh Tuan Luu, Siu Cheung Hui
AAAI 2018. SkipFlow: Incorporating Neural Coherence Features for End-to-End Automatic Text Scoring. Yi Tay, Minh C. Phan, Luu Anh Tuan, Siu Cheung Hui
AAAI 2017. Non-Parametric Estimation of Multiple Embeddings for Link Prediction on Dynamic Knowledge Graphs. Yi Tay, Anh Tuan Luu, Siu Cheung Hui