Yang, Yinfei

35 publications

NeurIPS 2025 CAR-Flow: Condition-Aware Reparameterization Aligns Source and Target for Better Flow Matching Chen Chen, Pengsheng Guo, Liangchen Song, Jiasen Lu, Rui Qian, Tsu-Jui Fu, Xinze Wang, Wei Liu, Yinfei Yang, Alex Schwing
ICML 2025 Contrastive Localized Language-Image Pre-Training Hong-You Chen, Zhengfeng Lai, Haotian Zhang, Xinze Wang, Marcin Eichner, Keen You, Meng Cao, Bowen Zhang, Yinfei Yang, Zhe Gan
ICLR 2025 Ferret-UI 2: Mastering Universal User Interface Understanding Across Platforms Zhangheng Li, Keen You, Haotian Zhang, Di Feng, Harsh Agrawal, Xiujun Li, Mohana Prasad Sathya Moorthy, Jeffrey Nichols, Yinfei Yang, Zhe Gan
ICLR 2025 MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs Yusu Qian, Hanrong Ye, Jean-Philippe Fauconnier, Peter Grasch, Yinfei Yang, Zhe Gan
ICCV 2025 MM-Spatial: Exploring 3D Spatial Understanding in Multimodal LLMs Erik Daxberger, Nina Wenzel, David Griffiths, Haiming Gang, Justin Lazarow, Gefen Kohavi, Kai Kang, Marcin Eichner, Yinfei Yang, Afshin Dehghan, Peter Grasch
ICLR 2025 MM1.5: Methods, Analysis & Insights from Multimodal LLM Fine-Tuning Haotian Zhang, Mingfei Gao, Zhe Gan, Philipp Dufter, Nina Wenzel, Forrest Huang, Dhruti Shah, Xianzhi Du, Bowen Zhang, Yanghao Li, Sam Dodge, Keen You, Zhen Yang, Aleksei Timofeev, Mingze Xu, Hong-You Chen, Jean-Philippe Fauconnier, Zhengfeng Lai, Haoxuan You, Zirui Wang, Afshin Dehghan, Peter Grasch, Yinfei Yang
ICLR 2025 MM-Ego: Towards Building Egocentric Multimodal LLMs for Video QA Hanrong Ye, Haotian Zhang, Erik Daxberger, Lin Chen, Zongyu Lin, Yanghao Li, Bowen Zhang, Haoxuan You, Dan Xu, Zhe Gan, Jiasen Lu, Yinfei Yang
CVPR 2025 Multimodal Autoregressive Pre-Training of Large Vision Encoders Enrico Fini, Mustafa Shukor, Xiujun Li, Philipp Dufter, Michal Klein, David Haldimann, Sai Aitharaju, Victor G. Turrisi da Costa, Louis Béthune, Zhe Gan, Alexander Toshev, Marcin Eichner, Moin Nabi, Yinfei Yang, Joshua Susskind, Alaaeldin El-Nouby
ICLR 2025 Revisit Large-Scale Image-Caption Data in Pre-Training Multimodal Foundation Models Zhengfeng Lai, Vasileios Saveris, Chen Chen, Hong-You Chen, Haotian Zhang, Bowen Zhang, Wenze Hu, Juan Lao Tebar, Zhe Gan, Peter Grasch, Meng Cao, Yinfei Yang
ICCV 2025 STIV: Scalable Text and Image Conditioned Video Generation Zongyu Lin, Wei Liu, Chen Chen, Jiasen Lu, Wenze Hu, Tsu-Jui Fu, Jesse Allardice, Zhengfeng Lai, Liangchen Song, Bowen Zhang, Cha Chen, Yiran Fei, Lezhi Li, Yinfei Yang, Yizhou Sun, Kai-Wei Chang
ICLRW 2025 STIV: Scalable Text and Image Conditioned Video Generation Zongyu Lin, Wei Liu, Chen Chen, Jiasen Lu, Wenze Hu, Tsu-Jui Fu, Jesse Allardice, Zhengfeng Lai, Liangchen Song, Bowen Zhang, Cha Chen, Yiran Fei, Yifan Jiang, Lezhi Li, Yizhou Sun, Kai-Wei Chang, Yinfei Yang
NeurIPS 2025 UniGen: Enhanced Training & Test-Time Strategies for Unified Multimodal Understanding and Generation Rui Tian, Mingfei Gao, Mingze Xu, Jiaming Hu, Jiasen Lu, Zuxuan Wu, Yinfei Yang, Afshin Dehghan
ICCV 2025 UniVG: A Generalist Diffusion Model for Unified Image Generation and Editing Tsu-Jui Fu, Yusu Qian, Chen Chen, Wenze Hu, Zhe Gan, Yinfei Yang
ICLR 2024 Compressing LLMs: The Truth Is Rarely Pure and Never Simple Ajay Kumar Jaiswal, Zhe Gan, Xianzhi Du, Bowen Zhang, Zhangyang Wang, Yinfei Yang
WACV 2024 Empowering Unsupervised Domain Adaptation with Large-Scale Pre-Trained Vision-Language Models Zhengfeng Lai, Haoping Bai, Haotian Zhang, Xianzhi Du, Jiulong Shan, Yinfei Yang, Chen-Nee Chuah, Meng Cao
ECCV 2024 Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs Keen You, Haotian Zhang, Eldon Schoop, Floris Weers, Amanda Swearngin, Jeff Nichols, Yinfei Yang, Zhe Gan
ICLR 2024 Ferret: Refer and Ground Anything Anywhere at Any Granularity Haoxuan You, Haotian Zhang, Zhe Gan, Xianzhi Du, Bowen Zhang, Zirui Wang, Liangliang Cao, Shih-Fu Chang, Yinfei Yang
ICLR 2024 Guiding Instruction-Based Image Editing via Multimodal Large Language Models Tsu-Jui Fu, Wenze Hu, Xianzhi Du, William Yang Wang, Yinfei Yang, Zhe Gan
NeurIPSW 2024 How Easy Is It to Fool Your Multimodal LLMs? An Empirical Analysis on Deceptive Prompts Yusu Qian, Haotian Zhang, Yinfei Yang, Zhe Gan
ECCV 2024 MM1: Methods, Analysis & Insights from Multimodal LLM Pre-Training Brandon McKinzie, Zhe Gan, Jean-Philippe Fauconnier, Samuel Dodge, Bowen Zhang, Philipp Dufter, Dhruti Shah, Futang Peng, Anton Belyi, Max A Schwarzer, Hongyu Hè, Xianzhi Du, Haotian Zhang, Karanjeet Singh, Doug Kang, Tom Gunter, Xiang Kong, Aonan Zhang, Jianyu Wang, Chong Wang, Nan Du, Tao Lei, Sam Wiseman, Mark Lee, Zirui Wang, Ruoming Pang, Peter Grasch, Alexander Toshev, Yinfei Yang
ICLR 2024 MOFI: Learning Image Representations from Noisy Entity Annotated Images Wentao Wu, Aleksei Timofeev, Chen Chen, Bowen Zhang, Kun Duan, Shuangning Liu, Yantao Zheng, Jonathon Shlens, Xianzhi Du, Yinfei Yang
ECCV 2024 VeCLIP: Improving CLIP Training via Visual-Enriched Captions Zhengfeng Lai, Haotian Zhang, Bowen Zhang, Wentao Wu, Haoping Bai, Aleksei Timofeev, Xianzhi Du, Zhe Gan, Jiulong Shan, Chen-Nee Chuah, Yinfei Yang, Meng Cao
CVPR 2023 A New Path: Scaling Vision-and-Language Navigation with Synthetic Instructions and Imitation Learning Aishwarya Kamath, Peter Anderson, Su Wang, Jing Yu Koh, Alexander Ku, Austin Waters, Yinfei Yang, Jason Baldridge, Zarana Parekh
CVPR 2023 Masked Autoencoding Does Not Help Natural Language Supervision at Scale Floris Weers, Vaishaal Shankar, Angelos Katharopoulos, Yinfei Yang, Tom Gunter
ICCV 2023 Perceptual Grouping in Contrastive Vision-Language Models Kanchana Ranasinghe, Brandon McKinzie, Sachin Ravi, Yinfei Yang, Alexander Toshev, Jonathon Shlens
ICML 2023 Robustness in Multimodal Learning Under Train-Test Modality Mismatch Brandon McKinzie, Vaishaal Shankar, Joseph Yitan Cheng, Yinfei Yang, Jonathon Shlens, Alexander T Toshev
AAAI 2023 Simple and Effective Synthesis of Indoor 3D Scenes Jing Yu Koh, Harsh Agrawal, Dhruv Batra, Richard Tucker, Austin Waters, Honglak Lee, Yinfei Yang, Jason Baldridge, Peter Anderson
TMLR 2022 Scaling Autoregressive Models for Content-Rich Text-to-Image Generation Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, Ben Hutchinson, Wei Han, Zarana Parekh, Xin Li, Han Zhang, Jason Baldridge, Yonghui Wu
CVPR 2021 Cross-Modal Contrastive Learning for Text-to-Image Generation Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, Yinfei Yang
ICCV 2021 Pathdreamer: A World Model for Indoor Navigation Jing Yu Koh, Honglak Lee, Yinfei Yang, Jason Baldridge, Peter Anderson
ICML 2021 Scaling up Visual and Vision-Language Representation Learning with Noisy Text Supervision Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, Tom Duerig
WACV 2021 Text-to-Image Generation Grounded by Fine-Grained User Attention Jing Yu Koh, Jason Baldridge, Honglak Lee, Yinfei Yang
IJCAI 2019 Improving Multilingual Sentence Embedding Using Bi-Directional Dual Encoder with Additive Margin Softmax Yinfei Yang, Gustavo Hernández Ábrego, Steve Yuan, Mandy Guo, Qinlan Shen, Daniel Cer, Yun-Hsuan Sung, Brian Strope, Ray Kurzweil
JAIR 2017 Combining Lexical and Syntactic Features for Detecting Content-Dense Texts in News Yinfei Yang, Ani Nenkova
AAAI 2014 Detecting Information-Dense Texts in Multiple News Domains Yinfei Yang, Ani Nenkova