Pouransari, Hadi

23 publications

NeurIPS 2025. Datasets, Documents, and Repetitions: The Practicalities of Unequal Data Quality. Alex Fang, Hadi Pouransari, Matt Jordan, Alexander T Toshev, Vaishaal Shankar, Ludwig Schmidt, Tom Gunter.
CVPR 2025. FastVLM: Efficient Vision Encoding for Vision Language Models. Pavan Kumar Anasosalu Vasu, Fartash Faghri, Chun-Liang Li, Cem Koc, Nate True, Albert Antony, Gokula Santhanam, James Gabriel, Peter Grasch, Oncel Tuzel, Hadi Pouransari.
ICLRW 2025. FocalLens: Instruction Tuning Enables Zero-Shot Conditional Image Representations. Cheng-Yu Hsieh, Pavan Kumar Anasosalu Vasu, Fartash Faghri, Raviteja Vemulapalli, Chun-Liang Li, Ranjay Krishna, Oncel Tuzel, Hadi Pouransari.
TMLR 2025. MobileCLIP2: Improving Multi-Modal Reinforced Training. Fartash Faghri, Pavan Kumar Anasosalu Vasu, Cem Koc, Vaishaal Shankar, Alexander T Toshev, Oncel Tuzel, Hadi Pouransari.
ICML 2025. Proxy-FDA: Proxy-Based Feature Distribution Alignment for Fine-Tuning Vision Foundation Models Without Forgetting. Chen Huang, Skyler Seto, Hadi Pouransari, Mehrdad Farajtabar, Raviteja Vemulapalli, Fartash Faghri, Oncel Tuzel, Barry-John Theobald, Joshua M. Susskind.
TMLR 2024. CLIP Meets Model Zoo Experts: Pseudo-Supervision for Visual Enhancement. Mohammadreza Salehi, Mehrdad Farajtabar, Maxwell Horton, Fartash Faghri, Hadi Pouransari, Raviteja Vemulapalli, Oncel Tuzel, Ali Farhadi, Mohammad Rastegari, Sachin Mehta.
NeurIPS 2024. DataComp-LM: In Search of the Next Generation of Training Sets for Language Models. Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Gadre, Hritik Bansal, Etash Guha, Sedrick Keh, Kushal Arora, Saurabh Garg, Rui Xin, Niklas Muennighoff, Reinhard Heckel, Jean Mercat, Mayee Chen, Suchin Gururangan, Mitchell Wortsman, Alon Albalak, Yonatan Bitton, Marianna Nezhurina, Amro Abbas, Cheng-Yu Hsieh, Dhruba Ghosh, Josh Gardner, Maciej Kilian, Hanlin Zhang, Rulin Shao, Sarah Pratt, Sunny Sanyal, Gabriel Ilharco, Giannis Daras, Kalyani Marathe, Aaron Gokaslan, Jieyu Zhang, Khyathi Chandu, Thao Nguyen, Igor Vasiljevic, Sham Kakade, Shuran Song, Sujay Sanghavi, Fartash Faghri, Sewoong Oh, Luke Zettlemoyer, Kyle Lo, Alaaeldin El-Nouby, Hadi Pouransari, Alexander Toshev, Stephanie Wang, Dirk Groeneveld, Luca Soldaini, Pang Wei Koh, Jenia Jitsev, Thomas Kollar, Alexandros G. Dimakis, Yair Carmon, Achal Dave, Ludwig Schmidt, Vaishaal Shankar.
NeurIPS 2024. Dataset Decomposition: Faster LLM Training with Variable Sequence Length Curriculum. Hadi Pouransari, Chun-Liang Li, Jen-Hao Rick Chang, Pavan Kumar Anasosalu Vasu, Cem Koc, Vaishaal Shankar, Oncel Tuzel.
ICML 2024. Knowledge Transfer from Vision Foundation Models for Efficient Training of Small Task-Specific Models. Raviteja Vemulapalli, Hadi Pouransari, Fartash Faghri, Sachin Mehta, Mehrdad Farajtabar, Mohammad Rastegari, Oncel Tuzel.
CVPR 2024. MobileCLIP: Fast Image-Text Models Through Multi-Modal Reinforced Training. Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
NeurIPSW 2024. Promoting Cross-Modal Representations to Improve Multimodal Foundation Models for Physiological Signals. Ching Fang, Christopher Michael Sandino, Behrooz Mahasseni, Juri Minxha, Hadi Pouransari, Erdrin Azemi, Ali Moin, Ellen L. Zippi.
CVPRW 2024. SAM-CLIP: Merging Vision Foundation Models Towards Semantic and Spatial Understanding. Haoxiang Wang, Pavan Kumar Anasosalu Vasu, Fartash Faghri, Raviteja Vemulapalli, Mehrdad Farajtabar, Sachin Mehta, Mohammad Rastegari, Oncel Tuzel, Hadi Pouransari.
ICLR 2024. TiC-CLIP: Continual Training of CLIP Models. Saurabh Garg, Mehrdad Farajtabar, Hadi Pouransari, Raviteja Vemulapalli, Sachin Mehta, Oncel Tuzel, Vaishaal Shankar, Fartash Faghri.
NeurIPSW 2024. TiC-LM: A Multi-Year Benchmark for Continual Pretraining of Language Models. Jeffrey Li, Mohammadreza Armandpour, Seyed Iman Mirzadeh, Sachin Mehta, Vaishaal Shankar, Raviteja Vemulapalli, Oncel Tuzel, Mehrdad Farajtabar, Hadi Pouransari, Fartash Faghri.
NeurIPSW 2023. CLIP Meets Model Zoo Experts: Pseudo-Supervision for Visual Enhancement. Mohammadreza Salehi, Mehrdad Farajtabar, Maxwell Horton, Fartash Faghri, Hadi Pouransari, Raviteja Vemulapalli, Oncel Tuzel, Ali Farhadi, Mohammad Rastegari, Sachin Mehta.
ICLR 2023. FastFill: Efficient Compatible Model Update. Florian Jaeckle, Fartash Faghri, Ali Farhadi, Oncel Tuzel, Hadi Pouransari.
ICCV 2023. Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement. Fartash Faghri, Hadi Pouransari, Sachin Mehta, Mehrdad Farajtabar, Ali Farhadi, Mohammad Rastegari, Oncel Tuzel.
NeurIPSW 2023. SAM-CLIP: Merging Vision Foundation Models Towards Semantic and Spatial Understanding. Haoxiang Wang, Pavan Kumar Anasosalu Vasu, Fartash Faghri, Raviteja Vemulapalli, Mehrdad Farajtabar, Sachin Mehta, Mohammad Rastegari, Oncel Tuzel, Hadi Pouransari.
NeurIPSW 2023. TiC-CLIP: Continual Training of CLIP Models. Saurabh Garg, Mehrdad Farajtabar, Hadi Pouransari, Raviteja Vemulapalli, Sachin Mehta, Oncel Tuzel, Vaishaal Shankar, Fartash Faghri.
NeurIPSW 2022. APE: Aligning Pretrained Encoders to Quickly Learn Aligned Multimodal Representations. Elan Rosenfeld, Preetum Nakkiran, Hadi Pouransari, Oncel Tuzel, Fartash Faghri.
CVPR 2022. Forward Compatible Training for Large-Scale Embedding Retrieval Systems. Vivek Ramanujan, Pavan Kumar Anasosalu Vasu, Ali Farhadi, Oncel Tuzel, Hadi Pouransari.
CVPRW 2021. Extracurricular Learning: Knowledge Transfer Beyond Empirical Distribution. Hadi Pouransari, Mojan Javaheripi, Vinay Sharma, Oncel Tuzel.
CVPRW 2020. Least Squares Binary Quantization of Neural Networks. Hadi Pouransari, Zhucheng Tu, Oncel Tuzel.