Cho, Jaemin

22 publications

NeurIPS 2025. Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-Level CLIP Latents. Han Lin, Jaemin Cho, Amir Zadeh, Chuan Li, Mohit Bansal.
ICCV 2025. CAPTURe: Evaluating Spatial Reasoning in Vision Language Models via Occluded Object Counting. Atin Pothiraj, Elias Stengel-Eskin, Jaemin Cho, Mohit Bansal.
ICLR 2025. Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model. Han Lin, Jaemin Cho, Abhay Zala, Mohit Bansal.
ICLR 2025. DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback. Zaid Khan, Elias Stengel-Eskin, Jaemin Cho, Mohit Bansal.
ECCV 2024. Contrastive Region Guidance: Improving Grounding in Vision-Language Models Without Training. David Wan, Jaemin Cho, Elias Stengel-Eskin, Mohit Bansal.
ECCV 2024. DOCCI: Descriptions of Connected and Contrasting Images. Yasumasa Onoe, Sunayana Rane, Zachary E. Berger, Yonatan Bitton, Jaemin Cho, Roopal Garg, Alexander Ku, Zarana Parekh, Jordi Pont-Tuset, Garrett Tanzer, Su Wang, Jason M. Baldridge.
ICLR 2024. Davidsonian Scene Graph: Improving Reliability in Fine-Grained Evaluation for Text-to-Image Generation. Jaemin Cho, Yushi Hu, Jason Michael Baldridge, Roopal Garg, Peter Anderson, Ranjay Krishna, Mohit Bansal, Jordi Pont-Tuset, Su Wang.
CVPRW 2024. Diagnostic Benchmark and Iterative Inpainting for Layout-Guided Image Generation. Jaemin Cho, Linjie Li, Zhengyuan Yang, Zhe Gan, Lijuan Wang, Mohit Bansal.
CVPR 2024. Rethinking Interactive Image Segmentation with Low Latency, High Quality, and Diverse Prompts. Qin Liu, Jaemin Cho, Mohit Bansal, Marc Niethammer.
NeurIPS 2024. SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data. Jialu Li, Jaemin Cho, Yi-Lin Sung, Jaehong Yoon, Mohit Bansal.
ICCV 2023. DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models. Jaemin Cho, Abhay Zala, Mohit Bansal.
CVPR 2023. Hierarchical Video-Moment Retrieval and Step-Captioning. Abhay Zala, Jaemin Cho, Satwik Kottur, Xilun Chen, Barlas Oguz, Yashar Mehdad, Mohit Bansal.
NeurIPS 2023. Paxion: Patching Action Knowledge in Video-Language Foundation Models. Zhenhailong Wang, Ansel Blume, Sha Li, Genglin Liu, Jaemin Cho, Zineng Tang, Mohit Bansal, Heng Ji.
WACV 2023. Perceiver-VL: Efficient Vision-and-Language Modeling with Iterative Latent Attention. Zineng Tang, Jaemin Cho, Jie Lei, Mohit Bansal.
NeurIPS 2023. Self-Chained Image-Language Model for Video Localization and Question Answering. Shoubin Yu, Jaemin Cho, Prateek Yadav, Mohit Bansal.
NeurIPS 2023. Visual Programming for Step-by-Step Text-to-Image Generation and Evaluation. Jaemin Cho, Abhay Zala, Mohit Bansal.
NeurIPS 2022. LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning. Yi-Lin Sung, Jaemin Cho, Mohit Bansal.
AAAI 2022. MuMuQA: Multimedia Multi-Hop News Question Answering via Cross-Media Knowledge Extraction and Grounding. Revanth Gangi Reddy, Xilin Rui, Manling Li, Xudong Lin, Haoyang Wen, Jaemin Cho, Lifu Huang, Mohit Bansal, Avirup Sil, Shih-Fu Chang, Alexander G. Schwing, Heng Ji.
NeurIPS 2022. TVLT: Textless Vision-Language Transformer. Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal.
CVPR 2022. VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks. Yi-Lin Sung, Jaemin Cho, Mohit Bansal.
ICML 2021. Unifying Vision-and-Language Tasks via Text Generation. Jaemin Cho, Jie Lei, Hao Tan, Mohit Bansal.
NeurIPS 2021. VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer. Zineng Tang, Jaemin Cho, Hao Tan, Mohit Bansal.