Bansal, Mohit

119 publications

TMLR 2026 SiLVR: A Simple Language-Based Video Reasoning Framework Ce Zhang, Yan-Bo Lin, Ziyang Wang, Mohit Bansal, Gedas Bertasius
NeurIPS 2025 4D-LRM: Large Space-Time Reconstruction Model from and to Any View at Any Time Ziqiao Ma, Xuweiyi Chen, Shoubin Yu, Sai Bi, Kai Zhang, Chen Ziwen, Sihan Xu, Jianing Yang, Zexiang Xu, Kalyan Sunkavalli, Mohit Bansal, Joyce Chai, Hao Tan
TMLR 2025 A Survey on Model MoErging: Recycling and Routing Among Specialized Experts for Collaborative Learning Prateek Yadav, Colin Raffel, Mohammed Muqeeth, Lucas Caccia, Haokun Liu, Tianlong Chen, Mohit Bansal, Leshem Choshen, Alessandro Sordoni
ICLR 2025 Adapt-$\infty$: Scalable Continual Multimodal Instruction Tuning via Dynamic Data Selection Adyasha Maharana, Jaehong Yoon, Tianlong Chen, Mohit Bansal
ICLR 2025 Anyprefer: An Agentic Framework for Preference Data Synthesis Yiyang Zhou, Zhaoyang Wang, Tianle Wang, Shangyu Xing, Peng Xia, Bo Li, Kaiyuan Zheng, Zijian Zhang, Zhaorun Chen, Wenhao Zheng, Xuchao Zhang, Chetan Bansal, Weitong Zhang, Ying Wei, Mohit Bansal, Huaxiu Yao
NeurIPS 2025 Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-Level CLIP Latents Han Lin, Jaemin Cho, Amir Zadeh, Chuan Li, Mohit Bansal
ICLR 2025 Bootstrapping Language-Guided Navigation Learning with Self-Refining Data Flywheel Zun Wang, Jialu Li, Yicong Hong, Songze Li, Kunchang Li, Shoubin Yu, Yi Wang, Yu Qiao, Yali Wang, Mohit Bansal, Limin Wang
ICCV 2025 CAPTURE: Evaluating Spatial Reasoning in Vision Language Models via Occluded Object Counting Atin Pothiraj, Elias Stengel-Eskin, Jaemin Cho, Mohit Bansal
ICLR 2025 CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion Shoubin Yu, Jaehong Yoon, Mohit Bansal
TMLR 2025 ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization Prateek Yadav, Leshem Choshen, Colin Raffel, Mohit Bansal
ICLR 2025 Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model Han Lin, Jaemin Cho, Abhay Zala, Mohit Bansal
WACV 2025 DAM: Dynamic Adapter Merging for Continual Video QA Learning Feng Cheng, Ziyang Wang, Yi-Lin Sung, Yan-Bo Lin, Mohit Bansal, Gedas Bertasius
ICLR 2025 DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback Zaid Khan, Elias Stengel-Eskin, Jaemin Cho, Mohit Bansal
WACV 2025 Improving Faithfulness of Text-to-Image Diffusion Models Through Inference Intervention Danfeng Guo, Sanchit Agarwal, Yu-Hsiang Lin, Jiun-Yu Kao, Tagyoung Chung, Nanyun Peng, Mohit Bansal
NeurIPS 2025 LASeR: Learning to Adaptively Select Reward Models with Multi-Arm Bandits Duy Nguyen, Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal
CVPR 2025 Motion-Grounded Video Reasoning: Understanding and Perceiving Motion at Pixel Level Andong Deng, Tongjia Chen, Shoubin Yu, Taojiannan Yang, Lincoln Spencer, Yapeng Tian, Ajmal Saeed Mian, Mohit Bansal, Chen Chen
NeurIPS 2025 ReAgent-V: A Reward-Driven Multi-Agent Framework for Video Understanding Yiyang Zhou, Yangfan He, Yaofeng Su, Siwei Han, Joel Jang, Gedas Bertasius, Mohit Bansal, Huaxiu Yao
TMLR 2025 Reliable and Responsible Foundation Models Xinyu Yang, Junlin Han, Rishi Bommasani, Jinqi Luo, Wenjie Qu, Wangchunshu Zhou, Adel Bibi, Xiyao Wang, Jaehong Yoon, Elias Stengel-Eskin, Shengbang Tong, Lingfeng Shen, Rafael Rafailov, Runjia Li, Zhaoyang Wang, Yiyang Zhou, Chenhang Cui, Yu Wang, Wenhao Zheng, Huichi Zhou, Jindong Gu, Zhaorun Chen, Peng Xia, Tony Lee, Thomas P Zollo, Vikash Sehwag, Jixuan Leng, Jiuhai Chen, Yuxin Wen, Huan Zhang, Zhun Deng, Linjun Zhang, Pavel Izmailov, Pang Wei Koh, Yulia Tsvetkov, Andrew Gordon Wilson, Jiaheng Zhang, James Zou, Cihang Xie, Hao Wang, Philip Torr, Julian McAuley, David Alvarez-Melis, Florian Tramèr, Kaidi Xu, Suman Jana, Chris Callison-Burch, Rene Vidal, Filippos Kokkinos, Mohit Bansal, Beidi Chen, Huaxiu Yao
ICLR 2025 SAFREE: Training-Free and Adaptive Guard for Safe Text-to-Image and Video Generation Jaehong Yoon, Shoubin Yu, Vaidehi Patil, Huaxiu Yao, Mohit Bansal
ICCV 2025 SAME: Learning Generic Language-Guided Visual Navigation with State-Adaptive Mixture of Experts Gengze Zhou, Yicong Hong, Zun Wang, Chongyang Zhao, Mohit Bansal, Qi Wu
ICLR 2025 See It from My Perspective: How Language Affects Cultural Bias in Image Understanding Amith Ananthram, Elias Stengel-Eskin, Mohit Bansal, Kathleen McKeown
ICML 2025 Self-Consistency Preference Optimization Archiki Prasad, Weizhe Yuan, Richard Yuanzhe Pang, Jing Xu, Maryam Fazel-Zarandi, Mohit Bansal, Sainbayar Sukhbaatar, Jason E Weston, Jane Yu
ICLR 2025 System 1.x: Learning to Balance Fast and Slow Planning with Language Models Swarnadeep Saha, Archiki Prasad, Justin Chen, Peter Hase, Elias Stengel-Eskin, Mohit Bansal
ICLR 2025 Unbounded: A Generative Infinite Game of Character Life Simulation Jialu Li, Yuanzhen Li, Neal Wadhwa, Yael Pritch, David E. Jacobs, Michael Rubinstein, Mohit Bansal, Nataniel Ruiz
ICLR 2025 VEDIT: Latent Prediction Architecture for Procedural Video Representation Learning Han Lin, Tushar Nagarajan, Nicolas Ballas, Mido Assran, Mojtaba Komeili, Mohit Bansal, Koustuv Sinha
ICCV 2025 VEGGIE: Instructional Editing and Reasoning Video Concepts with Grounded Generation Shoubin Yu, Difan Liu, Ziqiao Ma, Yicong Hong, Yang Zhou, Hao Tan, Joyce Chai, Mohit Bansal
CVPR 2025 VideoTree: Adaptive Tree-Based Video Representation for LLM Reasoning on Long Videos Ziyang Wang, Shoubin Yu, Elias Stengel-Eskin, Jaehong Yoon, Feng Cheng, Gedas Bertasius, Mohit Bansal
TMLR 2025 What Matters for Model Merging at Scale? Prateek Yadav, Tu Vu, Jonathan Lai, Alexandra Chronopoulou, Manaal Faruqui, Mohit Bansal, Tsendsuren Munkhdalai
ICLR 2024 $\mathbb{D}^2$ Pruning: Message Passing for Balancing Diversity & Difficulty in Data Pruning Adyasha Maharana, Prateek Yadav, Mohit Bansal
ICLR 2024 Analyzing and Mitigating Object Hallucination in Large Vision-Language Models Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
NeurIPSW 2024 AnyPrefer: An Automatic Framework for Preference Data Synthesis Yiyang Zhou, Zhaoyang Wang, Tianle Wang, Shangyu Xing, Peng Xia, Bo Li, Kaiyuan Zheng, Zijian Zhang, Zhaorun Chen, Wenhao Zheng, Xuchao Zhang, Chetan Bansal, Weitong Zhang, Ying Wei, Mohit Bansal, Huaxiu Yao
ICLR 2024 Can Sensitive Information Be Deleted from LLMs? Objectives for Defending Against Extraction Attacks Vaidehi Patil, Peter Hase, Mohit Bansal
CVPR 2024 CoDi-2: In-Context Interleaved and Interactive Any-to-Any Generation Zineng Tang, Ziyi Yang, Mahmoud Khademi, Yang Liu, Chenguang Zhu, Mohit Bansal
ECCV 2024 Contrastive Region Guidance: Improving Grounding in Vision-Language Models Without Training David Wan, Jaemin Cho, Elias Stengel-Eskin, Mohit Bansal
ICLR 2024 Davidsonian Scene Graph: Improving Reliability in Fine-Grained Evaluation for Text-to-Image Generation Jaemin Cho, Yushi Hu, Jason Michael Baldridge, Roopal Garg, Peter Anderson, Ranjay Krishna, Mohit Bansal, Jordi Pont-Tuset, Su Wang
CVPRW 2024 Diagnostic Benchmark and Iterative Inpainting for Layout-Guided Image Generation Jaemin Cho, Linjie Li, Zhengyuan Yang, Zhe Gan, Lijuan Wang, Mohit Bansal
ICLR 2024 ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models Yi-Lin Sung, Jaehong Yoon, Mohit Bansal
TMLR 2024 Exposing and Addressing Cross-Task Inconsistency in Unified Vision-Language Models Adyasha Maharana, Amita Kamath, Christopher Clark, Mohit Bansal, Aniruddha Kembhavi
TMLR 2024 FlexEControl: Flexible and Efficient Multimodal Control for Text-to-Image Generation Xuehai He, Jian Zheng, Jacob Zhiyuan Fang, Robinson Piramuthu, Mohit Bansal, Vicente Ordonez, Gunnar A Sigurdsson, Nanyun Peng, Xin Eric Wang
TMLR 2024 Fundamental Problems with Model Editing: How Should Rational Belief Revision Work in LLMs? Peter Hase, Thomas Hofweber, Xiang Zhou, Elias Stengel-Eskin, Mohit Bansal
NeurIPS 2024 GTBench: Uncovering the Strategic Reasoning Capabilities of LLMs via Game-Theoretic Evaluations Jinhao Duan, Renming Zhang, James Diffenderfer, Bhavya Kailkhura, Lichao Sun, Elias Stengel-Eskin, Mohit Bansal, Tianlong Chen, Kaidi Xu
TMLR 2024 INSPIRE: Incorporating Diverse Feature Preferences in Recourse Prateek Yadav, Peter Hase, Mohit Bansal
NeurIPS 2024 LACIE: Listener-Aware Finetuning for Calibration in Large Language Models Elias Stengel-Eskin, Peter Hase, Mohit Bansal
NeurIPSW 2024 LLM Self-Correction with DeCRIM: Decompose, Critique, and Refine for Enhanced Following of Instructions with Multiple Constraints Thomas Palmeira Ferraz, Kartik Mehta, Yu-Hsiang Lin, Haw-Shiuan Chang, Shereen Oraby, Sijia Liu, Vivek Subramanian, Tagyoung Chung, Mohit Bansal, Nanyun Peng
ICML 2024 MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models Justin Chen, Swarnadeep Saha, Elias Stengel-Eskin, Mohit Bansal
ICLR 2024 Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy Pingzhi Li, Zhenyu Zhang, Prateek Yadav, Yi-Lin Sung, Yu Cheng, Mohit Bansal, Tianlong Chen
TMLR 2024 Merging by Matching Models in Task Parameter Subspaces Derek Tam, Mohit Bansal, Colin Raffel
CVPR 2024 Multimodal Representation Learning by Alternating Unimodal Adaptation Xiaohui Zhang, Jaehong Yoon, Mohit Bansal, Huaxiu Yao
ICML 2024 Position: TrustLLM: Trustworthiness in Large Language Models Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Hanchi Sun, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric P. Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Yang Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao
NeurIPSW 2024 RACCooN: Remove, Add, and Change Video Content with Auto-Generated Narratives Jaehong Yoon, Shoubin Yu, Mohit Bansal
ICML 2024 ReGAL: Refactoring Programs to Discover Generalizable Abstractions Elias Stengel-Eskin, Archiki Prasad, Mohit Bansal
ICLR 2024 Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal
CVPR 2024 Rethinking Interactive Image Segmentation with Low Latency, High Quality, and Diverse Prompts Qin Liu, Jaemin Cho, Mohit Bansal, Marc Niethammer
NeurIPS 2024 SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data Jialu Li, Jaemin Cho, Yi-Lin Sung, Jaehong Yoon, Mohit Bansal
TMLR 2024 Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation Vaidehi Patil, Yi-Lin Sung, Peter Hase, Jie Peng, Tianlong Chen, Mohit Bansal
AAAI 2024 VLN-Video: Utilizing Driving Videos for Outdoor Vision-and-Language Navigation Jialu Li, Aishwarya Padmakumar, Gaurav S. Sukhatme, Mohit Bansal
TMLR 2024 Vision-and-Language Navigation Today and Tomorrow: A Survey in the Era of Foundation Models Yue Zhang, Ziqiao Ma, Jialu Li, Yanyuan Qiao, Zun Wang, Joyce Chai, Qi Wu, Mohit Bansal, Parisa Kordjamshidi
NeurIPS 2023 Adaptive Contextual Perception: How to Generalize to New Backgrounds and Ambiguous Objects Zhuofan Ying, Peter Hase, Mohit Bansal
NeurIPSW 2023 Analyzing and Mitigating Object Hallucination in Large Vision-Language Models Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
NeurIPS 2023 Any-to-Any Generation via Composable Diffusion Zineng Tang, Ziyi Yang, Chenguang Zhu, Michael Zeng, Mohit Bansal
TMLR 2023 Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Johan Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew M. Dai, Andrew La, Andrew Kyle Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, Cesar Ferri, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Christopher Waites, Christian Voigt, Christopher D Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, C. Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodolà, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germàn Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Xinyue Wang, Gonzalo Jaimovitch-Lopez, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Francis Anthony Shevlin, Hinrich Schuetze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B Simon, James Koppel, James Zheng, James Zou, Jan Kocon, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh Dhole, Kevin Gimpel, Kevin Omondi, Kory Wallace Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros-Colón, Luke Metz, Lütfi Kerem Senel, Maarten Bosma, Maarten Sap, Maartje Ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramirez-Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael Andrew Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan Andrew Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter W Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan Le Bras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Russ Salakhutdinov, Ryan Andrew Chi, Seungjae Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel Stern Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima Shammie Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven Piantadosi, Stuart Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsunori Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Venkatesh Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Sophie Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, Ziyi Wu
NeurIPS 2023 Can Language Models Teach? Teacher Explanations Improve Student Performance via Personalization Swarnadeep Saha, Peter Hase, Mohit Bansal
ICCV 2023 DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models Jaemin Cho, Abhay Zala, Mohit Bansal
NeurIPSW 2023 Debiasing Multimodal Models via Causal Information Minimization Vaidehi Patil, Adyasha Maharana, Mohit Bansal
NeurIPS 2023 Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models Peter Hase, Mohit Bansal, Been Kim, Asma Ghandeharioun
CVPR 2023 Hierarchical Video-Moment Retrieval and Step-Captioning Abhay Zala, Jaemin Cho, Satwik Kottur, Xilun Chen, Barlas Oguz, Yashar Mehdad, Mohit Bansal
CVPR 2023 Improving Vision-and-Language Navigation by Generating Future-View Image Semantics Jialu Li, Mohit Bansal
IJCAI 2023 On Conditional and Compositional Language Model Differentiable Prompting Jonathan Pilault, Can Liu, Mohit Bansal, Markus Dreyer
NeurIPS 2023 PanoGen: Text-Conditioned Panoramic Environment Generation for Vision-and-Language Navigation Jialu Li, Mohit Bansal
NeurIPS 2023 Paxion: Patching Action Knowledge in Video-Language Foundation Models Zhenhailong Wang, Ansel Blume, Sha Li, Genglin Liu, Jaemin Cho, Zineng Tang, Mohit Bansal, Heng Ji
WACV 2023 Perceiver-VL: Efficient Vision-and-Language Modeling with Iterative Latent Attention Zineng Tang, Jaemin Cho, Jie Lei, Mohit Bansal
ICCV 2023 Scaling Data Generation in Vision-and-Language Navigation Zun Wang, Jialu Li, Yicong Hong, Yi Wang, Qi Wu, Mohit Bansal, Stephen Gould, Hao Tan, Yu Qiao
NeurIPS 2023 Self-Chained Image-Language Model for Video Localization and Question Answering Shoubin Yu, Jaemin Cho, Prateek Yadav, Mohit Bansal
ICLR 2023 Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees Swarnadeep Saha, Shiyue Zhang, Peter Hase, Mohit Bansal
NeurIPS 2023 TIES-Merging: Resolving Interference When Merging Models Prateek Yadav, Derek Tam, Leshem Choshen, Colin A Raffel, Mohit Bansal
ICCV 2023 Unified Coarse-to-Fine Alignment for Video-Text Retrieval Ziyang Wang, Yi-Lin Sung, Feng Cheng, Gedas Bertasius, Mohit Bansal
CVPR 2023 Unifying Vision, Text, and Layout for Universal Document Processing Zineng Tang, Ziyi Yang, Guoxin Wang, Yuwei Fang, Yang Liu, Chenguang Zhu, Michael Zeng, Cha Zhang, Mohit Bansal
CVPR 2023 VindLU: A Recipe for Effective Video-and-Language Pretraining Feng Cheng, Xizi Wang, Jie Lei, David Crandall, Mohit Bansal, Gedas Bertasius
CVPR 2023 Vision Transformers Are Parameter-Efficient Audio-Visual Learners Yan-Bo Lin, Yi-Lin Sung, Jie Lei, Mohit Bansal, Gedas Bertasius
NeurIPS 2023 Visual Programming for Step-by-Step Text-to-Image Generation and Evaluation Jaemin Cho, Abhay Zala, Mohit Bansal
AAAI 2022 CAISE: Conversational Agent for Image Search and Editing Hyounghun Kim, Doo Soon Kim, Seunghyun Yoon, Franck Dernoncourt, Trung Bui, Mohit Bansal
ECCV 2022 ECLIPSE: Efficient Long-Range Video Retrieval Using Sight and Sound Yan-Bo Lin, Jie Lei, Mohit Bansal, Gedas Bertasius
CVPR 2022 EnvEdit: Environment Editing for Vision-and-Language Navigation Jialu Li, Hao Tan, Mohit Bansal
NeurIPS 2022 Few-Shot Parameter-Efficient Fine-Tuning Is Better and Cheaper than In-Context Learning Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, Colin A Raffel
ICLR 2022 How Much Can CLIP Benefit Vision-and-Language Tasks? Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, Kurt Keutzer
NeurIPS 2022 LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning Yi-Lin Sung, Jaemin Cho, Mohit Bansal
NeurIPS 2022 Language Models with Image Descriptors Are Strong Few-Shot Video-Language Learners Zhenhailong Wang, Manling Li, Ruochen Xu, Luowei Zhou, Jie Lei, Xudong Lin, Shuohang Wang, Ziyi Yang, Chenguang Zhu, Derek Hoiem, Shih-Fu Chang, Mohit Bansal, Heng Ji
AAAI 2022 MuMuQA: Multimedia Multi-Hop News Question Answering via Cross-Media Knowledge Extraction and Grounding Revanth Gangi Reddy, Xilin Rui, Manling Li, Xudong Lin, Haoyang Wen, Jaemin Cho, Lifu Huang, Mohit Bansal, Avirup Sil, Shih-Fu Chang, Alexander G. Schwing, Heng Ji
AAAI 2022 Scientific Chart Summarization: Datasets and Improved Text Modeling Hao Tan, Chen-Tse Tsai, Yujie He, Mohit Bansal
ICLRW 2022 Scotch: A Semantic Code Search Engine for IDEs Samip Dahal, Adyasha Maharana, Mohit Bansal
ECCV 2022 StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation Adyasha Maharana, Darryl Hannan, Mohit Bansal
NeurIPS 2022 TVLT: Textless Vision-Language Transformer Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal
CVPR 2022 VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks Yi-Lin Sung, Jaemin Cho, Mohit Bansal
NeurIPS 2022 VisFIS: Visual Feature Importance Supervision with Right-for-the-Right-Reason Objectives Zhuofan Ying, Peter Hase, Mohit Bansal
NeurIPS 2022 WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models Yonatan Bitton, Nitzan Bitton Guetta, Ron Yosef, Yuval Elovici, Mohit Bansal, Gabriel Stanovsky, Roy Schwartz
AAAI 2021 Data Augmentation for Abstractive Query-Focused Multi-Document Summarization Ramakanth Pasunuru, Asli Celikyilmaz, Michel Galley, Chenyan Xiong, Yizhe Zhang, Mohit Bansal, Jianfeng Gao
NeurIPS 2021 Detecting Moments and Highlights in Videos via Natural Language Queries Jie Lei, Tamara L Berg, Mohit Bansal
AAAI 2021 FIXMYPOSE: Pose Correctional Captioning and Retrieval Hyounghun Kim, Abhay Zala, Graham Burri, Mohit Bansal
CVPR 2021 Less Is More: ClipBERT for Video-and-Language Learning via Sparse Sampling Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L. Berg, Mohit Bansal, Jingjing Liu
NeurIPS 2021 The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations Peter Hase, Harry Xie, Mohit Bansal
ICML 2021 Unifying Vision-and-Language Tasks via Text Generation Jaemin Cho, Jie Lei, Hao Tan, Mohit Bansal
NeurIPS 2021 VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer Zineng Tang, Jaemin Cho, Hao Tan, Mohit Bansal
AAAI 2020 AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses Tong Niu, Mohit Bansal
IJCAI 2020 Diagnosing the Environment Bias in Vision-and-Language Navigation Yubo Zhang, Hao Tan, Mohit Bansal
AAAI 2020 ManyModalQA: Modality Disambiguation and QA over Diverse Inputs Darryl Hannan, Akshay Jain, Mohit Bansal
AAAI 2020 Modality-Balanced Models for Visual Dialogue Hyounghun Kim, Hao Tan, Mohit Bansal
AAAI 2020 Multi-Source Domain Adaptation for Text Classification via DistanceNet-Bandits Han Guo, Ramakanth Pasunuru, Mohit Bansal
ECCV 2020 TVR: A Large-Scale Dataset for Video-Subtitle Moment Retrieval Jie Lei, Licheng Yu, Tamara L. Berg, Mohit Bansal
AAAI 2019 Analyzing Compositionality-Sensitivity of NLI Models Yixin Nie, Yicheng Wang, Mohit Bansal
AAAI 2019 Combining Fact Extraction and Verification with Neural Semantic Matching Networks Yixin Nie, Haonan Chen, Mohit Bansal
WACV 2018 Retweet Wars: Tweet Popularity Prediction via Dynamic Multimodal Regression Ke Wang, Mohit Bansal, Jan-Michael Frahm
AAAI 2018 Source-Target Inference Models for Spatial Instruction Understanding Hao Tan, Mohit Bansal
CVPR 2017 A Joint Speaker-Listener-Reinforcer Model for Referring Expressions Licheng Yu, Hao Tan, Mohit Bansal, Tamara L. Berg
AAAI 2017 Coherent Dialogue with Attention-Based Language Models Hongyuan Mei, Mohit Bansal, Matthew R. Walter
AAAI 2017 Contextual RNN-GANs for Abstract Reasoning Diagram Generation Viveka Kulharia, Arnab Ghosh, Amitabha Mukerjee, Vinay P. Namboodiri, Mohit Bansal
AAAI 2016 Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to Action Sequences Hongyuan Mei, Mohit Bansal, Matthew R. Walter
ICLR 2016 Towards Universal Paraphrastic Sentence Embeddings John Wieting, Mohit Bansal, Kevin Gimpel, Karen Livescu
CVPR 2016 We Are Humor Beings: Understanding and Predicting Visual Humor Arjun Chandrasekaran, Ashwin K. Vijayakumar, Stanislaw Antol, Mohit Bansal, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh
CVPR 2014 What Are You Talking About? Text-to-Image Coreference Chen Kong, Dahua Lin, Mohit Bansal, Raquel Urtasun, Sanja Fidler