Zong, Yongshuo

8 publications

CVPR 2025. "Ground-V: Teaching VLMs to Ground Complex Instructions in Pixels." Yongshuo Zong, Qin Zhang, Dongsheng An, Zhihua Li, Xiang Xu, Linghan Xu, Zhuowen Tu, Yifan Xing, Onkar Dabeer.

ICLR 2025. "VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning." Yongshuo Zong, Ondrej Bohdal, Timothy Hospedales.

ICML 2024. "Fool Your (Vision and) Language Model with Embarrassingly Simple Permutations." Yongshuo Zong, Tingyang Yu, Ruchika Chavhan, Bingchen Zhao, Timothy Hospedales.

ICML Workshop 2024. "Long-Context Vision Large Language Models: Empirical Insights and a Baseline." Yongshuo Zong, Ismail Elezi, Yongxin Yang, Jiankang Deng, Timothy Hospedales.

ICML 2024. "Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models." Yongshuo Zong, Ondrej Bohdal, Tingyang Yu, Yongxin Yang, Timothy Hospedales.

CVPR 2024. "What if the TV Was Off? Examining Counterfactual Reasoning Abilities of Multi-Modal Language Models." Letian Zhang, Xiaotong Zhai, Zhongkai Zhao, Yongshuo Zong, Xin Wen, Bingchen Zhao.

ICLR 2023. "MEDFAIR: Benchmarking Fairness for Medical Imaging." Yongshuo Zong, Yongxin Yang, Timothy Hospedales.

CVPR 2023. "Meta Omnium: A Benchmark for General-Purpose Learning-to-Learn." Ondrej Bohdal, Yinbing Tian, Yongshuo Zong, Ruchika Chavhan, Da Li, Henry Gouk, Li Guo, Timothy Hospedales.