Yu, Dingli

22 publications

ICML 2025. "Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs?" Simon Park, Abhishek Panigrahi, Yun Cheng, Dingli Yu, Anirudh Goyal, Sanjeev Arora.
ICLRW 2025. "Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs?" Simon Park, Abhishek Panigrahi, Yun Cheng, Dingli Yu, Anirudh Goyal, Sanjeev Arora.
ICML 2025. "Weak-to-Strong Generalization Even in Random Feature Networks, Provably." Marko Medvedev, Kaifeng Lyu, Dingli Yu, Sanjeev Arora, Zhiyuan Li, Nathan Srebro.
NeurIPSW 2024. "AI-Assisted Generation of Difficult Math Questions." Vedant Shah, Dingli Yu, Kaifeng Lyu, Simon Park, Jiatong Yu, Yinghui He, Nan Rosemary Ke, Michael Curtis Mozer, Yoshua Bengio, Sanjeev Arora, Anirudh Goyal.
NeurIPS 2024. "Can Models Learn Skill Composition from Examples?" Haoyu Zhao, Simran Kaur, Dingli Yu, Anirudh Goyal, Sanjeev Arora.
ICMLW 2024. "Can Models Learn Skill Composition from Examples?" Haoyu Zhao, Simran Kaur, Dingli Yu, Anirudh Goyal, Sanjeev Arora.
NeurIPSW 2024. "Can Models Learn Skill Composition from Examples?" Haoyu Zhao, Simran Kaur, Dingli Yu, Anirudh Goyal, Sanjeev Arora.
NeurIPS 2024. "ConceptMix: A Compositional Image Generation Benchmark with Controllable Difficulty." Xindi Wu, Dingli Yu, Yangsibo Huang, Olga Russakovsky, Sanjeev Arora.
NeurIPSW 2024. "ConceptMix: A Compositional Image Generation Benchmark with Controllable Difficulty." Xindi Wu, Dingli Yu, Yangsibo Huang, Olga Russakovsky, Sanjeev Arora.
NeurIPS 2024. "Keeping LLMs Aligned After Fine-Tuning: The Crucial Role of Prompt Templates." Kaifeng Lyu, Haoyu Zhao, Xinran Gu, Dingli Yu, Anirudh Goyal, Sanjeev Arora.
ICLRW 2024. "Keeping LLMs Aligned After Fine-Tuning: The Crucial Role of Prompt Templates." Kaifeng Lyu, Haoyu Zhao, Xinran Gu, Dingli Yu, Anirudh Goyal, Sanjeev Arora.
ICLR 2024. "SKILL-MIX: A Flexible and Expandable Family of Evaluations for AI Models." Dingli Yu, Simran Kaur, Arushi Gupta, Jonah Brown-Cohen, Anirudh Goyal, Sanjeev Arora.
ICLR 2024. "Tensor Programs VI: Feature Learning in Infinite Depth Neural Networks." Greg Yang, Dingli Yu, Chen Zhu, Soufiane Hayou.
ICML 2023. "A Kernel-Based View of Language Model Fine-Tuning." Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, Sanjeev Arora.
ICLRW 2023. "A Kernel-Based View of Language Model Fine-Tuning." Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, Sanjeev Arora.
NeurIPSW 2023. "Feature Learning in Infinite-Depth Neural Networks." Greg Yang, Dingli Yu, Chen Zhu, Soufiane Hayou.
NeurIPSW 2023. "Skill-Mix: A Flexible and Expandable Family of Evaluations for AI Models." Dingli Yu, Simran Kaur, Arushi Gupta, Jonah Brown-Cohen, Anirudh Goyal, Sanjeev Arora.
NeurIPS 2022. "Fast Mixing of Stochastic Gradient Descent with Normalization and Weight Decay." Zhiyuan Li, Tianhao Wang, Dingli Yu.
NeurIPS 2022. "New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound." Arushi Gupta, Nikunj Saunshi, Dingli Yu, Kaifeng Lyu, Sanjeev Arora.
ICLR 2020. "Harnessing the Power of Infinitely Wide Deep Nets on Small-Data Tasks." Sanjeev Arora, Simon S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, Dingli Yu.
ICLR 2020. "Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee." Wei Hu, Zhiyuan Li, Dingli Yu.
AAAI 2018. "Fair Rent Division on a Budget." Ariel D. Procaccia, Rodrigo A. Velez, Dingli Yu.