E, Weinan

14 publications

NeurIPS 2025. On the Expressive Power of Mixture-of-Experts for Structured Complex Tasks. Mingze Wang, Weinan E
ICML 2025. The Sharpness Disparity Principle in Transformers for Accelerating Language Model Pre-Training. Jinbo Wang, Mingze Wang, Zhanpeng Zhou, Junchi Yan, Weinan E, Lei Wu
NeurIPS 2024. Exploring Molecular Pretraining Model at Scale. Xiaohong Ji, Zhen Wang, Zhifeng Gao, Hang Zheng, Linfeng Zhang, Guolin Ke, Weinan E
NeurIPS 2024. Improving Generalization and Convergence by Enhancing Implicit Regularization. Mingze Wang, Jinbo Wang, Haotian He, Zilin Wang, Guanhua Huang, Feiyu Xiong, Zhiyu Li, Weinan E, Lei Wu
NeurIPS 2024. Understanding the Expressive Power and Mechanisms of Transformer for Sequence Modeling. Mingze Wang, Weinan E
JMLR 2022. Approximation and Optimization Theory for Linear Continuous-Time Recurrent Neural Networks. Zhong Li, Jiequn Han, Weinan E, Qianxiao Li
ICLR 2021. On the Curse of Memory in Recurrent Neural Networks: Approximation and Optimization Analysis. Zhong Li, Jiequn Han, Weinan E, Qianxiao Li
NeurIPS 2020. Towards Theoretically Understanding Why SGD Generalizes Better than Adam in Deep Learning. Pan Zhou, Jiashi Feng, Chao Ma, Caiming Xiong, Steven Chu Hong Hoi, Weinan E
JMLR 2019. Stochastic Modified Equations and Dynamics of Stochastic Gradient Algorithms I: Mathematical Foundations. Qianxiao Li, Cheng Tai, Weinan E
NeurIPS 2018. End-to-End Symmetry Preserving Inter-Atomic Potential Energy Model for Finite and Extended Systems. Linfeng Zhang, Jiequn Han, Han Wang, Wissam Saidi, Roberto Car, Weinan E
NeurIPS 2018. How SGD Selects the Global Minima in Over-Parameterized Learning: A Dynamical Stability Perspective. Lei Wu, Chao Ma, Weinan E
ICML 2017. Stochastic Modified Equations and Adaptive Stochastic Gradient Algorithms. Qianxiao Li, Cheng Tai, Weinan E
ICLR 2016. Convolutional Neural Networks with Low-Rank Regularization. Cheng Tai, Tong Xiao, Xiaogang Wang, Weinan E
JMLR 2016. Multiscale Adaptive Representation of Signals: I. The Basic Framework. Cheng Tai, Weinan E