Ge, Rong

76 publications

ICLR 2025 For Better or for Worse? Learning Minimum Variance Features with Label Augmentation Muthu Chidambaram, Rong Ge
NeurIPS 2025 Learning-Augmented Algorithms for $k$-Median via Online Learning Anish Hebbar, Rong Ge, Amit Kumar, Debmalya Panigrahi
ICLR 2025 Reassessing How to Compare and Improve the Calibration of Machine Learning Models Muthu Chidambaram, Rong Ge
ICLR 2025 Task Descriptors Help Transformers Learn Linear Models In-Context Ruomin Huang, Rong Ge
NeurIPS 2024 How Does Gradient Descent Learn Features --- a Local Analysis for Regularized Two-Layer Neural Networks Mo Zhou, Rong Ge
NeurIPS 2024 Linear Transformers Are Versatile In-Context Learners Max Vladymyrov, Johannes von Oswald, Mark Sandler, Rong Ge
ICMLW 2024 Linear Transformers Are Versatile In-Context Learners Max Vladymyrov, Johannes von Oswald, Mark Sandler, Rong Ge
NeurIPS 2024 Mean-Field Analysis for Learning Subspace-Sparse Polynomials with Gaussian Input Ziang Chen, Rong Ge
ICLR 2024 On the Limitations of Temperature Scaling for Distributions with Overlaps Muthu Chidambaram, Rong Ge
ICMLW 2024 Task Descriptors Help Transformers Learn Linear Models In-Context Ruomin Huang, Rong Ge
NeurIPS 2023 Connecting Pre-Trained Language Model and Downstream Task via Properties of Representation Chenwei Wu, Holden Lee, Rong Ge
ICLR 2023 Depth Separation with Multilayer Mean-Field Networks Yunwei Ren, Mo Zhou, Rong Ge
NeurIPSW 2023 Do Transformers Parse While Predicting the Masked Word? Haoyu Zhao, Abhishek Panigrahi, Rong Ge, Sanjeev Arora
ICML 2023 Hiding Data Helps: On the Benefits of Masking for Sparse Coding Muthu Chidambaram, Chenwei Wu, Yu Cheng, Rong Ge
NeurIPSW 2023 How Does Gradient Descent Learn Features --- a Local Analysis for Regularized Two-Layer Neural Networks Mo Zhou, Rong Ge
ICML 2023 Implicit Regularization Leads to Benign Overfitting for Sparse Linear Regression Mo Zhou, Rong Ge
NeurIPSW 2023 Multi-Head CLIP: Improving CLIP with Diverse Representations and Flat Minima Mo Zhou, Xiong Zhou, Li Erran Li, Stefano Ermon, Rong Ge
ICLR 2023 Plateau in Monotonic Linear Interpolation --- a "Biased" View of Loss Landscape for Deep Networks Xiang Wang, Annie N. Wang, Mo Zhou, Rong Ge
ICML 2023 Provably Learning Diverse Features in Multi-View Data with Midpoint Mixup Muthu Chidambaram, Xiang Wang, Chenwei Wu, Rong Ge
NeurIPS 2023 Robust Second-Order Nonconvex Optimization and Its Application to Low Rank Matrix Sensing Shuyao Li, Yu Cheng, Ilias Diakonikolas, Jelena Diakonikolas, Rong Ge, Stephen Wright
NeurIPS 2023 Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample Complexity for Learning Single Index Models Alex Damian, Eshaan Nichani, Rong Ge, Jason Lee
NeurIPSW 2023 The Role of Linguistic Priors in Measuring Compositional Generalization of Vision-Language Models Chenwei Wu, Patrick Haffner, Li Erran Li, Stefano Ermon, Rong Ge
ICLR 2023 Understanding Edge-of-Stability Training Dynamics with a Minimalist Example Xingyu Zhu, Zixuan Wang, Xiang Wang, Mo Zhou, Rong Ge
ICLR 2023 Understanding the Robustness of Self-Supervised Learning Through Topic Modeling Zeping Luo, Shiyou Wu, Cindy Weng, Mo Zhou, Rong Ge
ICML 2022 Extracting Latent State Representations with Linear Dynamics from Rich Observations Abraham Frandsen, Rong Ge, Holden Lee
ICML 2022 Online Algorithms with Multiple Predictions Keerti Anand, Rong Ge, Amit Kumar, Debmalya Panigrahi
NeurIPS 2022 Outlier-Robust Sparse Estimation via Non-Convex Optimization Yu Cheng, Ilias Diakonikolas, Rong Ge, Shivam Gupta, Daniel Kane, Mahdi Soltanolkotabi
ICLR 2022 Towards Understanding the Data Dependency of Mixup-Style Training Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge
COLT 2021 A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network Mo Zhou, Rong Ge, Chi Jin
NeurIPS 2021 A Regression Approach to Learning-Augmented Online Algorithms Keerti Anand, Rong Ge, Amit Kumar, Debmalya Panigrahi
ALT 2021 Efficient Sampling from the Bingham Distribution Rong Ge, Holden Lee, Jianfeng Lu, Andrej Risteski
ICML 2021 Guarantees for Tuning the Step Size Using a Learning-to-Learn Approach Xiang Wang, Shuai Yuan, Chenwei Wu, Rong Ge
NeurIPS 2021 Understanding Deflation Process in Over-Parametrized Tensor Decomposition Rong Ge, Yunwei Ren, Xiang Wang, Mo Zhou
NeurIPS 2020 Beyond Lazy Training for Over-Parameterized Tensor Decomposition Xiang Wang, Chenwei Wu, Jason Lee, Tengyu Ma, Rong Ge
ICML 2020 Customizing ML Predictions for Online Algorithms Keerti Anand, Rong Ge, Debmalya Panigrahi
ICML 2020 High-Dimensional Robust Mean Estimation via Gradient Descent Yu Cheng, Ilias Diakonikolas, Rong Ge, Mahdi Soltanolkotabi
NeurIPS 2019 Explaining Landscape Connectivity of Low-Cost Solutions for Multilayer Nets Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Rong Ge, Sanjeev Arora
COLT 2019 Faster Algorithms for High-Dimensional Robust Covariance Estimation Yu Cheng, Ilias Diakonikolas, Rong Ge, David P. Woodruff
ICLR 2019 Learning Two-Layer Neural Networks with Symmetric Inputs Rong Ge, Rohith Kuditipudi, Zhize Li, Xiang Wang
COLT 2019 Open Problem: Do Good Algorithms Necessarily Query Bad Points? Rong Ge, Prateek Jain, Sham M. Kakade, Rahul Kidambi, Dheeraj M. Nagaraj, Praneeth Netrapalli
FnTML 2019 Spectral Learning on Matrices and Tensors Majid Janzamin, Rong Ge, Jean Kossaifi, Anima Anandkumar
COLT 2019 Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang
NeurIPS 2019 The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure for Least Squares Rong Ge, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli
ICLR 2019 Understanding Composition of Word Embeddings via Tensor Decomposition Abraham Frandsen, Rong Ge
NeurIPS 2018 Beyond Log-Concavity: Provable Guarantees for Sampling Multi-Modal Distributions Using Simulated Tempering Langevin Monte Carlo Holden Lee, Andrej Risteski, Rong Ge
ICML 2018 Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator Maryam Fazel, Rong Ge, Sham Kakade, Mehran Mesbahi
ICLR 2018 Learning One-Hidden-Layer Neural Networks with Landscape Design Rong Ge, Jason D. Lee, Tengyu Ma
COLT 2018 Non-Convex Matrix Completion Against a Semi-Random Adversary Yu Cheng, Rong Ge
NeurIPS 2018 On the Local Minima of the Empirical Risk Chi Jin, Lydia T. Liu, Rong Ge, Michael I. Jordan
ICML 2018 Stronger Generalization Bounds for Deep Nets via a Compression Approach Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang
JMLR 2017 Analyzing Tensor Power Method Dynamics in Overcomplete Regime Animashree Anandkumar, Rong Ge, Majid Janzamin
ICML 2017 Generalization and Equilibrium in Generative Adversarial Nets (GANs) Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, Yi Zhang
COLT 2017 Homotopy Analysis for Tensor PCA Anima Anandkumar, Yuan Deng, Rong Ge, Hossein Mobahi
ICML 2017 How to Escape Saddle Points Efficiently Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan
ICML 2017 No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis Rong Ge, Chi Jin, Yi Zheng
COLT 2017 On the Ability of Neural Nets to Express Distributions Holden Lee, Rong Ge, Tengyu Ma, Andrej Risteski, Sanjeev Arora
NeurIPS 2017 On the Optimization Landscape of Tensor Decompositions Rong Ge, Tengyu Ma
ICML 2016 Efficient Algorithms for Large-Scale Generalized Eigenvector Computation and Canonical Correlation Analysis Rong Ge, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford
COLT 2016 Efficient Approaches for Escaping Higher Order Saddle Points in Non-Convex Optimization Animashree Anandkumar, Rong Ge
NeurIPS 2016 Matrix Completion Has No Spurious Local Minimum Rong Ge, Jason Lee, Tengyu Ma
ICML 2016 Provable Algorithms for Inference in Topic Models Sanjeev Arora, Rong Ge, Frederic Koehler, Tengyu Ma, Ankur Moitra
ICML 2016 Rich Component Analysis Rong Ge, James Zou
COLT 2015 Competing with the Empirical Risk Minimizer in a Single Pass Roy Frostig, Rong Ge, Sham M. Kakade, Aaron Sidford
COLT 2015 Escaping from Saddle Points --- Online Stochastic Gradient for Tensor Decomposition Rong Ge, Furong Huang, Chi Jin, Yang Yuan
ICML 2015 Intersecting Faces: Non-Negative Matrix Factorization with New Guarantees Rong Ge, James Zou
COLT 2015 Learning Overcomplete Latent Variable Models Through Tensor Methods Animashree Anandkumar, Rong Ge, Majid Janzamin
COLT 2015 Simple, Efficient, and Neural Algorithms for Sparse Coding Sanjeev Arora, Rong Ge, Tengyu Ma, Ankur Moitra
ALT 2015 Tensor Decompositions for Learning Latent Variable Models (a Survey for ALT) Anima Anandkumar, Rong Ge, Daniel J. Hsu, Sham M. Kakade, Matus Telgarsky
ICML 2015 Un-Regularizing: Approximate Proximal Point and Faster Stochastic Algorithms for Empirical Risk Minimization Roy Frostig, Rong Ge, Sham Kakade, Aaron Sidford
JMLR 2014 A Tensor Approach to Learning Mixed Membership Community Models Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade
COLT 2014 New Algorithms for Learning Incoherent and Overcomplete Dictionaries Sanjeev Arora, Rong Ge, Ankur Moitra
ICML 2014 Provable Bounds for Learning Some Deep Representations Sanjeev Arora, Aditya Bhaskara, Rong Ge, Tengyu Ma
JMLR 2014 Tensor Decompositions for Learning Latent Variable Models Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, Matus Telgarsky
ICML 2013 A Practical Algorithm for Topic Modeling with Provable Guarantees Sanjeev Arora, Rong Ge, Yonatan Halpern, David Mimno, Ankur Moitra, David Sontag, Yichen Wu, Michael Zhu
COLT 2013 A Tensor Spectral Approach to Learning Mixed Membership Community Models Animashree Anandkumar, Rong Ge, Daniel J. Hsu, Sham M. Kakade
NeurIPS 2012 Provable ICA with Unknown Gaussian Noise, with Implications for Gaussian Mixtures and Autoencoders Sanjeev Arora, Rong Ge, Ankur Moitra, Sushant Sachdeva