Ward, Rachel

18 publications

NeurIPS 2024: "Provable Acceleration of Nesterov's Accelerated Gradient for Asymmetric Matrix Factorization and Linear Neural Networks." Zhenghao Xu, Yuqing Wang, Tuo Zhao, Rachel Ward, Molei Tao.

ICML 2023: "Adaptively Weighted Data Augmentation Consistency Regularization for Robust Optimization Under Concept Shift." Yijun Dong, Yuege Xie, Rachel Ward.

NeurIPS 2023: "Cluster-Aware Semi-Supervised Learning: Relational Knowledge Distillation Provably Learns Clustering." Yijun Dong, Kevin Miller, Qi Lei, Rachel Ward.

NeurIPS 2023: "Convergence of Alternating Gradient Descent for Matrix Factorization." Rachel Ward, Tamara Kolda.

NeurIPS 2023: "Nearly Optimal Bounds for Cyclic Forgetting." William Swartworth, Deanna Needell, Rachel Ward, Mark Kong, Halyun Jeong.

AISTATS 2023: "Sample Efficiency of Data Augmentation Consistency Regularization." Shuo Yang, Yijun Dong, Rachel Ward, Inderjit S. Dhillon, Sujay Sanghavi, Qi Lei.

NeurIPSW 2023: "TinyGSM: Achieving 80% on GSM8k with One Billion Parameters." Bingbin Liu, Sebastien Bubeck, Ronen Eldan, Janardhan Kulkarni, Yuanzhi Li, Anh Nguyen, Rachel Ward, Yi Zhang.

COLT 2022: "How Catastrophic Can Catastrophic Forgetting Be in Linear Regression?" Itay Evron, Edward Moroshko, Rachel Ward, Nathan Srebro, Daniel Soudry.

COLT 2022: "The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance." Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, Rachel Ward.

NeurIPS 2021: "Bootstrapping the Error of Oja's Algorithm." Robert Lunde, Purnamrita Sarkar, Rachel Ward.

COLT 2021: "Streaming K-PCA: Efficient Guarantees for Oja's Algorithm, Beyond Rank-One Updates." De Huang, Jonathan Niles-Weed, Rachel Ward.

JMLR 2020: "AdaGrad Stepsizes: Sharp Convergence over Nonconvex Landscapes." Rachel Ward, Xiaoxia Wu, Leon Bottou.

NeurIPS 2020: "Implicit Regularization and Convergence for Weight Normalization." Xiaoxia Wu, Edgar Dobriban, Tongzheng Ren, Shanshan Wu, Zhiyuan Li, Suriya Gunasekar, Rachel Ward, Qiang Liu.

AISTATS 2020: "Linear Convergence of Adaptive Stochastic Gradient Descent." Yuege Xie, Xiaoxia Wu, Rachel Ward.

ICML 2019: "AdaGrad Stepsizes: Sharp Convergence over Nonconvex Landscapes." Rachel Ward, Xiaoxia Wu, Leon Bottou.

JMLR 2015: "Completing Any Low-Rank Matrix, Provably." Yudong Chen, Srinadh Bhojanapalli, Sujay Sanghavi, Rachel Ward.

ICML 2014: "Coherent Matrix Completion." Yudong Chen, Srinadh Bhojanapalli, Sujay Sanghavi, Rachel Ward.

NeurIPS 2014: "Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz Algorithm." Deanna Needell, Rachel Ward, Nati Srebro.