Kale, Satyen

73 publications

ICLR 2025 Efficient Stagewise Pretraining via Progressive Subnetworks Abhishek Panigrahi, Nikunj Saunshi, Kaifeng Lyu, Sobhan Miryoosefi, Sashank J. Reddi, Satyen Kale, Sanjiv Kumar
NeurIPS 2025 Understanding Outer Optimizers in Local SGD: Learning Rates, Momentum, and Acceleration Ahmed Khaled, Satyen Kale, Arthur Douillard, Chi Jin, Rob Fergus, Manzil Zaheer
ICMLW 2024 Asynchronous Local-SGD Training for Language Modeling Bo Liu, Rachita Chhaparia, Arthur Douillard, Satyen Kale, Andrei Alex Rusu, Jiajun Shen, Arthur Szlam, Marc'Aurelio Ranzato
ICML 2024 Improved Differentially Private and Lazy Online Convex Optimization: Lower Regret Without Smoothness Requirements Naman Agarwal, Satyen Kale, Karan Singh, Abhradeep Guha Thakurta
TMLR 2024 On the Unreasonable Effectiveness of Federated Averaging with Heterogeneous Data Jianyu Wang, Rudrajit Das, Gauri Joshi, Satyen Kale, Zheng Xu, Tong Zhang
ALT 2024 Semi-Supervised Group DRO: Combating Sparsity with Unlabeled Data Pranjal Awasthi, Satyen Kale, Ankit Pensia
ICML 2023 Beyond Uniform Lipschitz Condition in Differentially Private Optimization Rudrajit Das, Satyen Kale, Zheng Xu, Tong Zhang, Sujay Sanghavi
COLT 2023 Differentially Private and Lazy Online Convex Optimization Naman Agarwal, Satyen Kale, Karan Singh, Abhradeep Thakurta
ICML 2023 Efficient Training of Language Models Using Few-Shot Learning Sashank J. Reddi, Sobhan Miryoosefi, Stefani Karp, Shankar Krishnan, Satyen Kale, Seungyeon Kim, Sanjiv Kumar
ICML 2023 On the Convergence of Federated Averaging with Cyclic Client Participation Yae Jee Cho, Pranay Sharma, Gauri Joshi, Zheng Xu, Satyen Kale, Tong Zhang
AISTATS 2022 Federated Functional Gradient Boosting Zebang Shen, Hamed Hassani, Satyen Kale, Amin Karbasi
ICML 2022 Agnostic Learnability of Halfspaces via Logistic Loss Ziwei Ji, Kwangjun Ahn, Pranjal Awasthi, Satyen Kale, Stefani Karp
ALT 2022 Efficient Methods for Online Multiclass Logistic Regression Naman Agarwal, Satyen Kale, Julian Zimmert
NeurIPS 2022 From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent Christopher M. De Sa, Satyen Kale, Jason Lee, Ayush Sekhari, Karthik Sridharan
COLT 2022 Private Matrix Approximation and Geometry of Unitary Orbits Oren Mangoubi, Yikai Wu, Satyen Kale, Abhradeep Thakurta, Nisheeth K. Vishnoi
COLT 2022 Pushing the Efficiency-Regret Pareto Frontier for Online Learning of Portfolios and Quantum States Julian Zimmert, Naman Agarwal, Satyen Kale
NeurIPS 2022 Reproducibility in Optimization: Theoretical Framework and Limits Kwangjun Ahn, Prateek Jain, Ziwei Ji, Satyen Kale, Praneeth Netrapalli, Gil I. Shamir
COLT 2022 Self-Consistency of the Fokker Planck Equation Zebang Shen, Zhenfu Wang, Satyen Kale, Alejandro Ribeiro, Amin Karbasi, Hamed Hassani
ALT 2021 A Deep Conditioning Treatment of Neural Networks Naman Agarwal, Pranjal Awasthi, Satyen Kale
NeurIPS 2021 Breaking the Centralized Barrier for Cross-Device Federated Learning Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian U. Stich, Ananda Theertha Suresh
NeurIPS 2021 Learning with User-Level Privacy Daniel Levy, Ziteng Sun, Kareem Amin, Satyen Kale, Alex Kulesza, Mehryar Mohri, Ananda Theertha Suresh
NeurIPS 2021 SGD: The Role of Implicit Regularization, Batch-Size and Multiple-Epochs Ayush Sekhari, Karthik Sridharan, Satyen Kale
NeurIPS 2020 Estimating Training Data Influence by Tracing Gradient Descent Garima Pruthi, Frederick Liu, Satyen Kale, Mukund Sundararajan
NeurIPS 2020 PAC-Bayes Learning Bounds for Sample-Dependent Priors Pranjal Awasthi, Satyen Kale, Stefani Karp, Mehryar Mohri
ICML 2020 SCAFFOLD: Stochastic Controlled Averaging for Federated Learning Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, Ananda Theertha Suresh
ALT 2019 Algorithmic Learning Theory 2019: Preface Aurélien Garivier, Satyen Kale
NeurIPS 2019 Breaking the Glass Ceiling for Embedding-Based Classifiers for Large Output Spaces Chuan Guo, Ali Mousavi, Xiang Wu, Daniel N. Holtmann-Rice, Satyen Kale, Sashank Reddi, Sanjiv Kumar
ICML 2019 Escaping Saddle Points with Adaptive Gradient Methods Matthew Staib, Sashank Reddi, Satyen Kale, Sanjiv Kumar, Suvrit Sra
NeurIPS 2019 Hypothesis Set Stability and Generalization Dylan J. Foster, Spencer Greenberg, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan
AISTATS 2019 Stochastic Negative Mining for Learning with Large Output Spaces Sashank J. Reddi, Satyen Kale, Felix Yu, Daniel Holtmann-Rice, Jiecao Chen, Sanjiv Kumar
NeurIPS 2018 Adaptive Methods for Nonconvex Optimization Manzil Zaheer, Sashank Reddi, Devendra Sachan, Satyen Kale, Sanjiv Kumar
COLT 2018 Logistic Regression: The Importance of Being Improper Dylan J. Foster, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan
ICML 2018 Loss Decomposition for Fast Learning in Large Output Spaces Ian En-Hsu Yen, Satyen Kale, Felix Yu, Daniel Holtmann-Rice, Sanjiv Kumar, Pradeep Ravikumar
ICLR 2018 On the Convergence of Adam and Beyond Sashank J. Reddi, Satyen Kale, Sanjiv Kumar
NeurIPS 2018 Online Learning of Quantum States Scott Aaronson, Xinyi Chen, Elad Hazan, Satyen Kale, Ashwin Nayak
ICML 2017 Adaptive Feature Selection: Computationally Efficient Online Sparse Linear Regression Under RIP Satyen Kale, Zohar Karnin, Tengyuan Liang, Dávid Pál
NeurIPS 2017 Parameter-Free Online Learning via Model Selection Dylan J. Foster, Satyen Kale, Mehryar Mohri, Karthik Sridharan
COLT 2017 Preface: Conference on Learning Theory (COLT), 2017 Satyen Kale, Ohad Shamir
NeurIPS 2016 Hardness of Online Sleeping Combinatorial Optimization Problems Satyen Kale, Chansoo Lee, Dávid Pál
MLJ 2016 Learning Rotations with Little Regret Elad Hazan, Satyen Kale, Manfred K. Warmuth
COLT 2016 Online Sparse Linear Regression Dean P. Foster, Satyen Kale, Howard J. Karloff
IJCAI 2016 Optimal and Adaptive Algorithms for Online Boosting Alina Beygelzimer, Satyen Kale, Haipeng Luo
AAAI 2015 Budgeted Prediction with Expert Advice Kareem Amin, Satyen Kale, Gerald Tesauro, Deepak S. Turaga
NeurIPS 2015 Online Gradient Boosting Alina Beygelzimer, Elad Hazan, Satyen Kale, Haipeng Luo
ICML 2015 Optimal and Adaptive Algorithms for Online Boosting Alina Beygelzimer, Satyen Kale, Haipeng Luo
COLT 2015 Proceedings of the 28th Conference on Learning Theory, COLT 2015, Paris, France, July 3-6, 2015 Peter Grünwald, Elad Hazan, Satyen Kale
JMLR 2014 Beyond the Regret Minimization Barrier: Optimal Algorithms for Stochastic Strongly-Convex Optimization Elad Hazan, Satyen Kale
COLT 2014 Multiarmed Bandits with Limited Expert Advice Satyen Kale
COLT 2014 Open Problem: Efficient Online Sparse Regression Satyen Kale
ICML 2014 Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits Alekh Agarwal, Daniel Hsu, Satyen Kale, John Langford, Lihong Li, Robert Schapire
NeurIPS 2013 Adaptive Market Making via Online Learning Jacob Abernethy, Satyen Kale
IJCAI 2013 Bargaining for Revenue Shares on Tree Trading Networks Arpita Ghosh, Satyen Kale, Kevin J. Lang, Benjamin Moseley
AISTATS 2012 Contextual Bandit Learning with Predictable Rewards Alekh Agarwal, Miroslav Dudík, Satyen Kale, John Langford, Robert Schapire
ICML 2012 Efficient and Practical Stochastic Subgradient Descent for Nuclear Norm Regularization Haim Avron, Satyen Kale, Shiva Prasad Kasiviswanathan, Vikas Sindhwani
COLT 2012 Near-Optimal Algorithms for Online Matrix Prediction Elad Hazan, Satyen Kale, Shai Shalev-Shwartz
JMLR 2012 Online Submodular Minimization Elad Hazan, Satyen Kale
ICML 2012 Projection-Free Online Learning Elad Hazan, Satyen Kale
COLT 2011 A Simple Multi-Armed Bandit Algorithm with Optimal Variation-Bounded Regret Elad Hazan, Satyen Kale
JMLR 2011 Better Algorithms for Benign Bandits Elad Hazan, Satyen Kale
COLT 2011 Beyond the Regret Minimization Barrier: An Optimal Algorithm for Stochastic Strongly-Convex Optimization Elad Hazan, Satyen Kale
UAI 2011 Efficient Optimal Learning for Contextual Bandits Miroslav Dudík, Daniel J. Hsu, Satyen Kale, Nikos Karampatziakis, John Langford, Lev Reyzin, Tong Zhang
NeurIPS 2011 Newtron: An Efficient Bandit Algorithm for Online Multiclass Prediction Elad Hazan, Satyen Kale
MLJ 2010 Extracting Certainty from Uncertainty: Regret Bounded by Variation in Costs Elad Hazan, Satyen Kale
COLT 2010 Learning Rotations with Little Regret Elad Hazan, Satyen Kale, Manfred K. Warmuth
NeurIPS 2010 Non-Stochastic Bandit Slate Problems Satyen Kale, Lev Reyzin, Robert E. Schapire
COLT 2010 On-Line Variance Minimization in O(n²) per Trial? Elad Hazan, Satyen Kale, Manfred K. Warmuth
NeurIPS 2009 Beyond Convexity: Online Submodular Minimization Elad Hazan, Satyen Kale
NeurIPS 2009 On Stochastic and Worst-Case Models for Investing Elad Hazan, Satyen Kale
COLT 2008 Extracting Certainty from Uncertainty: Regret Bounded by Variation in Costs Elad Hazan, Satyen Kale
NeurIPS 2007 Computational Equivalence of Fixed Points and No Regret Algorithms, and Convergence to Equilibria Elad Hazan, Satyen Kale
MLJ 2007 Logarithmic Regret Algorithms for Online Convex Optimization Elad Hazan, Amit Agarwal, Satyen Kale
ICML 2006 Algorithms for Portfolio Management Based on the Newton Method Amit Agarwal, Elad Hazan, Satyen Kale, Robert E. Schapire
COLT 2006 Logarithmic Regret Algorithms for Online Convex Optimization Elad Hazan, Adam Kalai, Satyen Kale, Amit Agarwal