Shamir, Ohad

129 publications

TMLR 2025 Are Convex Optimization Curves Convex? Guy Barzilai, Ohad Shamir, Moslem Zamani
NeurIPS 2025 Beyond Benign Overfitting in Nadaraya-Watson Interpolators Daniel Barzilai, Guy Kornowski, Ohad Shamir
COLT 2025 Logarithmic Width Suffices for Robust Memorization Amitsour Egosi, Gilad Yehudai, Ohad Shamir
COLT 2025 The Oracle Complexity of Simplex-Based Matrix Games: Linear Separability and Nash Equilibria Guy Kornowski, Ohad Shamir
NeurIPS 2025 When Models Don’t Collapse: On the Consistency of Iterative MLE Daniel Barzilai, Ohad Shamir
JMLR 2024 An Algorithm with Optimal Dimension-Dependence for Zero-Order Nonsmooth Nonconvex Stochastic Optimization Guy Kornowski, Ohad Shamir
COLT 2024 Depth Separation in Norm-Bounded Infinite-Width Neural Networks Suzanna Parkinson, Greg Ongie, Rebecca Willett, Ohad Shamir, Nathan Srebro
ICML 2024 Generalization in Kernel Regression Under Realistic Assumptions Daniel Barzilai, Ohad Shamir
NeurIPSW 2024 On the Hardness of Meaningful Local Guarantees in Nonsmooth Nonconvex Optimization Guy Kornowski, Swati Padmanabhan, Ohad Shamir
COLT 2024 Open Problem: Anytime Convergence Rate of Gradient Descent Guy Kornowski, Ohad Shamir
NeurIPS 2023 Accelerated Zeroth-Order Method for Non-Smooth Stochastic Convex Optimization Problem with Infinite Variance Nikita Kornilov, Ohad Shamir, Aleksandr Lobanov, Darina Dvinskikh, Alexander Gasnikov, Innokentiy Shibaev, Eduard Gorbunov, Samuel Horváth
NeurIPSW 2023 An Algorithm with Optimal Dimension-Dependence for Zero-Order Nonsmooth Nonconvex Stochastic Optimization Guy Kornowski, Ohad Shamir
COLT 2023 Deterministic Nonsmooth Nonconvex Optimization Michael Jordan, Guy Kornowski, Tianyi Lin, Ohad Shamir, Manolis Zampetakis
NeurIPS 2023 From Tempered to Benign Overfitting in ReLU Neural Networks Guy Kornowski, Gilad Yehudai, Ohad Shamir
ALT 2023 Implicit Regularization Towards Rank Minimization in ReLU Networks Nadav Timor, Gal Vardi, Ohad Shamir
NeurIPS 2023 Initialization-Dependent Sample Complexity of Linear Predictors and Neural Networks Roey Magen, Ohad Shamir
JMLR 2023 The Implicit Bias of Benign Overfitting Ohad Shamir
NeurIPS 2022 Gradient Methods Provably Converge to Non-Robust Networks Gal Vardi, Gilad Yehudai, Ohad Shamir
NeurIPS 2022 On Margin Maximization in Linear and ReLU Networks Gal Vardi, Ohad Shamir, Nati Srebro
NeurIPSW 2022 On the Complexity of Finding Small Subgradients in Nonsmooth Optimization Guy Kornowski, Ohad Shamir
ICLR 2022 On the Optimal Memorization Power of ReLU Neural Networks Gal Vardi, Gilad Yehudai, Ohad Shamir
JMLR 2022 Oracle Complexity in Nonsmooth Nonconvex Optimization Guy Kornowski, Ohad Shamir
NeurIPS 2022 Reconstructing Training Data from Trained Neural Networks Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani
COLT 2022 The Implicit Bias of Benign Overfitting Ohad Shamir
IJCAI 2022 The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication (Extended Abstract) Blake E. Woodworth, Brian Bullins, Ohad Shamir, Nathan Srebro
NeurIPS 2022 The Sample Complexity of One-Hidden-Layer Neural Networks Gal Vardi, Ohad Shamir, Nati Srebro
COLT 2022 Width Is Less Important than Depth in ReLU Neural Networks Gal Vardi, Gilad Yehudai, Ohad Shamir
NeurIPS 2021 A Stochastic Newton Algorithm for Distributed Convex Optimization Brian Bullins, Kshitij Patel, Ohad Shamir, Nathan Srebro, Blake E. Woodworth
JMLR 2021 Gradient Methods Never Overfit on Separable Data Ohad Shamir
COLT 2021 Implicit Regularization in ReLU Networks with the Square Loss Gal Vardi, Ohad Shamir
NeurIPS 2021 Learning a Single Neuron with Bias Using Gradient Descent Gal Vardi, Gilad Yehudai, Ohad Shamir
NeurIPS 2021 Oracle Complexity in Nonsmooth Nonconvex Optimization Guy Kornowski, Ohad Shamir
NeurIPS 2021 Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems Itay Safran, Ohad Shamir
COLT 2021 Size and Depth Separation in Approximating Benign Functions with Neural Networks Gal Vardi, Daniel Reichman, Toniann Pitassi, Ohad Shamir
COLT 2021 The Connection Between Approximation, Depth Separation and Learnability in Neural Networks Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir
COLT 2021 The Effects of Mild Over-Parameterization on the Optimization Landscape of Shallow ReLU Neural Networks Itay M. Safran, Gilad Yehudai, Ohad Shamir
COLT 2021 The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication Blake E. Woodworth, Brian Bullins, Ohad Shamir, Nathan Srebro
ALT 2020 A Tight Convergence Analysis for Stochastic Gradient Descent with Delayed Updates Yossi Arjevani, Ohad Shamir, Nathan Srebro
COLT 2020 How Good Is SGD with Random Shuffling? Itay Safran, Ohad Shamir
ICML 2020 Is Local SGD Better than Minibatch SGD? Blake Woodworth, Kumar Kshitij Patel, Sebastian Stich, Zhen Dai, Brian Bullins, Brendan McMahan, Ohad Shamir, Nathan Srebro
NeurIPS 2020 Neural Networks with Small Weights and Depth-Separation Barriers Gal Vardi, Ohad Shamir
ICML 2020 Proving the Lottery Ticket Hypothesis: Pruning Is All You Need Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir
ICML 2020 The Complexity of Finding Stationary Points with Stochastic Gradient Descent Yoel Drori, Ohad Shamir
COLT 2019 Depth Separations in Neural Networks: What Is Actually Being Separated? Itay Safran, Ronen Eldan, Ohad Shamir
COLT 2019 Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks Ohad Shamir
NeurIPS 2019 On the Power and Limitations of Random Features for Understanding Neural Networks Gilad Yehudai, Ohad Shamir
COLT 2019 Space Lower Bounds for Linear Prediction in the Streaming Model Yuval Dagan, Gil Kur, Ohad Shamir
COLT 2019 The Complexity of Making the Gradient Small in Stochastic Convex Optimization Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake Woodworth
NeurIPS 2018 Are ResNets Provably Better than Linear Predictors? Ohad Shamir
ALT 2018 Bandit Regret Scaling with the Effective Loss Range Nicolò Cesa-Bianchi, Ohad Shamir
COLT 2018 Detecting Correlations with Little Memory and Communication Yuval Dagan, Ohad Shamir
JMLR 2018 Distribution-Specific Hardness of Learning Neural Networks Ohad Shamir
NeurIPS 2018 Global Non-Convex Optimization with Discretized Diffusions Murat A. Erdogdu, Lester Mackey, Ohad Shamir
COLT 2018 Size-Independent Sample Complexity of Neural Networks Noah Golowich, Alexander Rakhlin, Ohad Shamir
ICML 2018 Spurious Local Minima Are Common in Two-Layer ReLU Neural Networks Itay Safran, Ohad Shamir
JMLR 2017 An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback Ohad Shamir
ICML 2017 Communication-Efficient Algorithms for Distributed Stochastic Principal Component Analysis Dan Garber, Ohad Shamir, Nathan Srebro
ICML 2017 Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks Itay Safran, Ohad Shamir
ICML 2017 Failures of Gradient-Based Deep Learning Shai Shalev-Shwartz, Ohad Shamir, Shaked Shammah
ICML 2017 Online Learning with Local Permutations and Delayed Feedback Ohad Shamir, Liran Szlak
ICML 2017 Oracle Complexity of Second-Order Methods for Finite-Sum Problems Yossi Arjevani, Ohad Shamir
COLT 2017 Preface: Conference on Learning Theory (COLT), 2017 Satyen Kale, Ohad Shamir
ICML 2016 Convergence of Stochastic Gradient Descent for PCA Ohad Shamir
NeurIPS 2016 Dimension-Free Iteration Complexity of Finite Sum Optimization Problems Yossi Arjevani, Ohad Shamir
ICML 2016 Fast Stochastic Algorithms for SVD and PCA: Convergence Properties and Convexity Ohad Shamir
ICML 2016 Multi-Player Bandits – A Musical Chairs Approach Jonathan Rosenski, Ohad Shamir, Liran Szlak
JMLR 2016 On Lower and Upper Bounds in Smooth and Strongly Convex Optimization Yossi Arjevani, Shai Shalev-Shwartz, Ohad Shamir
ICML 2016 On the Iteration Complexity of Oblivious First-Order Optimization Algorithms Yossi Arjevani, Ohad Shamir
ICML 2016 On the Quality of the Initial Basin in Overspecified Neural Networks Itay Safran, Ohad Shamir
COLT 2016 Proceedings of the 29th Conference on Learning Theory, COLT 2016, New York, USA, June 23-26, 2016 Vitaly Feldman, Alexander Rakhlin, Ohad Shamir
COLT 2016 The Power of Depth for Feedforward Neural Networks Ronen Eldan, Ohad Shamir
NeurIPS 2016 Without-Replacement Sampling for Stochastic Gradient Methods Ohad Shamir
ICML 2015 A Stochastic PCA and SVD Algorithm with an Exponential Convergence Rate Ohad Shamir
ICML 2015 Attribute Efficient Linear Regression with Distribution-Dependent Sampling Doron Kukliansky, Ohad Shamir
NeurIPS 2015 Communication Complexity of Distributed Convex Learning and Optimization Yossi Arjevani, Ohad Shamir
AISTATS 2015 Graph Approximation and Clustering on a Budget Ethan Fetaya, Ohad Shamir, Shimon Ullman
COLT 2015 On the Complexity of Bandit Linear Optimization Ohad Shamir
COLT 2015 On the Complexity of Learning with Kernels Nicolò Cesa-Bianchi, Yishay Mansour, Ohad Shamir
JMLR 2015 The Sample Complexity of Learning Linear Predictors with the Squared Loss Ohad Shamir
ICML 2014 Communication-Efficient Distributed Optimization Using an Approximate Newton-Type Method Ohad Shamir, Nati Srebro, Tong Zhang
NeurIPS 2014 Fundamental Limits of Online and Distributed Algorithms for Statistical Learning and Estimation Ohad Shamir
JMLR 2014 Matrix Completion with the Trace Norm: Learning, Bounding, and Transducing Ohad Shamir, Shai Shalev-Shwartz
NeurIPS 2014 On the Computational Efficiency of Training Neural Networks Roi Livni, Shai Shalev-Shwartz, Ohad Shamir
AISTATS 2013 Localization and Adaptation in Online Learning Alexander Rakhlin, Ohad Shamir, Karthik Sridharan
COLT 2013 On the Complexity of Bandit and Derivative-Free Stochastic Convex Optimization Ohad Shamir
COLT 2013 Online Learning for Time Series Prediction Oren Anava, Elad Hazan, Shie Mannor, Ohad Shamir
NeurIPS 2013 Online Learning with Switching Costs and Other Adaptive Adversaries Nicolò Cesa-Bianchi, Ofer Dekel, Ohad Shamir
CVPR 2013 Probabilistic Label Trees for Efficient Large Scale Image Classification Baoyuan Liu, Fereshteh Sadeghi, Marshall Tappen, Ohad Shamir, Ce Liu
ICML 2013 Stochastic Gradient Descent for Non-Smooth Optimization: Convergence Results and Optimal Averaging Schemes Ohad Shamir, Tong Zhang
ICML 2012 Decoupling Exploration and Exploitation in Multi-Armed Bandits Orly Avner, Shie Mannor, Ohad Shamir
AISTATS 2012 Learning from Weak Teachers Ruth Urner, Shai Ben-David, Ohad Shamir
ICML 2012 Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization Alexander Rakhlin, Ohad Shamir, Karthik Sridharan
COLT 2012 Open Problem: Is Averaging Needed for Strongly Convex Stochastic Gradient Descent? Ohad Shamir
JMLR 2012 Optimal Distributed Online Prediction Using Mini-Batches Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
NeurIPS 2012 Relax and Randomize: From Value to Algorithms Sasha Rakhlin, Ohad Shamir, Karthik Sridharan
AISTATS 2012 There’s a Hole in My Data Space: Piecewise Predictors for Heterogeneous Learning Problems Ofer Dekel, Ohad Shamir
COLT 2012 Unified Algorithms for Online Learning and Competitive Analysis Niv Buchbinder, Shahar Chen, Joseph (Seffi) Naor, Ohad Shamir
AISTATS 2012 Using More Data to Speed-up Training Time Shai Shalev-Shwartz, Ohad Shamir, Eran Tromer
ICML 2011 Adaptively Learning the Crowd Kernel Omer Tamuz, Ce Liu, Serge J. Belongie, Ohad Shamir, Adam Kalai
NeurIPS 2011 Better Mini-Batch Algorithms via Accelerated Gradient Methods Andrew Cotter, Ohad Shamir, Nati Srebro, Karthik Sridharan
COLT 2011 Collaborative Filtering with the Trace Norm: Learning, Bounding, and Transducing Ohad Shamir, Shai Shalev-Shwartz
NeurIPS 2011 Efficient Learning of Generalized Linear and Single Index Models with Isotonic Regression Sham M. Kakade, Varun Kanade, Ohad Shamir, Adam Kalai
JMLR 2011 Efficient Learning with Partially Observed Attributes Nicolò Cesa-Bianchi, Shai Shalev-Shwartz, Ohad Shamir
NeurIPS 2011 Efficient Online Learning via Randomized Rounding Nicolò Cesa-Bianchi, Ohad Shamir
NeurIPS 2011 From Bandits to Experts: On the Value of Side-Observations Shie Mannor, Ohad Shamir
ICML 2011 Large-Scale Convex Minimization with a Low-Rank Constraint Shai Shalev-Shwartz, Alon Gonen, Ohad Shamir
IJCAI 2011 Learning Linear and Kernel Predictors with the 0-1 Loss Function Shai Shalev-Shwartz, Ohad Shamir, Karthik Sridharan
NeurIPS 2011 Learning with the Weighted Trace-Norm Under Arbitrary Sampling Distributions Rina Foygel, Ohad Shamir, Nati Srebro, Ruslan Salakhutdinov
ICML 2011 Optimal Distributed Online Prediction Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
AAAI 2011 Quantity Makes Quality: Learning with Partial Views Nicolò Cesa-Bianchi, Shai Shalev-Shwartz, Ohad Shamir
AISTATS 2011 Spectral Clustering on a Budget Ohad Shamir, Naftali Tishby
ICML 2010 Efficient Learning with Partially Observed Attributes Nicolò Cesa-Bianchi, Shai Shalev-Shwartz, Ohad Shamir
JMLR 2010 Learnability, Stability and Uniform Convergence Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, Karthik Sridharan
AISTATS 2010 Learning Exponential Families in High-Dimensions: Strong Convexity and Sparsity Sham Kakade, Ohad Shamir, Karthik Sridharan, Ambuj Tewari
COLT 2010 Learning Kernel-Based Halfspaces with the Zero-One Loss Shai Shalev-Shwartz, Ohad Shamir, Karthik Sridharan
MLJ 2010 Learning to Classify with Missing and Corrupted Features Ofer Dekel, Ohad Shamir, Lin Xiao
AISTATS 2010 Multiclass-Multilabel Classification with More Classes than Examples Ofer Dekel, Ohad Shamir
COLT 2010 Online Learning of Noisy Data with Kernels Nicolò Cesa-Bianchi, Shai Shalev-Shwartz, Ohad Shamir
MLJ 2010 Stability and Model Selection in K-Means Clustering Ohad Shamir, Naftali Tishby
ICML 2009 Good Learners for Evil Teachers Ofer Dekel, Ohad Shamir
COLT 2009 Learnability and Stability in the General Learning Setting Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, Karthik Sridharan
COLT 2009 Stochastic Convex Optimization Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, Karthik Sridharan
COLT 2009 The Complexity of Improperly Learning Large Margin Halfspaces Shai Shalev-Shwartz, Ohad Shamir, Karthik Sridharan
COLT 2009 Vox Populi: Collecting High-Quality Labels from a Crowd Ofer Dekel, Ohad Shamir
ALT 2008 Learning and Generalization with the Information Bottleneck Ohad Shamir, Sivan Sabato, Naftali Tishby
ICML 2008 Learning to Classify with Missing and Corrupted Features Ofer Dekel, Ohad Shamir
COLT 2008 Model Selection and Stability in K-Means Clustering Ohad Shamir, Naftali Tishby
NeurIPS 2008 On the Reliability of Clustering Stability in the Large Sample Regime Ohad Shamir, Naftali Tishby
NeurIPS 2007 Cluster Stability for Finite Samples Ohad Shamir, Naftali Tishby