Cohen, Nadav

23 publications

ICLR 2025. DeciMamba: Exploring the Length Extrapolation Potential of Mamba. Assaf Ben-Kish, Itamar Zimerman, Shady Abu-Hussein, Nadav Cohen, Amir Globerson, Lior Wolf, Raja Giryes
NeurIPS 2025. Do Neural Networks Need Gradient Descent to Generalize? A Theoretical Study. Yotam Alexander, Yonatan Slutzky, Yuval Ran-Milo, Nadav Cohen
NeurIPS 2025. The Implicit Bias of Structured State Space Models Can Be Poisoned with Clean Labels. Yonatan Slutzky, Yotam Alexander, Noam Razin, Nadav Cohen
ICML 2024. Implicit Bias of Policy Gradient in Linear Quadratic Control: Extrapolation to Unseen Initial States. Noam Razin, Yotam Alexander, Edo Cohen-Karlik, Raja Giryes, Amir Globerson, Nadav Cohen
NeurIPS 2024. Provable Benefits of Complex Parameterizations for Structured State Space Models. Yuval Ran-Milo, Eden Lumbroso, Edo Cohen-Karlik, Raja Giryes, Amir Globerson, Nadav Cohen
ICLR 2023. Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets. Edo Cohen-Karlik, Itamar Menuhin-Gruman, Raja Giryes, Nadav Cohen, Amir Globerson
NeurIPS 2023. On the Ability of Graph Neural Networks to Model Interactions Between Vertices. Noam Razin, Tom Verbin, Nadav Cohen
ICMLW 2023. On the Ability of Graph Neural Networks to Model Interactions Between Vertices. Noam Razin, Tom Verbin, Nadav Cohen
NeurIPS 2023. What Makes Data Suitable for a Locally Connected Neural Network? A Necessary and Sufficient Condition Based on Quantum Entanglement. Yotam Alexander, Nimrod De La Vega, Noam Razin, Nadav Cohen
AISTATS 2022. On the Implicit Bias of Gradient Descent for Temporal Extrapolation. Edo Cohen-Karlik, Avichai Ben David, Nadav Cohen, Amir Globerson
ICML 2022. Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks. Noam Razin, Asaf Maman, Nadav Cohen
NeurIPS 2021. Continuous vs. Discrete Optimization of Deep Neural Networks. Omer Elkabetz, Nadav Cohen
ICML 2021. Implicit Regularization in Tensor Factorization. Noam Razin, Asaf Maman, Nadav Cohen
NeurIPS 2020. Implicit Regularization in Deep Learning May Not Be Explainable by Norms. Noam Razin, Nadav Cohen
ICLR 2019. A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks. Sanjeev Arora, Nadav Cohen, Noah Golowich, Wei Hu
NeurIPS 2019. Implicit Regularization in Deep Matrix Factorization. Sanjeev Arora, Nadav Cohen, Wei Hu, Yuping Luo
ICLR 2018. Boosting Dilated Convolutional Networks with Mixed Tensor Decompositions. Nadav Cohen, Ronen Tamari, Amnon Shashua
ICLR 2018. Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design. Yoav Levine, David Yakira, Nadav Cohen, Amnon Shashua
ICML 2018. On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization. Sanjeev Arora, Nadav Cohen, Elad Hazan
ICLR 2017. Inductive Bias of Deep Convolutional Networks Through Pooling Geometry. Nadav Cohen, Amnon Shashua
ICML 2016. Convolutional Rectifier Networks as Generalized Tensor Decompositions. Nadav Cohen, Amnon Shashua
CVPR 2016. Deep SimNets. Nadav Cohen, Or Sharir, Amnon Shashua
COLT 2016. On the Expressive Power of Deep Learning: A Tensor Analysis. Nadav Cohen, Or Sharir, Amnon Shashua