ML Anthology
Yehudai, Gilad
24 publications
NeurIPS 2025
Compositional Reasoning with Transformers, RNNs, and Chain of Thought
Gilad Yehudai, Noah Amsel, Joan Bruna

NeurIPS 2025
Depth-Width Tradeoffs for Transformers on Graph Tasks
Gilad Yehudai, Clayton Sanford, Maya Bechler-Speicher, Orr Fischer, Ran Gilad-Bachrach, Amir Globerson

NeurIPS 2025
Emergence of Linear Truth Encodings in Language Models
Shauli Ravfogel, Gilad Yehudai, Tal Linzen, Joan Bruna, Alberto Bietti

AISTATS 2025
Locally Optimal Descent for Dynamic Stepsize Scheduling
Gilad Yehudai, Alon Cohen, Amit Daniely, Yoel Drori, Tomer Koren, Mariano Schain

COLT 2025
Logarithmic Width Suffices for Robust Memorization
Amitsour Egosi, Gilad Yehudai, Ohad Shamir

ICLR 2025
Quality over Quantity in Attention Layers: When Adding More Heads Hurts
Noah Amsel, Gilad Yehudai, Joan Bruna

NeurIPS 2024
MALT Powers up Adversarial Attacks
Odelia Melamed, Gilad Yehudai, Adi Shamir

NeurIPSW 2024
On the Reconstruction of Training Data from Group Invariant Networks
Ran Elbaz, Gilad Yehudai, Meirav Galun, Haggai Maron

ALT 2024
RedEx: Beyond Fixed Representation Methods via Convex Optimization
Amit Daniely, Mariano Schain, Gilad Yehudai

NeurIPS 2023
Adversarial Examples Exist in Two-Layer ReLU Networks for Low Dimensional Linear Subspaces
Odelia Melamed, Gilad Yehudai, Gal Vardi

NeurIPS 2023
Deconstructing Data Reconstruction: Multiclass, Weight Decay and General Losses
Gon Buzaglo, Niv Haim, Gilad Yehudai, Gal Vardi, Yakir Oz, Yaniv Nikankin, Michal Irani

NeurIPS 2023
From Tempered to Benign Overfitting in ReLU Neural Networks
Guy Kornowski, Gilad Yehudai, Ohad Shamir

ICLRW 2023
Reconstructing Training Data from Multiclass Neural Networks
Gon Buzaglo, Niv Haim, Gilad Yehudai, Gal Vardi, Michal Irani

NeurIPS 2022
Gradient Methods Provably Converge to Non-Robust Networks
Gal Vardi, Gilad Yehudai, Ohad Shamir

ICLR 2022
On the Optimal Memorization Power of ReLU Neural Networks
Gal Vardi, Gilad Yehudai, Ohad Shamir

NeurIPS 2022
Reconstructing Training Data from Trained Neural Networks
Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani

COLT 2022
Width Is Less Important than Depth in ReLU Neural Networks
Gal Vardi, Gilad Yehudai, Ohad Shamir

ICML 2021
From Local Structures to Size Generalization in Graph Neural Networks
Gilad Yehudai, Ethan Fetaya, Eli Meirom, Gal Chechik, Haggai Maron

NeurIPS 2021
Learning a Single Neuron with Bias Using Gradient Descent
Gal Vardi, Gilad Yehudai, Ohad Shamir
COLT 2021
The Connection Between Approximation, Depth Separation and Learnability in Neural Networks
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir

COLT 2021
The Effects of Mild Over-Parameterization on the Optimization Landscape of Shallow ReLU Neural Networks
Itay M. Safran, Gilad Yehudai, Ohad Shamir
COLT 2020
Learning a Single Neuron with Gradient Methods
Gilad Yehudai, Ohad Shamir
ICML 2020
Proving the Lottery Ticket Hypothesis: Pruning Is All You Need
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir
NeurIPS 2019
On the Power and Limitations of Random Features for Understanding Neural Networks
Gilad Yehudai, Ohad Shamir