Sanford, Clayton

10 publications

ICML 2025 · Best of Both Worlds: Advantages of Hybrid Graph Sequence Models · Ali Behrouz, Ali Parviz, Mahdi Karami, Clayton Sanford, Bryan Perozzi, Vahab Mirrokni
NeurIPS 2025 · Depth-Width Tradeoffs for Transformers on Graph Tasks · Gilad Yehudai, Clayton Sanford, Maya Bechler-Speicher, Orr Fischer, Ran Gilad-Bachrach, Amir Globerson
NeurIPS 2025 · Fast Attention Mechanisms: A Tale of Parallelism · Jingwen Liu, Hantao Yu, Clayton Sanford, Alexandr Andoni, Daniel Hsu
NeurIPS 2025 · When Do Transformers Outperform Feedforward and Recurrent Networks? A Statistical Perspective · Alireza Mousavi-Hosseini, Clayton Sanford, Denny Wu, Murat A. Erdogdu
ICML 2024 · Transformers, Parallel Computation, and Logarithmic Depth · Clayton Sanford, Daniel Hsu, Matus Telgarsky
NeurIPS 2024 · Understanding Transformer Reasoning Capabilities via Graph Algorithms · Clayton Sanford, Bahare Fatemi, Ethan Hall, Anton Tsitsulin, Mehran Kazemi, Jonathan Halcrow, Bryan Perozzi, Vahab Mirrokni
NeurIPS 2023 · Representational Strengths and Limitations of Transformers · Clayton Sanford, Daniel J. Hsu, Matus J. Telgarsky
NeurIPS 2022 · Learning Single-Index Models with Shallow Neural Networks · Alberto Bietti, Joan Bruna, Clayton Sanford, Min Jae Song
NeurIPS 2022 · On Scrambling Phenomena for Randomly Initialized Recurrent Networks · Vaggos Chatziafratis, Ioannis Panageas, Clayton Sanford, Stelios Stavroulakis
NeurIPS 2021 · Support Vector Machines and Linear Regression Coincide with Very High-Dimensional Features · Navid Ardeshir, Clayton Sanford, Daniel J. Hsu