Sedghi, Hanie

22 publications

ICLR 2025. Improving Large Language Model Planning with Action Sequence Similarity. Xinran Zhao, Hanie Sedghi, Bernd Bohnet, Dale Schuurmans, Azade Nova.
TMLR 2024. Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models. Avi Singh, John D Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia, Peter J Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron T Parisi, Abhishek Kumar, Alexander A Alemi, Alex Rizkowsky, Azade Nova, Ben Adlam, Bernd Bohnet, Gamaleldin Fathy Elsayed, Hanie Sedghi, Igor Mordatch, Isabelle Simpson, Izzeddin Gur, Jasper Snoek, Jeffrey Pennington, Jiri Hron, Kathleen Kenealy, Kevin Swersky, Kshiteej Mahajan, Laura A Culp, Lechao Xiao, Maxwell Bileschi, Noah Constant, Roman Novak, Rosanne Liu, Tris Warkentin, Yamini Bansal, Ethan Dyer, Behnam Neyshabur, Jascha Sohl-Dickstein, Noah Fiedel.
ICML 2023. Can Neural Network Memorization Be Localized? Pratyush Maini, Michael Curtis Mozer, Hanie Sedghi, Zachary Chase Lipton, J Zico Kolter, Chiyuan Zhang.
ICLR 2023. Leveraging Unlabeled Data to Track Memorization. Mahsa Forouzesh, Hanie Sedghi, Patrick Thiran.
ICLR 2023. REPAIR: REnormalizing Permuted Activations for Interpolation Repair. Keller Jordan, Hanie Sedghi, Olga Saukh, Rahim Entezari, Behnam Neyshabur.
ICLRW 2023. The Role of Pre-Training Data in Transfer Learning. Rahim Entezari, Mitchell Wortsman, Olga Saukh, M. Moein Shariatnia, Hanie Sedghi, Ludwig Schmidt.
ICLR 2022. Exploring the Limits of Large Scale Pre-Training. Samira Abnar, Mostafa Dehghani, Behnam Neyshabur, Hanie Sedghi.
ICLR 2022. Leveraging Unlabeled Data to Predict Out-of-Distribution Performance. Saurabh Garg, Sivaraman Balakrishnan, Zachary Chase Lipton, Behnam Neyshabur, Hanie Sedghi.
ICLR 2022. The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks. Rahim Entezari, Hanie Sedghi, Olga Saukh, Behnam Neyshabur.
NeurIPSW 2021. Avoiding Spurious Correlations: Bridging Theory and Practice. Thao Nguyen, Vaishnavh Nagarajan, Hanie Sedghi, Behnam Neyshabur.
NeurIPSW 2021. Leveraging Unlabeled Data to Predict Out-of-Distribution Performance. Saurabh Garg, Sivaraman Balakrishnan, Zachary Chase Lipton, Behnam Neyshabur, Hanie Sedghi.
ICLR 2021. The Deep Bootstrap Framework: Good Online Learners Are Good Offline Generalizers. Preetum Nakkiran, Behnam Neyshabur, Hanie Sedghi.
ICLR 2020. Generalization Bounds for Deep Convolutional Neural Networks. Philip M. Long, Hanie Sedghi.
ICLR 2020. Size-Free Generalization Bounds for Convolutional Neural Networks. Philip M. Long, Hanie Sedghi.
ICLR 2020. The Intriguing Role of Module Criticality in the Generalization of Deep Networks. Niladri S. Chatterji, Behnam Neyshabur, Hanie Sedghi.
NeurIPS 2020. What Is Being Transferred in Transfer Learning? Behnam Neyshabur, Hanie Sedghi, Chiyuan Zhang.
ICLR 2019. The Singular Values of Convolutional Layers. Hanie Sedghi, Vineet Gupta, Philip M. Long.
UAI 2017. How Good Are My Predictions? Efficiently Approximating Precision-Recall Curves for Massive Datasets. Ashish Sabharwal, Hanie Sedghi.
AISTATS 2016. Provable Tensor Methods for Learning Mixtures of Generalized Linear Models. Hanie Sedghi, Majid Janzamin, Anima Anandkumar.
ICLR 2015. Provable Methods for Training Neural Networks with Sparse Connectivity. Hanie Sedghi, Anima Anandkumar.
ICLR 2015. Score Function Features for Discriminative Learning. Majid Janzamin, Hanie Sedghi, Anima Anandkumar.
NeurIPS 2014. Multi-Step Stochastic ADMM in High Dimensions: Applications to Sparse Optimization and Matrix Decomposition. Hanie Sedghi, Anima Anandkumar, Edmond Jonckheere.