Bansal, Yamini

9 publications

TMLR 2024. Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models. Avi Singh, John D Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia, Peter J Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron T Parisi, Abhishek Kumar, Alexander A Alemi, Alex Rizkowsky, Azade Nova, Ben Adlam, Bernd Bohnet, Gamaleldin Fathy Elsayed, Hanie Sedghi, Igor Mordatch, Isabelle Simpson, Izzeddin Gur, Jasper Snoek, Jeffrey Pennington, Jiri Hron, Kathleen Kenealy, Kevin Swersky, Kshiteej Mahajan, Laura A Culp, Lechao Xiao, Maxwell Bileschi, Noah Constant, Roman Novak, Rosanne Liu, Tris Warkentin, Yamini Bansal, Ethan Dyer, Behnam Neyshabur, Jascha Sohl-Dickstein, Noah Fiedel
TMLR 2023. Empirical Limitations of the NTK for Understanding Scaling Laws in Deep Learning. Nikhil Vyas, Yamini Bansal, Preetum Nakkiran
ICML 2023. The Unreasonable Effectiveness of Few-Shot Learning for Machine Translation. Xavier Garcia, Yamini Bansal, Colin Cherry, George Foster, Maxim Krikun, Melvin Johnson, Orhan Firat
ICML 2022. Data Scaling Laws in NMT: The Effect of Noise and Architecture. Yamini Bansal, Behrooz Ghorbani, Ankush Garg, Biao Zhang, Colin Cherry, Behnam Neyshabur, Orhan Firat
ICLR 2021. For Self-Supervised Learning, Rationality Implies Generalization, Provably. Yamini Bansal, Gal Kaplun, Boaz Barak
NeurIPS 2021. Revisiting Model Stitching to Compare Neural Representations. Yamini Bansal, Preetum Nakkiran, Boaz Barak
ICLR 2020. CZ-GEM: A Framework for Disentangled Representation Learning. Akash Srivastava, Yamini Bansal, Yukun Ding, Bernhard Egger, Prasanna Sattigeri, Josh Tenenbaum, David D. Cox, Dan Gutfreund
ICLR 2020. Deep Double Descent: Where Bigger Models and More Data Hurt. Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, Ilya Sutskever
ICLR 2018. On the Information Bottleneck Theory of Deep Learning. Andrew Michael Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan Daniel Tracey, David Daniel Cox