Schwab, David J.

18 publications

NeurIPS 2025. Generalization vs Specialization Under Concept Shift. Alex Nguyen, David J. Schwab, Vudtiwat Ngampruetikorn.
ICML 2025. When Can In-Context Learning Generalize Out of Task Distribution? Chase Goddard, Lindsay M. Smith, Vudtiwat Ngampruetikorn, David J. Schwab.
NeurIPSW 2024. Generalization vs Specialization Under Concept Shift. Alex Nguyen, David J. Schwab, Vudtiwat Ngampruetikorn.
NeurIPSW 2024. Model Recycling: Model Component Reuse to Promote In-Context Learning. Lindsay M. Smith, Chase Goddard, Vudtiwat Ngampruetikorn, David J. Schwab.
NeurIPSW 2024. Specialization-Generalization Transition in Exemplar-Based In-Context Learning. Chase Goddard, Lindsay M. Smith, Vudtiwat Ngampruetikorn, David J. Schwab.
ICLRW 2023. AWE: Adaptive Weight-Space Ensembling for Few-Shot Fine-Tuning. Jean-Christophe Gagnon-Audet, Ricardo Pio Monti, David J. Schwab.
ICLR 2023. Don’t Forget the Nullspace! Nullspace Occupancy as a Mechanism for Out of Distribution Failure. Daksh Idnani, Vivek Madan, Naman Goyal, David J. Schwab, Shanmukha Ramakrishna Vedantam.
ICMLW 2023. Understanding Energy-Based Modeling of Proteins via an Empirically Motivated Minimal Ground Truth Model. Peter William Fields, Vudtiwat Ngampruetikorn, Rama Ranganathan, David J. Schwab, Stephanie Palmer.
NeurIPS 2022. Information Bottleneck Theory of High-Dimensional Regression: Relevancy, Efficiency and Optimality. Vudtiwat Ngampruetikorn, David J. Schwab.
NeurIPS 2021. An Empirical Investigation of Domain Generalization with Empirical Risk Minimizers. Ramakrishna Vedantam, David Lopez-Paz, David J. Schwab.
NeurIPSW 2021. Learning Background Invariance Improves Generalization and Robustness in Self-Supervised Learning on ImageNet and Beyond. Chaitanya Ryali, David J. Schwab, Ari S. Morcos.
NeurIPS 2021. Perturbation Theory for the Information Bottleneck. Vudtiwat Ngampruetikorn, David J. Schwab.
ICLR 2021. Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs. Jonathan Frankle, David J. Schwab, Ari S. Morcos.
NeurIPS 2020. Learning Optimal Representations with the Decodable Information Bottleneck. Yann Dubois, Douwe Kiela, David J. Schwab, Ramakrishna Vedantam.
ICLR 2020. The Early Phase of Neural Network Training. Jonathan Frankle, David J. Schwab, Ari S. Morcos.
NeurIPS 2018. Learning to Share and Hide Intentions Using Information Regularization. DJ Strouse, Max Kleiman-Weiner, Josh Tenenbaum, Matt Botvinick, David J. Schwab.
NeurIPS 2016. Supervised Learning with Tensor Networks. Edwin Stoudenmire, David J. Schwab.
UAI 2016. The Deterministic Information Bottleneck. DJ Strouse, David J. Schwab.