Wilson, Andrew Gordon

93 publications

ICLR 2025 Bayesian Optimization of Antibodies Informed by a Generative Model of Evolving Sequences Alan Nawzad Amin, Nate Gruver, Yilun Kuang, Yucen Lily Li, Hunter Elliott, Calvin McCarter, Aniruddh Raghu, Peyton Greenside, Andrew Gordon Wilson
ICLR 2025 Compute-Optimal LLMs Provably Generalize Better with Scale Marc Anton Finzi, Sanyam Kapoor, Diego Granziol, Anming Gu, Christopher De Sa, J Zico Kolter, Andrew Gordon Wilson
ICML 2025 Customizing the Inductive Biases of Softmax Attention Using Structured Matrices Yilun Kuang, Noah Amsel, Sanae Lotfi, Shikai Qiu, Andres Potapczynski, Andrew Gordon Wilson
ICML 2025 Enhancing Foundation Models for Time Series Forecasting via Wavelet-Based Tokenization Luca Masserano, Abdul Fatir Ansari, Boran Han, Xiyuan Zhang, Christos Faloutsos, Michael W. Mahoney, Andrew Gordon Wilson, Youngsuk Park, Syama Sundar Rangapuram, Danielle C. Maddix, Bernie Wang
AISTATS 2025 Fine-Tuning with Uncertainty-Aware Priors Makes Vision and Language Foundation Models More Reliable Tim G. J. Rudner, Xiang Pan, Yucen Lily Li, Ravid Shwartz-Ziv, Andrew Gordon Wilson
ICLRW 2025 Flexible Models of Functional Annotations to Variant Effects Using Accelerated Linear Algebra Alan Nawzad Amin, Andres Potapczynski, Andrew Gordon Wilson
NeurIPS 2025 Hyperparameter Transfer Enables Consistent Gains of Matrix-Preconditioned Optimizers Across Scales Shikai Qiu, Zixi Chen, Hoang Phan, Qi Lei, Andrew Gordon Wilson
ICML 2025 Position: Deep Learning Is Not So Mysterious or Different Andrew Gordon Wilson
ICML 2025 Position: Supervised Classifiers Answer the Wrong Questions for OOD Detection Yucen Lily Li, Daohan Lu, Polina Kirichenko, Shikai Qiu, Tim G. J. Rudner, C. Bayan Bruss, Andrew Gordon Wilson
TMLR 2025 Reliable and Responsible Foundation Models Xinyu Yang, Junlin Han, Rishi Bommasani, Jinqi Luo, Wenjie Qu, Wangchunshu Zhou, Adel Bibi, Xiyao Wang, Jaehong Yoon, Elias Stengel-Eskin, Shengbang Tong, Lingfeng Shen, Rafael Rafailov, Runjia Li, Zhaoyang Wang, Yiyang Zhou, Chenhang Cui, Yu Wang, Wenhao Zheng, Huichi Zhou, Jindong Gu, Zhaorun Chen, Peng Xia, Tony Lee, Thomas P Zollo, Vikash Sehwag, Jixuan Leng, Jiuhai Chen, Yuxin Wen, Huan Zhang, Zhun Deng, Linjun Zhang, Pavel Izmailov, Pang Wei Koh, Yulia Tsvetkov, Andrew Gordon Wilson, Jiaheng Zhang, James Zou, Cihang Xie, Hao Wang, Philip Torr, Julian McAuley, David Alvarez-Melis, Florian Tramèr, Kaidi Xu, Suman Jana, Chris Callison-Burch, Rene Vidal, Filippos Kokkinos, Mohit Bansal, Beidi Chen, Huaxiu Yao
ICLRW 2025 Residue-Level Text Conditioning for Protein Language Model Mutation Effect Prediction Dan Berenberg, Nate Gruver, Alan Nawzad Amin, Peter Mørch Groth, Leo Chen, Harsh R. Srivastava, Pascal Notin, Debora Susan Marks, Andrew Gordon Wilson, Kyunghyun Cho, Richard Bonneau
ICML 2025 Scaling Collapse Reveals Universal Dynamics in Compute-Optimally Trained Neural Networks Shikai Qiu, Lechao Xiao, Andrew Gordon Wilson, Jeffrey Pennington, Atish Agarwala
NeurIPS 2025 Small Batch Size Training for Language Models: When Vanilla SGD Works, and Why Gradient Accumulation Is Wasteful Martin Marek, Sanae Lotfi, Aditya Somasundaram, Andrew Gordon Wilson, Micah Goldblum
ICML 2025 Training Flexible Models of Genetic Variant Effects from Functional Annotations Using Accelerated Linear Algebra Alan Nawzad Amin, Andres Potapczynski, Andrew Gordon Wilson
NeurIPS 2025 Why Masking Diffusion Works: Condition on the Jump Schedule for Improved Discrete Diffusion Alan Nawzad Amin, Nate Gruver, Andrew Gordon Wilson
ICLRW 2025 Why Masking Diffusion Works: Condition on the Jump Schedule for Improved Discrete Diffusion Alan Nawzad Amin, Nate Gruver, Andrew Gordon Wilson
ICLR 2024 A Study of Bayesian Neural Network Surrogates for Bayesian Optimization Yucen Lily Li, Tim G. J. Rudner, Andrew Gordon Wilson
NeurIPSW 2024 Bayesian Optimization of Antibodies Informed by a Generative Model of Evolving Sequences Alan Nawzad Amin, Nate Gruver, Yucen Lily Li, Yilun Kuang, Hunter Elliott, Calvin McCarter, Aniruddh Raghu, Peyton Greenside, Andrew Gordon Wilson
TMLR 2024 Chronos: Learning the Language of Time Series Abdul Fatir Ansari, Lorenzo Stella, Ali Caner Turkmen, Xiyuan Zhang, Pedro Mercado, Huibin Shen, Oleksandr Shchur, Syama Sundar Rangapuram, Sebastian Pineda Arango, Shubham Kapoor, Jasper Zschiegner, Danielle C. Maddix, Hao Wang, Michael W. Mahoney, Kari Torkkola, Andrew Gordon Wilson, Michael Bohlke-Schneider, Bernie Wang
ICML 2024 Compute Better Spent: Replacing Dense Layers with Structured Matrices Shikai Qiu, Andres Potapczynski, Marc Anton Finzi, Micah Goldblum, Andrew Gordon Wilson
ICML 2024 Controllable Prompt Tuning for Balancing Group Distributional Robustness Hoang Phan, Andrew Gordon Wilson, Qi Lei
NeurIPSW 2024 Effectively Leveraging Exogenous Information Across Neural Forecasters Andres Potapczynski, Kin G. Olivares, Malcolm Wolff, Andrew Gordon Wilson, Dmitry Efimov, Vincent Quenneville-Belair
ICLR 2024 Fine-Tuned Language Models Generate Stable Inorganic Materials as Text Nate Gruver, Anuroop Sriram, Andrea Madotto, Andrew Gordon Wilson, C. Lawrence Zitnick, Zachary Ward Ulissi
ICMLW 2024 Fine-Tuning with Uncertainty-Aware Priors Makes Vision and Language Foundation Models More Reliable Tim G. J. Rudner, Xiang Pan, Yucen Lily Li, Ravid Shwartz-Ziv, Andrew Gordon Wilson
MLOSS 2024 Fortuna: A Library for Uncertainty Quantification in Deep Learning Gianluca Detommaso, Alberto Gasparin, Michele Donini, Matthias Seeger, Andrew Gordon Wilson, Cedric Archambeau
ICMLW 2024 Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion Hossein Souri, Arpit Bansal, Hamid Kazemi, Liam H Fowl, Aniruddha Saha, Jonas Geiping, Andrew Gordon Wilson, Rama Chellappa, Tom Goldstein, Micah Goldblum
NeurIPSW 2024 LLMForecaster: Improving Seasonal Event Forecasts with Unstructured Textual Data Hanyu Zhang, Chuck Arvin, Dmitry Efimov, Michael W. Mahoney, Dominique Perrault-Joncas, Shankar Ramasubramanian, Andrew Gordon Wilson, Malcolm Wolff
NeurIPS 2024 Large Language Models Must Be Taught to Know What They Don’t Know Sanyam Kapoor, Nate Gruver, Manley Roberts, Katherine Collins, Arka Pal, Umang Bhatt, Adrian Weller, Samuel Dooley, Micah Goldblum, Andrew Gordon Wilson
AISTATS 2024 Mind the GAP: Improving Robustness to Subpopulation Shifts with Group-Aware Priors Tim G. J. Rudner, Ya Shi Zhang, Andrew Gordon Wilson, Julia Kempe
ICML 2024 Modeling Caption Diversity in Contrastive Vision-Language Pretraining Samuel Lavoie, Polina Kirichenko, Mark Ibrahim, Mido Assran, Andrew Gordon Wilson, Aaron Courville, Nicolas Ballas
ICML 2024 Non-Vacuous Generalization Bounds for Large Language Models Sanae Lotfi, Marc Anton Finzi, Yilun Kuang, Tim G. J. Rudner, Micah Goldblum, Andrew Gordon Wilson
ICML 2024 Position: Bayesian Deep Learning Is Needed in the Age of Large-Scale AI Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, José Miguel Hernández-Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang
ICML 2024 Position: The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning Micah Goldblum, Marc Anton Finzi, Keefer Rowan, Andrew Gordon Wilson
ICML 2024 Scalable and Flexible Causal Discovery with an Efficient Test for Adjacency Alan Nawzad Amin, Andrew Gordon Wilson
NeurIPS 2024 Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices Andres Potapczynski, Shikai Qiu, Marc Finzi, Christopher Ferri, Zixi Chen, Micah Goldblum, C. Bayan Bruss, Christopher De Sa, Andrew Gordon Wilson
ICML 2024 Transferring Knowledge from Large Foundation Models to Small Downstream Models Shikai Qiu, Boran Han, Danielle C. Maddix, Shuai Zhang, Bernie Wang, Andrew Gordon Wilson
NeurIPS 2024 Unlocking Tokens as Data Points for Generalization Bounds on Larger Language Models Sanae Lotfi, Yilun Kuang, Brandon Amos, Micah Goldblum, Marc Finzi, Andrew Gordon Wilson
ICMLW 2024 Unlocking Tokens as Data Points for Generalization Bounds on Larger Language Models Sanae Lotfi, Yilun Kuang, Marc Anton Finzi, Brandon Amos, Micah Goldblum, Andrew Gordon Wilson
ICLR 2023 A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks Marc Anton Finzi, Andres Potapczynski, Matthew Choptuik, Andrew Gordon Wilson
AISTATS 2023 Bayesian Optimization with Conformal Prediction Sets Samuel Stanton, Wesley Maddox, Andrew Gordon Wilson
NeurIPSW 2023 Fine-Tuned Language Models Generate Stable Inorganic Materials as Text Nate Gruver, Anuroop Sriram, Andrea Madotto, Andrew Gordon Wilson, C. Lawrence Zitnick, Zachary Ward Ulissi
ICML 2023 Function-Space Regularization in Neural Networks: A Probabilistic Perspective Tim G. J. Rudner, Sanyam Kapoor, Shikai Qiu, Andrew Gordon Wilson
ICLR 2023 How Much Data Are Augmentations Worth? An Investigation into Scaling Laws, Invariance, and Implicit Regularization Jonas Geiping, Micah Goldblum, Gowthami Somepalli, Ravid Shwartz-Ziv, Tom Goldstein, Andrew Gordon Wilson
ICLR 2023 Last Layer Re-Training Is Sufficient for Robustness to Spurious Correlations Polina Kirichenko, Pavel Izmailov, Andrew Gordon Wilson
ICLR 2023 Learning Multimodal Data Augmentation in Feature Space Zichang Liu, Zhiqiang Tang, Xingjian Shi, Aston Zhang, Mu Li, Anshumali Shrivastava, Andrew Gordon Wilson
ICMLW 2023 Protein Design with Guided Discrete Diffusion Nate Gruver, Samuel Don Stanton, Nathan C. Frey, Tim G. J. Rudner, Isidro Hotzel, Julien Lafrance-Vanasse, Arvind Rajpal, Kyunghyun Cho, Andrew Gordon Wilson
ICML 2023 Simple and Fast Group Robustness by Automatic Feature Reweighting Shikai Qiu, Andres Potapczynski, Pavel Izmailov, Andrew Gordon Wilson
ICLR 2023 The Lie Derivative for Measuring Learned Equivariance Nate Gruver, Marc Anton Finzi, Micah Goldblum, Andrew Gordon Wilson
ICLR 2023 Transfer Learning with Deep Tabular Models Roman Levin, Valeriia Cherepanova, Avi Schwarzschild, Arpit Bansal, C. Bayan Bruss, Tom Goldstein, Andrew Gordon Wilson, Micah Goldblum
ICLRW 2023 Understanding the Class-Specific Effects of Data Augmentations Polina Kirichenko, Randall Balestriero, Mark Ibrahim, Shanmukha Ramakrishna Vedantam, Hamed Firooz, Andrew Gordon Wilson
ICML 2023 User-Defined Event Sampling and Uncertainty Quantification in Diffusion Models for Physical Dynamical Systems Marc Anton Finzi, Anudhyan Boral, Andrew Gordon Wilson, Fei Sha, Leonardo Zepeda-Nunez
ICML 2022 Accelerating Bayesian Optimization for Biological Sequence Design with Denoising Autoencoders Samuel Stanton, Wesley Maddox, Nate Gruver, Phillip Maffettone, Emily Delaney, Peyton Greenside, Andrew Gordon Wilson
ICML 2022 Bayesian Model Selection, the Marginal Likelihood, and Generalization Sanae Lotfi, Pavel Izmailov, Gregory Benton, Micah Goldblum, Andrew Gordon Wilson
ICLR 2022 Deconstructing the Inductive Biases of Hamiltonian Neural Networks Nate Gruver, Marc Anton Finzi, Samuel Don Stanton, Andrew Gordon Wilson
ICMLW 2022 How Much Data Is Augmentation Worth? Jonas Geiping, Gowthami Somepalli, Ravid Shwartz-Ziv, Andrew Gordon Wilson, Tom Goldstein, Micah Goldblum
ICMLW 2022 Last Layer Re-Training Is Sufficient for Robustness to Spurious Correlations Polina Kirichenko, Pavel Izmailov, Andrew Gordon Wilson
UAI 2022 Low-Precision Arithmetic for Fast Gaussian Processes Wesley J. Maddox, Andres Potapczynski, Andrew Gordon Wilson
ICML 2022 Low-Precision Stochastic Gradient Langevin Dynamics Ruqi Zhang, Andrew Gordon Wilson, Christopher De Sa
NeurIPSW 2022 On Representation Learning Under Class Imbalance Ravid Shwartz-Ziv, Micah Goldblum, Yucen Lily Li, C. Bayan Bruss, Andrew Gordon Wilson
ICMLW 2022 Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors Ravid Shwartz-Ziv, Micah Goldblum, Hossein Souri, Sanyam Kapoor, Chen Zhu, Yann LeCun, Andrew Gordon Wilson
NeurIPSW 2022 Transfer Learning with Deep Tabular Models Roman Levin, Valeriia Cherepanova, Avi Schwarzschild, Arpit Bansal, C. Bayan Bruss, Tom Goldstein, Andrew Gordon Wilson, Micah Goldblum
ICML 2022 Volatility Based Kernels and Moving Average Means for Accurate Forecasting with Gaussian Processes Gregory Benton, Wesley Maddox, Andrew Gordon Wilson
ICML 2021 A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups Marc Finzi, Max Welling, Andrew Gordon Wilson
L4DC 2021 On the Model-Based Stochastic Value Gradient for Continuous Reinforcement Learning Brandon Amos, Samuel Stanton, Denis Yarats, Andrew Gordon Wilson
ICMLW 2021 Task-Agnostic Continual Learning with Hybrid Probabilistic Models Polina Kirichenko, Mehrdad Farajtabar, Dushyant Rao, Balaji Lakshminarayanan, Nir Levine, Ang Li, Huiyi Hu, Andrew Gordon Wilson, Razvan Pascanu
ICLR 2020 Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning Ruqi Zhang, Chunyuan Li, Jianyi Zhang, Changyou Chen, Andrew Gordon Wilson
ICML 2020 Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data Marc Finzi, Samuel Stanton, Pavel Izmailov, Andrew Gordon Wilson
ICML 2020 Randomly Projected Additive Gaussian Processes for Regression Ian Delbridge, David Bindel, Andrew Gordon Wilson
ICML 2020 Semi-Supervised Learning with Normalizing Flows Pavel Izmailov, Polina Kirichenko, Marc Finzi, Andrew Gordon Wilson
ICLR 2020 Towards Understanding the True Loss Surface of Deep Neural Networks Using Random Matrix Theory and Iterative Spectral Methods Diego Granziol, Timur Garipov, Dmitry Vetrov, Stefan Zohren, Stephen Roberts, Andrew Gordon Wilson
NeurIPS 2019 A Simple Baseline for Bayesian Uncertainty in Deep Learning Wesley J Maddox, Pavel Izmailov, Timur Garipov, Dmitry P Vetrov, Andrew Gordon Wilson
JMLR 2019 Change Surfaces for Expressive Multidimensional Changepoints and Counterfactual Prediction William Herlands, Daniel B. Neill, Hannes Nickisch, Andrew Gordon Wilson
NeurIPS 2019 Exact Gaussian Processes on a Million Data Points Ke Wang, Geoff Pleiss, Jacob Gardner, Stephen Tyree, Kilian Q. Weinberger, Andrew Gordon Wilson
NeurIPS 2019 Function-Space Distributions over Kernels Gregory Benton, Wesley J Maddox, Jayson Salkey, Julio Albinati, Andrew Gordon Wilson
UAI 2019 Practical Multi-Fidelity Bayesian Optimization for Hyperparameter Tuning Jian Wu, Saul Toscano-Palmerin, Peter I. Frazier, Andrew Gordon Wilson
ICML 2019 SWALP: Stochastic Weight Averaging in Low Precision Training Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, Andrew Gordon Wilson, Chris De Sa
ICML 2019 Simple Black-Box Adversarial Attacks Chuan Guo, Jacob Gardner, Yurong You, Andrew Gordon Wilson, Kilian Weinberger
UAI 2019 Subspace Inference for Bayesian Deep Learning Pavel Izmailov, Wesley J. Maddox, Polina Kirichenko, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson
ICLR 2019 There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson
UAI 2018 Averaging Weights Leads to Wider Optima and Better Generalization Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry P. Vetrov, Andrew Gordon Wilson
ICML 2018 Constant-Time Predictive Distributions for Gaussian Processes Geoff Pleiss, Jacob Gardner, Kilian Weinberger, Andrew Gordon Wilson
AISTATS 2018 Gaussian Process Subset Scanning for Anomalous Pattern Detection in Non-IID Data William Herlands, Edward McFowland, Andrew Gordon Wilson, Daniel B. Neill
ICLR 2018 Hierarchical Density Order Embeddings Ben Athiwaratkun, Andrew Gordon Wilson
AISTATS 2018 Product Kernel Interpolation for Scalable Gaussian Processes Jacob R. Gardner, Geoff Pleiss, Ruihan Wu, Kilian Q. Weinberger, Andrew Gordon Wilson
JMLR 2017 Learning Scalable Deep Kernels with Recurrent Structure Maruan Al-Shedivat, Andrew Gordon Wilson, Yunus Saatchi, Zhiting Hu, Eric P. Xing
AISTATS 2016 Bayesian Nonparametric Kernel-Learning Junier B. Oliva, Avinava Dubey, Andrew Gordon Wilson, Barnabás Póczos, Jeff G. Schneider, Eric P. Xing
AISTATS 2016 Deep Kernel Learning Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, Eric P. Xing
AISTATS 2016 Scalable Gaussian Processes for Characterizing Multidimensional Change Surfaces William Herlands, Andrew Gordon Wilson, Hannes Nickisch, Seth R. Flaxman, Daniel B. Neill, Wilbert Van Panhuis, Eric P. Xing
AISTATS 2015 A La Carte - Learning Fast Kernels Zichao Yang, Andrew Gordon Wilson, Alexander J. Smola, Le Song
AISTATS 2014 Student-t Processes as Alternatives to Gaussian Processes Amar Shah, Andrew Gordon Wilson, Zoubin Ghahramani
ICML 2012 Gaussian Process Regression Networks Andrew Gordon Wilson, David A. Knowles, Zoubin Ghahramani
ECML-PKDD 2012 Modelling Input Varying Correlations Between Multiple Responses Andrew Gordon Wilson, Zoubin Ghahramani
UAI 2011 Generalised Wishart Processes Andrew Gordon Wilson, Zoubin Ghahramani