Rudner, Tim G. J.

47 publications

ICML 2025 Can Transformers Learn Full Bayesian Inference in Context? Arik Reuter, Tim G. J. Rudner, Vincent Fortuin, David Rügamer
ICLRW 2025 Can Transformers Learn Full Bayesian Inference in Context? Arik Reuter, Tim G. J. Rudner, Vincent Fortuin, David Rügamer
AISTATS 2025 Fine-Tuning with Uncertainty-Aware Priors Makes Vision and Language Foundation Models More Reliable Tim G. J. Rudner, Xiang Pan, Yucen Lily Li, Ravid Shwartz-Ziv, Andrew Gordon Wilson
AISTATS 2025 Geometry-Aware Generative Autoencoders for Warped Riemannian Metric Learning and Generative Modeling on Data Manifolds Xingzhi Sun, Danqi Liao, Kincaid MacDonald, Yanlei Zhang, Guillaume Huguet, Guy Wolf, Ian Adelstein, Tim G. J. Rudner, Smita Krishnaswamy
AISTATS 2025 Improving Pre-Trained Self-Supervised Embeddings Through Effective Entropy Maximization Deep Chakraborty, Yann LeCun, Tim G. J. Rudner, Erik Learned-Miller
NeurIPS 2025 Learning from Reward-Free Offline Data: A Case for Planning with Latent Dynamics Models Vlad Sobal, Wancong Zhang, Kyunghyun Cho, Randall Balestriero, Tim G. J. Rudner, Yann LeCun
ICML 2025 Position: Supervised Classifiers Answer the Wrong Questions for OOD Detection Yucen Lily Li, Daohan Lu, Polina Kirichenko, Shikai Qiu, Tim G. J. Rudner, C. Bayan Bruss, Andrew Gordon Wilson
ICLRW 2025 Semantic-Level Confidence Calibration of Language Models via Temperature Scaling Tom A. Lamb, Desi R. Ivanova, Philip Torr, Tim G. J. Rudner
ICLRW 2025 Stress-Testing Offline Reward-Free Reinforcement Learning: A Case for Planning with Latent Dynamics Models Vlad Sobal, Wancong Zhang, Kyunghyun Cho, Randall Balestriero, Tim G. J. Rudner, Yann LeCun
ICLRW 2025 What Actually Matters for Materials Discovery: Pitfalls and Recommendations in Bayesian Optimization Tristan Cinquin, Stanley Lo, Felix Strieth-Kalthoff, Alan Aspuru-Guzik, Geoff Pleiss, Robert Bamler, Tim G. J. Rudner, Vincent Fortuin, Agustinus Kristiadi
ICLR 2024 A Study of Bayesian Neural Network Surrogates for Bayesian Optimization Yucen Lily Li, Tim G. J. Rudner, Andrew Gordon Wilson
ICMLW 2024 A Variational Formulation of Reinforcement Learning in Infinite-Horizon Markov Decision Processes Tim G. J. Rudner
TMLR 2024 Attacking Bayes: On the Adversarial Robustness of Bayesian Neural Networks Yunzhen Feng, Tim G. J. Rudner, Nikolaos Tsilivis, Julia Kempe
ICML 2024 Context-Guided Diffusion for Out-of-Distribution Molecular and Protein Design Leo Klarner, Tim G. J. Rudner, Garrett M Morris, Charlotte Deane, Yee Whye Teh
ICMLW 2024 Fine-Tuning with Uncertainty-Aware Priors Makes Vision and Language Foundation Models More Reliable Tim G. J. Rudner, Xiang Pan, Yucen Lily Li, Ravid Shwartz-Ziv, Andrew Gordon Wilson
ICMLW 2024 Geometry-Aware Autoencoders for Metric Learning and Generative Modeling on Data Manifolds Xingzhi Sun, Danqi Liao, Kincaid MacDonald, Yanlei Zhang, Guillaume Huguet, Guy Wolf, Ian Adelstein, Tim G. J. Rudner, Smita Krishnaswamy
AISTATS 2024 Mind the GAP: Improving Robustness to Subpopulation Shifts with Group-Aware Priors Tim G. J. Rudner, Ya Shi Zhang, Andrew Gordon Wilson, Julia Kempe
ICML 2024 Non-Vacuous Generalization Bounds for Large Language Models Sanae Lotfi, Marc Anton Finzi, Yilun Kuang, Tim G. J. Rudner, Micah Goldblum, Andrew Gordon Wilson
ICML 2024 Position: Bayesian Deep Learning Is Needed in the Age of Large-Scale AI Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, José Miguel Hernández-Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang
NeurIPS 2024 Pre-Trained Text-to-Image Diffusion Models Are Versatile Representation Learners for Control Gunshi Gupta, Karmesh Yadav, Yarin Gal, Zsolt Kira, Dhruv Batra, Cong Lu, Tim G. J. Rudner
ICLRW 2024 Pre-Trained Text-to-Image Diffusion Models Are Versatile Representation Learners for Control Gunshi Gupta, Karmesh Yadav, Yarin Gal, Dhruv Batra, Zsolt Kira, Cong Lu, Tim G. J. Rudner
NeurIPSW 2024 SCIURus: Shared Circuits for Interpretable Uncertainty Representations in Language Models Carter Teplica, Yixin Liu, Arman Cohan, Tim G. J. Rudner
NeurIPSW 2024 Squeezing Water from a Stone: Improving Pre-Trained Self-Supervised Embeddings Through Effective Entropy Maximization Deep Chakraborty, Tim G. J. Rudner, Erik Learned-Miller
NeurIPSW 2024 Weak-to-Strong Confidence Prediction Tracy Yixin Zhu, Yukai Yang, Marco Morucci, Tim G. J. Rudner
NeurIPS 2023 An Information Theory Perspective on Variance-Invariance-Covariance Regularization Ravid Shwartz-Ziv, Randall Balestriero, Kenji Kawaguchi, Tim G. J. Rudner, Yann LeCun
CLeaR 2023 Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning? Gunshi Gupta, Tim G. J. Rudner, Rowan Thomas McAllister, Adrien Gaidon, Yarin Gal
TMLR 2023 Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A Osborne, Yee Whye Teh
ICML 2023 Drug Discovery Under Covariate Shift with Domain-Informed Prior Distributions over Functions Leo Klarner, Tim G. J. Rudner, Michael Reutlinger, Torsten Schindler, Garrett M Morris, Charlotte Deane, Yee Whye Teh
ICML 2023 Function-Space Regularization in Neural Networks: A Probabilistic Perspective Tim G. J. Rudner, Sanyam Kapoor, Shikai Qiu, Andrew Gordon Wilson
NeurIPS 2023 Protein Design with Guided Discrete Diffusion Nate Gruver, Samuel Stanton, Nathan Frey, Tim G. J. Rudner, Isidro Hotzel, Julien Lafrance-Vanasse, Arvind Rajpal, Kyunghyun Cho, Andrew Gordon Wilson
ICMLW 2023 Protein Design with Guided Discrete Diffusion Nate Gruver, Samuel Don Stanton, Nathan C. Frey, Tim G. J. Rudner, Isidro Hotzel, Julien Lafrance-Vanasse, Arvind Rajpal, Kyunghyun Cho, Andrew Gordon Wilson
NeurIPS 2023 Should We Learn Most Likely Functions or Parameters? Shikai Qiu, Tim G. J. Rudner, Sanyam Kapoor, Andrew Gordon Wilson
NeurIPS 2023 Visual Explanations of Image-Text Representations via Multi-Modal Information Bottleneck Attribution Ying Wang, Tim G. J. Rudner, Andrew Gordon Wilson
NeurIPSW 2022 A Neural Tangent Kernel Perspective on Function-Space Regularization in Neural Networks Zonghao Chen, Xupeng Shi, Tim G. J. Rudner, Qixuan Feng, Weizhong Zhang, Tong Zhang
NeurIPSW 2022 Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning? Gunshi Gupta, Tim G. J. Rudner, Rowan Thomas McAllister, Adrien Gaidon, Yarin Gal
ICMLW 2022 Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A Osborne, Yee Whye Teh
ICML 2022 Continual Learning via Sequential Function-Space Variational Inference Tim G. J. Rudner, Freddie Bickford Smith, Qixuan Feng, Yee Whye Teh, Yarin Gal
ICMLW 2022 Plex: Towards Reliability Using Pretrained Large Model Extensions Dustin Tran, Jeremiah Zhe Liu, Michael W Dusenberry, Du Phan, Mark Collier, Jie Ren, Kehang Han, Zi Wang, Zelda E Mariet, Huiyi Hu, Neil Band, Tim G. J. Rudner, Karan Singhal, Zachary Nado, Joost van Amersfoort, Andreas Kirsch, Rodolphe Jenatton, Nithum Thain, Honglin Yuan, E. Kelly Buchanan, Kevin Patrick Murphy, D. Sculley, Yarin Gal, Zoubin Ghahramani, Jasper Snoek, Balaji Lakshminarayanan
NeurIPS 2022 Tractable Function-Space Variational Inference in Bayesian Neural Networks Tim G. J. Rudner, Zonghao Chen, Yee Whye Teh, Yarin Gal
NeurIPSW 2021 Benchmarking Bayesian Deep Learning on Diabetic Retinopathy Detection Tasks Neil Band, Tim G. J. Rudner, Qixuan Feng, Angelos Filos, Zachary Nado, Michael W Dusenberry, Ghassen Jerfel, Dustin Tran, Yarin Gal
NeurIPS 2021 On Pathologies in KL-Regularized Reinforcement Learning from Expert Demonstrations Tim G. J. Rudner, Cong Lu, Michael A Osborne, Yarin Gal, Yee Whye Teh
ICML 2021 On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes Tim G. J. Rudner, Oscar Key, Yarin Gal, Tom Rainforth
NeurIPS 2021 Outcome-Driven Reinforcement Learning via Variational Inference Tim G. J. Rudner, Vitchyr Pong, Rowan McAllister, Yarin Gal, Sergey Levine
NeurIPSW 2021 PCA Subspaces Are Not Always Optimal for Bayesian Learning Alexandre Bense, Amir Joudaki, Tim G. J. Rudner, Vincent Fortuin
ICML 2020 Inter-Domain Deep Gaussian Processes Tim G. J. Rudner, Dino Sejdinovic, Yarin Gal
AAAI 2019 Multi3Net: Segmenting Flooded Buildings via Fusion of Multiresolution, Multisensor, and Multitemporal Satellite Imagery Tim G. J. Rudner, Marc Rußwurm, Jakub Fil, Ramona Pelich, Benjamin Bischke, Veronika Kopačková, Piotr Biliński
NeurIPS 2019 VIREL: A Variational Inference Framework for Reinforcement Learning Matthew Fellows, Anuj Mahajan, Tim G. J. Rudner, Shimon Whiteson