Bethge, Matthias

102 publications

NeurIPS 2025 AlgoTune: Can Language Models Speed Up General-Purpose Numerical Programs? Ori Press, Brandon Amos, Haoyu Zhao, Yikai Wu, Samuel Ainsworth, Dominik Krupke, Patrick Kidger, Touqir Sajed, Bartolomeo Stellato, Jisun Park, Nathanael Bosch, Eli Meril, Albert Steppi, Arman Zharmagambetov, Fangzhao Zhang, David Pérez-Piñeiro, Alberto Mercurio, Ni Zhan, Talor Abramovich, Kilian Lieret, Hanlin Zhang, Shirley Huang, Matthias Bethge, Ofir Press
ICLRW 2025 Are We Done with Object-Centric Learning? Alexander Rubinstein, Ameya Prabhu, Matthias Bethge, Seong Joon Oh
ICLRW 2025 Can Language Models Falsify? The Need for Inverse Benchmarking Shiven Sinha, Shashwat Goel, Ponnurangam Kumaraguru, Jonas Geiping, Matthias Bethge, Ameya Prabhu
NeurIPS 2025 Equivariance by Contrast: Identifiable Equivariant Embeddings from Unlabeled Finite Group Actions Tobias Schmidt, Steffen Schneider, Matthias Bethge
ICML 2025 Great Models Think Alike and This Undermines AI Oversight Shashwat Goel, Joschka Strüber, Ilze Amanda Auzina, Karuna K Chandra, Ponnurangam Kumaraguru, Douwe Kiela, Ameya Prabhu, Matthias Bethge, Jonas Geiping
ICLRW 2025 Great Models Think Alike and This Undermines AI Oversight Shashwat Goel, Joschka Strüber, Ilze Amanda Auzina, Karuna K Chandra, Ponnurangam Kumaraguru, Douwe Kiela, Ameya Prabhu, Matthias Bethge, Jonas Geiping
ICLRW 2025 How to Merge Multimodal Models over Time? Sebastian Dziadzio, Vishaal Udandarao, Karsten Roth, Ameya Prabhu, Zeynep Akata, Samuel Albanie, Matthias Bethge
CVPR 2025 How to Merge Your Multimodal Models over Time? Sebastian Dziadzio, Vishaal Udandarao, Karsten Roth, Ameya Prabhu, Zeynep Akata, Samuel Albanie, Matthias Bethge
ICLR 2025 Identifying Latent State Transitions in Non-Linear Dynamical Systems Çağlar Hızlı, Çağatay Yıldız, Matthias Bethge, S. T. John, Pekka Marttinen
ICLR 2025 In Search of Forgotten Domain Generalization Prasanna Mayilvahanan, Roland S. Zimmermann, Thaddäus Wiedemer, Evgenia Rusak, Attila Juhos, Matthias Bethge, Wieland Brendel
ICLRW 2025 In Search of Forgotten Domain Generalization Prasanna Mayilvahanan, Roland S. Zimmermann, Thaddäus Wiedemer, Evgenia Rusak, Attila Juhos, Matthias Bethge, Wieland Brendel
TMLR 2025 Investigating Continual Pretraining in Large Language Models: Insights and Implications Çağatay Yıldız, Nishaanth Kanna Ravichandran, Nitin Sharma, Matthias Bethge, Beyza Ermis
ICML 2025 LLMs on the Line: Data Determines Loss-to-Loss Scaling Laws Prasanna Mayilvahanan, Thaddäus Wiedemer, Sayak Mallick, Matthias Bethge, Wieland Brendel
ICLRW 2025 LLMs on the Line: Data Determines Loss-to-Loss Scaling Laws Prasanna Mayilvahanan, Thaddäus Wiedemer, Sayak Mallick, Matthias Bethge, Wieland Brendel
ICCV 2025 Modeling Saliency Dataset Bias Matthias Kümmerer, Harneet Singh Khanuja, Matthias Bethge
ICML 2025 Testing the Limits of Fine-Tuning for Improving Visual Cognition in Vision Language Models Luca M. Schulze Buschoff, Konstantinos Voudouris, Elif Akata, Matthias Bethge, Joshua B. Tenenbaum, Eric Schulz
ICCV 2025 VGGSounder: Audio-Visual Evaluations for Foundation Models Daniil Zverev, Thaddäus Wiedemer, Ameya Prabhu, Matthias Bethge, Wieland Brendel, A. Sophia Koepke
NeurIPS 2025 What Moves the Eyes: Doubling Mechanistic Model Performance Using Deep Networks to Discover and Test Cognitive Hypotheses Federico D'Agostino, Lisa Schwetlick, Matthias Bethge, Matthias Kümmerer
ICML 2025 WikiBigEdit: Understanding the Limits of Lifelong Knowledge Editing in LLMs Lukas Thede, Karsten Roth, Matthias Bethge, Zeynep Akata, Thomas Hartvigsen
NeurIPSW 2024 A Practitioner's Guide to Continual Multimodal Pretraining Karsten Roth, Vishaal Udandarao, Sebastian Dziadzio, Ameya Prabhu, Mehdi Cherti, Oriol Vinyals, Olivier J. Hénaff, Samuel Albanie, Matthias Bethge, Zeynep Akata
NeurIPS 2024 A Practitioner's Guide to Real-World Continual Multimodal Pretraining Vishaal Udandarao, Karsten Roth, Sebastian Dziadzio, Ameya Prabhu, Mehdi Cherti, Oriol Vinyals, Olivier Hénaff, Samuel Albanie, Zeynep Akata, Matthias Bethge
NeurIPS 2024 CiteME: Can Language Models Accurately Cite Scientific Claims? Ori Press, Andreas Hochlehnert, Ameya Prabhu, Vishaal Udandarao, Ofir Press, Matthias Bethge
TMLR 2024 Continual Learning: Applications and the Road Forward Eli Verwimp, Rahaf Aljundi, Shai Ben-David, Matthias Bethge, Andrea Cossu, Alexander Gepperth, Tyler L. Hayes, Eyke Hüllermeier, Christopher Kanan, Dhireesha Kudithipudi, Christoph H. Lampert, Martin Mundt, Razvan Pascanu, Adrian Popescu, Andreas S. Tolias, Joost van de Weijer, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars, Gido M. van de Ven
ICLR 2024 Does CLIP’s Generalization Performance Mainly Stem from High Train-Test Similarity? Prasanna Mayilvahanan, Thaddäus Wiedemer, Evgenia Rusak, Matthias Bethge, Wieland Brendel
NeurIPS 2024 Efficient Lifelong Model Evaluation in an Era of Rapid Progress Ameya Prabhu, Vishaal Udandarao, Philip H.S. Torr, Matthias Bethge, Adel Bibi, Samuel Albanie
ICMLW 2024 Identifying Latent State Transitions in Non-Linear Dynamical Systems Çağlar Hızlı, Çağatay Yıldız, Matthias Bethge, S. T. John, Pekka Marttinen
ICMLW 2024 In Search of Forgotten Domain Generalization Prasanna Mayilvahanan, Roland S. Zimmermann, Thaddäus Wiedemer, Evgenia Rusak, Attila Juhos, Matthias Bethge, Wieland Brendel
CoLLAs 2024 Infinite dSprites for Disentangled Continual Learning: Separating Memory Edits from Generalization Sebastian Dziadzio, Çağatay Yıldız, Gido M. van de Ven, Tomasz Trzciński, Tinne Tuytelaars, Matthias Bethge
ICLR 2024 Most Discriminative Stimuli for Functional Cell Type Clustering Max F Burg, Thomas Zenkel, Michaela Vystrčilová, Jonathan Oesterle, Larissa Höfling, Konstantin Friedrich Willeke, Jan Lause, Sarah Müller, Paul G. Fahey, Zhiwei Ding, Kelli Restivo, Shashwat Sridhar, Tim Gollisch, Philipp Berens, Andreas S. Tolias, Thomas Euler, Matthias Bethge, Alexander S Ecker
NeurIPS 2024 No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance Vishaal Udandarao, Ameya Prabhu, Adhiraj Ghosh, Yash Sharma, Philip H.S. Torr, Adel Bibi, Samuel Albanie, Matthias Bethge
NeurIPS 2024 Object Segmentation from Common Fate: Motion Energy Processing Enables Human-like Zero-Shot Generalization to Random Dot Stimuli Matthias Tangemann, Matthias Kümmerer, Matthias Bethge
ICLRW 2024 Pre-Training Concept Frequency Is Predictive of CLIP Zero-Shot Performance Vishaal Udandarao, Ameya Prabhu, Philip Torr, Adel Bibi, Samuel Albanie, Matthias Bethge
NeurIPSW 2024 Pretraining Frequency Predicts Compositional Generalization of CLIP on Real-World Tasks Thaddäus Wiedemer, Yash Sharma, Ameya Prabhu, Matthias Bethge, Wieland Brendel
ICLR 2024 Provable Compositional Generalization for Object-Centric Learning Thaddäus Wiedemer, Jack Brady, Alexander Panfilov, Attila Juhos, Matthias Bethge, Wieland Brendel
CoLLAs 2024 Reflecting on the State of Rehearsal-Free Continual Learning with Pretrained Models Lukas Thede, Karsten Roth, Olivier J. Hénaff, Matthias Bethge, Zeynep Akata
ICML 2024 The Entropy Enigma: Success and Failure of Entropy Minimization Ori Press, Ravid Shwartz-Ziv, Yann LeCun, Matthias Bethge
ICLR 2024 Visual Data-Type Understanding Does Not Emerge from Scaling Vision-Language Models Vishaal Udandarao, Max F Burg, Samuel Albanie, Matthias Bethge
NeurIPSW 2024 Wu’s Method Boosts Symbolic AI to Rival Silver Medalists and AlphaGeometry to Outperform Gold Medalists at IMO Geometry Shiven Sinha, Ameya Prabhu, Ponnurangam Kumaraguru, Siddharth Bhat, Matthias Bethge
NeurIPS 2023 Compositional Generalization from First Principles Thaddäus Wiedemer, Prasanna Mayilvahanan, Matthias Bethge, Wieland Brendel
NeurIPSW 2023 Does CLIP’s Generalization Performance Mainly Stem from High Train-Test Similarity? Prasanna Mayilvahanan, Thaddäus Wiedemer, Evgenia Rusak, Matthias Bethge, Wieland Brendel
TMLR 2023 Jacobian-Based Causal Discovery with Nonlinear ICA Patrik Reizinger, Yash Sharma, Matthias Bethge, Bernhard Schölkopf, Ferenc Huszár, Wieland Brendel
NeurIPS 2023 Modulated Neural ODEs Ilze Amanda Auzina, Çağatay Yıldız, Sara Magliacane, Matthias Bethge, Efstratios Gavves
NeurIPS 2023 RDumb: A Simple Approach That Questions Our Progress in Continual Test-Time Adaptation Ori Press, Steffen Schneider, Matthias Kümmerer, Matthias Bethge
CLeaR 2023 Unsupervised Object Learning via Common Fate Matthias Tangemann, Steffen Schneider, Julius von Kügelgen, Francesco Locatello, Peter Vincent Gehler, Thomas Brox, Matthias Kümmerer, Matthias Bethge, Bernhard Schölkopf
ICMLW 2022 CCC: Continuously Changing Corruptions Ori Press, Steffen Schneider, Matthias Kümmerer, Matthias Bethge
CoLLAs 2022 Disentanglement and Generalization Under Correlation Shifts Christina M. Funke, Paul Vicol, Kuan-Chieh Wang, Matthias Kümmerer, Richard Zemel, Matthias Bethge
ICLRW 2022 Disentanglement and Generalization Under Correlation Shifts Christina M. Funke, Paul Vicol, Kuan-Chieh Wang, Matthias Kümmerer, Richard Zemel, Matthias Bethge
TMLR 2022 If Your Data Distribution Shifts, Use Self-Learning Evgenia Rusak, Steffen Schneider, George Pachitariu, Luisa Eck, Peter Vincent Gehler, Oliver Bringmann, Wieland Brendel, Matthias Bethge
ICMLW 2022 ImageNet-D: A New Challenging Robustness Dataset Inspired by Domain Adaptation Evgenia Rusak, Steffen Schneider, Peter Vincent Gehler, Oliver Bringmann, Wieland Brendel, Matthias Bethge
ICLR 2022 Visual Representation Learning Does Not Generalize Strongly Within the Same Domain Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Vincent Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel
JMLR 2021 Benchmarking Unsupervised Object Representations for Video Sequences Marissa A. Weis, Kashyap Chitta, Yash Sharma, Wieland Brendel, Matthias Bethge, Andreas Geiger, Alexander S. Ecker
ICML 2021 Contrastive Learning Inverts the Data Generating Process Roland S. Zimmermann, Yash Sharma, Steffen Schneider, Matthias Bethge, Wieland Brendel
ICCV 2021 DeepGaze IIE: Calibrated Prediction in and Out-of-Domain for State-of-the-Art Saliency Modeling Akis Linardos, Matthias Kümmerer, Ori Press, Matthias Bethge
ICLR 2021 Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
NeurIPS 2021 How Well Do Feature Visualizations Support Causal Understanding of CNN Activations? Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas Wallis, Wieland Brendel
NeurIPS 2021 Partial Success in Closing the Gap Between Human and Machine Vision Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix A. Wichmann, Wieland Brendel
WACV 2021 Pretraining Boosts Out-of-Domain Robustness for Pose Estimation Alexander Mathis, Thomas Biasi, Steffen Schneider, Mert Yuksekgonul, Byron Rogers, Matthias Bethge, Mackenzie W. Mathis
ICLR 2021 Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding David A. Klindt, Lukas Schott, Yash Sharma, Ivan Ustyuzhaninov, Wieland Brendel, Matthias Bethge, Dylan Paiton
ECCV 2020 A Simple Way to Make Neural Networks Robust Against Diverse Image Corruptions Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel
NeurIPS 2020 Improving Robustness Against Common Corruptions by Covariate Shift Adaptation Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, Matthias Bethge
ECCV 2020 Measuring the Importance of Temporal Features in Video Saliency Matthias Tangemann, Matthias Kümmerer, Thomas S.A. Wallis, Matthias Bethge
NeurIPSW 2020 Natural Images Are More Informative for Interpreting CNN Activations than State-of-the-Art Synthetic Feature Visualizations Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
NeurIPSW 2020 On the Surprising Similarities Between Supervised and Self-Supervised Models Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Matthias Bethge, Felix A. Wichmann, Wieland Brendel
ICLR 2020 Rotation-Invariant Clustering of Neuronal Responses in Primary Visual Cortex Ivan Ustyuzhaninov, Santiago A. Cadena, Emmanouil Froudarakis, Paul G. Fahey, Edgar Y. Walker, Erick Cobos, Jacob Reimer, Fabian H. Sinz, Andreas S. Tolias, Matthias Bethge, Alexander S. Ecker
NeurIPS 2020 System Identification with Biophysical Constraints: A Circuit Model of the Inner Retina Cornelius Schröder, David Klindt, Sarah Strauss, Katrin Franke, Matthias Bethge, Thomas Euler, Philipp Berens
ICLR 2019 A Rotation-Equivariant Convolutional Neural Network Model of Primary Visual Cortex Alexander S. Ecker, Fabian H. Sinz, Emmanouil Froudarakis, Paul G. Fahey, Santiago A. Cadena, Edgar Y. Walker, Erick Cobos, Jacob Reimer, Andreas S. Tolias, Matthias Bethge
NeurIPS 2019 Accurate, Reliable and Fast Robustness Evaluation Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge
ICLR 2019 Approximating CNNs with Bag-of-Local-Features Models Works Surprisingly Well on ImageNet Wieland Brendel, Matthias Bethge
ICLR 2019 Excessive Invariance Causes Adversarial Vulnerability Joern-Henrik Jacobsen, Jens Behrmann, Richard Zemel, Matthias Bethge
NeurIPSW 2019 How Well Do Deep Neural Networks Trained on Object Recognition Characterize the Mouse Visual System? Santiago A. Cadena, Fabian H. Sinz, Taliah Muhammad, Emmanouil Froudarakis, Erick Cobos, Edgar Y. Walker, Jake Reimer, Matthias Bethge, Andreas Tolias, Alexander S. Ecker
ICLR 2019 ImageNet-Trained CNNs Are Biased Towards Texture; Increasing Shape Bias Improves Accuracy and Robustness Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, Wieland Brendel
NeurIPS 2019 Learning from Brains How to Regularize Machines Zhe Li, Wieland Brendel, Edgar Walker, Erick Cobos, Taliah Muhammad, Jacob Reimer, Matthias Bethge, Fabian Sinz, Zachary Pitkow, Andreas Tolias
ICLR 2019 Towards the First Adversarially Robust Neural Network Model on MNIST Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel
ICLR 2018 Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models Wieland Brendel, Jonas Rauber, Matthias Bethge
ECCV 2018 Diverse Feature Visualizations Reveal Invariances in Early Layers of Deep Neural Networks Santiago A. Cadena, Marissa A. Weis, Leon A. Gatys, Matthias Bethge, Alexander S. Ecker
NeurIPS 2018 Generalisation in Humans and Deep Neural Networks Robert Geirhos, Carlos R. M. Temme, Jonas Rauber, Heiko H. Schütt, Matthias Bethge, Felix A. Wichmann
ICML 2018 One-Shot Segmentation in Clutter Claudio Michaelis, Matthias Bethge, Alexander Ecker
ECCV 2018 Saliency Benchmarking Made Easy: Separating Models, Maps and Metrics Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge
CVPR 2017 Controlling Perceptual Factors in Neural Style Transfer Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, Aaron Hertzmann, Eli Shechtman
NeurIPS 2017 Neural System Identification for Large Populations Separating “what” and “where” David Klindt, Alexander S Ecker, Thomas Euler, Matthias Bethge
ICCV 2017 Understanding Low- and High-Level Contributions to Fixation Prediction Matthias Kümmerer, Thomas S. A. Wallis, Leon A. Gatys, Matthias Bethge
ICLR 2017 What Does It Take to Generate Natural Textures? Ivan Ustyuzhaninov, Wieland Brendel, Leon A. Gatys, Matthias Bethge
ICLR 2016 A Note on the Evaluation of Generative Models Lucas Theis, Aäron van den Oord, Matthias Bethge
CVPR 2016 Image Style Transfer Using Convolutional Neural Networks Leon A. Gatys, Alexander S. Ecker, Matthias Bethge
AISTATS 2015 Data Modeling with the Elliptical Gamma Distribution Suvrit Sra, Reshad Hosseini, Lucas Theis, Matthias Bethge
ICLR 2015 Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet Matthias Kümmerer, Lucas Theis, Matthias Bethge
NeurIPS 2015 Generative Image Modeling Using Spatial LSTMs Lucas Theis, Matthias Bethge
NeurIPS 2015 Texture Synthesis Using Convolutional Neural Networks Leon Gatys, Alexander S Ecker, Matthias Bethge
NeurIPS 2012 Training Sparse Natural Image Models with a Fast Gibbs Sampler of an Extended State Space Lucas Theis, Jascha Sohl-Dickstein, Matthias Bethge
JMLR 2011 In All Likelihood, Deep Belief Is Not Enough Lucas Theis, Sebastian Gerwinn, Fabian Sinz, Matthias Bethge
NeurIPS 2010 Evaluating Neuronal Codes for Inference Using Fisher Information Ralf Haefner, Matthias Bethge
JMLR 2010 Lp-Nested Symmetric Distributions Fabian Sinz, Matthias Bethge
NeurIPS 2009 A Joint Maximum-Entropy Model for Binary Neural Population Patterns and Continuous Signals Sebastian Gerwinn, Philipp Berens, Matthias Bethge
NeurIPS 2009 Bayesian Estimation of Orientation Preference Maps Sebastian Gerwinn, Leonard White, Matthias Kaschube, Matthias Bethge, Jakob H. Macke
NeurIPS 2009 Hierarchical Modeling of Local Image Features Through Lp-Nested Symmetric Distributions Matthias Bethge, Eero P. Simoncelli, Fabian H. Sinz
NeurIPS 2009 Neurometric Function Analysis of Population Codes Philipp Berens, Sebastian Gerwinn, Alexander Ecker, Matthias Bethge
NeurIPS 2008 The Conjoint Effect of Divisive Normalization and Orientation Selectivity on Redundancy Reduction Fabian H. Sinz, Matthias Bethge
NeurIPS 2007 Bayesian Inference for Spiking Neuron Models with a Sparsity Prior Sebastian Gerwinn, Matthias Bethge, Jakob H. Macke, Matthias Seeger
NeurIPS 2007 Near-Maximum Entropy Models for Binary Neural Representations of Natural Images Matthias Bethge, Philipp Berens
NeurIPS 2007 Receptive Fields Without Spike-Triggering Guenther Zeck, Matthias Bethge, Jakob H. Macke
NeurIPS 2002 Binary Tuning Is Optimal for Neural Rate Coding with High Temporal Resolution Matthias Bethge, David Rotermund, Klaus Pawelzik
NeCo 2002 Optimal Short-Term Population Coding: When Fisher Information Fails Matthias Bethge, David Rotermund, Klaus Pawelzik