Alkin, Benedikt

12 publications

TMLR 2025. AB-UPT: Scaling Neural CFD Surrogates for High-Fidelity Automotive Aerodynamics Simulations via Anchored-Branched Universal Physics Transformers. Benedikt Alkin, Maurits Bleeker, Richard Kurle, Tobias Kronlachner, Reinhard Sonnleitner, Matthias Dorfer, Johannes Brandstetter

ICLR 2025. MIM-Refiner: A Contrastive Learning Boost from Intermediate Pre-Trained Masked Image Modeling Representations. Benedikt Alkin, Lukas Miklautz, Sepp Hochreiter, Johannes Brandstetter

ICLRW 2025. NeuralDEM: Real-Time Simulation of Industrial Particulate Flows. Benedikt Alkin, Tobias Kronlachner, Samuele Papa, Stefan Pirker, Thomas Lichtenegger, Johannes Brandstetter

NeurIPS 2025. Parameter Efficient Fine-Tuning via Explained Variance Adaptation. Fabian Paischer, Lukas Hauzenberger, Thomas Schmied, Benedikt Alkin, Marc Peter Deisenroth, Sepp Hochreiter

ICLRW 2025. UPT++: Latent Point Set Neural Operators for Modeling System State Transitions. Andreas Fürst, Florian Sestak, Artur P. Toshev, Benedikt Alkin, Nikolaus A. Adams, Andreas Mayr, Günter Klambauer, Johannes Brandstetter

ICLR 2025. Vision-LSTM: xLSTM as Generic Vision Backbone. Benedikt Alkin, Maximilian Beck, Korbinian Pöppel, Sepp Hochreiter, Johannes Brandstetter

AAAI 2024. Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget. Johannes Lehner, Benedikt Alkin, Andreas Fürst, Elisabeth Rumetshofer, Lukas Miklautz, Sepp Hochreiter

NeurIPSW 2024. MIM-Refiner: A Contrastive Learning Boost from Intermediate Pre-Trained Representations. Benedikt Alkin, Lukas Miklautz, Sepp Hochreiter, Johannes Brandstetter

NeurIPSW 2024. One Initialization to Rule Them All: Fine-Tuning via Explained Variance Adaptation. Fabian Paischer, Lukas Hauzenberger, Thomas Schmied, Benedikt Alkin, Marc Peter Deisenroth, Sepp Hochreiter
NeurIPS 2024. Universal Physics Transformers: A Framework for Efficiently Scaling Neural Operators. Benedikt Alkin, Andreas Fürst, Simon Schmid, Lukas Gruber, Markus Holzleitner, Johannes Brandstetter

ICMLW 2024. Vision-LSTM: xLSTM as Generic Vision Backbone. Benedikt Alkin, Maximilian Beck, Korbinian Pöppel, Sepp Hochreiter, Johannes Brandstetter