20 November 2025
39 papers accepted at NeurIPS 2025
At this year's NeurIPS conference, 39 contributions by researchers from our Cluster of Excellence were accepted.
The 39th Conference on Neural Information Processing Systems (NeurIPS) takes place at the San Diego Convention Center in the United States from 2 to 7 December 2025 and at the Hilton Mexico City Reforma in Mexico from 30 November to 5 December 2025. NeurIPS is the largest conference on machine learning and computational neuroscience. The aim of the annual meetings is to foster the exchange of research on neural information processing systems in their biological, technological, mathematical, and theoretical aspects. The focus is on peer-reviewed, novel research presented and discussed in a general session, along with invited talks by distinguished experts.
This year, our cluster is represented at NeurIPS with 39 papers.
List of accepted contributions by our members (highlighted) and their team members (all contributions can be found here):
- Direct Alignment with Heterogeneous Preferences
  Ali Shirali, Arash Nasr-Esfahany, Abdullah Alomar, Parsa Mirtaheri, Rediet Abebe, Ariel Procaccia
- TRACE: Contrastive learning for multi-trial time series data in neuroscience
  Lisa Schmors, Dominic Gonschorek, Jan Niklas Böhm, Yongrong Qiu, Na Zhou, Dmitry Kobak, Andreas Tolias, Fabian Sinz, Jacob Reimer, Katrin Franke, Sebastian Damrich, Philipp Berens
- A data and task-constrained mechanistic model of the mouse outer retina shows robustness to contrast variations
  Kyra Kadhim, Jonas Beck, Ziwei Huang, Jakob H Macke, Fred Rieke, Thomas Euler, Michael Deistler, Philipp Berens
- Equivariance by Contrast: Identifiable Equivariant Embeddings from Unlabeled Finite Group Actions
  Tobias Schmidt, Steffen Schneider, Matthias Bethge
- What Moves the Eyes: Doubling Mechanistic Model Performance Using Deep Networks to Discover and Test Cognitive Hypotheses
  Federico D'Agostino, Lisa Schwetlick, Matthias Bethge, Matthias Kümmerer
- AlgoTune: Can Language Models Speed Up General-Purpose Numerical Programs?
  Ori Press, Brandon Amos, Haoyu Zhao, Yikai Wu, Samuel Ainsworth, Dominik Krupke, Patrick Kidger, Touqir Sajed, Bartolomeo Stellato, Jisun Park, Nathanael Bosch, Eli Meril, Albert Steppi, Arman Zharmagambetov, Fangzhao Zhang, David Pérez-Piñeiro, Alberto Mercurio, Ni Zhan, Talor Abramovich, Kilian Lieret, Hanlin Zhang, Shirley Huang, Matthias Bethge, Ofir Press
- BEDLAM2.0: Synthetic humans and cameras in motion
  Joachim Tesch, Giorgio Becherini, Prerana Achar, Anastasios Yiannakidis, Muhammed Kocabas, Priyanka Patel, Michael Black
- HairFree: Compositional 2D Head Prior for Text-Driven 360° Bald Texture Synthesis
  Mirela Ostrek, Michael Black, Justus Thies
- Quantifying Uncertainty in Error Consistency: Towards Reliable Behavioral Comparison of Classifiers
  Thomas Klein, Sascha Meyen, Wieland Brendel, Felix A. Wichmann, Kristof Meding
- Concept-Guided Interpretability via Neural Chunking
  Shuchen Wu, Stephan Alaniz, Shyamgopal Karthik, Peter Dayan, Eric Schulz, Zeynep Akata
- FNOPE: Simulation-based inference on function spaces with Fourier Neural Operators
  Guy Moss, Leah Muhle, Reinhard Drews, Jakob H Macke, Cornelius Schröder
- Put CASH on Bandits: A Max K-Armed Problem for Automated Machine Learning
  Amir Rezaei Balef, Claire Vernade, Katharina Eggensperger
- Position: Benchmarking is Broken - Don't Let AI be Its Own Judge
  Zerui Cheng, Stella Wohnig, Ruchika Gupta, Samiul Alam, Tassallah Abdullahi, João Alves Ribeiro, Christian Nielsen-Garcia, Saif Mir, Siran Li, Jason Orender, Seyed Ali Bahrainian, Daniel Kirste, Aaron Gokaslan, Carsten Eickhoff, Ruben Wolff
- ReSim: Reliable World Simulation for Autonomous Driving
  Jiazhi Yang, Kashyap Chitta, Shenyuan Gao, Long Chen, Yuqian Shao, Xiaosong Jia, Hongyang Li, Andreas Geiger, Xiangyu Yue, Li Chen
- Register and [CLS] tokens induce a decoupling of local and global features in large ViTs
  Alexander Lappe, Martin Giese
- How Benchmark Prediction from Fewer Data Misses the Mark
  Guanhua Zhang, Florian E. Dorner, Moritz Hardt
- Monoculture or Multiplicity: Which Is It?
  Mila Gorecki, Moritz Hardt
- Advancing Compositional Awareness in CLIP with Efficient Fine-Tuning
  Amit Peleg, Naman Deep Singh, Matthias Hein
- Robustness in Both Domains: CLIP Needs a Robust Text Encoder
  Elias Abad Rocamora, Christian Schlarmann, Naman Deep Singh, Yongtao Wu, Matthias Hein, Volkan Cevher
- Rethinking Approximate Gaussian Inference in Classification
  Bálint Mucsányi, Nathaël Da Costa, Philipp Hennig
- Learning in Compact Spaces with Approximately Normalized Transformer
  Jörg Franke, Urs Spiegelhalter, Marianna Nezhurina, Jenia Jitsev, Frank Hutter, Michael Hefenbrock
- Do-PFN: In-Context Learning for Causal Effect Estimation
  Jake Robertson, Arik Reuter, Siyuan Guo, Noah Hollmann, Frank Hutter, Bernhard Schölkopf
- TabArena: A Living Benchmark for Machine Learning on Tabular Data
  Nick Erickson, Lennart Purucker, Andrej Tschalzev, David Holzmüller, Prateek Desai, David Salinas, Frank Hutter
- DeltaProduct: Improving State-Tracking in Linear RNNs via Householder Products
  Julien Siems, Timur Carstensen, Arber Zela, Frank Hutter, Massimiliano Pontil, Riccardo Grazzi
- Gompertz Linear Units: Leveraging Asymmetry for Enhanced Learning Dynamics
  Indrashis Das, Mahmoud Safari, Steven Adriaensen, Frank Hutter
- EquiTabPFN: A Target-Permutation Equivariant Prior Fitted Network
  Michael Arbel, David Salinas, Frank Hutter
- On the Surprising Effectiveness of Large Learning Rates under Standard Width Scaling
  Moritz Haas, Sebastian Bordt, Ulrike Luxburg, Leena Chennuru Vankadara
- Performative Validity of Recourse Explanations
  Gunnar König, Hidde Fokkema, Timo Freiesleben, Celestine Mendler-Dünner, Ulrike Luxburg
- Effortless, Simulation-Efficient Bayesian Inference using Tabular Foundation Models
  Julius Vetter, Manuel Gloeckler, Daniel Gedon, Jakob H Macke
- Identifying multi-compartment Hodgkin-Huxley models with high-density extracellular voltage recordings
  Ian Christopher Tanoh, Michael Deistler, Jakob H Macke, Scott Linderman
- Forecasting in Offline Reinforcement Learning for Non-stationary Environments
  Suzan Ece Ada, Georg Martius, Emre Ugur, Erhan Oztop
- Look-Ahead Reasoning on Learning Platforms
  Haiqing Zhu, Tijana Zrnic, Celestine Mendler-Dünner
- Collective Counterfactual Explanations: Balancing Individual Goals and Collective Dynamics
  Ahmad-Reza Ehyaei, Ali Shirali, Samira Samadi
- SPARTAN: A Sparse Transformer World Model Attending to What Matters
  Anson Lei, Bernhard Schölkopf, Ingmar Posner
- Reparameterized LLM Training via Orthogonal Equivalence Transformation
  Zeju Qiu, Simon Buchholz, Tim Xiao, Maximilian Dax, Bernhard Schölkopf, Weiyang Liu
- Counterfactual reasoning: an analysis of in-context emergence
  Moritz Miller, Bernhard Schölkopf, Siyuan Guo
- Are Language Models Efficient Reasoners? A Perspective from Logic Programming
  Andreas Opedal, Yanick Zengaffinen, Haruki Shirakami, Clemente Pasti, Mrinmaya Sachan, Abulhair Saparov, Ryan Cotterell, Bernhard Schölkopf
- Non-Stationary Lipschitz Bandits
  Nicolas Nguyen, Solenne Gaucher, Claire Vernade
- Quantization-Free Autoregressive Action Transformer
  Ziyad Sheebaelhamd, Michael Tschannen, Michael Muehlebach, Claire Vernade