<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
	 xmlns:content="http://purl.org/rss/1.0/modules/content/"
	 xmlns:utevent="http://uni-tuebingen.de/ns/event/"
	 xmlns:f="http://typo3.org/ns/TYPO3/CMS/Fluid/ViewHelpers"
	 xmlns:ut="http://typo3.org/ns/Unitue/Project/ViewHelpers"
	 xmlns:n="http://typo3.org/ns/GeorgRinger/News/ViewHelpers">
	<channel>
		<title>Home Wichmann-Lab</title><link>https://uni-tuebingen.de/fakultaeten/mathematisch-naturwissenschaftliche-fakultaet/fachbereiche/informatik/lehrstuehle/neuronale-informationsverarbeitung/home/</link><description>The RSS feed of the University of Tübingen</description><language>en</language><copyright>Universität Tübingen</copyright><pubDate>Wed, 22 Apr 2026 06:55:00 +0200</pubDate><lastBuildDate>Wed, 22 Apr 2026 06:55:00 +0200</lastBuildDate><item><guid isPermaLink="false">news-129453</guid><pubDate>Wed, 18 Mar 2026 10:35:15 +0100</pubDate><title>New article published at ICLR 2026</title><link>https://uni-tuebingen.de/fakultaeten/mathematisch-naturwissenschaftliche-fakultaet/fachbereiche/informatik/lehrstuehle/neuronale-informationsverarbeitung/news/newsfullview-aktuell/article/new-article-published-at-iclr-2026/</link><description>Title: &quot;Low-Pass Filtering Improves Behavioral Alignment of Vision Models&quot; by Max Wolff, Thomas Klein, Evgenia Rusak, Felix A. Wichmann and Wieland Brendel</description><content:encoded><![CDATA[<p><strong>Abstract:</strong><br>Despite their impressive performance on computer vision benchmarks, Deep Neural Networks (DNNs) still fall short of adequately modeling human visual behavior, as measured by error consistency and shape bias. Recent work hypothesized that behavioral alignment can be drastically improved through generative—rather than discriminative—classifiers, with far-reaching implications for models of human vision.<br>Here, we instead show that the increased alignment of generative models can be largely explained by a seemingly innocuous resizing operation in the generative model which effectively acts as a low-pass filter. In a series of controlled experiments, we show that removing high-frequency spatial information from discriminative models like CLIP drastically increases their behavioral alignment.
Simply blurring images at test-time—rather than training on blurred images—achieves a new state-of-the-art score on the model-vs-human benchmark, halving the current alignment gap between DNNs and human observers. Furthermore, low-pass filters are likely optimal, which we demonstrate by directly optimizing filters for alignment. To contextualize the performance of optimal filters, we compute the frontier of all possible Pareto-optimal solutions to the benchmark, which was formerly unknown.<br>We explain our findings by observing that the frequency spectrum of optimal Gaussian filters roughly matches the spectrum of band-pass filters implemented by the human visual system. We show that the contrast sensitivity function, describing the inverse of the contrast threshold required for humans to detect a sinusoidal grating as a function of spatiotemporal frequency, is approximated well by Gaussian filters of the specific width that also maximizes error consistency.</p><p>To see the whole article, please visit our <a href="/fakultaeten/mathematisch-naturwissenschaftliche-fakultaet/fachbereiche/informatik/lehrstuehle/neuronale-informationsverarbeitung/publications/peer-reviewed/#c2388621">publications page.</a></p>]]></content:encoded><category>NeuroInfo</category></item><item><guid isPermaLink="false">news-128796</guid><pubDate>Mon, 02 Mar 2026 16:13:56 +0100</pubDate><title>New article published in the Journal of Vision</title><link>https://uni-tuebingen.de/fakultaeten/mathematisch-naturwissenschaftliche-fakultaet/fachbereiche/informatik/lehrstuehle/neuronale-informationsverarbeitung/news/newsfullview-aktuell/article/new-article-published-in-the-journal-of-vision-5/</link><description>Title: &quot;Behavioral differences between humans and machines arise early in visual processing&quot; by Thomas Klein, Wieland Brendel and Felix Wichmann</description><content:encoded><![CDATA[<p><strong>Abstract:</strong><br>It remains an open question to what extent current deep neural
networks (DNNs) are suitable computational models of the human visual system. While DNNs have proven to be capable of predicting neural activations in primate visual cortex with great success, psychophysical experiments have shown behavioral differences between DNNs and human observers. One of these behavioral differences is which individual images DNNs and human observers find easy or difficult to recognize, as quantified by <i>error consistency (EC)</i>. Hypothetically, the reported differences in EC could arise late in visual processing, even though the representations extracted by DNNs and human observers may have been more similar in the initial forward sweep: At the presentation and response times investigated in earlier work, observer-internal idiosyncrasies (e.g., in feedback-mediated memory) might have influenced the final behavioral responses, lowering EC between DNNs and human observers. To test this hypothesis, we systematically vary presentation times of backward-masked stimuli from 8.3 to 267 ms and measure human performance on a speeded eightfold identification task with natural images. 
Contrary to the hypothesis that error consistency peaks early in time, we find that it never exceeds the value of 0.4 known from previous work with longer presentation times, suggesting that the differences between DNNs and humans cannot be explained by late high-level reasoning but point to systematic processing differences between DNNs and the early human visual system.</p><p>To see the whole article, please visit our <a href="/fakultaeten/mathematisch-naturwissenschaftliche-fakultaet/fachbereiche/informatik/lehrstuehle/neuronale-informationsverarbeitung/publications/peer-reviewed/#c2388621">publications page.</a></p>]]></content:encoded><category>NeuroInfo</category></item><item><guid isPermaLink="false">news-127314</guid><pubDate>Wed, 14 Jan 2026 09:12:53 +0100</pubDate><title>New abstract accepted as a poster at TeaP 2026, Tübingen, FRG</title><link>https://uni-tuebingen.de/fakultaeten/mathematisch-naturwissenschaftliche-fakultaet/fachbereiche/informatik/lehrstuehle/neuronale-informationsverarbeitung/news/newsfullview-aktuell/article/new-abstract-accepted-as-a-poster-at-teap-2026-tuebingen-frg/</link><description>Title: &quot;Modern datasets to constrain parameters of early spatial vision models: improved methodology and increased statistical power&quot; by Maryam Jannati and Felix A. 
Wichmann</description><content:encoded><![CDATA[<p>To see the whole abstract, please visit our <a href="/fakultaeten/mathematisch-naturwissenschaftliche-fakultaet/fachbereiche/informatik/lehrstuehle/neuronale-informationsverarbeitung/publications/conference-abstracts/#c2445306">publications page.</a></p>]]></content:encoded><category>NeuroInfo</category></item><item><guid isPermaLink="false">news-125310</guid><pubDate>Mon, 10 Nov 2025 18:44:36 +0100</pubDate><title>New article published at NeurIPS 2025</title><link>https://uni-tuebingen.de/fakultaeten/mathematisch-naturwissenschaftliche-fakultaet/fachbereiche/informatik/lehrstuehle/neuronale-informationsverarbeitung/news/newsfullview-aktuell/article/new-article-published-to-neurips-2025/</link><description>Title: &quot;Quantifying Uncertainty in Error Consistency: Towards Reliable Behavioral Comparison of Classifiers&quot; by Thomas Klein, Sascha Meyen, Wieland Brendel, Felix A. Wichmann and Kristof Meding</description><content:encoded><![CDATA[<div><p><strong>Abstract:</strong><br>Benchmarking models is a key factor for the rapid progress in machine learning (ML) research. Thus, further progress depends on improving benchmarking metrics. A standard metric to measure the behavioral alignment between ML models and human observers is error consistency (EC). EC allows for more fine-grained comparisons of behavior than other metrics such as accuracy, and has been used in the influential Brain-Score benchmark to rank different DNNs by their behavioral consistency with humans. Previously, EC values have been reported without confidence intervals. However, empirically measured EC values are typically noisy; thus, without confidence intervals, valid benchmarking conclusions are problematic. Here we improve on standard EC in two ways: First, we show how to obtain confidence intervals for EC using a bootstrapping technique, allowing us to derive significance tests for EC.
Second, we propose a new computational model relating the EC between two classifiers to the implicit probability that one of them copies responses from the other. This view of EC allows us to give practical guidance to scientists regarding the number of trials required for sufficiently powerful, conclusive experiments. Finally, we use our methodology to revisit popular NeuroAI results. We find that while the general trend of behavioral differences between humans and machines holds up to scrutiny, many reported differences between deep vision models are statistically insignificant. Our methodology enables researchers to design adequately powered experiments that can reliably detect behavioral differences between models, providing a foundation for more rigorous benchmarking of behavioral alignment.</p><div class="note-content-value markdown-rendered"><p>To see the whole article, please visit our <a href="https://uni-tuebingen.de/fakultaeten/mathematisch-naturwissenschaftliche-fakultaet/fachbereiche/informatik/lehrstuehle/neuronale-informationsverarbeitung/publications/peer-reviewed/#c2193906" target="_blank">publications page.</a></p></div></div>]]></content:encoded><category>NeuroInfo</category></item><item><guid isPermaLink="false">news-124242</guid><pubDate>Thu, 16 Oct 2025 16:36:46 +0200</pubDate><title>New article published in the journal &quot;Cognition&quot;</title><link>https://uni-tuebingen.de/fakultaeten/mathematisch-naturwissenschaftliche-fakultaet/fachbereiche/informatik/lehrstuehle/neuronale-informationsverarbeitung/news/newsfullview-aktuell/article/new-article-published-in-the-journal-cognition/</link><description>Title: &quot;Tracing truth through conceptual scaling&quot; by Lukas S. Huber, David-Elias Künstle and Kevin Reuter</description><content:encoded><![CDATA[<p><strong>Abstract:</strong><br>Conceptions of truth have shifted considerably, adapting to the changing cultural and intellectual contexts of our time.
In this paper we employ a conceptual scaling method (Study 1) to empirically capture laypeople’s understanding of truth&nbsp;as spatial relations within individualized conceptual maps. Results indicate that participants most dominantly align with a correspondence notion of truth, followed by authenticity and then coherence. A more fine-grained analysis reveals substantial variation in pluralism: while some participants exhibit a strongly monistic tendency, many others endorse a two-theory blend (most often correspondence and authenticity). In a follow-up study (Study 2) conducted three months later, we confirm the validity and robustness of these findings. Participants’ dominant alignment reliably predicts how they apply the concept of truth in a contextualized task.</p><p>To see the whole article, please visit our <a href="https://uni-tuebingen.de/fakultaeten/mathematisch-naturwissenschaftliche-fakultaet/fachbereiche/informatik/lehrstuehle/neuronale-informationsverarbeitung/publications/peer-reviewed/#c2193906" target="_blank">publications page.</a></p>]]></content:encoded><category>NeuroInfo</category></item>
	</channel>
</rss>

