News
Honorary doctorate from the University of Tartu.
My current central research interests include the following.
Computational modeling of lexical processing.
Funded by the European Research Council, we are investigating the potential of wide learning (modeling with very large linear networks) for understanding human lexical processing in reading, listening, and speaking. We have recently provided a proof of concept that both simple and morphologically complex words can be processed with very high accuracy without requiring theoretical constructs such as morphemes, stems, exponents, inflectional classes, and exceptions. Our new model, Linear Discriminative Learning, is a formalization of Word and Paradigm Morphology (Blevins, 2016, CUP) that is grounded in discrimination learning. A detailed study of English inflectional and derivational morphology is provided in Baayen, Chuang, Shafaei-Bajestan and Blevins (2019, Complexity), and a small case study for Latin is available in Baayen, Chuang and Blevins (2018, The Mental Lexicon). I am both excited and puzzled that the simple linear mappings underlying Linear Discriminative Learning work so well.
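The core idea of such a linear mapping can be sketched in a few lines. The following toy example (in Python with numpy; the words, cue set, and semantic dimensions are invented for illustration and drastically reduced compared to any real model) maps form vectors to semantic vectors with a single matrix estimated by multivariate least squares:

```python
import numpy as np

# Toy illustration of a linear comprehension mapping.
# C: one row per word, marking which letter trigrams (cues) it contains.
# Columns: #wa, alk, lks, ked  (an invented, tiny cue inventory).
C = np.array([
    [1, 1, 0, 0],   # walk
    [1, 1, 1, 0],   # walks
    [1, 1, 0, 1],   # walked
], dtype=float)

# S: one row per word with a hand-crafted semantic vector.
# Columns: WALK, 3SG, PAST  (again, invented dimensions).
S = np.array([
    [1, 0, 0],      # walk
    [1, 1, 0],      # walks
    [1, 0, 1],      # walked
], dtype=float)

# Estimate the mapping F by least squares: F = C^+ S,
# with C^+ the Moore-Penrose pseudoinverse of C.
F = np.linalg.pinv(C) @ S

# Predicted semantic vectors for all words: S_hat = C F.
S_hat = C @ F

# Comprehension counts as correct when a word's predicted vector is
# closest (here: Euclidean distance) to its own target vector.
predicted = [int(np.argmin(np.linalg.norm(S - s_hat, axis=1)))
             for s_hat in S_hat]
print(predicted)  # → [0, 1, 2]: each word is mapped to its own meaning
```

No morphemes or parsing steps appear anywhere: the whole of comprehension is one matrix multiplication, which is what makes the accuracy of such mappings on realistic data so surprising.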
Phonetics.
In my lab, we are using electromagnetic articulography and ultrasound to clarify how speakers move their jaw and tongue during articulation. We have been studying dialect differences (speakers in the north-east of the Netherlands articulate with their tongue further back in the mouth than speakers in the central east; Wieling et al., 2016, Journal of Phonetics), and we have recently obtained evidence that practice makes perfect for articulation as well (Tomaschek et al., 2018, Linguistics Vanguard). We are also using discriminative learning to model the different acoustic durations of homophonous suffixes (e.g., English -s, which on nouns expresses the plural or genitive, and on verbs the third person singular).
Statistical methods.
I have a long-standing interest in statistical methods, including linear mixed effects models, random forests, generalized additive models, quantile regression, and survival analysis. I am especially impressed by the
combination of quantile regression and generalized additive modeling as implemented in the qgam package for R by Matteo Fasiolo (University of Bristol). I love exploratory data analysis and have learned most from those experiments that flatly contradicted my predictions, and revealed unexpected new trends in my data.
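qgam itself is an R package; what it optimizes at each quantile level is the pinball ("check") loss. As a minimal language-agnostic sketch (here in Python with numpy, on invented toy data), the following shows the defining property that makes quantile regression work: the constant that minimizes the mean pinball loss at level tau is the empirical tau-quantile:

```python
import numpy as np

# The pinball (check) loss minimized by quantile regression.
# For residual r = y - mu and quantile level tau:
#   loss = tau * r        if r >= 0
#        = (tau - 1) * r  if r <  0
def pinball(y, mu, tau):
    r = y - mu
    return np.mean(np.where(r >= 0, tau * r, (tau - 1.0) * r))

rng = np.random.default_rng(1)
# Skewed toy data, loosely reminiscent of reaction-time distributions.
y = rng.gamma(shape=2.0, scale=1.0, size=5000)

# Minimize the mean pinball loss over a constant mu by scanning a grid
# (a real quantile regression would fit covariates with a solver).
tau = 0.9
grid = np.linspace(y.min(), y.max(), 2000)
losses = [pinball(y, mu, tau) for mu in grid]
best = grid[int(np.argmin(losses))]

print(best, np.quantile(y, tau))  # the two values nearly coincide
```

Quantile GAMs such as those in qgam replace the constant mu with a smooth function of the predictors while keeping this same loss, which is what makes it possible to model, say, the upper tail of a reaction-time distribution directly.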
Teaching
SuSe 2025: Linguistics for Cognitive Science
SuSe 2025: Methods II: Statistics
SuSe 2024: Methods II: Statistics
WiSe 2023: Linguistics for Cognitive Science
SuSe 2023: Methods II: Statistics
SuSe 2023: Multimodal Communication
WiSe 2022: Linguistics for Cognitive Science
SuSe 2022: Methods II: Statistics
WiSe 2021: Linguistics for Cognitive Science
WiSe 2021: Methods II: Statistics
WiSe 2020: Linguistics for Cognitive Science
WiSe 2019: Linguistics for Cognitive Science
SuSe 2019: Computational Models of Morphological Processing
SuSe 2019: Introduction to Regression and Data Analysis
WiSe 2018: Linguistics for Cognitive Science
SuSe 2018: Introduction to Linguistics for Cognitive Science
SuSe 2018: Advanced Regression Modeling
SuSe 2018: Introduction to Regression and Data Analysis
WiSe 2017: Linguistics for Cognitive Science
WiSe 2017: Hierarchical Linear Models
SuSe 2017: Regression modeling strategies for the analysis of linguistic and psycholinguistic data
SuSe 2016: Introduction to Cognitive Models of Language Processing
SuSe 2016: Regression modeling strategies for the analysis of linguistic and psycholinguistic data
WiSe 2015: Linguistics for Cognitive Science
SuSe 2015: Introduction to Cognitive Models of Language Processing
SuSe 2015: Regression modeling strategies for the analysis of linguistic and psycholinguistic data
WiSe 2014: Mathematics for Linguistics
WiSe 2014: Linguistics for Cognitive Science
SuSe 2014: Introduction to Cognitive Models of Language Processing
SuSe 2014: Regression modeling strategies for the analysis of linguistic and psycholinguistic data
WiSe 2013: Mathematics for Linguistics
WiSe 2013: Linguistics for Cognitive Science
SuSe 2013: Introduction to Cognitive Models of Language Processing
SuSe 2013: Regression modeling strategies for the analysis of linguistic and psycholinguistic data
WiSe 2012: Mathematics for Linguistics
WiSe 2012: Linguistics for Cognitive Science
SuSe 2012: Causality and Language
SuSe 2012: Regression modeling strategies for the analysis of linguistic and psycholinguistic data
WiSe 2011: Introduction to Cognitive Models of Language Processing
WiSe 2011: Linguistics for Cognitive Science
Selected publications
Yang, Y., and Baayen, R. H. (2025). Comparing the semantic structures of the lexicons of Mandarin and English. Language and Cognition, 17, e10, 1-30. pdf
Shahmohammadi, H., Heitmeier, M., Shafaei-Bajestan, E., Lensch, H. P. A., and Baayen, R. H. (2024). How direct is the link between words and images? The Mental Lexicon, 1-40. pdf
Shen, T., and Baayen, R. H. (2023). Productivity and semantic transparency: An exploration of word formation in Mandarin Chinese. The Mental Lexicon, 1-22. pdf
Baayen, R. H., Fasiolo, M., Wood, S., and Chuang, Y.-Y. (2022). A note on the modeling of the effects of experimental time in psycholinguistic experiments. The Mental Lexicon, 1-35. pdf
Shafaei-Bajestan, E., Moradipour-Tari, M., Uhrig, P., and Baayen, R. H. (2021). LDL-AURIS: A computational model, grounded in error-driven learning, for the comprehension of single spoken words. Language, Cognition and Neuroscience, 1-28. pdf
Shen, T., and Baayen, R. H. (2021). Adjective-Noun Compounds in Mandarin: a Study on Productivity. Corpus Linguistics and Linguistic Theory, 1-30. pdf
Tomaschek, F., Tucker, B. V., Ramscar, M., and Baayen, R. H. (2021). Paradigmatic enhancement of stem vowels in regular English inflected verb forms. Morphology, 31, 171-199. pdf
Baayen, R. H., and Smolka, E. (2020). Modeling morphological priming in German with naive discriminative learning. Frontiers in Communication, section Language Sciences, 1-40. pdf
Chuang, Y.-Y., Bell, M. J., Banke, I., and Baayen, R. H. (2020). Bilingual and multilingual mental lexicon: a modeling study with Linear Discriminative Learning. Language Learning, 1-73. pdf
Chuang, Y.-Y., Vollmer, M.-L., Shafaei-Bajestan, E., Gahl, S., Hendrix, P., and Baayen, R. H. (2020). The processing of pseudoword form and meaning in production and comprehension: A computational modeling approach using Linear Discriminative Learning. Behavior Research Methods, 1-51. pdf
Baayen, R. H., Chuang, Y.-Y., Shafaei-Bajestan, E., and Blevins, J. P. (2019). The discriminative lexicon: A unified computational model for the lexicon and lexical processing in comprehension and production grounded not in (de)composition but in linear discriminative learning. Complexity, 1-39. pdf
Tomaschek, F., Plag, I., Ernestus, M., and Baayen, R. H. (2019). Phonetic effects of morphology and context: Modeling the duration of word-final S in English with naïve discriminative learning. Journal of Linguistics, 1-39. pdf
Sering, K., Milin, P., and Baayen, R. H. (2018). Language comprehension as a multi-label classification problem. Statistica Neerlandica, 72, 339-353. pdf
Tomaschek, F., Tucker, B. V., Fasiolo, M., and Baayen, R. H. (2018). Practice makes perfect: The consequences of lexical proficiency for articulation. Linguistics Vanguard, 4, 1-13. pdf
Baayen, R. H., Vasishth, S., Kliegl, R., and Bates, D. (2017). The cave of shadows: Addressing the human factor with generalized additive mixed models. Journal of Memory and Language, 94, 206-234. pdf
Linke, M., Bröker, F., Ramscar, M., and Baayen, R. H. (2017). Are baboons learning "orthographic" representations? Probably not. PLoS ONE, 12(8), e0183876. pdf
Baayen, R. H., Shaoul, C., Willits, J., and Ramscar, M. (2015). Comprehension without segmentation: A proof of concept with naive discriminative learning. Language, Cognition, and Neuroscience, 31, 106-128. pdf
Ramscar, M., and Baayen, R. H. (2014). The myth of cognitive decline: why our minds improve as we age. New Scientist, 221(2961), 28-29. pdf
Kösling, K., Kunter, G., Baayen, R. H., and Plag, I. (2013). Prominence in triconstituent compounds: Pitch contours and linguistic theory. Language and Speech, 56, 529-554. pdf
Baayen, R. H., Milin, P., Filipović Đurđević, D., Hendrix, P., and Marelli, M. (2011). An amorphous model for morphological processing in visual comprehension based on naive discriminative learning. Psychological Review, 118, 438-482. pdf
Baayen, R. H. (2008). Analyzing linguistic data: A practical introduction to statistics using R. Cambridge University Press. url
Baayen, R. H. (2001). Word Frequency Distributions. Kluwer Academic Publishers. pdf
Presentations in the last 2 years
2025
Baayen, R. H., How can it be so simple? Predicting the F0-contours of Mandarin words in spontaneous speech from their corresponding contextualized embeddings with linear mappings, Guangzhou University School of Foreign Studies, Guangzhou, China, March 24, 2025.
Baayen, R. H., How does language work? Challenges and opportunities in the age of deep learning, National Taiwan Normal University, co-hosted by the College of Liberal Arts, Taipei, Taiwan, March 20, 2025.
Baayen, R. H., How can it be so simple? Predicting the F0-contours of Mandarin words in spontaneous speech from their corresponding contextualized embeddings with linear mappings, Department of Linguistics and Translation, City University of Hong Kong, Hong Kong, China, March 13, 2025.
Baayen, R. H., How does language work? Challenges and opportunities in the age of deep learning, colloquium talk, Institute of Robotics and Cognitive Systems, University of Lübeck, Lübeck, Germany, March 3, 2025.
Baayen, R. H., How does language work? Challenges and opportunities in the age of deep learning, colloquium talk, Faculty of Behavioural and Movement Sciences, Vrije Universiteit, Amsterdam, the Netherlands, February 7, 2025.
2024
Nikolaev, A., Baayen, R. H., and Chuang, Y.-Y., Analyzing Finnish Inflectional Classes through Discriminative Lexicon Models, Digital Research Data and Human Sciences (DRDHum) conference, Joensuu, Finland, December 11, 2024.
Baayen, R. H., How can it be so simple? Predicting the F0-contours of Mandarin words in spontaneous speech from their corresponding contextualized embeddings with linear mappings, TÜling, Tartu, Estonia, December 3, 2024.
Beaman, K. V., Sering, K., and Baayen R. H., Lectal coherence in Swabian time and space: a cognitive-computational perspective, Lecture series on Lectal Coherence: Language Variation and Change across Linguistic Disciplines, München, Germany, November 15, 2024.
Baayen, R. H., and Heitmeier, M., Linear Discriminative Learning, Workshop at the International Word Processing Conference (WoProc 2024), Belgrade, Serbia, July 6, 2024.
Nikolaev, A., Chuang, Y.-Y., and Baayen, R. H., Analyzing Finnish Inflectional Classes through Discriminative Lexicon Models. International Word Processing Conference (WoProc 2024), Belgrade, Serbia, July 5, 2024.
Chuang, Y.-Y., Bell, M. J., Tseng, Y.-H., and Baayen, R. H., Word-specific tonal realizations in Mandarin. International Word Processing Conference (WoProc 2024), Belgrade, Serbia, July 5, 2024.
Baayen, R. H., and Berg, K., Historical and psycholinguistic perspectives on morphological productivity: A sketch of an integrative approach. Unraveling linguistic productivity: Insights into usage, processing and variability conference, Ghent, Belgium, May 21, 2024.
Baayen, R. H., Tseng, Y.-H., and Ernestus, M., Age and speech reduction in the Buckeye corpus: the influence of word meaning on word form. Conference on Corpora for Language and Aging Research (CLARe6), Tübingen, Germany, April 10, 2024.
Baayen, R. H., Perspectives on morphological productivity, Colloquium Center for Language and Cognition (CLCG), Groningen, the Netherlands, January 22, 2024.
Baayen, R. H., Modeling Mandarin tones on two-word compounds, Colloquium English Language and Linguistics, Düsseldorf, Germany, January 19, 2024.