My current central research interests include the following.
Computational modeling of lexical processing.
Funded by the European Research Council, we are investigating the potential of wide learning (modeling with huge linear networks) for understanding human lexical processing in reading, listening, and speaking. We have recently provided a proof of concept that the processing of both simple and morphologically complex words can be modeled with very high accuracy without requiring theoretical constructs such as morphemes, stems, exponents, inflectional classes, and exceptions. Our new model, Linear Discriminative Learning, is a formalization of Word and Paradigm Morphology (Blevins, 2016, CUP) that is grounded in discrimination learning. A detailed study of English inflectional and derivational morphology is provided in Baayen, Chuang, Shafaei-Bajestan and Blevins (2019, Complexity), and a small case study for Latin is available in Baayen, Chuang and Blevins (2018, The Mental Lexicon). I am both excited and puzzled that the simple linear mappings underlying Linear Discriminative Learning work so well.
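To give a concrete sense of what these linear mappings involve, here is a minimal sketch in R of the comprehension mapping (a matrix F solving C F = S, estimated with the Moore-Penrose generalized inverse, as in the Complexity paper). The tiny form matrix C and semantic matrix S below are invented purely for illustration; the real model works with matrices spanning thousands of words and cues.

    # Toy comprehension mapping in the style of Linear Discriminative Learning.
    # C: form matrix (rows = words, columns = sublexical cues, invented here);
    # S: semantic matrix (rows = words, columns = toy semantic dimensions).
    library(MASS)                          # provides ginv(), the generalized inverse

    C <- rbind(walk  = c(1, 1, 0, 0),
               walks = c(1, 0, 1, 1))
    S <- rbind(walk  = c(0.9, 0.1, 0.2),
               walks = c(0.8, 0.6, 0.3))

    Fmap <- ginv(C) %*% S                  # the mapping F in C %*% F = S
    S_hat <- C %*% Fmap                    # predicted semantic vectors
    round(S_hat, 2)                        # closely approximates S

Production runs the same logic in the opposite direction, mapping the semantic vectors back onto the form vectors.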
Phonetics.
In my lab, we are using electromagnetic articulography and ultrasound to clarify how speakers move their jaw and tongue during articulation. We have been studying dialect differences (speakers in the north-east of the Netherlands articulate with their tongue further back in the mouth than speakers in the centre-east; Wieling et al., 2016, Journal of Phonetics), and we have recently obtained evidence that practice makes perfect for articulation as well (Tomaschek et al., 2018, Linguistics Vanguard). We are also modeling the different acoustic durations of homophonous suffixes (e.g., English -s, which on nouns expresses the plural or the genitive, and on verbs the third person singular) using discriminative learning.
Statistical methods.
I have a long-standing interest in statistical methods, including linear mixed-effects models, random forests, generalized additive models, quantile regression, and survival analysis. I am especially impressed by the combination of quantile regression and generalized additive modeling implemented in the qgam package for R by Matteo Fasiolo (University of Bristol). I love exploratory data analysis, and I have learned most from those experiments that flatly contradicted my predictions and revealed unexpected new trends in my data.
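For readers who want to see what this combination looks like in practice, below is a minimal sketch of an additive quantile regression fitted with qgam; the simulated data frame and its variable names are made up purely for illustration.

    # Minimal additive quantile regression with qgam (simulated data).
    library(qgam)                          # attaches mgcv, which supplies s()
    set.seed(1)
    d <- data.frame(x = runif(500))
    d$y <- 200 * sin(2 * pi * d$x) + rgamma(500, shape = 2, scale = 30)

    fit <- qgam(y ~ s(x), data = d, qu = 0.9)   # fit the 0.9 quantile of y
    summary(fit)
    plot(fit, pages = 1)

The qu argument selects the quantile of the response to which the smooth is fitted, so the same formula can be refitted at, say, qu = 0.1 and qu = 0.9 to compare the lower and upper tails of the distribution.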
Teaching
SoSe 2023
Methods II: Statistics
SoSe 2023
Multimodal communication
WiSe 2022/2023
Linguistics for Cognitive Science
SoSe 2022
Methods II: Statistics
WiSe 2021/2022
Linguistics for Cognitive Science
SoSe 2021
Methods II: Statistics
WiSe 2020/2021
Linguistics for Cognitive Science
WiSe 2019/2020
Linguistics for Cognitive Science
SoSe 2019
Computational Models of Morphological Processing
SoSe 2019
Introduction to Regression and Data Analysis
WiSe 2018/2019
Introduction to Linguistics for Cognitive Science
SoSe 2018
Advanced Regression Modeling
SoSe 2018
Introduction to Regression and Data Analysis
SoSe 2018
Introduction to Linguistics for Cognitive Science
Selected publications
Shen, T., and Baayen, R. H. (2023). Productivity and semantic transparency: An exploration of word formation in Mandarin Chinese. The Mental Lexicon, 1-22.
Baayen, R. H., Fasiolo, M., Wood, S., and Chuang, Y.-Y. (2022). A note on the modeling of the effects of experimental time in psycholinguistic experiments. The Mental Lexicon, 1-35.
Shafaei-Bajestan, E., Moradipour-Tari, M., Uhrig, P., and Baayen, R. H. (2021). LDL-AURIS: A computational model, grounded in error-driven learning, for the comprehension of single spoken words. Language, Cognition and Neuroscience, 1-28.
Shen, T., and Baayen, R. H. (2021). Adjective-noun compounds in Mandarin: A study on productivity. Corpus Linguistics and Linguistic Theory, 1-30.
Tomaschek, F., Tucker, B. V., Ramscar, M., and Baayen, R. H. (2021). Paradigmatic enhancement of stem vowels in regular English inflected verb forms. Morphology, 31, 171-199.
Baayen, R. H., and Smolka, E. (2020). Modeling morphological priming in German with naive discriminative learning. Frontiers in Communication, section Language Sciences, 1-40.
Chuang, Y.-Y., Bell, M. J., Banke, I., and Baayen, R. H. (2020). Bilingual and multilingual mental lexicon: A modeling study with Linear Discriminative Learning. Language Learning, 1-73.
Chuang, Y.-Y., Vollmer, M.-L., Shafaei-Bajestan, E., Gahl, S., Hendrix, P., and Baayen, R. H. (2020). The processing of pseudoword form and meaning in production and comprehension: A computational modeling approach using Linear Discriminative Learning. Behavior Research Methods, 1-51.
Baayen, R. H., Chuang, Y.-Y., Shafaei-Bajestan, E., and Blevins, J. P. (2019). The discriminative lexicon: A unified computational model for the lexicon and lexical processing in comprehension and production grounded not in (de)composition but in linear discriminative learning. Complexity, 1-39.
Tomaschek, F., Plag, I., Ernestus, M., and Baayen, R. H. (2019). Phonetic effects of morphology and context: Modeling the duration of word-final S in English with naïve discriminative learning. Journal of Linguistics, 1-39.
Sering, K., Milin, P., and Baayen, R. H. (2018). Language comprehension as a multi-label classification problem. Statistica Neerlandica, 72, 339-353.
Tomaschek, F., Tucker, B. V., Fasiolo, M., and Baayen, R. H. (2018). Practice makes perfect: The consequences of lexical proficiency for articulation. Linguistics Vanguard, 4, 1-13.
Baayen, R. H., Vasishth, S., Kliegl, R., and Bates, D. (2017). The cave of shadows: Addressing the human factor with generalized additive mixed models. Journal of Memory and Language, 94, 206-234.
Linke, M., Bröker, F., Ramscar, M., and Baayen, R. H. (2017). Are baboons learning "orthographic" representations? Probably not. PLoS ONE, 12(8), e0183876.
Baayen, R. H., Shaoul, C., Willits, J., and Ramscar, M. (2015). Comprehension without segmentation: A proof of concept with naive discriminative learning. Language, Cognition and Neuroscience, 31, 106-128.
Ramscar, M., and Baayen, R. H. (2014). The myth of cognitive decline: Why our minds improve as we age. New Scientist, 221(2961), 28-29.
Kösling, K., Kunter, G., Baayen, R. H., and Plag, I. (2013). Prominence in triconstituent compounds: Pitch contours and linguistic theory. Language and Speech, 56, 529-554.
Baayen, R. H., Milin, P., Durdević, D. F., Hendrix, P., and Marelli, M. (2011). An amorphous model for morphological processing in visual comprehension based on naive discriminative learning. Psychological Review, 118, 438-482.
Baayen, R. H. (2008). Analyzing linguistic data: A practical introduction to statistics using R. Cambridge University Press.
Baayen, R. H. (2001). Word Frequency Distributions. Kluwer Academic Publishers.
Presentations in the last 2 years
2023
Stupak, I. V., and Baayen, R. H., German affixed words: morphological productivity and semantic transparency. 16th International Cognitive Linguistics Conference (ICLC16), Düsseldorf, Germany, August 8, 2023 (poster presentation).
Chuang, Y.-Y., Baayen, R. H., and Bell, M., Do words sing their own tunes? Word-specific pitch realizations in Mandarin and English, 20th International Congress of Phonetic Sciences (ICPhS), Prague, Czech Republic, August 7, 2023 (poster presentation).
Baayen, R. H., Chuang, Y.-Y., and Heitmeier, M., Discriminative learning and the lexicon: NDL and LDL, STEP2023 – CCP Spring Training in Experimental Psycholinguistics, Edmonton, Canada, June 14 and 16, 2023 (virtual).
2022
Baayen, R. H., Frequency-informed Linear Discriminative Learning, Ingo Plag Celebration Colloquium, Düsseldorf, Germany, September 1, 2022 (keynote).
Baayen, R. H., Modeling lexical processing with linear mappings, International Seminar on Language Culture and Cognition (part of the series from the National Coordination of the National Institute for Anthropology and History), Mexico City, Mexico, May 31, 2022 (virtual keynote).
Baayen, R. H., Heitmeier, M., and Chuang, Y.-Y., Word learning never stops - evidence from computational modeling, Colloquium Research Training Group "Dynamics and stability of linguistic representations", Marburg, Germany, May 20, 2022.
Baayen, R. H., Understanding what word embeddings understand, Groningen Spring School Cognitive Modeling, Groningen, Netherlands, April 7, 2022 (keynote).
Baayen, R. H., Modeling lexical processing with linear mappings, Surrey Linguistics Circle, Guildford, UK, March 29, 2022 (virtual talk).
Baayen, R. H., Modeling lexical processing with linear mappings, UCL (University College London) Language & Cognition seminar series, London, UK, March 16, 2022 (virtual talk).
Baayen, R. H., Shafaei-Bajestan, E., Chuang, Y.-Y., and Heitmeier, M., Productivity in inflection, 44th Annual Conference of the German Linguistic Society (DGfS 2022), Tübingen, Germany, February 23, 2022 (virtual talk).
Baayen, R. H., and Gahl, S., Time and thyme again: Connecting spoken word duration to models of the mental lexicon, Morphology in Production and Perception (MPP2022), Düsseldorf, Germany, February 7, 2022 (virtual talk).
Baayen, R. H., Chuang, Y.-Y., Hsieh, S.-K., Tseng, S., Chen, J., and Shen, T., Conceptualising for compounding: Mandarin two-syllable compounds and names, Workshop on Morphology and Word Embeddings, Tübingen, Germany, January 18, 2022 (virtual talk).
2021
Baayen, R. H., Explorations into gesture, 2021 International Conference on Multimodal Communication: Emerging Computational and Technical Methods (ICMC2021), Changsha, China, December 11, 2021 (virtual talk).
Heitmeier, M., Chuang, Y.-Y., and Baayen, R. H., Modeling German nonword plural productions with Linear Discriminative Learning, Words in the World 2021, Montreal, Canada, November 26, 2021 (virtual poster presentation).
Shafaei-Bajestan, E., Moradipour-Tari, M., Uhrig, P., and Baayen, R. H., Inflectional analogies with word embeddings: there is more than the average, Words in the World 2021, Montreal, Canada, November 26, 2021 (virtual talk).
Baayen, R. H., and Chuang, Y.-Y., Modeling morphology with multivariate multiple regression, Workshop Recent Approaches to the Quantitative Study of Language: Rules and Un-rules, Neuchâtel, Switzerland, October 14, 2021 (virtual talk).
Shen, T., and Baayen, R. H., Productivity and semantic transparency: An exploration of compounding in Mandarin, Workshop Perspectives on productivity, Leuven, Belgium, May 26, 2021 (virtual talk).
Baayen, R. H., and Gahl, S., Thyme and time again: Semantics all the way down, Internal Workshop FOR2373, Düsseldorf, Germany, March 18, 2021 (virtual talk).
Luo, X., Chuang, Y.-Y., and Baayen, R. H., Linear Discriminative Learning in Julia, International Conference on Error-Driven Learning in Language (EDLL 2021), Tübingen, Germany, March 11, 2021 (virtual poster presentation).