The Oberseminar features talks by invited speakers and by colleagues from the department. The speakers present current research results from all areas relevant to general and theoretical linguistics. Everyone is welcome to attend. Students in particular are encouraged to participate and experience first-hand research talks by specialists.
This semester, the Oberseminar takes place on Mondays at 4:15 pm at the Seminar für Sprachwissenschaft, Wilhelmstraße 19, room 1.13.
Ralf Vogel (Universität Bielefeld)
Title: Syntactic inventions
Abstract: It is a consequence of the widely assumed frequentist approach to grammaticalisation that some inventions that speakers make occur too rarely to induce language change. Still, these inventions are synchronic phenomena that are at once based on the language system in its current state and yet do not (completely) follow from it. I will show with case studies of several phenomena of German morphosyntax (verbal complexes, reflexivisation and case conflicts) that this slightly paradoxical idea may help to improve our understanding of those phenomena, and perhaps of linguistic competence more broadly.
Of special interest for linguistic theory is the fact that such rare inventions are not arbitrary and do not pose any comprehension problems on the side of the addressee. They are based on the linguistic system in its current historical state, and created by speakers applying general mechanisms in a manner that is transparent to the addressee. These general mechanisms are part of linguistic competence and underlie the speakers’ ability to create new items, words or constructions.
I use the term ad hoc constructions for these special morphosyntactic phenomena.
What makes them particularly interesting are the following properties: 1. they have unexpected features that do not follow from the linguistic units they are based on; 2. they can be shown to indeed occur too rarely to have a chance to grammaticalise; 3. they are nevertheless systematically preferred over alternative variants and have a sufficiently high acceptability rating. Each of these points will be shown to hold for the phenomena under discussion on the basis of experimental and corpus studies.
The grammatical analysis will combine tools from the theory of generalised conversational implicature, construction grammar and optimality theory.
Elke Teich (Universität des Saarlandes)
Title: Conventionalization in diachronic linguistic change: the case of Scientific English
Abstract: The topic of this talk is conventionalization (i.e. the longer-term linguistic effects of repeated interaction) and its benefits for communication (i.e. message transmission). Widely acknowledged as a relevant process in language change, conventionalization provides a prerequisite for innovation (de Smet 2016) and may lead to grammaticalization as well as the formation of registers (Weinreich et al. 1968, Harris 1991). I will elaborate the idea that conventionalization is a cornerstone of changing language use because it serves the maintenance of communicative function by inducing significant surprisal- and entropy-reducing effects. To show this, we pursue an exploratory, corpus-based approach, focusing on scientific writing (Degaetano-Ortlieb and Teich 2019), a well-studied and fairly controlled domain, and its evolution across 250+ years from the mid-17th century onwards. The data set we use is the Royal Society Corpus. To capture lexical and syntactic aspects of changing language use, we employ computational language models; and to evaluate the observed effects, we apply various measures of information content (surprisal, entropy, relative entropy). We find, for instance, that diachronically, within the scientific domain, relative entropy on n-gram models overall decreases, pointing to converging language use over time, but that this is more pronounced at the grammatical level than at the lexical level. For qualitative interpretation, we inspect the linguistic items that significantly contribute to the observed trends, looking at their (average) surprisal in syntagmatic context and the entropy of their paradigmatic context over time.
Degaetano-Ortlieb, S. and E. Teich. 2019. Towards an optimal code for communication: The case of scientific English. Corpus Linguistics and Linguistic Theory (open access). DOI: 10.1515/cllt-2018-0088.
De Smet, H. 2016. How gradual change progresses: The interaction between convention and innovation. Language Variation and Change, 28:83–102.
Harris, Z. 1991. A theory of language and information: A mathematical approach. Clarendon Press, Oxford.
Weinreich, U., W. Labov and M. I. Herzog. 1968. Empirical foundations for a theory of language change. In W. P. Lehmann and Y. Malkiel (eds.), Directions for Historical Linguistics. University of Texas Press, Austin, Texas, pp. 95–195.
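The information-theoretic measures mentioned in the abstract (surprisal, entropy, relative entropy) can be sketched in a few lines of Python. This is a toy illustration over invented word counts, not the actual Royal Society Corpus pipeline; all function and variable names are my own, and the real study computes these measures over n-gram language models rather than raw unigram counts.

```python
import math
from collections import Counter

def unigram_probs(tokens):
    """Maximum-likelihood unigram distribution over a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def surprisal(p, word):
    """Surprisal in bits: -log2 p(word)."""
    return -math.log2(p[word])

def entropy(p):
    """Shannon entropy of the distribution, in bits."""
    return -sum(pw * math.log2(pw) for pw in p.values())

def relative_entropy(p, q, eps=1e-9):
    """KL divergence D(p || q) in bits; words unseen in q get a tiny
    floor probability (crude smoothing), which inflates the divergence."""
    return sum(pw * math.log2(pw / q.get(w, eps)) for w, pw in p.items())

# toy "early" and "late" samples of scientific prose (invented)
early = unigram_probs("it was observed that it was made".split())
late = unigram_probs("it was observed that it was observed".split())
print(entropy(late))                  # uncertainty of the 'late' sample
print(relative_entropy(late, early))  # how far 'late' has drifted from 'early'
```

On this reading, decreasing relative entropy between adjacent time slices corresponds to the converging language use the abstract reports.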
Natasha Korotkova (Universität Konstanz)
Title: Find, must, and conflicting evidence (joint work with Pranav Anand)
Abstract: Recent years have seen a lot of interest in so-called subjective attitudes: English "find" and its counterparts in, e.g., French, German, or Norwegian. Unlike vanilla doxastics (i.e. "think"), find-verbs have been argued to only allow matters of opinion, rather than fact, in their complements. In this talk, we consider one underanalyzed class of expressions in find-complements: epistemic modals. These modals have often been analyzed as subjective expressions, which makes them prime candidates for embedding under "find". However, epistemics are prohibited in subjective attitude complements. The "find+must" ban has been attributed to "must" not being subjective in the right way. We argue instead that the real culprit is a matter of evidence: find-verbs require their subject to have direct evidence for the complement, while "must" and its counterparts in other languages require a lack of direct evidence. Therefore, the "find+must" combination yields an evidential clash. We support our claim with novel cross-linguistic data on find-verbs and a range of indirect expressions, including bona fide evidentials, and analyze the "find+must" ban as a semantic contradiction.
Chundra Cathcart (Universität Zürich)
Title: Horizontal and vertical pressures in language change: fleshing out admixture models
Abstract: This talk presents preliminary results from a handful of studies using Bayesian admixture models to analyze cross-linguistic patterns with an eye to understanding shared history and historical contact between languages. Admixture models (such as the Structure algorithm from population genetics) have enjoyed less use in linguistics than phylogenetic methods; they have the advantage of directly modeling historical language contact, but do not explicitly model diachronic transitions between linguistic states. I focus on two extensions to this basic model which work towards bridging this gap: a neural model of sound change across Indo-Aryan dialects which accounts for both historical dialect group-level trends as well as individual language-level idiosyncrasies, and a model of typological distributions which models both spread and match factors by enhancing the admixture model with Markovian dynamics.
Johanna Nichols (University of California, Berkeley)
Title: Better characters/variables for historical linguistics
Abstract: Both wordlist-based and typology-based comparisons have become widely used in the last decade or so, and there has been much discussion of methods and interpretation. Here I discuss three problems I consider more fundamental: (1) The quality and usefulness of the characters or variables themselves; I propose several that promise much more as comparanda. (2) The coding strategies, especially the treatment of "no" answers on yes/no variables, can have major impact on distance measures and NeighborNet trees. (3) Handling synonymy in typologically-defined wordlist studies, where synonymy is common. Preliminary findings show that a combination of etymological and typological variables can yield good results for both typological and historical analysis.
Title: Error-driven Learning in Modeling Spoken Word Recognition
Abstract: Effective linguistic communication relies on the recognition of words (McQueen, 2007). Although spoken word recognition (SWR) is a vital task in speech comprehension, psycholinguists are still debating some fundamental assumptions decades after the first cognitive theories of SWR (Marslen-Wilson and Welsh, 1978) and the initial computational models (McClelland and Elman, 1986; Norris, 1994). I present the current state of two projects in which we investigated the theory of error-driven learning, outlined by Rescorla and Wagner (1972) for animal and human learning, as a theory of SWR. Computational models of an excised word recognition task were built using the naive discriminative learning (NDL) and the linear discriminative learning (LDL) frameworks. First, Arnold et al. (2017) and Shafaei-Bajestan and Baayen (2018) applied NDL-based models that iteratively learn to classify German and English words, respectively, from their acoustic representations and reported model performance comparable to human performance. Second, Baayen et al. (2019) estimated the linear mappings between words' feature vectors and the words' semantic vectors directly and achieved superior word recognition accuracy compared to NDL. Assumptions in the models, issues in model implementation, initial results, and plans for future work are discussed.
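The Rescorla-Wagner update at the heart of error-driven learning can be sketched compactly: weights from cues to outcomes are adjusted in proportion to the prediction error. The cue and word labels below are invented for illustration; NDL implementations such as those cited operate on far larger cue and outcome sets.

```python
def rescorla_wagner(events, cues_all, outcomes_all, lr=0.1, epochs=100):
    """Error-driven learning of cue-to-outcome association weights.

    events: list of (active_cues, present_outcomes) pairs."""
    w = {c: {o: 0.0 for o in outcomes_all} for c in cues_all}
    for _ in range(epochs):
        for cues, outcomes in events:
            for o in outcomes_all:
                pred = sum(w[c][o] for c in cues)      # current prediction
                target = 1.0 if o in outcomes else 0.0
                delta = lr * (target - pred)           # prediction error,
                for c in cues:                         # shared by all
                    w[c][o] += delta                   # active cues
    return w

# hypothetical acoustic cues (diphone-like) paired with word outcomes
events = [({"ha", "an"}, {"hand"}),
          ({"ha", "at"}, {"hat"})]
w = rescorla_wagner(events, {"ha", "an", "at"}, {"hand", "hat"})
# the discriminating cue "an" comes to support "hand" over "hat"
```

The key property is that the uninformative cue "ha", which co-occurs with both words, is gradually outcompeted by the cues that discriminate between outcomes.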
Title: Linguistic politeness as strategic behavior: (some) costs and benefits of polite language use
Abstract: Behavioral ecology explains the behavior of animals by treating them as rational agents driven by a maximisation of their benefit-cost differentials. On the uncontroversial assumptions that humans are animals, and that speaking is a type of behavior, this overall approach should - at least in principle - generalise to language use: it should be possible to understand speakers as “rational decision-makers who make tradeoffs between costs and benefits”.
Despite the obvious difficulties involved in determining both the relevant types of behavior (language use) and the relevant costs and benefits, there has been some success in applying this reasoning to "pockets" of linguistic behavior such as indirect speech (Pinker, Nowak & Lee 2006) or politeness (Quinley 2012). Most recently, the Responsibility Exchange Theory (RET; Chaudhry and Loewenstein 2019) effectively provides a proof of concept of this general approach for a narrow class of dyadic interactions (assignment of responsibility for a positive/negative outcome): it establishes functional classes of linguistic behavior (apologising, thanking) and works out a compelling theory of the associated social costs/benefits. In my talk, I build on the conceptual foundation proposed by Chaudhry and Loewenstein (2019) and look into ways of extending their approach.
Title: Factive presuppositions? An empirical challenge
Abstract: A long-standing and widely-held assumption is that the content of the complement of factive predicates like "know" is presupposed whereas that of non-factive predicates like "think" is not. There is, however, disagreement in the literature about which properties define factive predicates and whether the contents of the complements of particular predicates exhibit the properties attributed to factive predicates. The resulting disagreement about which predicates are factive is troublesome because the distinction between factive and non-factive predicates has played a central role in the study of presuppositions. This talk, which is based on joint work with Judith Degen (Stanford University), investigates properties of the contents of the complements of clause-embedding predicates with the goal of understanding how such predicates can be classified. We argue that predicates presumed factive are more heterogeneous than previously assumed and that there is little empirical support for the assumed categorical distinction between factive and non-factive predicates. We conclude by discussing the implications for future research on and analyses of presuppositions and other projective content.
Title: Why do we have to say certain things? On the obligatorification of dependents
Abstract: A common feature of the grammaticalization of function words is that they develop the requirement for obligatory dependents. For instance, English "the" does not occur except when followed by a nominal construction. This talk offers an account of the historical development of obligatorification - how dependents develop from optional extras into required accompaniments. I will show how the process leading to obligatorification is driven by universal communicative requirements. In this sense, the development in question sets in before grammaticalization "proper", rather than being a result of grammaticalization, as has been widely (if only implicitly) assumed. In fact, specific semantic structures create the need for overt hosts at every synchronic stage of every language. Only rarely does this requirement for an overt dependent develop into a syntactic requirement as a result of grammaticalization. I illustrate this with both diachronic and synchronic examples from diverse parts of speech in several languages.
Title: Pronouns, Descriptions, Bridging
Abstract: Recent work on coreference and non-coreference anaphora makes no clear distinctions between different kinds of non-coreference anaphora and also seems to imply that coreference anaphora is a special kind of non-coreference anaphora. In this talk I take a closer look at the interpretation processes for anaphoric pronouns and definite descriptions from an interpretation-theoretic perspective. I identify three strategies for the interpretation of pronouns and descriptions. The first strategy always leads to coreference, the second can produce coreference effects and the third normally does not, although it may involve coreference in certain special cases. None of these three strategies can be reduced to either of the two others.
The formal framework in which the investigation is conducted is a version of DRT in which definite noun phrases (definite descriptions and pronouns among them) are treated as triggers of ‘identification presuppositions’ – presuppositions whose resolution identifies the referents of their triggers. The framework makes it possible to describe the strategies in precise and unambiguous terms and to ask precise questions about the ways they are related. A good part of the talk will be devoted to discussing this approach to the semantics and pragmatics of definite noun phrases and to showing how it works for anaphoric pronouns and descriptions.
Time permitting, we will have a look at a potential problem for the analysis: English descriptions with head nouns that denote 'inalienable relations', like the mouth, the father, the weight. Given the definition of bridging I will be using, these descriptions ought to be perfect bridging descriptions. But in fact they are not. For instance, in normal contexts the sentence 'No one mentioned the weight' can't be understood as meaning that no one mentioned his or her own weight. Nor, in most contexts, can 'Susan grew up like an orphan. She never even met the father' be used felicitously to say that Susan never met her own father. The solution of this puzzle has to do with the different processes that are available for reinterpreting relational nouns as non-relational and non-relational nouns as relational.
Title: Linguistic diversity within Chibchan
Abstract: Chibchan languages are spoken at the very heart of the Americas, on the isthmus connecting both continents and in adjacent regions (Panama, Costa Rica, Colombia). Among the language families of Central and South America, the Chibchan family is particularly diverse in typological terms (Adelaar 2007). For instance, the Rama language of Nicaragua has only three phonemic vowels, /a/, /i/, /u/ (Craig 1989: 37), whereas Bribri (Costa Rica) has fourteen (Chevrier 2017: 56). In the domain of verbal person marking, some Chibchan languages use unbound elements, whereas others use prefixes, suffixes, or both (Pache 2015). This talk aims to discuss the following questions: (1) which are the domains of particular variability/relative uniformity within Chibchan? (2) What could have been factors triggering family-internal variability?
Title: Presuppositions, scales, and adjective order
Abstract: Many accounts of language assume that communication is inherently Gricean, and thus that contextually enriched meanings depend in part on a sensitivity to speaker states. However, current models of how core semantic phenomena interact with context often ignore speaker-specific information. For example, in the case of gradable adjectives, research on this topic has mainly focused on the role of extra-linguistic context, such as the distribution of a feature across a domain, or informativeness.
Here, we ask whether, in addition to statistical distributions, listeners' standards of comparison for adjectives are also sensitive to thresholds communicated by (a) existential presuppositions, to investigate whether listeners accommodate individual differences, and (b) different adjective orderings, to test whether the compositional operations involved in understanding AAN-sequences can help comprehenders decide whether scalar thresholds are affected by the speaker's statements. The results are jointly informative for recent discussions of scalar vs. absolute adjectives, the question of how scalar thresholds are computed, and the compositional semantics of multi-adjective sequences.
Hizniye Isabella Boga
Title: The languages of Italy - Measuring the similarity between close and distant varieties
Abstract: One of the oldest questions in dialectology is how to define a "language" as opposed to a "dialect" (Gooskens 2018). For a very long time, the theoretical definition of a language as the standardised form, with dialects as subordinate varieties of "inferior" character, was taken for granted. Only with J. K. Chambers and Peter Trudgill's definition of a language as "a collection of mutually intelligible dialects" was the equality of varieties emphasised.
My thesis research revolves around measuring the distances and similarities of 58 Romance varieties, with a focus on the varieties of Italy. The goal is to determine which varieties are close enough to each other to be seen as dialects of the same language, and which are distant enough to be considered independent languages.
The methods at hand are the Levenshtein Distance Normalized Divided (LDND) and the Needleman-Wunsch algorithm Normalized Divided (NWND) with a built-in scorer system of PMI distances. With the resulting distances and similarities determined by the LDND and NWND methods, I used a model-based clustering method to allocate similar varieties to one cluster, dissimilar varieties to another, and varieties of mixed or unclear affiliation to a further one. Within those clusters, it becomes apparent which varieties are close enough to count as varieties of the same language and which are distant enough to be independent languages.
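At the core of the LDND measure is a length-normalized Levenshtein distance between wordlist entries. Below is a minimal sketch using invented orthographic strings rather than actual transcriptions; it implements only the first normalization (by string length) and omits the second division by average cross-variety chance similarity, the second "D" in LDND.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def ldn(a, b):
    """Levenshtein Distance Normalized: edit distance divided by the
    length of the longer string, so values fall between 0 and 1."""
    return levenshtein(a, b) / max(len(a), len(b))

# toy comparison of two hypothetical cognate word forms
print(ldn("kane", "cane"))   # one substitution over four positions -> 0.25
```

Averaging such per-word distances over a whole wordlist yields the pairwise distance matrix that the clustering step operates on.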
Jakub Szymanik (University of Amsterdam)
Title: Ease of learning explains semantic universals
Abstract: Despite extraordinary differences between natural languages, linguists have identified many semantic universals – shared properties of meaning – that are yet to receive a unified explanation. We analyze universals in a domain of content words (color terms) and a domain of function words (quantifiers). Using tools from machine learning, we show that meanings satisfying attested universals are easier to learn than those that do not. Thus, ease of learning can explain the presence of semantic universals in many different linguistic domains.
Ramon Ferrer i Cancho (Universitat Politècnica de Catalunya)
Title: An emerging theory of word order
Abstract: Word order is a fascinating phenomenon. For decades, researchers have been collecting word order regularities that have fed theory. Some of these regularities are the Greenbergian word order universals, consistent branching, and the low number of dependency crossings in the syntactic dependency structures of sentences. Here we will argue that these regularities can be regarded as adaptations to the limited resources of the human brain, with the help of an emerging theory of word order that provides a unified explanation of word order variation and word order change. We will discuss the negative consequences of denying or neglecting the role of functional pressures in the construction of a parsimonious theory of language.
An appetizer: here
Torgrim Solstad (ZAS Berlin)
Title: Predictive Language Processing: The View from Implicit Causality
Abstract: Prediction in language (Kamide 2008; DeLong et al. 2014), understood as the incorporation of possible (and likely) future information states into processing, still hasn't attracted much attention in theoretical linguistics, despite the central status of prediction in human cognition (Bubic et al. 2010; Clark 2013). Bringing together insights from experimental and theoretical research on one particular phenomenon, Implicit Causality, I want to argue that much could be gained by attempting to bridge this gap.
Implicit Causality verbs (e.g. Garvey/Caramazza 1974; Brown/Fish 1983) have been at the core of psycholinguistic research on predictive processing. Selecting for two animate arguments, such verbs display a strong preference for an explanation focusing on one argument, as shown in numerous sentence continuation experiments:
(1) JOHN annoyed Mary because... HE was rude.
(2) John admired MARY because... SHE was clever.
Although there is good evidence as to the processing profile of Implicit Causality, its predictive nature is still insufficiently understood. Some important questions include:
- What is predicted: Is it a particular word (e.g. HE/SHE in (1)/(2)), a referent, or a type of explanation (e.g. a property of John's in (1), and one of Mary's in (2))?
- What triggers the prediction: Is it encoded in lexical semantics (annoy/admire in (1)/(2)) or part of world knowledge?
Based on a formal-semantic theory of Implicit Causality (Bott/Solstad 2014), results from previous experimental research (e.g. Koornneef/van Berkum 2006; Pykkönen/Järvikivi 2010; Featherstone/Sturt 2010) and recent insights into the nature of predictive processing in general (e.g. Kuperberg/Jaeger 2016; Yan et al. 2017), I will propose a framework for predictive processing of Implicit Causality. By bringing together insights from theoretical and experimental research, we can delineate more precisely the top-down and bottom-up processes generating and validating predictions: Which linguistic levels are involved and how do they interact?
Although limited to one particular phenomenon, I contend that approaching predictions in this manner has the potential to mutually benefit both psycholinguistics and theoretical linguistics. On the one hand, a number of aspects concerning prediction processes may be better understood if they are based on more elaborate theoretical linguistic analysis, if only for constraining the possible hypothesis space, thus allowing for more precise experimental predictions and better control of experimental design and materials. On the other hand, research on prediction extends an invitation to reconsider or expand theoretical linguistic assumptions to accommodate the results obtained in experimental research, potentially offering a broader empirical base for linguistic studies, connecting phenomena previously assumed to be unrelated.
Susanne Dietrich (Tübingen)
Title: Processing of presuppositions during speech perception: a functional magnetic resonance imaging (fMRI) study
Abstract: Discourse structure enables us to generate expectations based upon linguistic material that has already been introduced. The present functional magnetic resonance imaging (fMRI) study addresses auditory perception of test sentences in which discourse coherence was manipulated by using presuppositions (PSP) that either correspond or fail to correspond to items in preceding context sentences. Indefinite and definite determiners referring to either the (non-)uniqueness or the (non-)existence of an item were used as PSP triggers. Discourse violations within the (non-)uniqueness subset yielded hemodynamic activation within the pre-supplementary motor area (pre-SMA) and bilateral inferior frontal gyrus (IFG). In the existence subset, these regions were activated only if subjects accommodated the discourse. These findings indicate the involvement of (i) working memory (IFG), relating the PSP to contextual information, and (ii) a regulator (pre-SMA) managing the process of comprehension by signaling detected errors to the system. This enables the system to continue the process of comprehension, for example by updating the context or tolerating slight errors.
Shirley-Ann Rueschemeyer (York)
Title: Perspective taking during language comprehension
Abstract: Humans are constantly engaged in social interactions, and many of these interactions are supported by language. In this talk I will present a series of studies investigating how language and social cognitive mechanisms interact in order to facilitate communication. I will start by showing that embodied lexical-semantic representations are activated by words in a flexible manner that reflects both linguistic and pragmatic constraints. Secondly, I will show the results of studies suggesting that when pragmatic constraints affect semantic processing, this is supported by interactions between neural language and mentalizing systems. Lastly, I will suggest that language comprehension is affected by assumptions we hold about other co-listeners as well as speakers. One key mechanism supporting perspective taking between co-listeners may be simulation. Together, the studies presented in this talk provide insight into how high-level language and social cognitive processes work in concert during successful communicative acts.