Charles Forceville (University of Amsterdam)
What steers the interpretation of a visual or multimodal message? A relevance theory perspective
Multimodality and semiotics scholarship are in need of an inclusive model of communication that takes into account the identities of the communicator and the audience, as well as their relation, and that does not privilege specific media and/or modes over others. The contours of such a model exist in Relevance Theory (RT; Sperber and Wilson 1995), whose central claim is that each act of communication comes with the presumption of optimal relevance to the envisaged audience. Hitherto, RT scholars (typically linguists) have almost exclusively analysed face-to-face exchanges. To fulfil RT’s potential to develop into an inclusive theory of communication, it is necessary to explore how it can be adapted and refined to account for (1) messages in modes other than (only) the verbal mode, and (2) mass communication. In Forceville (2020) I propose how RT works for mass-communicative messages that involve static visuals. In my presentation I will focus specifically on how RT approaches the key issue of which factors have an impact on the interpretation of a picture or a multimodal message, drawing on examples from different genres (logos & pictograms, advertisements, and cartoons).
Xavier Villalba (Universitat Autònoma de Barcelona)
Expressive adjectives and variation at the syntax/semantics interface
Expressive adjectives (EAs) like English fucking, damn, or shitty have raised much interest as contributors to not-at-issue meaning (Potts 2005; Potts 2007). Yet Gutzmann & Turgay (2014) and Gutzmann (2019) have shown that a finer-grained description of these elements is necessary in English and German, distinguishing pure EAs (fucking), which are modifiers of a predicate (⟨⟨e,t⟩,⟨e,t⟩⟩), from mixed ones (shitty), which are functions from degrees to properties (⟨d,⟨e,t⟩⟩), just like any other intersective adjective. This semantic distinction has consequences at the syntactic level; for example, Gutzmann highlights that pure EAs cannot be graded, nor appear in predicative position, unlike mixed EAs.
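To make the type-theoretic contrast concrete, the following minimal sketch gives one common way of spelling out the two kinds of lexical entries; the specific denotations, and the degree semantics assumed for shitty, are illustrative assumptions rather than the analysis defended in this talk.

    % Pure EA: type <<e,t>,<e,t>>; a descriptively vacuous predicate modifier
    % whose expressive content lives on a separate, use-conditional tier.
    \[ [\![\text{fucking}]\!] = \lambda P_{\langle e,t\rangle}.\ \lambda x.\ P(x) \]

    % Mixed EA: type <d,<e,t>>; a function from degrees to properties,
    % like any other gradable intersective adjective, plus expressive content.
    \[ [\![\text{shitty}]\!] = \lambda d.\ \lambda x.\ \mathit{shitty}(x) \geq d \]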
In this talk I will consider new data from Catalan and Spanish which raise concerns about extending this distinction to Romance EAs.
First, all EAs allow degree modification, regardless of their content: from purely expressive Cat. puto, refotut or Sp. pinche ‘fucking’ to more descriptive ones like Cat. merdós ‘shitty’ or Sp. jodido ‘screwed’. Second, most EAs may appear in predicative uses, again regardless of their content. For instance, even though Sp. puto ‘fucking’ is banned in predicative position, its Mexican equivalent pinche ‘fucking’ is not. Third, Romance EAs consistently coordinate with intersective adjectives, even when they lack any clear descriptive content.
On the basis of these data, I will argue that the distinction between pure and mixed EAs is not categorical but gradient, with pure EAs like English fucking at one extreme and evaluative intersective adjectives at the other. In between, there is a rich gamut of EAs: some, like Catalan puto or (re)fotut and Spanish puto, jodido or pinche, are always prenominal and closer to pure EAs, whereas others, like Catalan merdós or Spanish puñetero, may occupy both prenominal and postnominal positions and are closer to intersective adjectives. In Romance, these distinctions are typically encoded by position: the prenominal position is associated with higher expressive meaning.
Moreover, I will show that cognate items may behave differently from language to language, like Catalan punyeter ‘bloody/nitty’ and puto ‘fucking’ vs. Spanish puñetero ‘bloody/nitty’ and puto ‘fucking’, and even across dialects, as for example Sp. puto ‘fucking’ vs. its Mexican equivalent pinche ‘fucking’.
All in all, the resulting picture suggests a more complex interaction between the semantics of EAs and their syntactic behavior than the one suggested by Gutzmann’s groundbreaking work.
Beate Hampe (University of Erfurt)
From corpus to cognition? On multimodal corpus data in Cognitive Linguistics
Kristin Kersten (University of Mannheim)
Multilingualism – Individual, social, and instructional factors
The linguistic and the cognitive development of multilingual learners are dynamically intertwined and shaped by a multitude of social and contextual factors. While traditional Second Language Acquisition (SLA) research has predominantly focused on the interaction of individual variables, recent interdisciplinary approaches emphasize the need for a broader perspective – one that considers SLA as an interdependent system influenced by diverse factors.
This talk explores the significance of learners’ diverse linguistic backgrounds and their interplay with cognitive, social, and educational variables. In particular, it highlights the importance of 'proximal' external factors – those with a direct impact on the child, such as input and interactions within family and school settings. These factors offer a stronger explanatory potential than 'distal' factors like family socioeconomic status or overarching educational policies, whose influence on the child is only indirect.
We operationalize input quality as modified language use, promotion of cognitively stimulating authentic interactions and output, as well as comprehension-enhancing strategies. These aspects have been found to facilitate developmental processes through neural activation, deeper processing, and the stimulation of associative networks of prior knowledge and active knowledge construction, all of which are closely linked to the learners’ linguistic representations. The effects of input quality on multilingual and cognitive development are discussed in terms of its potential to level the playing field for multilingual learner groups.
The lecture will be given online via Zoom.
Susan Arndt (University of Bayreuth)
The Longevity of Racism. A (Linguistic) History
Petra Wagner (Bielefeld University)
Understanding “understanding”: On the multimodal expression and perception of signals of (non-)understanding in dyadic explanations
During an explanation between an explainer (a person who explains) and an explainee (a person to whom something is explained), explainers crucially rely on the explainee’s feedback about their current level of understanding as well as their level of cognitive load or attention. Based on the monitoring of a wide range of verbal and non-verbal feedback cues, an explainer can then dynamically adjust the explanation strategy, e.g., by changing the tempo of the ongoing explanation, repeating or skipping parts of it, or even shifting its focus.
In my talk, I will report on insights from the TRR318 “Constructing Explainability” subproject A02 on “Monitoring the understanding of explanations”, in which we gather and investigate multimodal signals of (non-)understanding in explanations, examine how they evolve in the course of ongoing explanations, and study how they are interpreted and reacted to. In particular, I will describe the recording and rich multimodal annotation of a corpus of 87 dyadic board game explanations, provide information about our annotation of different levels of (non-)understanding using a recall task, address the floor management dynamics across different phases of the explanations, present some insights into how explainers adapt their multimodal behavior to different explainees, and show how verbal and non-verbal information combine in a model for classifying (non-)understanding.
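Purely by way of illustration, the sketch below shows one simple way in which verbal and non-verbal cue features could be fused in a binary (non-)understanding classifier; the feature set, the simulated data, and the logistic-regression model are expository assumptions and do not describe the project’s actual pipeline.

    # Illustrative sketch only: early fusion of hypothetical verbal and
    # non-verbal cue features for binary (non-)understanding classification.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 200  # hypothetical number of annotated explainee feedback events

    # Hypothetical verbal cues (e.g. backchannel rate, lexical trouble markers)
    # and non-verbal cues (e.g. head-nod frequency, gaze aversion, brow raises),
    # simulated here as random placeholders.
    verbal = rng.normal(size=(n, 2))
    nonverbal = rng.normal(size=(n, 3))
    X = np.hstack([verbal, nonverbal])   # early fusion: concatenate cue vectors
    y = rng.integers(0, 2, size=n)       # 1 = understanding, 0 = non-understanding

    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"mean CV accuracy: {scores.mean():.2f}")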
Anastasia Bauer, Sonja Gipper, Jana Hosemann, Tobias-Alexander Herrmann (University of Cologne)
Feedback in Language: A modality-agnostic and holistic approach
In this talk, we present our research on multimodal recipient feedback in casual dyadic conversation in four languages: German Sign Language, Russian Sign Language, spoken German, and spoken Russian. Taking a modality-agnostic and holistic approach, we investigate the composition of conversational feedback from different multimodal cues, comparing signed and spoken languages without prioritizing any of the articulators or modalities. We find that in signed and spoken languages alike, feedback events include non-manual cues such as head movements or facial expressions in 85% or more of the instances. Moreover, we find that multimodal feedback cues combine into different feedback styles, ranging from a style employing a rich array of non-manual signals, through one comprising mostly head movements, to a style relying somewhat less on the head and more on talk. Our data demonstrate that the basic infrastructure for feedback is shared among signers and speakers, while at the same time, signers and speakers show different probabilities of using one style or another. Our research emphasizes the importance of investigating interactional phenomena from a holistic, multi- and cross-modal perspective. As vocal and manual cues account for only a relatively small percentage of the feedback signals employed by the signers and speakers in our study, a linguistic theory that focuses solely on vocal and/or manual behavior remains incomplete and fails to account for the largest part of feedback in conversation. Our research exemplifies the need for a comprehensive theory of human language, underscoring the importance of embodiment in language and challenging speech-centered models of linguistic competence.
Nora von Dewitz (University of Cologne)
On the language use of adolescents – how multilingual are they?
In everyday life, adolescents constantly use or hear different languages and language varieties: in conversations within the family, in foreign-language classes, while surfing the internet, or while listening to music. At the same time, there are also monolingual spaces in which only a particular language choice counts as legitimate. Although we know roughly which languages are used, and although individual linguistic practices and the language use of specific target groups have been the subject of various studies, a reliable overview of language use across domains is still lacking.
The project Flexen – Sprachliches Repertoire und Sprachverwendung Jugendlicher (linguistic repertoire and language use of adolescents) aims to capture the language choices of adolescents in Germany across a wide range of communicative contexts, using a smartphone app for the Mobile Experience Sampling Method (M-ESM). In this talk, I will present the current state of the Flexen project, focusing in particular on its basic idea and design. To this end, I will explain the M-ESM approach for applied linguistics and discuss theoretical as well as methodological and practical challenges.
Patrick Sturt (University of Edinburgh)
Syntactic and Semantic Agreement in British English
In English, it is possible for a morphologically singular collective noun like "government" to control both singular (syntactic) agreement and plural (semantic) agreement in the same sentence (e.g. "The government has praised themselves"). It has been claimed that sentences with the opposite pattern of agreeing elements are ungrammatical (e.g. *"The government have praised itself"), and there is a corresponding asymmetry in the corpus frequencies of these two configurations. In this talk, I'll describe two acceptability judgement experiments showing that the acceptability contrast is affected by the relative order of the two agreeing elements, with degraded acceptability when the first agreeing element shows plural agreement and the second shows singular agreement, relative to the opposite configuration. This pattern is found both when the agreeing verb precedes the reflexive and when the reflexive precedes the verb. Overall, the results suggest that the initial formation of a semantic agreement dependency between an agreement target and a collective controller makes subsequent morpho-syntactic agreement with the same controller less accessible. I will argue that any theoretical account of these results would require an important role for incremental processing, and I will sketch some ideas about how the contrast might be explained in an incremental model.