Catalogue Search | MBRL
Explore the vast range of titles available.
1,599 result(s) for "Multimodal Communication"
Social signal processing
\"Social Signal Processing is the first book to cover all aspects of the modeling, automated detection, analysis, and synthesis of nonverbal behavior in human-human and human-machine interactions. Authoritative surveys address conceptual foundations, machine analysis and synthesis of social signal processing, and applications. Foundational topics include affect perception and interpersonal coordination in communication; later chapters cover technologies for automatic detection and understanding such as computational paralinguistics and facial expression analysis and for the generation of artificial social signals such as social robots and artificial agents. The final section covers a broad spectrum of applications based on social signal processing in healthcare, deception detection, and digital cities, including detection of developmental diseases and analysis of small groups. Each chapter offers a basic introduction to its topic, accessible to students and other newcomers, and then outlines challenges and future perspectives for the benefit of experienced researchers and practitioners in the field\"-- Provided by publisher.
Multilevel rhythms in multimodal communication
by Proksch, Shannon; Drijvers, Linda; Schaefer, Rebecca S.
in Animal Communication; Animals; Communication
2021
It is now widely accepted that the brunt of animal communication is conducted via several modalities, e.g. acoustic and visual, either simultaneously or sequentially. This is a laudable multimodal turn relative to traditional accounts of temporal aspects of animal communication which have focused on a single modality at a time. However, the fields that are currently contributing to the study of multimodal communication are highly varied, and still largely disconnected given their sole focus on a particular level of description or their particular concern with human or non-human animals. Here, we provide an integrative overview of converging findings that show how multimodal processes occurring at neural, bodily, as well as social interactional levels each contribute uniquely to the complex rhythms that characterize communication in human and non-human animals. Though we address findings for each of these levels independently, we conclude that the most important challenge in this field is to identify how processes at these different levels connect.
This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
Journal Article
A conceptual framework for the study of demonstrative reference
by Peeters, David; Maes, Alfons; Krahmer, Emiel
in Behavioral Science and Psychology; Cognitive Psychology; Collaboration
2021
Language allows us to efficiently communicate about the things in the world around us. Seemingly simple words like 'this' and 'that' are a cornerstone of our capability to refer, as they contribute to guiding the attention of our addressee to the specific entity we are talking about. Such demonstratives are acquired early in life, ubiquitous in everyday talk, often closely tied to our gestural communicative abilities, and present in all spoken languages of the world. Based on a review of recent experimental work, here we introduce a new conceptual framework of demonstrative reference. In the context of this framework, we argue that several physical, psychological, and referent-intrinsic factors dynamically interact to influence whether a speaker will use one demonstrative form (e.g., 'this') or another (e.g., 'that') in a given setting. However, the relative influence of these factors themselves is argued to be a function of the cultural language setting at hand, the theory-of-mind capacities of the speaker, and the affordances of the specific context in which the speech event takes place. It is demonstrated that the framework has the potential to reconcile findings in the literature that previously seemed irreconcilable. We show that the framework may to a large extent generalize to instances of endophoric reference (e.g., anaphora) and speculate that it may also describe the specific form and kinematics a speaker's pointing gesture takes. Testable predictions and novel research questions derived from the framework are presented and discussed.
Journal Article
Virtual reality: A game-changing method for the language sciences
This paper introduces virtual reality as an experimental method for the language sciences and provides a review of recent studies using the method to answer fundamental, psycholinguistic research questions. It is argued that virtual reality demonstrates that ecological validity and experimental control should not be conceived of as two extremes on a continuum, but rather as two orthogonal factors. Benefits of using virtual reality as an experimental method include that in a virtual environment, as in the real world, there is no artificial spatial divide between participant and stimulus. Moreover, virtual reality experiments do not necessarily have to include a repetitive trial structure or an unnatural experimental task. Virtual agents outperform experimental confederates in terms of the consistency and replicability of their behavior, allowing for reproducible science across participants and research labs. The main promise of virtual reality as a tool for the experimental language sciences, however, is that it shifts theoretical focus towards the interplay between different modalities (e.g., speech, gesture, eye gaze, facial expressions) in dynamic and communicative real-world environments, complementing studies that focus on one modality (e.g., speech) in isolation.
Journal Article
Beat gestures influence which speech sounds you hear
2021
Beat gestures—spontaneously produced biphasic movements of the hand—are among the most frequently encountered co-speech gestures in human communication. They are closely temporally aligned to the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world's languages, how beat gestures impact spoken word recognition is unclear. Can these simple 'flicks of the hand' influence speech perception? Across a range of experiments, we demonstrate that beat gestures influence the explicit and implicit perception of lexical stress (e.g. distinguishing OBject from obJECT), and in turn can influence what vowels listeners hear. Thus, we provide converging evidence for a manual McGurk effect: relatively simple and widely occurring hand movements influence which speech sounds we hear.
Journal Article
The multimodal facilitation effect in human communication
by Drijvers, Linda; Holler, Judith
in Behavioral Science and Psychology; Brief Report; Cognitive Psychology
2023
During face-to-face communication, recipients need to rapidly integrate a plethora of auditory and visual signals. This integration of signals from many different bodily articulators, all offset in time, with the information in the speech stream may either tax the cognitive system, thus slowing down language processing, or may result in multimodal facilitation. Using the classical shadowing paradigm, participants shadowed speech from face-to-face, naturalistic dyadic conversations in an audiovisual context, an audiovisual context without visual speech (e.g., lips), and an audio-only context. Our results provide evidence of a multimodal facilitation effect in human communication: participants were faster in shadowing words when seeing multimodal messages compared with when hearing only audio. Also, the more visual context was present, the fewer shadowing errors were made, and the earlier in time participants shadowed predicted lexical items. We propose that the multimodal facilitation effect may contribute to the ease of fast face-to-face conversational interaction.
Journal Article
Processing language in face-to-face conversation: Questions with gestures get faster responses
by Levinson, Stephen C.; Kendrick, Kobin H.; Holler, Judith
in Adult; Behavioral Science and Psychology; Brief Report
2018
The home of human language use is face-to-face interaction, a context in which communicative exchanges are characterised not only by bodily signals accompanying what is being said but also by a pattern of alternating turns at talk. This transition between turns is astonishingly fast—typically a mere 200 ms elapses between a current and a next speaker's contribution—meaning that comprehending, producing, and coordinating conversational contributions in time is a significant challenge. This raises the question of whether the additional information carried by bodily signals facilitates or hinders language processing in this time-pressured environment. We present analyses of multimodal conversations revealing that bodily signals appear to profoundly influence language processing in interaction: Questions accompanied by gestures lead to shorter turn transition times—that is, to faster responses—than questions without gestures, and responses come earlier when gestures end before compared to after the question turn has ended. These findings hold even after taking into account prosodic patterns and other visual signals, such as gaze. The empirical findings presented here provide a first glimpse of the role of the body in the psycholinguistic processes underpinning human communication.
Journal Article
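The study's key measure is the turn transition time: the gap between the offset of a question and the onset of its response, compared for questions produced with and without gestures. The following is a minimal sketch of that measure on invented annotation records; the record format, values, and function names are hypothetical, not the authors' materials:

```python
# Sketch of a turn-transition-time comparison on hypothetical annotations;
# not the authors' pipeline. Each record: (question_end_ms,
# response_start_ms, has_gesture). Negative gaps are overlaps, i.e. the
# response begins before the question has ended.
from statistics import mean

annotations = [
    (1000, 1180, True),
    (5000, 5320, False),
    (9000, 9150, True),
    (13000, 13410, False),
]

def transition_gaps(records, with_gesture):
    """Turn transition time = response onset minus question offset."""
    return [start - end for end, start, g in records if g == with_gesture]

print(f"with gestures:    mean gap {mean(transition_gaps(annotations, True)):.0f} ms")
print(f"without gestures: mean gap {mean(transition_gaps(annotations, False)):.0f} ms")
```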
An introduction to multimodal communication
2013
Though it has long been known that animal communication is complex, recent years have seen growing interest in understanding the extent to which animals give multicomponent signals in multiple modalities, and how the different types of information extracted by receivers are interpreted and integrated in animal decision-making. This interest has culminated in the production of the present special issue on multimodal communication, which features both theoretical and empirical studies from leading researchers in the field. Reviews, comparative analyses, and species-specific empirical studies include manuscripts on taxa as diverse as spiders, primates, birds, lizards, frogs, and humans. The present manuscript serves as both an introduction to this special issue, as well as an introduction to multimodal communication more generally. We discuss the history of the study of complexity in animal communication, issues relating to defining and classifying multimodal signals, and particular issues to consider with multimodal (as opposed to multicomponent unimodal) communication. We go on to discuss the current state of the field, and outline the contributions contained within the issue. We finish by discussing future avenues for research, in particular emphasizing that 'multimodal' is more than just 'bimodal', and that more integrative frameworks are needed that incorporate more elements of efficacy, such as receiver sensory ecology and the environment.
Journal Article
Vocal-visual combinations in wild chimpanzees
2024
Living organisms throughout the animal kingdom habitually communicate with multi-modal signals that use multiple sensory channels. Such composite signals vary in their communicative function, as well as the extent to which they are recombined freely. Humans typically display complex forms of multi-modal communication, yet the evolution of this capacity remains unknown. One of our two closest living relatives, chimpanzees, also produce multi-modal combinations and therefore may offer a valuable window into the evolutionary roots of human communication. However, a currently neglected step in describing multi-modal systems is to disentangle non-random combinations from those that occur simply by chance. Here we aimed to provide a systematic quantification of communicative behaviour in our closest living relatives, describing non-random combinations produced across auditory and visual modalities. Through recording the behaviour of wild chimpanzees from the Kibale forest, Uganda, we generated the first repertoire of non-random combined vocal and visual components. Using collocation analysis, we identified more than 100 vocal-visual combinations which occurred more frequently than expected by chance. We also probed how multi-modal production varied in the population, finding no differences in the number of visual components produced with vocalisations as a function of age, sex or rank. As expected, chimpanzees produced more visual components alongside vocalisations during longer vocalisation bouts; however, this was only the case for some vocalisation types, not others. We demonstrate that chimpanzees produce a vast array of combined vocal and visual components, exhibiting a hitherto underappreciated level of multi-modal complexity.

Significance: In humans and non-humans, acoustic communicative signals are typically accompanied by visual information. Such "multi-modal communication" has been argued to function for increasing redundancy as well as for creating new meaning. However, a currently neglected step when describing multi-modal systems and their functions is to disentangle non-random combinations from those that occur simply by chance. These data are essential to providing a faithful illustration of a species' multi-modal communicative behaviour. Through recording the behaviour of wild chimpanzees from the Kibale forest, Uganda, we aimed to bridge this gap in understanding and generated the first repertoire of non-random combined vocal and visual components in animals. Our data suggest chimpanzees combine many components flexibly, and these results have important implications for our understanding of the complexity of multi-modal communication already existing in the last common ancestor of humans and chimpanzees.
Journal Article
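The collocation analysis mentioned in this abstract reduces to a chance-baseline question: does a given vocal-visual pair occur together more often than it would if the two streams were paired at random? Below is a minimal permutation-style sketch of that logic, with invented event labels and counts rather than the authors' data or implementation:

```python
# Permutation-style chance baseline for one vocal-visual pair; the event
# labels and frequencies below are invented for illustration only.
import random

vocal = ["pant-hoot", "grunt", "pant-hoot", "scream", "grunt", "pant-hoot"] * 50
visual = ["arm-raise", "gaze", "arm-raise", "piloerection", "gaze", "arm-raise"] * 50

def pair_count(vocal_seq, visual_seq, pair):
    """Count events where the vocal and visual components form the given pair."""
    return sum(1 for v, s in zip(vocal_seq, visual_seq) if (v, s) == pair)

target = ("pant-hoot", "arm-raise")
observed = pair_count(vocal, visual, target)

# Null distribution: shuffling the visual stream preserves how often each
# component occurs overall while destroying any real vocal-visual pairing.
shuffled = visual[:]
null_counts = []
for _ in range(1000):
    random.shuffle(shuffled)
    null_counts.append(pair_count(vocal, shuffled, target))

p = sum(c >= observed for c in null_counts) / len(null_counts)
print(f"observed {observed} co-occurrences, one-sided permutation p = {p:.3f}")
```

Shuffling one stream is the simplest way to model combinations that occur by chance alone: each component keeps its marginal frequency, so any excess co-occurrence of the target pair must reflect genuine association.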