Catalogue Search | MBRL
Explore the vast range of titles available.
1,064 result(s) for "Multisensory perception"
Sustainable Urban Green Blue Space (UGBS) and Public Participation: Integrating Multisensory Landscape Perception from Online Reviews
2023
Integrating the public's multisensory subjective perception into planning, management, and policymaking is of great significance for the sustainable development and protection of UGBS. Online reviews are a suitable data source for this purpose, as they contain information about public sentiment, perception of the physical environment, and sensory descriptions. This study adopts deep learning methods to extract such information from online reviews and finds that, across 105 major sites in Tokyo (23 districts), overall public perception levels are unevenly distributed. Rich multisensory experience raises perception levels; hearing and the somatosensory senses in particular have a stronger positive predictive effect than vision, so overall perception can be improved by optimizing these two senses first. Even a single adverse sensory experience, such as a bad smell or noise, can seriously lower perception levels. Optimizing the physical environment by adding natural elements that address different senses benefits overall perception, and sensory maps help to quickly locate areas that require improvement. The study provides a new method for rapid multisensory analysis, complementing public participation in specific situations, and thereby helps to increase the well-being delivered by UGBS and realize its multifunctionality.
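The abstract does not detail the paper's pipeline; the following minimal sketch (in Python, with a made-up keyword list and toy sentiment lexicon standing in for the study's deep learning models) illustrates the basic idea of tagging review sentences by sense modality and crediting a sentence's tone to each sense it mentions:

# Toy sketch: per-sense sentiment scoring of park reviews.
# SENSE_KEYWORDS and SENTIMENT are illustrative stand-ins, not the
# deep learning components used in the study.
SENSE_KEYWORDS = {
    "vision": {"green", "view", "scenery", "colorful"},
    "hearing": {"birdsong", "quiet", "noisy"},
    "smell": {"fragrant", "stink"},
    "somatosensory": {"breeze", "cool", "humid", "crowded"},
}
SENTIMENT = {"lovely": 1, "pleasant": 1, "fragrant": 1, "quiet": 1,
             "noisy": -1, "stink": -1, "crowded": -1}

def score_review(text):
    """Return per-sense sentiment totals for one review."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    tone = sum(SENTIMENT.get(w, 0) for w in words)
    return {sense: (tone if any(w in kws for w in words) else 0)
            for sense, kws in SENSE_KEYWORDS.items()}

print(score_review("Quiet paths and fragrant flowers, but the lawn was crowded."))

Aggregating such per-sense scores over each site's reviews would yield the kind of sensory map the authors describe, highlighting sites where a single negative sense (e.g. noise) drags down overall perception.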
Journal Article
Shape detection beyond the visual field using a visual-to-auditory sensory augmentation device
by Poradosu, Keinan; Maimon, Amber; Yizhar, Or
in Algorithms, auditory spatial perception, Experiments
2023
Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. Here, we instead use it to cover the areas that lie outside sighted individuals' visual field. In this initial proof-of-concept study, we test the ability of sighted subjects to combine visual information with surrounding auditory sonification representing visual information. Participants were tasked with recognizing and correctly placing stimuli, using sound to represent the areas outside the standard human visual field. They were asked to report shapes' identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory input (the remaining 270°) for successful performance of the task; content in both vision and audition was presented in a sweeping clockwise motion around the participant. We found that, after a brief one-hour online training session and a single on-site training session averaging 20 minutes, participants performed well above chance. In some cases they could even draw a 2D representation of the image. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept indicating that sensory augmentation devices and techniques can be used in combination with natural sensory information to expand the natural fields of sensory perception.
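The EyeMusic's exact encoding is not described here; the sketch below (Python, all parameters illustrative) shows the generic sweep-style sonification principle such SSDs build on, scanning an image column by column and mapping pixel height to pitch:

import wave
import numpy as np

def sonify(image, sample_rate=44100, col_dur=0.1, path="sweep.wav"):
    """Sweep a binary image left to right; higher rows play as higher pitches.
    A generic sonification sketch, not the EyeMusic's actual mapping."""
    n_rows, n_cols = image.shape
    # Top row gets the highest frequency, spanning roughly two octaves from 220 Hz.
    freqs = 220.0 * 2 ** (np.arange(n_rows)[::-1] / n_rows * 2)
    t = np.arange(int(sample_rate * col_dur)) / sample_rate
    chunks = []
    for col in range(n_cols):
        rows = np.nonzero(image[:, col])[0]          # lit pixels in this column
        chunk = np.zeros_like(t)
        for r in rows:
            chunk += np.sin(2 * np.pi * freqs[r] * t)
        if rows.size:
            chunk /= rows.size                       # normalize to avoid clipping
        chunks.append(chunk)
    samples = np.concatenate(chunks)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(sample_rate)
        f.writeframes((samples * 32767).astype(np.int16).tobytes())

sonify(np.eye(8)[::-1])   # a diagonal line becomes a rising pitch sweep

Wrapping such a sweep around the listener (e.g. over 270° of azimuth via spatial audio) is what lets the auditory channel stand in for the missing parts of the visual field in the task described above.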
Journal Article
Over my fake body: body ownership illusions for studying the multisensory basis of own-body perception
by Kording, Konrad P.; Kilteni, Konstantina; Maselli, Antonella
in Bayesian analysis, body ownership, body semantics
2015
Which is my body and how do I distinguish it from the bodies of others, or from objects in the surrounding environment? The perception of our own body, and more particularly our sense of body ownership, is taken for granted. Nevertheless, experimental findings from body ownership illusions (BOIs) show that, under specific multisensory conditions, we can experience artificial body parts or fake bodies as our own body parts or body, respectively. The aim of the present paper is to discuss how and why BOIs are induced. We review several experimental findings concerning the spatial, temporal, and semantic principles of crossmodal stimuli that have been applied to induce BOIs. On the basis of these principles, we discuss theoretical approaches concerning the underlying mechanism of BOIs. We propose a conceptualization based on Bayesian causal inference for addressing how our nervous system could infer whether an object belongs to our own body, using multisensory, sensorimotor, and semantic information, and we discuss how this can account for several experimental findings. Finally, we point to neural network models as an implementational framework within which the computational problem behind BOIs could be addressed in the future.
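The Bayesian causal inference account sketched in this abstract follows the general form introduced for multisensory perception by Körding et al. (2007). The Python snippet below implements that standard closed-form posterior for a single visuo-tactile spatial cue; all numeric values are illustrative, not taken from the paper:

import numpy as np

def posterior_common_cause(x_v, x_t, sigma_v, sigma_t, sigma_p,
                           mu_p=0.0, p_common=0.5):
    """Posterior probability that visual (x_v) and tactile (x_t) signals
    share one cause, with Gaussian noise and a Gaussian prior over location."""
    # Likelihood under a common cause: the shared source s is integrated out.
    var = (sigma_v**2 * sigma_t**2 + sigma_v**2 * sigma_p**2
           + sigma_t**2 * sigma_p**2)
    like_c1 = np.exp(-0.5 * ((x_v - x_t)**2 * sigma_p**2
                             + (x_v - mu_p)**2 * sigma_t**2
                             + (x_t - mu_p)**2 * sigma_v**2) / var) \
              / (2 * np.pi * np.sqrt(var))
    # Likelihood under independent causes: each signal has its own source.
    def marginal(x, sigma):
        v = sigma**2 + sigma_p**2
        return np.exp(-0.5 * (x - mu_p)**2 / v) / np.sqrt(2 * np.pi * v)
    like_c2 = marginal(x_v, sigma_v) * marginal(x_t, sigma_t)
    num = like_c1 * p_common
    return num / (num + like_c2 * (1 - p_common))

for gap in (1.0, 10.0):   # visuo-tactile disparity (arbitrary units)
    print(gap, round(posterior_common_cause(0.0, gap, 1.0, 2.0, 10.0), 3))

Small disparities yield a high posterior for a common cause (ownership of the fake limb), while large ones favor segregation, which is the pattern captured by the spatial principles reviewed here.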
Journal Article
Senses of place: architectural design for the multisensory mind
Traditionally, architectural practice has been dominated by the eye/sight. In recent decades, though, architects and designers have increasingly started to consider the other senses, namely sound, touch (including proprioception, kinesthesis, and the vestibular sense), smell, and, on rare occasions, even taste in their work. As yet, there has been little recognition of the growing understanding of the multisensory nature of the human mind that has emerged from the field of cognitive neuroscience research. This review therefore provides a summary of the role of the human senses in architectural design practice, both when considered individually and, more importantly, when studied collectively. For it is only by recognizing the fundamentally multisensory nature of perception that one can really hope to explain a number of surprising crossmodal environmental or atmospheric interactions, such as between lighting colour and thermal comfort and between sound and the perceived safety of public space. At the same time, however, the contemporary focus on synaesthetic design needs to be reframed in terms of crossmodal correspondences and multisensory integration, at least if the most is to be made of the multisensory interactions and synergies that have been uncovered in recent years. Looking to the future, the hope is that architectural design practice will increasingly incorporate our growing understanding of the human senses, and how they influence one another. Such a multisensory approach will hopefully lead to the development of buildings and urban spaces that do a better job of promoting our social, cognitive, and emotional development, rather than hindering it, as has too often been the case previously.
Journal Article
Bilingualism Modulates Infants' Selective Attention to the Mouth of a Talking Face
2015
Infants growing up in bilingual environments succeed at learning two languages. What adaptive processes enable them to master the more complex nature of bilingual input? One possibility is that bilingual infants take greater advantage of the redundancy of the audiovisual speech that they usually experience during social interactions. Thus, we investigated whether bilingual infants' need to keep languages apart increases their attention to the mouth as a source of redundant and reliable speech cues. We measured selective attention to talking faces in 4-, 8-, and 12-month-old Catalan and Spanish monolingual and bilingual infants. Monolinguals looked more at the eyes than the mouth at 4 months and more at the mouth than the eyes at 8 months in response to both native and nonnative speech, but they looked more at the mouth than the eyes at 12 months only in response to nonnative speech. In contrast, bilinguals looked equally at the eyes and mouth at 4 months, more at the mouth than the eyes at 8 months, and more at the mouth than the eyes at 12 months, and these patterns of responses were found for both native and nonnative speech at all ages. Thus, to support their dual-language acquisition processes, bilingual infants exploit the greater perceptual salience of redundant audiovisual speech cues at an earlier age and for a longer time than monolingual infants.
Journal Article
Crossmodal interaction of flashes and beeps across time and number follows Bayesian causal inference
by Beierholm, Ulrik; Zhu, Haocheng; Zhang, Yiyang
in Adult, Auditory Perception - physiology, Bayes Theorem
2026
Multisensory perception requires the brain to dynamically infer causal relationships between sensory inputs across various dimensions, such as temporal and spatial attributes. Bayesian Causal Inference (BCI) models have traditionally provided a robust framework for understanding sensory processing in unidimensional settings, where stimuli across sensory modalities vary along one dimension such as spatial location or numerosity (Samad et al., PLoS ONE, 10(2), e0117178, 2015). However, real-world sensory processing involves multidimensional cues, where the alignment of information across multiple dimensions influences whether the brain perceives a unified or segregated source. In an effort to investigate sensory processing in more realistic conditions, this study introduces an expanded BCI model that incorporates multidimensional information, specifically numerosity and temporal discrepancies. Using a modified sound-induced flash illusion (SiFI) paradigm with manipulated audiovisual disparities, we tested the performance of the enhanced BCI model. Results showed that integration probability decreased with increasing temporal discrepancies, and our proposed multidimensional BCI model accurately predicted multisensory perception outcomes across the entire range of stimulus conditions. This multidimensional framework extends the BCI model's applicability, providing deeper insights into the computational mechanisms underlying multisensory processing and offering a foundation for future quantitative studies on naturalistic sensory processing.
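The model equations are not given in the abstract; one plausible formalization of the multidimensional extension, assuming the numerosity and timing cues are conditionally independent given the causal structure C, is

p(C=1 \mid \mathbf{x}) = \frac{p_c \, p(x_A^{\mathrm{num}}, x_V^{\mathrm{num}} \mid C=1)\, p(x_A^{\mathrm{time}}, x_V^{\mathrm{time}} \mid C=1)}{\sum_{C' \in \{1,2\}} p(C')\, p(x_A^{\mathrm{num}}, x_V^{\mathrm{num}} \mid C')\, p(x_A^{\mathrm{time}}, x_V^{\mathrm{time}} \mid C')}

where x_A and x_V are the auditory and visual signals on each dimension, p_c = p(C=1) is the prior probability of a common cause, and each per-dimension likelihood takes the usual Gaussian BCI form. On this reading, a growing temporal discrepancy shrinks the time-dimension likelihood ratio and therefore the integration probability, matching the reported result.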
Journal Article
Infants deploy selective attention to the mouth of a talking face when learning speech
2012
The mechanisms underlying the acquisition of speech-production ability in human infancy are not well understood. We tracked 4–12-mo-old English-learning infants’ and adults’ eye gaze while they watched and listened to a female reciting a monologue either in their native (English) or nonnative (Spanish) language. We found that infants shifted their attention from the eyes to the mouth between 4 and 8 mo of age regardless of language and then began a shift back to the eyes at 12 mo in response to native but not nonnative speech. We posit that the first shift enables infants to gain access to redundant audiovisual speech cues that enable them to learn their native speech forms and that the second shift reflects growing native-language expertise that frees them to shift attention to the eyes to gain access to social cues. On this account, 12-mo-old infants do not shift attention to the eyes when exposed to nonnative speech because increasing native-language expertise and perceptual narrowing make it more difficult to process nonnative speech and require them to continue to access redundant audiovisual cues. Overall, the current findings demonstrate that the development of speech production capacity relies on changes in selective audiovisual attention and that this depends critically on early experience.
Journal Article
Perception it is: Processing level in multisensory selection
by Spence, Charles; Jensen, Anne; Frings, Christian
in Associative Learning, Attention, Auditory Perception
2020
When repeatedly exposed to simultaneously presented stimuli, we nearly always form associations between those stimuli, both within and between sensory modalities. Such associations guide our subsequent actions and may also play a role in multisensory selection. Thus, crossmodal associations (i.e., associations between stimuli from different modalities) learned in a multisensory interference task might affect subsequent information processing. The aim of this study was to investigate the processing level of multisensory stimuli in multisensory selection by means of crossmodal aftereffects. Either feature or response associations were induced in a multisensory flanker task, while the amount of interference in a subsequent crossmodal flanker task was measured. The results of Experiment 1 revealed the existence of crossmodal interference after multisensory selection. Experiments 2 and 3 then went on to demonstrate that this effect depends on the perceptual associations between the features themselves, rather than on associations between feature and response. Establishing response associations did not lead to a subsequent crossmodal interference effect (Experiment 2), while stimulus feature associations without response associations (obtained by changing the response effectors) did (Experiment 3). Taken together, this pattern of results suggests that associations in multisensory selection, and the interference of (crossmodal) distractors, operate predominantly at the perceptual rather than the response level.
Journal Article
Crossmodal correspondences and interactions between texture and taste perception
2023
In recent years, awareness of the influence of different modalities on taste perception has grown. Although previous research in crossmodal taste perception has touched upon the bipolar distinction between softness/smoothness and roughness/angularity, considerable ambiguity remains surrounding other crossmodal correspondences between taste and the specific textures we regularly use to describe our food, such as crispy or crunchy. Sweetness has previously been found to be associated with soft textures, but our current understanding does not extend beyond the basic distinction between roughness and smoothness. In particular, the role of texture in taste perception remains relatively understudied. The current study consisted of two parts. First, because of the lack of clarity concerning specific associations between basic tastes and textures, an online questionnaire served to assess whether consistent associations between texture words and taste words exist and how these arise intuitively. The second part consisted of a taste experiment with factorial combinations of four tastes and four textures. The results of the questionnaire study showed that consistent associations are made between soft and sweet, and between crispy and salty, at the conceptual level. The results of the taste experiment largely supported these findings at the perceptual level. In addition, the experiment allowed a closer look at the complexity of the associations between sour and crunchy, and between bitter and sandy.
Journal Article
How prior expectations shape multisensory perception
2016
The brain generates a representation of our environment by integrating signals from a common source, but segregating signals from different sources. This fMRI study investigated how the brain arbitrates between perceptual integration and segregation based on top-down congruency expectations and bottom-up stimulus-bound congruency cues.
Participants were presented with audiovisual movies of phonologically congruent, incongruent, or McGurk syllables that can be integrated into an illusory percept (e.g. a "ti" percept for visual "ki" with auditory /pi/). They reported the syllable they perceived. Critically, we manipulated participants' top-down congruency expectations by presenting McGurk stimuli embedded in blocks of congruent or incongruent syllables.
Behaviorally, participants were more likely to fuse audiovisual signals into an illusory McGurk percept in congruent than incongruent contexts. At the neural level, the left inferior frontal sulcus (lIFS) showed increased activations for bottom-up incongruent relative to congruent inputs. Moreover, lIFS activations were increased for physically identical McGurk stimuli, when participants segregated the audiovisual signals and reported their auditory percept. Critically, this activation increase for perceptual segregation was amplified when participants expected audiovisually incongruent signals based on prior sensory experience.
Collectively, our results demonstrate that the lIFS combines top-down prior (in)congruency expectations with bottom-up (in)congruency cues to arbitrate between multisensory integration and segregation.
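As a toy illustration of that arbitration (not the paper's analysis; the prior values and evidence term below are invented), Bayes' rule says the block context simply shifts the prior odds that the auditory and visual streams share a cause:

import math

def fusion_probability(log_likelihood_ratio, p_common_prior):
    # Posterior odds of "common cause" = prior odds x likelihood ratio.
    log_odds = (math.log(p_common_prior / (1 - p_common_prior))
                + log_likelihood_ratio)
    return 1 / (1 + math.exp(-log_odds))

# Identical bottom-up (in)congruency evidence, different block context:
for prior in (0.7, 0.3):   # congruent vs. incongruent block (hypothetical)
    print(prior, round(fusion_probability(-0.5, prior), 2))

With the same McGurk stimulus (a fixed evidence term), the congruent-block prior yields a higher fusion probability than the incongruent-block prior, which is the behavioral asymmetry reported above.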
Journal Article