Catalogue Search | MBRL
Explore the vast range of titles available.
4,219 result(s) for "Face - physiology"
Individual differences in visual salience vary along semantic dimensions
by Schwarzkopf, D. Samuel; de Haas, Benjamin; Iakovidis, Alexios L.
in Adult; Attention - physiology; Biological Sciences
2019
What determines where we look? Theories of attentional guidance hold that image features and task demands govern fixation behavior, while differences between observers are interpreted as a “noise-ceiling” that strictly limits predictability of fixations. However, recent twin studies suggest a genetic basis of gaze-trace similarity for a given stimulus. This leads to the question of how individuals differ in their gaze behavior and what may explain these differences. Here, we investigated the fixations of >100 human adults freely viewing a large set of complex scenes containing thousands of semantically annotated objects. We found systematic individual differences in fixation frequencies along six semantic stimulus dimensions. These differences were large (>twofold) and highly stable across images and time. Surprisingly, they also held for first fixations directed toward each image, commonly interpreted as “bottom-up” visual salience. Their perceptual relevance was documented by a correlation between individual face salience and face recognition skills. The set of reliable individual salience dimensions and their covariance pattern replicated across samples from three different countries, suggesting they reflect fundamental biological mechanisms of attention. Our findings show stable individual differences in salience along a set of fundamental semantic dimensions and that these differences have meaningful perceptual implications. Visual salience reflects features of the observer as well as the image.
Journal Article
Visual experience is not necessary for the development of face-selectivity in the lateral fusiform gyrus
by Beeler, David; Murty, N. Apurva Ratan; Mynick, Anna
in Adult; Biological Sciences; Brain mapping
2020
The fusiform face area responds selectively to faces and is causally involved in face perception. How does face-selectivity in the fusiform arise in development, and why does it develop so systematically in the same location across individuals? Preferential cortical responses to faces develop early in infancy, yet evidence is conflicting on the central question of whether visual experience with faces is necessary. Here, we revisit this question by scanning congenitally blind individuals with fMRI while they haptically explored 3D-printed faces and other stimuli. We found robust face-selective responses in the lateral fusiform gyrus of individual blind participants during haptic exploration of stimuli, indicating that neither visual experience with faces nor fovea-biased inputs is necessary for face-selectivity to arise in the lateral fusiform gyrus. Our results instead suggest a role for long-range connectivity in specifying the location of face-selectivity in the human brain.
Journal Article
A fast pathway for fear in human amygdala
by Strange, Bryan A; Moratti, Stephan; Lopez-Sosa, Fernando
in 631/378/1457/1284; 631/477; 692/699/476/1300
2016
Human intracranial amygdala recordings reveal fast-latency responses to broad and low, but not high, spatial frequency components of fearful, but not happy or neutral, faces, which are not observed with unpleasant scenes. Amygdala fearful face responses are faster than in fusiform cortex, supporting a phylogenetically old, subcortical pathway to human amygdala.
A fast, subcortical pathway to the amygdala is thought to have evolved to enable rapid detection of threat. This pathway's existence is fundamental for understanding nonconscious emotional responses, but has been challenged as a result of a lack of evidence for short-latency fear-related responses in primate amygdala, including humans. We recorded human intracranial electrophysiological data and found fast amygdala responses, beginning 74 ms post-stimulus onset, to fearful, but not neutral or happy, facial expressions. These responses had considerably shorter latency than fear responses that we observed in visual cortex. Notably, fast amygdala responses were limited to low spatial frequency components of fearful faces, as predicted by magnocellular inputs to amygdala. Furthermore, fast amygdala responses were not evoked by photographs of arousing scenes, which is indicative of selective early reactivity to socially relevant visual information conveyed by fearful faces. These data therefore support the existence of a phylogenetically old subcortical pathway providing fast, but coarse, threat-related signals to human amygdala.
Journal Article
Hierarchy of orofacial rhythms revealed through whisking and breathing
2013
Whisking and sniffing are predominant aspects of exploratory behaviour in rodents. Yet the neural mechanisms that generate and coordinate these and other orofacial motor patterns remain largely uncharacterized. Here we use anatomical, behavioural, electrophysiological and pharmacological tools to show that whisking and sniffing are coordinated by respiratory centres in the ventral medulla. We delineate a distinct region in the ventral medulla that provides rhythmic input to the facial motor neurons that drive protraction of the vibrissae. Neuronal output from this region is reset at each inspiration by direct input from the pre-Bötzinger complex, such that high-frequency sniffing has a one-to-one relationship with whisking, whereas basal respiration is accompanied by intervening whisks that occur between breaths. We conjecture that the respiratory nuclei, which project to other premotor regions for oral and facial control, function as a master clock for behaviours that coordinate with breathing.
Motor patterns underlying the rodent exploratory behaviours whisking and sniffing are coordinated by respiratory centres in the ventral medulla; a distinct region in the ventral medulla provides rhythmic input to the facial motor neurons that drive scanning by the vibrissae, and input from the pre-Bötzinger complex coordinates whisking with sniffing and basal breathing.
A master clock for oral and facial control
Rodents explore their environment by rhythmically sniffing and sweeping their whiskers. The coordination of these behaviours is central to their effectiveness, and this study highlights the neural systems involved. David Kleinfeld and colleagues identify a region in the ventral medulla that drives rhythmic whisking, and find that neurons in this area are controlled by input from nuclei that mediate breathing patterns. Breathing pattern generators may serve as a master clock not just for whisking, but for other breath-coordinated behaviours as well.
Journal Article
Holistic face recognition is an emergent phenomenon of spatial processing in face-selective regions
by Finzi, Dawn; Poltoratski, Sonia; Grill-Spector, Kalanit
in 59/36; 631/378/2613/2616; 631/477/2811
2021
Spatial processing by receptive fields is a core property of the visual system. However, it is unknown how spatial processing in high-level regions contributes to recognition behavior. As face inversion is thought to disrupt typical holistic processing of information in faces, we mapped population receptive fields (pRFs) with upright and inverted faces in the human visual system. Here we show that in face-selective regions, but not primary visual cortex, pRFs and overall visual field coverage are smaller and shifted downward in response to face inversion. From these measurements, we successfully predict the relative behavioral detriment of face inversion at different positions in the visual field. This correspondence between neural measurements and behavior demonstrates how spatial processing in face-selective regions may enable holistic perception. These results not only show that spatial processing in high-level visual regions is dynamically used towards recognition, but also suggest a powerful approach for bridging neural computations by receptive fields to behavior.
It is unknown whether spatial processing in the ventral (‘what’) stream contributes to high-level visual recognition. Here the authors show that spatial processing in face-selective regions directly contributes to whole face recognition behavior.
Journal Article
Illusory faces are more likely to be perceived as male than female
2022
Despite our fluency in reading human faces, sometimes we mistakenly perceive illusory faces in objects, a phenomenon known as face pareidolia. Although illusory faces share some neural mechanisms with real faces, it is unknown to what degree pareidolia engages higher-level social perception beyond the detection of a face. In a series of large-scale behavioral experiments (n_total = 3,815 adults), we found that illusory faces in inanimate objects are readily perceived to have a specific emotional expression, age, and gender. Most strikingly, we observed a strong bias to perceive illusory faces as male rather than female. This male bias could not be explained by preexisting semantic or visual gender associations with the objects, or by visual features in the images. Rather, this robust bias in the perception of gender for illusory faces reveals a cognitive bias arising from a broadly tuned face evaluation system in which minimally viable face percepts are more likely to be perceived as male.
Journal Article
Professional actors demonstrate variability, not stereotypical expressions, when portraying emotional states in photographs
by Barrett, Lisa Feldman; Fugate, Jennifer M. B.; Le Mau, Tuan
in 706/689/112; 706/689/477/2811; Adult
2021
It has long been hypothesized that there is a reliable, specific mapping between certain emotional states and the facial movements that express those states. This hypothesis is often tested by asking untrained participants to pose the facial movements they believe they use to express emotions during generic scenarios. Here, we test this hypothesis using, as stimuli, photographs of facial configurations posed by professional actors in response to contextually rich scenarios. The scenarios portrayed in the photographs were rated by a convenience sample of participants for the extent to which they evoked an instance of 13 emotion categories, and actors’ facial poses were coded for their specific movements. Both unsupervised and supervised machine learning find that in these photographs, the actors portrayed emotional states with variable facial configurations; instances of only three emotion categories (fear, happiness, and surprise) were portrayed with moderate reliability and specificity. The photographs were separately rated by another sample of participants for the extent to which they portrayed an instance of the 13 emotion categories; they were rated when presented alone and when presented with their associated scenarios, revealing that emotion inferences by participants also vary in a context-sensitive manner. Together, these findings suggest that facial movements and perceptions of emotion vary by situation and transcend stereotypes of emotional expressions. Future research may build on these findings by incorporating dynamic stimuli rather than photographs and studying a broader range of cultural contexts.
It has long been hypothesized that certain emotional states are universally expressed with specific facial movements. Here the authors provide evidence that facial expressions of those emotional states are, in fact, varied among individuals rather than stereotyped.
Journal Article
A behavioral advantage for the face pareidolia illusion in peripheral vision
by Peluso, Natalie; Saurels, Blake W.; Taubert, Jessica
in 631/378/2613/2616; 631/477/2811; Adult
2024
Investigation of visual illusions helps us understand how we process visual information. For example, face pareidolia, the misperception of illusory faces in objects, could be used to understand how we process real faces. However, it remains unclear whether this illusion emerges from errors in face detection or from slower, cognitive processes. Here, our logic is straightforward: if examples of face pareidolia activate the mechanisms that rapidly detect faces in visual environments, then participants will look at objects more quickly when the objects also contain illusory faces. To test this hypothesis, we sampled continuous eye movements during a fast saccadic choice task—participants were required to select either faces or food items. During this task, pairs of stimuli were positioned close to the initial fixation point or further away, in the periphery. As expected, the participants were faster to look at face targets than food targets. Importantly, we also discovered an advantage for food items with illusory faces, but this advantage was limited to the peripheral condition. These findings are among the first to demonstrate that the face pareidolia illusion persists in the periphery and, thus, that it is likely to be a consequence of erroneous face detection.
Journal Article
Exploring the dog–human relationship by combining fMRI, eye-tracking and behavioural measures
2020
Behavioural studies revealed that the dog–human relationship resembles the human mother–child bond, but the underlying mechanisms remain unclear. Here, we report the results of a multi-method approach combining fMRI (N = 17), eye-tracking (N = 15), and behavioural preference tests (N = 24) to explore the engagement of an attachment-like system in dogs seeing human faces. We presented morph videos of the caregiver, a familiar person, and a stranger showing either happy or angry facial expressions. Regardless of emotion, viewing the caregiver activated brain regions associated with emotion and attachment processing in humans. In contrast, the stranger elicited activation mainly in brain regions related to visual and motor processing, and the familiar person relatively weak activations overall. While the majority of happy stimuli led to increased activation of the caudate nucleus associated with reward processing, angry stimuli led to activations in limbic regions. Both the eye-tracking and preference test data supported the superior role of the caregiver’s face and were in line with the findings from the fMRI experiment. While preliminary, these findings indicate that cutting across different levels, from brain to behaviour, can provide novel and converging insights into the engagement of the putative attachment system when dogs interact with humans.
Journal Article