Catalogue Search | MBRL
43 result(s) for "Adams Jr, Reginald B"
Effects of Gaze on Amygdala Sensitivity to Anger and Fear Faces
by Adams, Reginald B.; Baird, Abigail A.; Kleck, Robert E.
in Amygdala; Amygdala (Brain); Amygdala - physiology
2003
The amygdala is thought to be part of a neural system responsive to potential threat. Consistent with this is the amygdala's well-documented sensitivity to fear faces. What is puzzling, however, is the paucity of evidence for a similar involvement of the amygdala in the processing of anger displays.
Journal Article
Culture shapes a mesolimbic response to signals of dominance and subordination that associates with behavior
by
Freeman, Jonathan B.
,
Rule, Nicholas O.
,
Adams Jr, Reginald B.
in
Analysis of Variance
,
Behavior
,
Brain Mapping
2009
It has long been understood that culture shapes individuals' behavior, but how this is accomplished in the human brain has remained largely unknown. To examine this, we made use of a well-established cross-cultural difference in behavior: American culture tends to reinforce dominant behavior whereas, conversely, Japanese culture tends to reinforce subordinate behavior. In 17 Americans and 17 Japanese individuals, we assessed behavioral tendencies towards dominance versus subordination and measured neural responses using fMRI during the passive viewing of stimuli related to dominance and subordination. In Americans, dominant stimuli selectively engaged the caudate nucleus, bilaterally, and the medial prefrontal cortex (mPFC), whereas these were selectively engaged by subordinate stimuli in Japanese. Correspondingly, Americans self-reported a tendency towards more dominant behavior whereas Japanese self-reported a tendency towards more subordinate behavior. Moreover, activity in the right caudate and mPFC correlated with behavioral tendencies towards dominance versus subordination, such that stronger responses in the caudate and mPFC to dominant stimuli were associated with more dominant behavior and stronger responses in the caudate and mPFC to subordinate stimuli were associated with more subordinate behavior. The findings provide a first demonstration that culture can flexibly shape functional activity in the mesolimbic reward system, which in turn may guide behavior.
Journal Article
Magnocellular and parvocellular pathway contributions to facial threat cue processing
2019
Human faces evolved to signal emotions, with their meaning contextualized by eye gaze. For instance, a fearful expression paired with averted gaze clearly signals both presence of threat and its probable location. Conversely, direct gaze paired with facial fear leaves the source of the fear-evoking threat ambiguous. Given that visual perception occurs in parallel streams with different processing emphases, our goal was to test a recently developed hypothesis that clear and ambiguous threat cues would differentially engage the magnocellular (M) and parvocellular (P) pathways, respectively. We employed two-tone face images to characterize the neurodynamics evoked by stimuli that were biased toward M or P pathways. Human observers (N = 57) had to identify the expression of fearful or neutral faces with direct or averted gaze while their magnetoencephalogram was recorded. Phase locking between the amygdaloid complex, orbitofrontal cortex (OFC) and fusiform gyrus increased early (0–300 ms) for M-biased clear threat cues (averted-gaze fear) in the β-band (13–30 Hz) while P-biased ambiguous threat cues (direct-gaze fear) evoked increased θ (4–8 Hz) phase locking in connections with OFC of the right hemisphere. We show that M and P pathways are relatively more sensitive toward clear and ambiguous threat processing, respectively, and characterize the neurodynamics underlying emotional face processing in the M and P pathways.
Journal Article
Perceived Gaze Direction and the Processing of Facial Displays of Emotion
2003
There is good reason to believe that gaze direction and facial displays of emotion share an information value as signals of approach or avoidance. The combination of these cues in the analysis of social communication, however, has been a virtually neglected area of inquiry. Two studies were conducted to test the prediction that direct gaze would facilitate the processing of facially communicated approach-oriented emotions (e.g., anger and joy), whereas averted gaze would facilitate the processing of facially communicated avoidance-oriented emotions (e.g., fear and sadness). The results of both studies confirmed the central hypothesis and suggest that gaze direction and facial expression are combined in the processing of emotionally relevant facial information.
Journal Article
Inside jokes : using humor to reverse-engineer the mind
by Hurley, Matthew M.; Dennett, Daniel Clement; Adams, Reginald B., Jr
in Laughter; Laughter -- Philosophy; Laughter -- Psychological aspects
2011
This evolutionary and cognitive theory of humor seeks to reveal the complex science behind why we crack up. "A sophisticated analysis... written with clarity, good cheer, and, of course, wit." – Steven Pinker, author of How the Mind Works. Some things are funny--jokes, puns, sitcoms, Charlie Chaplin, The Far Side, Malvolio with his yellow…
Angry White Faces: A Contradiction of Racial Stereotypes and Emotion-Resembling Appearance
by Adams, Reginald B.; Garrido, Carlos O.; Hedgecoth, Nicole
in African Americans; Anger; Behavioral Science and Psychology
2022
Machine learning findings suggest Eurocentric (aka White/European) faces structurally resemble anger more than Afrocentric (aka Black/African) faces (e.g., Albohn, 2020; Zebrowitz et al., 2010); however, Afrocentric faces are typically associated with anger more so than Eurocentric faces (e.g., Hugenberg & Bodenhausen, 2003, 2004). Here, we further examine counter-stereotypic associations between Eurocentric faces and anger, and Afrocentric faces and fear. In Study 1, using a computer vision algorithm, we demonstrate that neutral European American faces structurally resemble anger more and fear less than do African American faces. In Study 2, we then found that anger- and fear-resembling facial appearance influences perceived racial prototypicality in this same counter-stereotypic manner. In Study 3, we likewise found that imagined European American versus African American faces were rated counter-stereotypically (i.e., more like anger than fear) on key emotion-related facial characteristics (i.e., size of eyes, size of mouth, overall angularity of features). Finally, in Study 4, we again found counter-stereotypic differences, this time in processing fluency, such that angry Eurocentric versus Afrocentric faces and fearful Afrocentric versus Eurocentric faces were categorized more accurately and quickly. Only in Study 5, using race-ambiguous interior facial cues coupled with Afrocentric versus Eurocentric hairstyles and skin tone, did we find the stereotypical effects commonly reported in the literature. These findings are consistent with the conclusion that the "angry Black" association in face perception is socially constructed, in that structural cues considered prototypical of African American appearance conflict with common race-emotion stereotypes.
Journal Article
ARBEE: Towards Automated Recognition of Bodily Expression of Emotion in the Wild
2020
Humans are arguably innately prepared to comprehend others' emotional expressions from subtle body movements. If robots or computers can be empowered with this capability, a number of robotic applications become possible. Automatically recognizing human bodily expression in unconstrained situations, however, is daunting given the incomplete understanding of the relationship between emotional expressions and body movements. The current research, as a multidisciplinary effort among computer and information sciences, psychology, and statistics, proposes a scalable and reliable crowdsourcing approach for collecting in-the-wild perceived emotion data so that computers can learn to recognize the body language of humans. To accomplish this task, a large and growing annotated dataset with 9876 video clips of body movements and 13,239 human characters, named the Body Language Dataset (BoLD), has been created. Comprehensive statistical analysis of the dataset revealed many interesting insights. A system to model emotional expressions based on bodily movements, named Automated Recognition of Bodily Expression of Emotion (ARBEE), has also been developed and evaluated. Our analysis shows the effectiveness of Laban Movement Analysis (LMA) features in characterizing arousal, and our experiments using LMA features further demonstrate the computability of bodily expression. We report and compare results of several other baseline methods developed for action recognition based on two different modalities, body skeleton and raw image. The dataset and findings presented in this work will likely serve as a launchpad for future discoveries in body language understanding that will enable future robots to interact and collaborate more effectively with humans.
Journal Article
Social Vision
2017
A social-functional approach to face processing comes with a number of assumptions. First, given that humans possess limited cognitive resources, it assumes that we naturally allocate attention to processing and integrating the most adaptively relevant social cues. Second, from these cues, we make behavioral forecasts about others in order to respond in an efficient and adaptive manner. This assumption aligns with broader ecological accounts of vision that highlight a direct action-perception link, even for nonsocial vision. Third, humans are naturally predisposed to process faces in this functionally adaptive manner. This latter contention is implied by our attraction to dynamic aspects of the face, including looking behavior and facial expressions, from which we tend to overgeneralize inferences, even when forming impressions of stable traits. The functional approach helps to address how and why observers are able to integrate functionally related compound social cues in a manner that is ecologically relevant and thus adaptive.
Journal Article
Americans weigh an attended emotion more than Koreans in overall mood judgments
by Adams, Reginald B.; Albohn, Daniel N.; Kveraga, Kestas
in 631/477; 631/477/2811; Asian culture
2023
Face ensemble coding is the perceptual ability to form a quick, overall impression of a group of faces, triggering social and behavioral motivations towards other people (approaching friendly people or avoiding an angry mob). Cultural differences in this ability have been reported, such that Easterners are better at face ensemble coding than Westerners are. The underlying mechanism has been attributed to differences in processing styles, with Easterners allocating attention globally and Westerners focusing on local parts. The remaining question, however, is how such a default attention mode is influenced by salient information during ensemble perception. We created visual displays that resembled a real-world social setting in which one individual in a crowd of different faces drew the viewer's attention while the viewer judged the overall emotion of the crowd. In each trial, one face in the crowd was highlighted by a salient cue, capturing spatial attention before the participants viewed the entire group. American participants' judgments of group emotion weighed the attended individual face more strongly than Korean participants' judgments did, suggesting a greater influence of local information on global perception. Our results showed that different attentional modes between cultural groups modulate the social-emotional processing underlying people's perceptions and attributions.
Journal Article
Spatial and feature-based attention to expressive faces
2019
Facial emotion is an important cue for deciding whether an individual is potentially helpful or harmful. However, facial expressions are inherently ambiguous and observers typically employ other cues to categorize emotion expressed on the face, such as race, sex, and context. Here, we explored the effect of increasing or reducing different types of uncertainty associated with a facial expression that is to be categorized. On each trial, observers responded according to the emotion and location of a peripherally presented face stimulus and were provided with either: (1) no information about the upcoming face; (2) its location; (3) its expressed emotion; or (4) both its location and emotion. While cueing emotion or location resulted in faster response times than cueing unpredictive information, cueing face emotion alone resulted in faster responses than cueing face location alone. Moreover, cueing both stimulus location and emotion resulted in a superadditive reduction of response times compared with cueing location or emotion alone, suggesting that feature-based attention to emotion and spatially selective attention interact to facilitate perception of face stimuli. While categorization of facial expressions was significantly affected by stable identity cues (sex and race) in the face, we found that these interactions were eliminated when uncertainty about facial expression, but not spatial uncertainty about stimulus location, was reduced by predictive cueing. This demonstrates that feature-based attention to facial expression greatly attenuates the need to rely on stable identity cues to interpret facial emotion.
Journal Article