76 results for "Adams, Reginald B"
Social Vision
A social-functional approach to face processing comes with a number of assumptions. First, given that humans possess limited cognitive resources, it assumes that we naturally allocate attention to processing and integrating the most adaptively relevant social cues. Second, from these cues, we make behavioral forecasts about others in order to respond in an efficient and adaptive manner. This assumption aligns with broader ecological accounts of vision that highlight a direct action-perception link, even for nonsocial vision. Third, humans are naturally predisposed to process faces in this functionally adaptive manner. This latter contention is implied by our attraction to dynamic aspects of the face, including looking behavior and facial expressions, from which we tend to overgeneralize inferences, even when forming impressions of stable traits. The functional approach helps to address how and why observers are able to integrate functionally related compound social cues in a manner that is ecologically relevant and thus adaptive.
Inside Jokes: Using Humor to Reverse-Engineer the Mind
Humor stands behind questions about what we laugh at. Why does an old joke feel like last season's fashion? How has humor been shaped by evolution? Every joke has an inside, and each one serves some purpose; its purposes are many, and of course a counterfeit joke will not succeed: either you laugh at it, or you do not react to it at all. This book dissects humor and its relationship to cognition, revealing roles that humor plays in our daily lives which we may never have noticed, or never knew sprang from humor. Contagious laughter and cognitive humor are lively topics tied to evolution and to Darwinian theory, which set sail like an unstoppable ship so long as the winds of knowledge blow and the right question is asked.
ARBEE: Towards Automated Recognition of Bodily Expression of Emotion in the Wild
Humans are arguably innately prepared to comprehend others’ emotional expressions from subtle body movements. If robots or computers can be empowered with this capability, a number of robotic applications become possible. Automatically recognizing human bodily expression in unconstrained situations, however, is daunting given the incomplete understanding of the relationship between emotional expressions and body movements. The current research, a multidisciplinary effort among computer and information sciences, psychology, and statistics, proposes a scalable and reliable crowdsourcing approach for collecting in-the-wild perceived emotion data so that computers can learn to recognize human body language. To accomplish this task, a large and growing annotated dataset with 9,876 video clips of body movements and 13,239 human characters, named the Body Language Dataset (BoLD), has been created. Comprehensive statistical analysis of the dataset revealed many interesting insights. A system to model emotional expressions based on bodily movements, named Automated Recognition of Bodily Expression of Emotion (ARBEE), has also been developed and evaluated. Our analysis shows the effectiveness of Laban Movement Analysis (LMA) features in characterizing arousal, and our experiments using LMA features further demonstrate the computability of bodily expression. We report and compare results of several other baseline methods developed for action recognition based on two different modalities, body skeleton and raw image. The dataset and findings presented in this work will likely serve as a launchpad for future discoveries in body language understanding that will enable future robots to interact and collaborate more effectively with humans.
Perceived Gaze Direction and the Processing of Facial Displays of Emotion
There is good reason to believe that gaze direction and facial displays of emotion share an information value as signals of approach or avoidance. The combination of these cues in the analysis of social communication, however, has been a virtually neglected area of inquiry. Two studies were conducted to test the prediction that direct gaze would facilitate the processing of facially communicated approach-oriented emotions (e.g., anger and joy), whereas averted gaze would facilitate the processing of facially communicated avoidance-oriented emotions (e.g., fear and sadness). The results of both studies confirmed the central hypothesis and suggest that gaze direction and facial expression are combined in the processing of emotionally relevant facial information.
Effects of Gaze on Amygdala Sensitivity to Anger and Fear Faces
The amygdala is thought to be part of a neural system responsive to potential threat. Consistent with this is the amygdala's well-documented sensitivity to fear faces. What is puzzling, however, is the paucity of evidence for a similar involvement of the amygdala in the processing of anger displays.
Americans weigh an attended emotion more than Koreans in overall mood judgments
Face ensemble coding is the perceptual ability to form a quick, overall impression of a group of faces, triggering social and behavioral motivations toward other people (approaching friendly people or avoiding an angry mob). Cultural differences in this ability have been reported, such that Easterners are better at face ensemble coding than Westerners are. The underlying mechanism has been attributed to differences in processing styles, with Easterners allocating attention globally and Westerners focusing on local parts. The remaining question, however, is how this default attention mode is influenced by salient information during ensemble perception. We created visual displays resembling a real-world social setting in which one individual in a crowd of different faces drew the viewer's attention while the viewer judged the overall emotion of the crowd. In each trial, one face in the crowd was highlighted by a salient cue, capturing spatial attention before the participants viewed the entire group. American participants’ judgments of group emotion weighed the attended individual face more strongly than Korean participants’ judgments did, suggesting a greater influence of local information on global perception. Our results show that different attentional modes across cultural groups modulate the social-emotional processing underlying people’s perceptions and attributions.
Culture shapes a mesolimbic response to signals of dominance and subordination that associates with behavior
It has long been understood that culture shapes individuals' behavior, but how this is accomplished in the human brain has remained largely unknown. To examine this, we made use of a well-established cross-cultural difference in behavior: American culture tends to reinforce dominant behavior whereas, conversely, Japanese culture tends to reinforce subordinate behavior. In 17 Americans and 17 Japanese individuals, we assessed behavioral tendencies towards dominance versus subordination and measured neural responses using fMRI during the passive viewing of stimuli related to dominance and subordination. In Americans, dominant stimuli selectively engaged the caudate nucleus, bilaterally, and the medial prefrontal cortex (mPFC), whereas these were selectively engaged by subordinate stimuli in Japanese. Correspondingly, Americans self-reported a tendency towards more dominant behavior whereas Japanese self-reported a tendency towards more subordinate behavior. Moreover, activity in the right caudate and mPFC correlated with behavioral tendencies towards dominance versus subordination, such that stronger responses in the caudate and mPFC to dominant stimuli were associated with more dominant behavior and stronger responses in the caudate and mPFC to subordinate stimuli were associated with more subordinate behavior. The findings provide a first demonstration that culture can flexibly shape functional activity in the mesolimbic reward system, which in turn may guide behavior.
Magnocellular and parvocellular pathway contributions to facial threat cue processing
Abstract Human faces evolved to signal emotions, with their meaning contextualized by eye gaze. For instance, a fearful expression paired with averted gaze clearly signals both presence of threat and its probable location. Conversely, direct gaze paired with facial fear leaves the source of the fear-evoking threat ambiguous. Given that visual perception occurs in parallel streams with different processing emphases, our goal was to test a recently developed hypothesis that clear and ambiguous threat cues would differentially engage the magnocellular (M) and parvocellular (P) pathways, respectively. We employed two-tone face images to characterize the neurodynamics evoked by stimuli that were biased toward M or P pathways. Human observers (N = 57) had to identify the expression of fearful or neutral faces with direct or averted gaze while their magnetoencephalogram was recorded. Phase locking between the amygdaloid complex, orbitofrontal cortex (OFC) and fusiform gyrus increased early (0–300 ms) for M-biased clear threat cues (averted-gaze fear) in the β-band (13–30 Hz) while P-biased ambiguous threat cues (direct-gaze fear) evoked increased θ (4–8 Hz) phase locking in connections with OFC of the right hemisphere. We show that M and P pathways are relatively more sensitive toward clear and ambiguous threat processing, respectively, and characterize the neurodynamics underlying emotional face processing in the M and P pathways.
Observer’s anxiety facilitates magnocellular processing of clear facial threat cues, but impairs parvocellular processing of ambiguous facial threat cues
Facial expression and eye gaze provide a shared signal about threats. While a fear expression with averted gaze clearly points to the source of threat, direct-gaze fear renders the source of threat ambiguous. Separable routes have been proposed to mediate these processes, with preferential attunement of the magnocellular (M) pathway to clear threat, and of the parvocellular (P) pathway to threat ambiguity. Here we investigated how observers’ trait anxiety modulates M- and P-pathway processing of clear and ambiguous threat cues. We scanned subjects (N = 108) ranging widely in trait anxiety while they viewed fearful or neutral faces with averted or direct gaze, with the luminance and color of the face stimuli calibrated to selectively engage M- or P-pathways. Higher anxiety facilitated processing of clear threat projected to the M-pathway, but impaired perception of ambiguous threat projected to the P-pathway. Increased right amygdala reactivity was associated with higher anxiety for M-biased averted-gaze fear, while increased left amygdala reactivity was associated with higher anxiety for P-biased direct-gaze fear. This lateralization was more pronounced with higher anxiety. Our findings suggest that trait anxiety differentially affects perception of clear (averted-gaze fear) and ambiguous (direct-gaze fear) facial threat cues via selective engagement of M and P pathways and lateralized amygdala reactivity.