Catalogue Search | MBRL
28,525 result(s) for "Recognition, Psychology"
Impaired Recognition of Basic Emotions from Facial Expressions in Young People with Autism Spectrum Disorder: Assessing the Importance of Expression Intensity
by Griffiths, Sarah; Munafò, Marcus R.; Penton-Voak, Ian S.
in Accuracy; Acknowledgment; Adolescent
2019
It has been proposed that impairments in emotion recognition in ASD are greater for more subtle expressions of emotion. We measured recognition of 6 basic facial expressions at 8 intensity levels in young people (6–16 years) with ASD (N = 63) and controls (N = 64) via an Internet platform. Participants with ASD were less accurate than controls at labelling expressions across intensity levels, although differences at very low levels were not detected due to floor effects. Recognition accuracy did not correlate with parent-reported social functioning in either group. These findings provide further evidence for an impairment in recognition of basic emotion in ASD and do not support the idea that this impairment is limited solely to low intensity expressions.
Journal Article
Noradrenergic arousal after encoding reverses the course of systems consolidation in humans
2021
It is commonly assumed that episodic memories undergo a time-dependent systems consolidation process, during which hippocampus-dependent memories eventually become reliant on neocortical areas. Here we show that systems consolidation dynamics can be experimentally manipulated and even reversed. We combined a single pharmacological elevation of post-encoding noradrenergic activity through the α2-adrenoceptor antagonist yohimbine with fMRI scanning both during encoding and recognition testing either 1 or 28 days later. We show that yohimbine administration, in contrast to placebo, leads to a time-dependent increase in hippocampal activity and multivariate encoding-retrieval pattern similarity, an indicator of episodic reinstatement, between 1 and 28 days. This is accompanied by a time-dependent decrease in neocortical activity. Behaviorally, these neural changes are linked to a reduced memory decline over time after yohimbine intake. These findings indicate that noradrenergic activity shortly after encoding may alter and even reverse systems consolidation in humans, thus maintaining vividness of memories over time.
Memories are assumed to undergo a time-dependent systems consolidation, during which hippocampal contributions to memory decrease while neocortical contributions increase. Here, the authors show that noradrenergic arousal after encoding may reverse this course of systems consolidation in humans.
Journal Article
A bottom-up view of toddler word learning
by Pereira, Alfredo F.; Smith, Linda B.; Yu, Chen
in Behavioral Science and Psychology; Biological and medical sciences; Brief Report
2014
A head camera was used to examine the visual correlates of object name learning by toddlers as they played with novel objects and as the parent spontaneously named those objects. The toddlers’ learning of the object names was tested after play, and the visual properties of the head camera images during naming events associated with learned and unlearned object names were analyzed. Naming events associated with learning had a clear visual signature, one in which the visual information itself was clean and visual competition among objects was minimized. Moreover, for learned object names, the visual advantage of the named target over competitors was sustained, both before and after the heard name. The findings are discussed in terms of the visual and cognitive processes that may depend on clean sensory input for learning and also on the sensory–motor, cognitive, and social processes that may create these optimal visual moments for learning.
Journal Article
Mapping the emotional face. How individual face parts contribute to successful emotion recognition
2017
Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, at a fine-grained level, which physical features observers rely on most when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, making it possible to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (reliance on the mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eye or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion by delineating the mapping from facial features to psychological representation.
Journal Article
Surgical face masks impair human face matching performance for familiar and unfamiliar faces
by Carragher, Daniel J.; Hancock, Peter J. B.
in Accuracy; Adult; Behavioral Science and Psychology
2020
In response to the COVID-19 pandemic, many governments around the world now recommend, or require, that their citizens cover the lower half of their face in public. Consequently, many people now wear surgical face masks in public. We investigated whether surgical face masks affected the performance of human observers, and a state-of-the-art face recognition system, on tasks of perceptual face matching. Participants judged whether two simultaneously presented face photographs showed the same person or two different people. We superimposed images of surgical masks over the faces, creating three different mask conditions: control (no masks), mixed (one face wearing a mask), and masked (both faces wearing masks). We found that surgical face masks have a large detrimental effect on human face matching performance, and that the degree of impairment is the same regardless of whether one or both faces in each pair are masked. Surprisingly, this impairment is similar in size for both familiar and unfamiliar faces. When matching masked faces, human observers are biased to reject unfamiliar faces as “mismatches” and to accept familiar faces as “matches”. Finally, the face recognition system showed very high classification accuracy for control and masked stimuli, even though it had not been trained to recognise masked faces. However, accuracy fell markedly when one face was masked and the other was not. Our findings demonstrate that surgical face masks impair the ability of humans, and naïve face recognition systems, to perform perceptual face matching tasks. Identification decisions for masked faces should be treated with caution.
Journal Article