8 results for "Zupan, Barbra"
Emotion Perception From Vocal Cues: Testing the Influence of Emotion Intensity and Sex on In-Group Advantage
The present study examined individuals' ability to identify emotions expressed in vocal cues as a function of the speaker's accent and the intensity of the expressed emotion. Australian and Canadian participants listened to Australian and Canadian speakers express pairs of emotions that fall within the same emotion family but vary in intensity (e.g., anger vs. irritation). Listener accent was unrelated to emotion recognition. Instead, performance varied more with emotion intensity and sex: participants in both groups generally found high-intensity emotions easier to recognize than low-intensity emotions, and emotions conveyed by female speakers easier to recognize than those conveyed by male speakers. Participants found it particularly difficult to recognize the expressed emotions of Australian males. The results suggest the importance of considering the context in which emotion recognition is embedded. Public Significance Statement: Accents can often interfere with individuals' ability to identify emotions from tone of voice. We explored whether the intensity of an emotional vocal expression influences emotion recognition when judging a speaker with an accent from a similar cultural background. We found that although the speaker's accent influences judgements about the intensity of the expressed emotion, the speaker's sex was a better predictor of emotion recognition.
Validation of Affective Sentences: Extending Beyond Basic Emotion Categories
We use nonverbal and verbal emotion cues to determine how others are feeling. Most studies in vocal emotion perception do not consider the influence of verbal content, using sentences with nonsense words or words that carry no emotional meaning. These online studies aimed to validate 95 sentences with verbal content intended to convey 10 emotions. Participants were asked to select the emotion that best described the emotional meaning of the sentence. Study 1 included 436 participants and Study 2 included 193. The Simpson diversity index was applied as a measure of dispersion of responses. Across the two studies, 38 sentences were labelled as representing 10 emotion categories with a low degree of diversity in participant responses. Expanding current databases beyond basic emotion categories is important for researchers exploring the interaction between tone of voice and verbal content, and/or people’s capacity to make subtle distinctions between their own and others’ emotions.
Categorising emotion words: the influence of response options
Words used to describe emotion are influenced by experience, context and culture; nevertheless, research studies often constrain participant response options. We explored the influence of response options on how people conceptualise emotion words in two cross-sectional studies. In Study 1, participants rated the degree to which a large set of emotion words (n = 497) fit five basic emotion categories: Happy, Sad, Angry, Fearful, Neutral. Twenty-four words that fit well within these categories were included in Study 2, where response options were expanded to include two additional basic emotions (Disgust, Joy) and six complex emotions (Amusement, Anxiety, Contentment, Irritated, Pride, Relief). Only half of the Study 1 words were categorised into the same emotion categories in Study 2. An increase in the diversity of ratings for both positively and negatively valenced words suggested overlaps in people's conceptualisations of emotion words. The results suggest potential benefits of providing research participants with complex emotion categories of varying intensity, which may better reflect people's nuanced conceptualisations of emotion. Future research exploring varied response options may provide further insight into how people categorise and differentiate emotion words.
Mental Health Practitioners’ Understanding of Speech Pathology in a Regional Australian Community
(1) Background: This study aimed to determine the level of knowledge and the perceptions of speech pathology held by a sample of regional mental health practitioners, and to explore factors that facilitate understanding of the roles of speech pathologists in mental health. While mental health is recognised as an area of practice by Speech Pathology Australia, the inclusion of speech pathologists in mental health teams is limited. (2) Methods: An anonymous online survey was created using previously validated surveys and author-generated questions and distributed to mental health practitioners in Central Queensland, Australia. (3) Results: Mental health practitioners had difficulty identifying speech pathology involvement when presented with case scenarios. Accuracy was poor for language-based cases, ranging from 28.81% to 37.29%. Participants who reported having worked with a speech pathologist were more likely to score higher on the areas-of-practice questions, r(53) = 0.301, p = 0.028, and the language scenarios, r(58) = 0.506, p < 0.001. They were also more likely to agree with statements regarding the connection between speech pathology and mental health, r(59) = 0.527, p < 0.001. (4) Conclusions: Contact with speech pathologists was a strong predictor of mental health practitioners' knowledge of the speech pathology profession. Thus, the challenge may be to increase this contact to promote the inclusion of speech pathologists in the mental health domain.
The relationship between vocal affect recognition and psychosocial functioning for people with moderate to severe traumatic brain injury: a systematic review
The purpose of this review was to explore how vocal affect recognition deficits impact the psychosocial functioning of people with moderate to severe traumatic brain injury (TBI). A systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted: six databases were searched, supplemented by hand searching of key journals. The search identified 1847 records after duplicates were removed; 1749 were excluded through title and abstract screening. After full-text screening of 65 peer-reviewed articles published between January 1999 and August 2019, only five met the inclusion criteria. The methodological quality of the selected studies was assessed using the Mixed Methods Appraisal Tool (MMAT), Version 2018, with a fair level of agreement reached. A narrative synthesis of the results was completed, exploring vocal affect recognition and psychosocial functioning of people with moderate to severe TBI, including aspects of social cognition (i.e., empathy; Theory of Mind) and social behaviour. The results of the review were limited by a paucity of research in this area, a lack of high-level evidence, and wide variation in the outcome measures used. More rigorous study designs are required to establish more conclusive evidence regarding the degree and direction of the association between vocal affect recognition and aspects of psychosocial functioning. This review is registered with PROSPERO.
Sex Differences in Emotion Recognition and Emotional Inferencing Following Severe Traumatic Brain Injury
The primary objective of the current study was to determine whether men and women with traumatic brain injury (TBI) differ in their emotion recognition and emotional inferencing abilities. In addition to overall accuracy, we explored whether differences were contingent upon the target emotion for each task, or upon high- and low-intensity facial and vocal emotion expressions. A total of 160 participants (116 men) with severe TBI completed three tasks: facial emotion recognition (DANVA-Faces), vocal emotion recognition (DANVA-Voices), and emotional inferencing (Emotional Inference from Stories Test; EIST). Results showed that women with TBI were significantly more accurate in their recognition of vocal emotion expressions and in emotional inferencing. Further analyses of task performance showed that women were significantly better than men at recognising fearful facial expressions and facial emotion expressions high in intensity. Women also displayed greater response accuracy for sad vocal expressions and low-intensity vocal emotion expressions. Analysis of the EIST task showed that women were more accurate than men at emotional inferencing in sad and fearful stories. A similar proportion of women and men with TBI were impaired (≥ 2 SDs relative to normative means) at facial emotion perception, χ2 = 1.45, p = 0.228, but a larger proportion of men were impaired at vocal emotion recognition, χ2 = 7.13, p = 0.008, and emotional inferencing, χ2 = 7.51, p = 0.006.
Narrative Comprehension Abilities of Children with Typical Hearing and Children Using Hearing Aids: A Pilot Study
This pilot study explores differences in oral narrative comprehension between children with moderate-to-severe sensorineural hearing loss who use hearing aids and peers with typical hearing matched for age and gender. All children were between 3.5 and 5 years of age. Participants were read a patterned, illustrated storybook. Modified versions of this narrative were then read for a Joint Story Retell task and an Expectancy Violation Detection task, measuring both comprehension of key story elements and comprehension-monitoring ability. Speech perception was also assessed. Analyses revealed no statistically significant differences between children with and without hearing loss, but interesting trends emerged. Explanations for the stronger-than-expected performance of the children with hearing loss on the narrative comprehension measures are discussed. Importantly, this pilot study demonstrates that the Joint Story Retell task and the Expectancy Violation Detection task are viable measures of narrative comprehension for children with hearing loss.
Bimodal perception of visual and auditory cues of emotion from early childhood to early adulthood
Nonverbal cues of emotion, such as facial and vocal expressions, have been identified as important cues in determining the thoughts and feelings of others. The ability to interpret these cues in everyday situations is important to successful social interaction, but research in this area has focused primarily on static, rather than dynamic, cues. The purpose of this research was to investigate how perception of congruent and incongruent visual and auditory cues of emotion changes over the lifespan. Since prior research has demonstrated an auditory preference in young children's processing and a visual preference in adults', it was expected that under conditions of incongruency children would select the auditory portion of the bimodal stimulus and adults would select the visual portion. Forty participants were recruited, ten in each of four age groups: 4-6, 8-10, 12-14, and adults. Participants were exposed to a total of 184 neutral sentences portrayed by one male and one female talker in happy, sad, angry, or fearful emotion expressions. Sixty of these sentences were congruent, consisting of matching bimodal cues. The remaining 124 sentences were incongruent, consisting of mismatched visual and auditory cues (e.g., a sad face with an angry voice). Statistical analysis of responses to the congruent stimuli showed increased identification accuracy with increasing age. Incongruent stimuli did not lead children in the 4-6 age group to select the auditory component of the stimulus more often than the visual. However, the overall number of auditory responses given by this group differed significantly from that of the older groups. Children in the 4-6 age group also provided more responses reflecting integration of the visual and auditory cues than other age groups, and these integrations were primarily related to the acoustic cues of the auditory portion of the incongruent stimulus. These results suggest that the auditory cue influenced the youngest children's responses even when they did not select the auditory portion of the incongruent stimulus. There was also a significant positive relationship between age and the number of visual responses to incongruent stimuli, suggesting that as age increases, participants rely more heavily on the visual cues of emotion.