10 results for "nonmanual"
A deep learning framework for Ethiopian sign language recognition using skeleton-based representation
This study proposes an environment- and signer-invariant recognition model for Ethiopian Sign Language (EthSL). The model first extracts skeletal key points from the signer with MediaPipe Holistic, Google's cross-platform pipeline framework for detecting and tracking human poses, facial landmarks, and hands. After the skeletal key-point information is preprocessed, feature extraction and learning are performed with four deep learning architectures: a convolutional neural network followed by long short-term memory (CNN-LSTM), long short-term memory (LSTM), bidirectional long short-term memory (BiLSTM), and gated recurrent units (GRUs). A dataset of 5,600 annotated sign videos was constructed to evaluate the four models, which achieved 94% accuracy in signer-dependent settings and 73% in signer-independent settings. The results demonstrate the model's potential for scalable, low-cost, real-time EthSL recognition in unconstrained environments. Although this study improved the signer independence of automatic sign language recognition (ASLR) models to some degree, further work is needed to recognize continuous signing in a fully open environment; the key-point detection and tracking technique used here should therefore be investigated further for continuous EthSL recognition.
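As a rough illustration of the pipeline this abstract describes (per-frame skeletal key points from MediaPipe Holistic, stacked into a sequence and classified by a recurrent network), here is a minimal sketch in Python. It is not the paper's released code: the file name, the 1629-dimensional feature layout, the layer sizes, and NUM_CLASSES are assumptions for illustration.

```python
# Minimal sketch: MediaPipe Holistic key points per frame, stacked into a
# sequence and fed to an LSTM classifier. File name, feature layout, layer
# sizes, and NUM_CLASSES are illustrative assumptions, not the paper's code.
import cv2
import mediapipe as mp
import numpy as np
from tensorflow.keras import layers, models

mp_holistic = mp.solutions.holistic

def frame_keypoints(results):
    """Flatten pose (33), face (468), and hand (21 each) landmarks into a
    single 1629-dim vector, zero-filling any part MediaPipe did not detect."""
    def pts(landmarks, n):
        if landmarks is None:
            return np.zeros(n * 3)
        return np.array([[p.x, p.y, p.z] for p in landmarks.landmark]).flatten()
    return np.concatenate([
        pts(results.pose_landmarks, 33),
        pts(results.face_landmarks, 468),
        pts(results.left_hand_landmarks, 21),
        pts(results.right_hand_landmarks, 21),
    ])

def video_to_sequence(path):
    """Extract one key-point vector per frame of a sign video."""
    cap = cv2.VideoCapture(path)
    frames = []
    with mp_holistic.Holistic(min_detection_confidence=0.5,
                              min_tracking_confidence=0.5) as holistic:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            frames.append(frame_keypoints(results))
    cap.release()
    return np.stack(frames)  # shape: (num_frames, 1629)

# One of the four evaluated architectures (plain LSTM), sketched in Keras.
NUM_CLASSES = 10  # placeholder; the EthSL vocabulary size is not given here
SEQ_LEN = 30      # assumed fixed number of frames sampled per clip

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, 1629)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

In practice each clip's frame sequence would be sampled or padded to SEQ_LEN before training, and the CNN-LSTM, BiLSTM, and GRU variants the paper evaluates would swap in different recurrent stacks.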
A Case Study of a Deaf Autistic Adolescent’s Affective and Linguistic Expressions
Facial expressions and body language play crucial roles in communication by conveying emotional and contextual information. In signed languages, facial expressions also serve linguistic functions. While previous research on autistic individuals' facial expressions has focused primarily on affective expressions in hearing people, studying deaf autistic individuals offers insight into how autism affects linguistic and affective facial expressions. This case study examines the nonmanual expressions of "Brent," a Deaf autistic adolescent natively exposed to American Sign Language (ASL). Five video recordings (four monologues and one conversation, totaling 35 minutes) were coded for nonmanual expressions, including affective facial expressions, question marking, negation, and other functions. Across 590 coded utterances, Brent showed absent or reduced facial expressions for both linguistic and affective purposes. However, he frequently used alternative communicative strategies, including additional manual signs, sign modification, and body enactment. Body movement to convey negation, affirmation, or emphasis was observed but applied inconsistently. These findings expand the current understanding of how autistic individuals use facial expressions by including linguistic functions in a signed language, and they support a broader view of autistic communication that embraces diverse and effective languaging strategies beyond neurotypical norms.
Prosody of focus in Turkish Sign Language
The prosodic realization of focus has been widely investigated across languages and modalities. When focus strategies occur simultaneously, the question of how they interact in their functional and temporal alignment is intriguing. We explored the multichannel (manual and nonmanual) realization of focus in Turkish Sign Language, eliciting data from 20 signers while varying focus type, syntactic role, and movement type. The results revealed that focus is encoded via increased duration of manual signs and that nonmanuals do not necessarily accompany focused signs. Given their multichannel structure, sign languages may use both available channels or opt for only one to express focushood.
Emergence or Grammaticalization? The Case of Negation in Kata Kolok
Typological comparisons have revealed that signers can use manual elements and/or a non-manual marker to express standard negation, but little is known about how such systematic marking emerges from its gestural counterparts as a new sign language arises. We analyzed 1.73 hours of spontaneous language data featuring six deaf native signers from generations III-V of the sign language isolate Kata Kolok (Bali). These data show that Kata Kolok cannot be classified as a manual-dominant or non-manual-dominant sign language, since both the manual negative sign and a side-to-side headshake are used extensively. Moreover, the intergenerational comparisons indicate a considerable increase in the use of headshake spreading in generation V, which is unlikely to have resulted from contact with Indonesian Sign Language varieties. We also attest a specialized negative existential marker, namely tongue protrusion, which does not appear in the co-speech gesture of the surrounding community. We conclude that Kata Kolok occupies a unique place in the typological landscape of sign language negation, and that grammaticalization theory is essential to a deeper understanding of how grammatical structure emerges from gesture.
Towards enhanced visual clarity of sign language avatars through recreation of fine facial detail
Facial nonmanual signals and expressions convey critical linguistic and affective information in signed languages. However, the complexity of human facial anatomy has made implementing these movements a particular challenge in avatar research. Recent advances have improved the achievable range of motion and expression, so we propose that an important next step is incorporating fine detail, such as wrinkles, to enhance the legibility of these facial movements in avatar animation, particularly on small screens. This paper reviews research efforts to portray nonmanual signals via avatar technology and surveys existing illumination models for their suitability for this application. Based on this information, the American Sign Language Avatar Project at DePaul University has developed a new technique, grounded in commercial visual-effects paradigms, for rendering realistic fine detail on the Paula avatar within the complexity constraints of real-time sign language avatars.
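The abstract does not spell out the technique, so purely as an illustration of one common visual-effects recipe for layering fine wrinkle detail over a coarser surface, the sketch below blends a detail normal map onto a base normal map ("whiteout" blending). The function name and array conventions are assumptions for illustration, not the Paula avatar's actual implementation.

```python
# Illustrative "whiteout" blending of a wrinkle detail normal map over a base
# normal map, a standard VFX recipe for fine surface detail. This is an
# assumption for illustration, not the Paula avatar's actual technique.
import numpy as np

def blend_detail_normals(base, detail):
    """Blend two tangent-space normal maps (H x W x 3 arrays, components
    encoded in [0, 1]) and return the combined map in the same encoding."""
    t = base * 2.0 - 1.0    # decode to [-1, 1]
    u = detail * 2.0 - 1.0
    r = np.empty_like(t)
    r[..., :2] = t[..., :2] + u[..., :2]  # sum the x/y perturbations
    r[..., 2] = t[..., 2] * u[..., 2]     # combine the z components
    r /= np.linalg.norm(r, axis=-1, keepdims=True)  # renormalize
    return (r + 1.0) * 0.5  # re-encode to [0, 1]
```

In a real-time avatar the same arithmetic would normally run in a fragment shader; NumPy is used here only to keep the sketch self-contained.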
Functions of Head and Body Movements in Austrian Sign Language
Over the past decades, the field of sign language linguistics has expanded considerably. Recent research on sign languages includes a wide range of subdomains such as reference grammars, theoretical linguistics, psycho- and neurolinguistics, sociolinguistics, and applied studies on sign languages and Deaf communities. The SLDC series is concerned with the study of sign languages in a comprehensive way, covering various theoretical, experimental, and applied dimensions of sign language research and their relationship to Deaf communities around the world. The series provides a multidisciplinary platform for innovative and outstanding research in sign language linguistics and aims at linking the study of sign languages to current trends in modern linguistics, such as new experimental and theoretical investigations, the importance of language endangerment, the impact of technological developments on data collection and Deaf education, and the broadening geographical scope of typological sign language studies, especially in terms of research on non-Western sign languages and Deaf communities.
Evidence for minimal pairs in Turkish Sign Language (TİD)
Recently, many studies have examined phonological parameters in sign languages from various research perspectives, paying particularly close attention to manual parameters such as handshape, place of articulation, movement, and orientation of the hands. However, these studies have covered only a few sign languages, such as American and British Sign Languages, and have paid little attention to nonmanual features. In this study, we investigated another sign language, Turkish Sign Language (TİD), focusing on both manual and nonmanual features to examine "minimal pairs", a cornerstone concept of phonology. We applied Brentari's (2005) feature classification and Pfau and Quer's (2010) categorization of phonological (or lexical) nonmanuals. Our analysis showed that the phonological features of, and constraints on, TİD sign formation resemble those of other well-studied sign languages. The results indicate not only that phonological features are necessary for describing both manual and nonmanual parameters at the lexical level in TİD, but also that nonmanuals must be considered an essential part of the sign in order to better understand their role in sign language phonology.