622 result(s) for "cross-language"
Sign languages in village communities : anthropological and linguistic insights
"The book is a unique collection of research on sign languages that have emerged in rural communities with a high incidence of, often hereditary, deafness. These "village sign languages" represent the latest addition to the comparative investigation of languages in the gestural modality. With analyses and primary data from eleven different rural communities, the volume represents the first concerted effort by leading experts in both anthropology and linguistics to capture the social dynamics of "deaf villages". The chapters address pertinent issues in contemporary linguistics, such as cross-modal contact situations, typological diversity across sign languages and the impact of language modality on linguistic structure."--P. [4] of cover.
Universal brain signature of proficient reading
We propose and test a theoretical perspective in which a universal hallmark of successful literacy acquisition is the convergence of the speech and orthographic processing systems onto a common network of neural structures, regardless of how spoken words are represented orthographically in a writing system. During functional MRI, skilled adult readers of four distinct and highly contrasting languages, Spanish, English, Hebrew, and Chinese, performed an identical semantic categorization task to spoken and written words. Results from three complementary analytic approaches demonstrate limited language variation, with speech–print convergence emerging as a common brain signature of reading proficiency across the wide spectrum of selected languages, whether their writing system is alphabetic or logographic, whether it is opaque or transparent, and regardless of the phonological and morphological structure it represents.
Multilingual deep learning framework for fake news detection using capsule neural network
Fake news detection is an essential task, but the complexity of multiple languages makes it challenging: comprehending the logic behind some fake stories requires drawing many conclusions about the numerous people involved. Existing works fail to capture richer semantic and contextual characteristics from documents in a multilingual text corpus. To bridge these gaps and address multilingual fake news detection, we present a semantic approach to the identification of fake news based on relational features, such as sentiment, entities, and facts, that can be derived directly from the text. Our model outperformed the state-of-the-art methods by approximately 3.97% for English to English, 1.41% for English to Hindi, 5.47% for English to Indonesian, 2.18% for English to Swahili, and 2.88% for English to Vietnamese language reviews on the TALLIP fake news dataset. To the best of our knowledge, ours is the first study to use a capsule neural network for multilingual fake news detection.
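The capsule neural network mentioned in this abstract represents classes as vector-valued capsules whose lengths encode class presence, computed by a "squash" nonlinearity and routing-by-agreement. A minimal numpy sketch of those two pieces (the prediction vectors below are random stand-ins; the paper's actual architecture and weights are not given here):

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    # Squash nonlinearity: preserves direction, maps vector length into [0, 1).
    sq_norm = np.sum(v ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * v / np.sqrt(sq_norm + eps)

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_routing(u_hat, iterations=3):
    # u_hat: (num_in, num_out, dim_out) prediction vectors from lower capsules.
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))              # routing logits
    for _ in range(iterations):
        c = softmax(b, axis=1)                   # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)   # weighted sum -> (num_out, dim_out)
        v = squash(s)                            # output capsule vectors
        b = b + (u_hat * v[None]).sum(axis=-1)   # agreement update
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(32, 2, 8))   # 32 lower capsules, 2 classes (real/fake), dim 8
v = dynamic_routing(u_hat)
print(v.shape)                        # one output capsule per class
print(np.linalg.norm(v, axis=-1))    # lengths act like class probabilities
```

The squash function guarantees each output capsule's length stays below 1, which is why capsule length can be read directly as a class score.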
Cross-Language Transfer Learning Using Visual Information for Automatic Sign Gesture Recognition
Automatic sign gesture recognition (GR) plays a critical role in facilitating communication between hearing-impaired individuals and the rest of society. However, recognizing sign gestures accurately and efficiently remains challenging due to the diversity of sign languages (SLs) and the limited availability of labeled data. This paper proposes a new approach to improving the accuracy of automatic sign GR using cross-language transfer learning with visual information. Two large-scale multimodal SL corpora are used as the base SLs for this study: the Ankara University Turkish Sign Language Dataset (AUTSL) and the Thesaurus Russian Sign Language (TheRusLan). Experimental studies were conducted, resulting in an accuracy of 93.33% for 18 different gestures, including the Russian target SL gestures. This result exceeds the previous state-of-the-art accuracy by 2.19%, demonstrating the effectiveness of the proposed approach. The study highlights the potential of the approach to enhance the accuracy and robustness of machine SL translation, improve the naturalness of human-computer interaction, and facilitate the social adaptation of people with hearing impairments. The paper also suggests a promising direction for future research: applying the approach to other SLs and investigating the impact of individual and cultural differences on GR.
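Cross-language transfer learning of the kind this abstract describes typically reuses a feature extractor trained on the source SL and fits only a small classifier on the target SL's scarce labels. A toy numpy sketch under that assumption (the "backbone" is a fixed random projection standing in for pretrained visual features; class counts, dimensions, and data are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)

def train_softmax_head(feats, labels, n_classes, lr=0.5, steps=200):
    # Train only a linear classification head on top of frozen features.
    W = np.zeros((feats.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * feats.T @ (p - onehot) / len(feats)   # cross-entropy gradient
    return W

# "Backbone" pretrained on the source SL: a frozen random projection here.
backbone = rng.normal(size=(20, 16))
extract = lambda x: np.tanh(x @ backbone)

# Small labelled set from the target SL (10 samples per gesture, 3 gestures).
centers = rng.normal(size=(3, 20)) * 3
X = np.vstack([c + rng.normal(size=(10, 20)) for c in centers])
y = np.repeat(np.arange(3), 10)

W = train_softmax_head(extract(X), y, n_classes=3)
pred = (extract(X) @ W).argmax(axis=1)
print((pred == y).mean())   # training accuracy of the transferred head
```

Because only the head is trained, the few target-language labels go much further than they would when training a full model from scratch.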
Crosslinguistic influence and crosslinguistic interaction in multilingual language learning
"Which strategies do multilingual learners use when confronted with languages they don't yet know? Which factors are involved in activating prior linguistic knowledge in multilingual learning? This volume offers valuable insights into recent research in multilingualism, crosslinguistic influence and crosslinguistic interaction. Experts in the field examine the role of background languages in multilingual learning. All the chapters point to the heart of the question of what the "multilingual mind" is. Does learning one language actually help you learn another, and if so, why? This volume looks at languages and scenarios beyond English as a second language: Italian, Gaelic, Dutch and German, amongst others, are covered, as well as instances of third and additional language learning. Research into crosslinguistic influence and crosslinguistic interaction essentially contributes to our understanding of how language learning works when there are three or more languages in contact"-- Provided by publisher.
An AI based cross‐language aspect‐level sentiment analysis model using English corpus
Accurate cross-language aspect-level sentiment analysis methods can provide reliable decision support for social networks, e-commerce platforms, and other platforms, thereby providing users with higher-quality services. However, real data is very complex and contains a large amount of redundant information, and existing methods struggle to extract semantic association information and deep emotional features from it. To address these issues, an aspect-level sentiment analysis model (called Multi-XLNet-RCNN) is proposed that integrates multi-channel XLNet and RCNN. First, a multi-channel XLNet (Multi-XLNet) network model is used to perform autoregressive encoding on different languages, fully extracting contextual information from the text and better characterizing its ambiguity. Then, in the RCNN module, the contextual features output by the BiGRU layer are concatenated with the pre-trained input features to extract deeper emotional features. Finally, to handle inconsistent aspect-level information in sentence features extracted from different language channels, a multi-head attention mechanism based on aspect-class interaction is used to obtain a text attention emotion representation for a given aspect, thereby improving the accuracy of aspect-level emotion classification. The experiments use the public English corpus provided by SemEval 2016 as the source language, and Chinese review data from the Dianping and JD e-commerce platforms as the target language. The experimental results show that the proposed Multi-XLNet-RCNN method achieves accurate aspect-level sentiment analysis, with accuracy rates on the Dianping and JD datasets as high as 0.851 and 0.792, respectively, outperforming other advanced comparison models. The model has good application value in cross-language analysis of social networks and e-commerce platforms.
First, a multi-channel XLNet (Multi-XLNet) model is used to extract contextual information from the text. Then, in the RCNN module, the contextual features output by the forward-and-backward GRU (BiGRU) layer are used to extract deeper emotional features. Finally, the multi-head attention mechanism obtains the text attention emotion representation.
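The multi-head attention step described above splits features into several heads, attends within each head, and concatenates the results. A minimal numpy sketch with random stand-in projection weights (dimensions are illustrative; this is generic scaled dot-product attention, not the paper's trained aspect-class module):

```python
import numpy as np

def multi_head_attention(x, n_heads, rng):
    # x: (seq_len, d_model) token features; weights are random stand-ins
    # for the learned query/key/value/output projections.
    seq, d = x.shape
    dh = d // n_heads
    Wq, Wk, Wv, Wo = (rng.normal(scale=d ** -0.5, size=(d, d)) for _ in range(4))
    # Project, then split the model dimension into (heads, head_dim).
    q = (x @ Wq).reshape(seq, n_heads, dh).transpose(1, 0, 2)
    k = (x @ Wk).reshape(seq, n_heads, dh).transpose(1, 0, 2)
    v = (x @ Wv).reshape(seq, n_heads, dh).transpose(1, 0, 2)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(dh)        # (heads, seq, seq)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)               # row-wise softmax
    out = (attn @ v).transpose(1, 0, 2).reshape(seq, d)    # concatenate heads
    return out @ Wo

rng = np.random.default_rng(0)
tokens = rng.normal(size=(6, 32))   # 6 tokens, model dimension 32
out = multi_head_attention(tokens, n_heads=4, rng=rng)
print(out.shape)
```

Each head attends over the full sequence in its own subspace, which is what lets the model reconcile aspect information coming from different language channels.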
Cross-language few-shot intent recognition via prompt-based tuning
Cross-language intent recognition is a fundamental task in cross-language understanding. Recently, this task has been addressed with multilingual pretrained language models. Existing approaches typically augment pretrained language models with additional data, such as annotated parallel corpora; however, such data is scarce in practice, especially for low-resource languages. Inspired by recent results in prompt learning, this paper proposes a new framework for cross-language few-shot intent recognition based on prompt tuning (CIRP). The proposed method converts the cross-language intent recognition task into a masked language modelling problem by designing prompt templates. To make the model more generalizable and to avoid templates and label words that depend on a specific language, the method encodes the prompt templates into language-independent embedding representations via multilingual pretrained language models, and initializes the label words as soft label words by averaging the [mask] vector values from different utterances with the same label. This reduces the distance between the label word embeddings and the encoder outputs at the [mask] position, increasing the accuracy of cross-language intent recognition. The experimental results on the few-shot cross-language MultiATIS++ and MIvD benchmark datasets show that, compared with four baseline models, CIRP performs remarkably well in terms of intent recognition accuracy. Notably, when the sample sizes are set to 1 and 8 shots, the cross-language intent recognition accuracy improves by an average of 11.75% over the baseline models.
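The soft-label-word initialization described above can be sketched in a few lines: average the encoder outputs at the [mask] position over utterances sharing a label, then score a new utterance by its proximity to those averaged label embeddings. All vectors below are random stand-ins for real multilingual encoder outputs, with an artificial per-class offset so the classes are separable:

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, dim = 3, 16

# [mask]-position encoder outputs for 8 labelled utterances per intent
# (random stand-ins; each class is shifted so the toy data is separable).
support = {c: rng.normal(size=(8, dim)) + c * 2.0 for c in range(n_classes)}

# Soft label words: the mean [mask] vector of utterances sharing a label.
label_embs = np.stack([support[c].mean(axis=0) for c in range(n_classes)])

def classify(mask_vec):
    # Predict the intent whose soft label word is closest to the [mask] output.
    return int(np.argmin(np.linalg.norm(label_embs - mask_vec, axis=1)))

target_class = 2
query = rng.normal(size=dim) + target_class * 2.0   # a held-out utterance
print(classify(query))
```

Initializing label words this way keeps them close to the encoder's [mask] outputs from the start, which is the distance-reduction effect the abstract credits for the accuracy gain.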