219 result(s) for "Cross-linguistic study"
The development of the Cognitive Assessment for Tagalog Speakers (CATS): A culturally and linguistically tailored test battery for Filipino Americans
Filipino Americans are one of the largest Asian American and Pacific Islander (AAPI) populations in the United States (US). Previous studies suggest that Filipino Americans have one of the highest incidence rates of Alzheimer's disease and related dementias (ADRD) among AAPI subgroups. Despite the expected increase in Filipino Americans with ADRD, no studies to date have validated neuropsychological measures in the United States for speakers of Tagalog, a major language spoken by Filipino Americans. A significant barrier to dementia care and diagnosis is the lack of linguistically and socioculturally appropriate cognitive tasks for Tagalog speakers. To address this need, we developed and piloted the Cognitive Assessment for Tagalog Speakers (CATS), the first neuropsychological battery for the detection of ADRD in Filipino American Tagalog speakers. Based on evidence-based neuropsychological batteries, we adapted and constructed tasks to measure performance across 4 main cognitive domains: visual/verbal memory, visuospatial functioning, speech and language, and frontal/executive functioning. Tasks were developed with a team of bilingual English/Tagalog, bicultural Filipino American/Canadian experts, including a neurologist, speech-language pathologist, linguist, and neuropsychologist. We recruited Tagalog-speaking participants aged 50 and older through social media advertisements and recruitment registries for this cross-sectional study. We present the CATS design and protocol. To date, the CATS battery has been administered to 26 healthy control participants (age 64.5 ± 7.8 years, 18 F/8 M) at an academic institution in Northern California, United States. The development and administration of the CATS battery demonstrated its feasibility but also highlighted the need to consider the effects of bilingualism, language typology, and cultural factors in result interpretation.
The CATS battery provides a mechanism for cognitive assessment of Filipino Americans, a population that has been underrepresented in ADRD research. As we move toward the treatment and cure of ADRD, linguistically and socioculturally appropriate cognitive tests become even more important for equitable care.
Cross-linguistic processing of idioms: The role of cultural familiarity and Construction Grammar in idiom comprehension
This study explored the cross-linguistic processing of idioms among Azerbaijani, Russian, and English speakers, utilizing Construction Grammar (CxG) to examine the impact of cultural familiarity on idiom comprehension. Idioms, as culturally and linguistically embedded expressions, present a unique cognitive challenge that varies with the cultural background of the speaker. Employing comprehension tasks and self-paced reading tasks, this study measured both the accuracy of idiom comprehension and the speed of processing. The findings suggest that culturally specific idioms are comprehended more effectively and processed faster than non-culturally specific idioms across all participant groups. The results have significant implications for educational policymakers, educators, and school leaders, emphasizing the need to consider cultural context in language education. Integrating culturally relevant idioms into teaching strategies could enhance language comprehension and foster more effective intercultural communication.
Assessing ChatGPT as a Medical Consultation Assistant for Chronic Hepatitis B: Cross-Language Study of English and Chinese
Chronic hepatitis B (CHB) imposes substantial economic and social burdens globally. The management of CHB involves intricate monitoring and adherence challenges, particularly in regions like China, where a high prevalence of CHB intersects with health care resource limitations. This study explores the potential of ChatGPT-3.5, an emerging artificial intelligence (AI) assistant, to address these complexities. With notable capabilities in medical education and practice, ChatGPT-3.5's role is examined in managing CHB, particularly in regions with distinct health care landscapes. This study aimed to uncover insights into ChatGPT-3.5's potential and limitations in delivering personalized medical consultation assistance for CHB patients across diverse linguistic contexts. Questions sourced from published guidelines, online CHB communities, and search engines in English and Chinese were refined, translated, and compiled into 96 inquiries. Subsequently, these questions were presented to both ChatGPT-3.5 and ChatGPT-4.0 in independent dialogues. The responses were then evaluated by senior physicians, focusing on informativeness, emotional management, consistency across repeated inquiries, and cautionary statements regarding medical advice. Additionally, a true-or-false questionnaire was employed to further discern the variance in information accuracy for closed questions between ChatGPT-3.5 and ChatGPT-4.0. Over half of the responses (228/370, 61.6%) from ChatGPT-3.5 were considered comprehensive. In contrast, ChatGPT-4.0 exhibited a higher percentage, at 74.5% (172/222; P<.001). Notably, superior performance was evident in English, particularly in terms of informativeness and consistency across repeated queries. However, both models showed deficiencies in emotional management guidance, which appeared in only 3.2% (6/186) of ChatGPT-3.5 responses and 8.1% (15/154) of ChatGPT-4.0 responses (P=.04).
ChatGPT-3.5 included a disclaimer in 10.8% (24/222) of responses, while ChatGPT-4.0 included a disclaimer in 13.1% (29/222) of responses (P=.46). When responding to true-or-false questions, ChatGPT-4.0 achieved an accuracy rate of 93.3% (168/180), significantly surpassing ChatGPT-3.5's accuracy rate of 65.0% (117/180) (P<.001). In this study, ChatGPT demonstrated basic capabilities as a medical consultation assistant for CHB management. The choice of working language for ChatGPT-3.5 was considered a potential factor influencing its performance, particularly in the use of terminology and colloquial language, and this potentially affects its applicability within specific target populations. However, as an updated model, ChatGPT-4.0 exhibits improved information processing capabilities, overcoming the language impact on information accuracy. This suggests that the implications of model advancement on applications need to be considered when selecting large language models as medical consultation assistants. Given that both models performed inadequately in emotional guidance management, this study highlights the importance of providing specific language training and emotional management strategies when deploying ChatGPT for medical purposes. Furthermore, the tendency of these models to use disclaimers in conversations should be further investigated to understand the impact on patients' experiences in practical applications.
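The headline gap on the closed questions (168/180 correct for ChatGPT-4.0 vs 117/180 for ChatGPT-3.5) is large enough that a standard two-proportion z-test reproduces a p-value far below .001. The sketch below is a generic illustration of that comparison, not the study's own analysis code:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test (pooled variance, normal approximation)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Reported true-or-false accuracy: ChatGPT-4.0 168/180 vs ChatGPT-3.5 117/180
z, p = two_proportion_z(168, 180, 117, 180)
print(f"z = {z:.2f}, p = {p:.2g}")
```

With these counts the z statistic is above 6, so the normal approximation comfortably agrees with the reported P<.001.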
HeLP: The Hebrew Lexicon project
Lexicon projects (LPs) are large-scale data resources in different languages that present behavioral results from visual word recognition tasks. Analyses using LP data in multiple languages provide evidence regarding cross-linguistic differences as well as similarities in visual word recognition. Here we present the first LP in a Semitic language: the Hebrew Lexicon Project (HeLP). HeLP assembled lexical decision (LD) responses to 10,000 Hebrew words and nonwords, and naming responses to a subset of 5000 Hebrew words. We used the large-scale HeLP data to estimate the impact of general predictors (lexicality, frequency, word length, orthographic neighborhood density), and Hebrew-specific predictors (Semitic structure, presence of clitics, phonological entropy), on visual word recognition performance. Our results revealed the typical effects of lexicality and frequency obtained in many languages, but a more complex impact of word length and neighborhood density. Considering Hebrew-specific characteristics, HeLP data revealed better recognition of words with a Semitic structure than of words that do not conform to it, and a drop in performance for words comprising clitics. These effects varied, however, across LD and naming tasks. Lastly, a significant inhibitory effect of phonological ambiguity was found in both naming and LD. The implications of these findings for understanding reading in a Semitic language are discussed.
Assessing the Role of Large Language Models Between ChatGPT and DeepSeek in Asthma Education for Bilingual Individuals: Comparative Study
Asthma is a chronic inflammatory airway disease requiring long-term management. Artificial intelligence (AI)-driven tools such as large language models (LLMs) hold potential for enhancing patient education, especially for multilingual populations. However, comparative assessments of LLMs in disease-specific, bilingual health communication are limited. This study aimed to evaluate and compare the performance of two advanced LLMs, ChatGPT-4o (OpenAI) and DeepSeek-v3 (DeepSeek AI), in providing bilingual (English and Chinese) education for patients with asthma, focusing on accuracy, completeness, clinical relevance, and language adaptability. A total of 53 asthma-related questions were collected from real patient inquiries across 8 clinical domains. Each question was posed in both English and Chinese to ChatGPT-4o and DeepSeek-v3. Responses were evaluated using a 7D clinical quality framework (eg, completeness, consensus consistency, and reasoning ability) adapted from Google Health. Three respiratory clinicians performed blinded scoring evaluations. Descriptive statistics and Wilcoxon signed-rank tests were applied to compare performance across domains and against theoretical maximums. Both models demonstrated high overall quality in generating bilingual educational content. DeepSeek-v3 outperformed ChatGPT-4o in completeness and currency, particularly in treatment-related knowledge and symptom interpretation. ChatGPT-4o showed advantages in clarity and accessibility. In English responses, ChatGPT achieved perfect scores across 5 domains, but scored lower in clinical features (mean 3.78, SD 0.16; P=.02), treatment (mean 3.90, SD 0.05; P=.03), and differential diagnosis (mean 3.83, SD 0.29; P=.08). ChatGPT-4o and DeepSeek-v3 each offer distinct strengths for bilingual asthma education. While ChatGPT is more suitable for general health education due to its expressive clarity, DeepSeek provides more up-to-date and comprehensive clinical content.
Both models can serve as effective supplementary tools for patient self-management but cannot replace professional medical advice. Future AI health care systems should enhance clinical reasoning, ensure guideline currency, and integrate human oversight to optimize safety and accuracy.
A simple linguistic approach to the Knobe effect, or the Knobe effect without any vignette
In this paper we propose a simple linguistic approach to the Knobe effect, or the moral asymmetry of intention attribution in general, which simply asks for felicity judgments on the relevant sentences without any vignette at all. Through this approach we were in fact able to reproduce the (quasi-)Knobe effects in different languages (English and Japanese), with large effect sizes. We defend the significance of this simple approach by arguing that our approach and its results not only tell us interesting facts about the concept of intentional action, but also show the existence of the linguistic default, which requires independent investigation. We then argue that, despite Knobe's own recent view of experimental philosophy, there is a legitimate role for the empirical study of concepts in the investigation of cognitive processes in mainstream experimental philosophy, which suggests a broadly supplementary picture of experimental philosophy.
Human Non-linguistic Vocal Repertoire: Call Types and Their Meaning
Recent research on human nonverbal vocalizations has led to considerable progress in our understanding of vocal communication of emotion. However, in contrast to studies of animal vocalizations, this research has focused mainly on the emotional interpretation of such signals. The repertoire of human nonverbal vocalizations as acoustic types, and the mapping between acoustic and emotional categories, thus remain underexplored. In a cross-linguistic naming task (Experiment 1), verbal categorization of 132 authentic (non-acted) human vocalizations by English-, Swedish- and Russian-speaking participants revealed the same major acoustic types: laugh, cry, scream, moan, and possibly roar and sigh. The association between call type and perceived emotion was systematic but non-redundant: listeners associated every call type with a limited, but in some cases relatively wide, range of emotions. The speed and consistency of naming the call type predicted the speed and consistency of inferring the caller’s emotion, suggesting that acoustic and emotional categorizations are closely related. However, participants preferred to name the call type before naming the emotion. Furthermore, nonverbal categorization of the same stimuli in a triad classification task (Experiment 2) was more compatible with classification by call type than by emotion, indicating the former’s greater perceptual salience. These results suggest that acoustic categorization may precede attribution of emotion, highlighting the need to distinguish between the overt form of nonverbal signals and their interpretation by the perceiver. Both within- and between-call acoustic variation can then be modeled explicitly, bringing research on human nonverbal vocalizations more in line with the work on animal communication.
Morphosyntactic production in agrammatic aphasia: A cross-linguistic machine learning approach
Introduction: Recent studies on agrammatic aphasia by Fyndanis et al. (2012, 2017) reported evidence against the cross-linguistic validity of unitary accounts of agrammatic morphosyntactic impairment, such as the Distributed Morphology Hypothesis (DMH) (Wang et al., 2014), the two versions of the Interpretable Features' Impairment Hypothesis (IFIH-1: Fyndanis et al., 2012; IFIH-2: Fyndanis et al., 2018b), and the Tree Pruning Hypothesis (TPH) (Friedmann & Grodzinsky, 1997). However, some of the features/factors emphasized by these accounts (i.e., involvement of inflectional alternations (DMH), involvement of integration processes (IFIH-1), involvement of both integration processes and inflectional alternations (IFIH-2), and position of a morphosyntactic feature/category in the syntactic hierarchy (TPH)) may still play a role in agrammatic morphosyntactic production. These features may act in synergy with other factors in determining the way in which morphosyntactic production is impaired across persons with agrammatic aphasia (PWA) and across languages. Relevant factors may include language-independent and language-specific properties of morphosyntactic categories, as well as subject-specific and task/material-specific variables. The present study addresses which factors determine verb-related morphosyntactic production in PWA and what their relative importance is.
Methods: We collapsed the datasets of the 24 Greek-, German-, and Italian-speaking PWA underlying Fyndanis et al.'s (2017) study, added the data of two more Greek-speaking PWA, and employed machine learning algorithms to analyze the data. The unified dataset consisted of data on subject-verb agreement, time reference (past reference, future reference), grammatical mood (indicative, subjunctive), and polarity (affirmatives, negatives). All items/conditions were represented as clusters of theoretically motivated features: ±involvement of integration processes, ±involvement of inflectional alternations, ±involvement of both integration processes and inflectional alternations, and low/middle/high position in the syntactic hierarchy. We included 14 subject-specific, category-specific, and task/material-specific predictors: Verbal Working Memory (WM), (years of formal) Education, Age, Gender, Mean Length of Utterance in (semi)spontaneous speech (Index 1 of severity of agrammatism), Proportion of Grammatical Sentences in (semi)spontaneous speech (Index 2 of severity of agrammatism), Words per Minute in (semi)spontaneous speech (Index of fluency), Involvement of inflectional alternations, Involvement of integration processes, Involvement of both integration processes and inflectional alternations, Position of a given morphosyntactic category in the syntactic hierarchy (high, middle, low), Item Presentation mode (cross-modal, auditory), Response mode (oral, written), and Language (Greek, German, Italian). Different machine learning models were employed: Random Forest, C5.0 decision tree, RPart, and Support Vector Machine.
Results & Discussion: The Random Forest model outperformed all the other models, achieving the highest accuracy (0.786). As shown in Figure 1, the best predictors of accuracy on tasks tapping morphosyntactic production were the involvement of both integration processes and inflectional alternations (categories involving both were more impaired than categories involving one or neither of them), verbal WM capacity (the greater the WM capacity, the better the morphosyntactic production), and severity of agrammatism (the more severe the agrammatism, the worse the morphosyntactic production).
Results are consistent with IFIH-2 (Fyndanis et al., 2018b) and studies highlighting the role of verbal WM in morphosyntactic production (e.g., Fyndanis et al., 2018a; Kok et al., 2007).
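The abstract's representation of items as clusters of theoretically motivated features, combined with subject-level predictors, amounts to a simple encoding step ahead of a tree ensemble. The sketch below illustrates that encoding; the category names and feature assignments are placeholders for illustration, not the study's actual coding scheme:

```python
# Hypothetical feature coding per morphosyntactic category; the values
# below are illustrative assumptions, not taken from Fyndanis et al.
CATEGORY_FEATURES = {
    "subject_verb_agreement": {"integration": 0, "alternation": 1, "position": "low"},
    "past_reference":         {"integration": 1, "alternation": 1, "position": "middle"},
    "future_reference":       {"integration": 1, "alternation": 1, "position": "middle"},
    "subjunctive_mood":       {"integration": 1, "alternation": 0, "position": "high"},
}

POSITION_CODE = {"low": 0, "middle": 1, "high": 2}

def encode(category, wm_score, severity):
    """Flatten one item into a numeric feature vector for a tree ensemble:
    [integration, alternation, both, hierarchy position, WM, severity]."""
    f = CATEGORY_FEATURES[category]
    both = int(f["integration"] and f["alternation"])  # IFIH-2's key conjunction
    return [f["integration"], f["alternation"], both,
            POSITION_CODE[f["position"]], wm_score, severity]

print(encode("past_reference", wm_score=4.0, severity=0.6))
```

Vectors like these, pooled across participants and languages, are what a Random Forest (or C5.0, RPart, SVM) would be trained on to predict item-level accuracy.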
Recognizing Emotions in a Foreign Language
Expressions of basic emotions (joy, sadness, anger, fear, disgust) can be recognized pan-culturally from the face and it is assumed that these emotions can be recognized from a speaker’s voice, regardless of an individual’s culture or linguistic ability. Here, we compared how monolingual speakers of Argentine Spanish recognize basic emotions from pseudo-utterances (“nonsense speech”) produced in their native language and in three foreign languages (English, German, Arabic). Results indicated that vocal expressions of basic emotions could be decoded in each language condition at accuracy levels exceeding chance, although Spanish listeners performed significantly better overall in their native language (“in-group advantage”). Our findings argue that the ability to understand vocally-expressed emotions in speech is partly independent of linguistic ability and involves universal principles, although this ability is also shaped by linguistic and cultural variables.
Phonological awareness in Hebrew (L1) and English (L2) in normal and disabled readers
The present study examined cross-linguistic relationships between phonological awareness in L1 (Hebrew) and L2 (English) among normal (N = 30) and reading disabled (N = 30) native Hebrew-speaking college students. Further, it tested the effect of two factors on phonological awareness: the lexical status of the stimulus word (real word vs. pseudoword) and the linguistic affiliation of the target phoneme (whether it is within L1 or L2). Three parallel experimental phonological awareness tasks were developed in both languages: phoneme isolation, full segmentation, and phoneme deletion. As expected, the results revealed lower levels of phonological awareness in the L2 than in the L1, and in the reading disabled group than in the normal reader group. The lexical status of the target word was a reliable factor predicting individual differences in phonological awareness in L2. It was also found that the linguistic affiliation of the target phoneme was a reliable factor in predicting L2 phonological awareness performance in both reader groups. The results are discussed within the framework of phonological representation and language-specific linguistic constraints on phonological awareness.