221 results for "Simplified language"
Classifier Structures in Mandarin Chinese
This monograph addresses fundamental syntactic issues of classifier constructions, based on a thorough study of a typical classifier language, Mandarin Chinese. It shows that the contrast between count and mass is not binary. Instead, there are two independently attested features: Numerability, the ability of a noun to combine with a numeral directly, and Delimitability, the ability of a noun to be modified by a delimitive modifier, such as size, shape, or boundary modifier. Although all nouns in Chinese are non-count nouns, there is still a mass/non-mass contrast, with mass nouns selected by individuating classifiers and non-mass nouns selected by individual classifiers. Some languages have the counterparts of Chinese individuating classifiers only, some languages have the counterparts of Chinese individual classifiers only, and some other languages have no counterpart of either individual or individuating classifiers of Chinese. The book also reports that unit plurality can be expressed by reduplicative classifiers in the language. Moreover, for the constituency of a numeral expression, an individual, individuating, or kind classifier combines with the noun first and then the numeral is integrated; but a partitive or collective classifier, like a measure word, combines with the numeral first, before the noun is integrated into the whole nominal structure. Furthermore, the book identifies the syntactic positions of various uses of classifiers in the language. A classifier is at a functional head position that has a dependency with a numeral, or a position that has a dependency with a generic or existential quantifier, or a position that represents the singular-plural contrast, or a position that licenses a delimitive modifier when the classifier occurs in a compound.
Thinking About Genre
I work in a non-Dewey library that essentially uses a combination of BISAC and simplified language to organize our shelves to create an easily browsable collection. While my area of selection is adult fiction and probably the least different from Dewey libraries, traditional genres can still be a conundrum. It’s not that I don’t know genre conventions—I have fifteen years of library experience, have volunteered for numerous genre award committees, and even write reviews for Library Journal. The puzzle arises from the conflict between
Neural Natural Language Generation: A Survey on Multilinguality, Multimodality, Controllability and Learning
Developing artificial learning systems that can understand and generate natural language has been one of the long-standing goals of artificial intelligence. Recent decades have witnessed impressive progress on both of these problems, giving rise to a new family of approaches. In particular, advances in deep learning over the past several years have led to neural approaches to natural language generation (NLG). These methods combine generative language learning techniques with neural network-based frameworks. With a wide range of applications in natural language processing, neural NLG (NNLG) is a new and fast-growing field of research. In this state-of-the-art report, we investigate the recent developments and applications of NNLG in their full extent from a multidimensional view, covering critical perspectives such as multimodality, multilinguality, controllability, and learning strategies. We summarize the fundamental building blocks of NNLG approaches from these aspects and provide detailed reviews of commonly used preprocessing steps and basic neural architectures. This report also focuses on the seminal applications of these NNLG models, such as machine translation, description generation, automatic speech recognition, abstractive summarization, text simplification, question answering and generation, and dialogue generation. Finally, we conclude with a thorough discussion of the described frameworks by pointing out some open research directions.
Rewordify Review
Rewordify is a useful website that can simplify text in order to increase readability and help students learn new vocabulary. The easiest way to get to grips with it, and to discover what it can do for you, is to use the 'demo' option on the home page. This is an especially helpful feature, as it allows the user to see the various functions in action. The basic idea involves replacing the most difficult words with simpler alternatives, along with additional features such as definitions and read-aloud. Rewordify recognizes that looking up words in a dictionary can be time-consuming and disheartening for students, and can often lead to incorrect or misleading definitions. Instead, its replacements allow instant understanding and offer options that make sense in context, keeping the singular/plural form or verb tense accurate.
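The core idea described above — substituting difficult words with simpler ones while preserving inflection — can be sketched with a small dictionary-based replacer. This is an illustrative toy, not Rewordify's actual algorithm: the `SIMPLER` mapping and the suffix handling are assumptions for demonstration only.

```python
import re

# Hypothetical hard-word -> simple-word mapping (illustrative only).
SIMPLER = {"obtain": "get", "assist": "help", "comprehend": "understand"}

def simplify(text):
    def repl(match):
        word = match.group(0)
        lower = word.lower()
        # Try the word as-is, then strip common inflectional suffixes so
        # that "assisted" maps to "helped" and "obtains" to "gets".
        # (Capitalization, e-drop, and consonant doubling are ignored
        # in this sketch.)
        for suffix in ("", "s", "ed", "ing"):
            if suffix and not lower.endswith(suffix):
                continue
            stem = lower[: len(lower) - len(suffix)] if suffix else lower
            if stem in SIMPLER:
                return SIMPLER[stem] + suffix
        return word

    # Replace each alphabetic token; punctuation is left untouched.
    return re.sub(r"[A-Za-z]+", repl, text)

print(simplify("She assisted him to comprehend the text."))
```

Keeping the suffix attached to the replacement is what preserves number and tense, which is the behavior the review attributes to Rewordify's in-context replacements.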
Imperfect language learning reduces morphological overspecification: Experimental evidence
It is often claimed that languages with more non-native speakers tend to become morphologically simpler, presumably because non-native speakers learn the language imperfectly. A growing number of studies support this claim, but there is a dearth of experiments that evaluate it and the suggested explanatory mechanisms. We performed a large-scale experiment which directly tested whether imperfect language learning simplifies linguistic structure and whether this effect is amplified by iterated learning. Members of 45 transmission chains, each consisting of 10 one-person generations, learned artificial mini-languages and transmitted them to the next generation. Manipulating the learning time showed that when transmission chains contained generations of imperfect learners, the decrease in morphological complexity was more pronounced than when the chains did not contain imperfect learners. The decrease was partial (complexity did not get fully eliminated) and gradual (caused by the accumulation of small simplifying changes). Simplification primarily affected double agent-marking, which is more redundant, arguably more difficult to learn and less salient than other features. The results were not affected by the number of the imperfect-learner generations in the transmission chains. Thus, we provide strong experimental evidence in support of the hypothesis that iterated imperfect learning leads to language simplification.
Improving Patient Communication by Simplifying AI-Generated Dental Radiology Reports With ChatGPT: Comparative Study
Medical reports, particularly radiology findings, are often written for professional communication, making them difficult for patients to understand. This communication barrier can reduce patient engagement and lead to misinterpretation. Artificial intelligence (AI), especially large language models such as ChatGPT, offers new opportunities for simplifying medical documentation to improve patient comprehension. We aimed to evaluate whether AI-generated radiology reports simplified by ChatGPT improve patient understanding, readability, and communication quality compared to original AI-generated reports. In total, 3 versions of radiology reports were created using ChatGPT: an original AI-generated version (text 1), a patient-friendly, simplified version (text 2), and a further simplified and accessibility-optimized version (text 3). A total of 300 patients (n=100, 33.3% per group), excluding patients with medical education, were randomly assigned to review one text version and complete a standardized questionnaire. Readability was assessed using the Flesch Reading Ease (FRE) score and LIX indices. Both simplified texts showed significantly higher readability scores (text 1: FRE score=51.1; text 2: FRE score=55.0; and text 3: FRE score=56.4; P<.001) and lower LIX scores, indicating enhanced clarity. Text 3 had the shortest sentences, had the fewest long words, and scored best on all patient-rated dimensions. Questionnaire results revealed significantly higher ratings for texts 2 and 3 across clarity (P<.001), tone (P<.001), structure, and patient engagement. For example, patients rated the ability to understand findings without help highest for text 3 (mean 1.5, SD 0.7) and lowest for text 1 (mean 3.1, SD 1.4). Both simplified texts significantly improved patients' ability to prepare for clinical conversations and promoted shared decision-making. AI-generated simplification of radiology reports significantly enhances patient comprehension and engagement. 
These findings highlight the potential of ChatGPT as a tool to improve patient-centered communication. While promising, future research should focus on ensuring clinical accuracy and exploring applications across diverse patient populations to support equitable and effective integration of AI in health care communication.
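The study above scores readability with the Flesch Reading Ease (FRE) and LIX indices. Both have standard closed-form definitions: FRE = 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words), and LIX = words/sentences + 100 × (long words/words), where a long word has more than six letters. A minimal sketch, using a rough vowel-group heuristic for syllable counting (real tools use dictionary- or hyphenation-based counts):

```python
import re

def count_syllables(word):
    # Heuristic: count runs of vowels; real syllabification is harder.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    n_sent, n_words = len(sentences), len(words)
    syllables = sum(count_syllables(w) for w in words)
    long_words = sum(1 for w in words if len(w) > 6)
    # Flesch Reading Ease: higher scores mean easier text.
    fre = 206.835 - 1.015 * (n_words / n_sent) - 84.6 * (syllables / n_words)
    # LIX: lower scores mean easier text.
    lix = n_words / n_sent + 100 * long_words / n_words
    return fre, lix

fre, lix = readability("The report was simplified. Patients understood it better.")
```

Note that the study's direction of change matches these formulas: the simplified report versions scored higher on FRE and lower on LIX, i.e., shorter sentences and fewer long, polysyllabic words.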
Simplification of literary and scientific texts to improve reading fluency and comprehension in beginning readers of French
Reading comprehension and fluency are crucial for successful academic learning and achievement. Yet, a rather large percentage of children still have enormous difficulties in understanding a written text at the end of primary school. In this context, the aim of our study was to investigate whether text simplification, a process of reducing text complexity while keeping its meaning unchanged, can improve reading fluency and comprehension for children learning to read. Furthermore, we were interested in finding out whether some readers would benefit more than others from text simplification as a function of their cognitive and language profile. To address these issues, we developed an iBook application for iPads, which allowed us to present normal and simplified versions of informative and narrative texts to 165 children in grade 2. Reading fluency was measured for each sentence, and text comprehension was measured for each text using multiple-choice questions. The results showed that both reading fluency and reading comprehension were significantly better for simplified than for normal texts. Results showed that poor readers and children with weaker cognitive skills (nonverbal intelligence, memory) benefitted to a greater extent from simplification than good readers and children with somewhat stronger cognitive skills.
Easy-read and large language models: on the ethical dimensions of LLM-based text simplification
The production of easy-read and plain language is a challenging task, requiring well-educated experts to write context-dependent simplifications of texts. Therefore, the domain of easy-read and plain language is currently restricted to the bare minimum of necessary information. Thus, even though there is a tendency to broaden the domain of easy-read and plain language, the inaccessibility of a significant amount of textual information excludes the target audience from participation and entertainment and restricts their ability to live autonomously. Large language models can solve a wide variety of natural language tasks, including the simplification of standard-language texts to easy-read or plain language. Moreover, with the rise of generative models like GPT, easy-read and plain language may become applicable to all kinds of natural language texts, making formerly inaccessible information accessible to marginalized groups such as non-native speakers and people with mental disabilities, among others. In this paper, we argue for the feasibility of text simplification and generation in that context, outline the ethical dimensions, and discuss the implications for researchers in the fields of ethics and computer science.