Search Results
70,274 results for "Reading comprehension."
ScienceQA: a novel resource for question answering on scholarly articles
Machine Reading Comprehension (MRC) of a document is a challenging problem that requires discourse-level understanding. Information extraction from scholarly articles is now a critical use case for researchers who need to understand the underlying research quickly and move forward, especially in this age of infodemic. MRC on research articles can also provide helpful information to reviewers and editors. However, the main bottleneck in building such models is the availability of human-annotated data. In this paper, we first introduce a dataset to facilitate question answering (QA) on scientific articles. We prepare the dataset in a semi-automated fashion, yielding more than 100k human-annotated context–question–answer triples. Second, we implement a baseline QA model based on Bidirectional Encoder Representations from Transformers (BERT). Additionally, we implement two further models: the first based on Science BERT (SciBERT), and the second a combination of SciBERT and Bi-Directional Attention Flow (Bi-DAF). The best model (i.e., SciBERT) obtains an F1 score of 75.46%. Our dataset is novel, and our work opens up a new avenue for scholarly document processing research by providing a benchmark QA dataset and a standard baseline. We make our dataset and code available at https://github.com/TanikSaikh/Scientific-Question-Answering.
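For readers wanting a concrete picture of the extractive-QA setup this abstract describes (a BERT-family encoder predicting an answer span, scored with token-overlap F1), the sketch below uses the Hugging Face question-answering pipeline. The checkpoint name, example passage, question, and gold answer are illustrative assumptions, not the authors' released model or data.

```python
# A minimal sketch (not the authors' code) of span-extraction QA with a
# BERT-family encoder, plus the token-overlap F1 used to score answers.
from collections import Counter
from transformers import pipeline

def span_f1(prediction: str, gold: str) -> float:
    """SQuAD-style token-overlap F1 between a predicted and a gold answer."""
    pred_toks, gold_toks = prediction.lower().split(), gold.lower().split()
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_toks), overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

# Any SQuAD-style checkpoint illustrates the setup; the paper instead
# fine-tunes BERT/SciBERT on its own context-question-answer triples.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

context = ("Machine reading comprehension models answer a question by "
           "locating an answer span inside a given passage of text.")
pred = qa(question="What do MRC models locate?", context=context)["answer"]
print(pred, span_f1(pred, "an answer span"))
```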
Ameliorating Children's Reading-Comprehension Difficulties: A Randomized Controlled Trial
Children with specific reading-comprehension difficulties can read accurately, but they have poor comprehension. In a randomized controlled trial, we examined the efficacy of three interventions designed to improve such children's reading comprehension: text-comprehension (TC) training, oral-language (OL) training, and TC and OL training combined (COM). Children were assessed preintervention, midintervention, postintervention, and at an 11-month follow-up. All intervention groups made significant improvements in reading comprehension relative to an untreated control group. Although these gains were maintained at follow-up in the TC and COM groups, the OL group made greater gains than the other groups did between the end of the intervention and follow-up. The OL and COM groups also demonstrated significant improvements in expressive vocabulary compared with the control group, and this was a mediator of the improved reading comprehension of the OL and COM groups. We conclude that specific reading-comprehension difficulties reflect (at least partly) underlying oral-language weaknesses that can be effectively ameliorated by suitable teaching.
My weird reading tips
Presents a guide to reading critically, offering tips and activities to improve reading comprehension and covering such topics as point of view, context clues, rhyme schemes, and separating fact from fiction.
Neural Machine Reading Comprehension: Methods and Trends
Machine reading comprehension (MRC), which requires a machine to answer questions based on a given context, has attracted increasing attention with the incorporation of various deep-learning techniques over the past few years. Although research on MRC based on deep learning is flourishing, there remains a lack of a comprehensive survey summarizing existing approaches and recent trends, which motivated the work presented in this article. Specifically, we give a thorough review of this research field, covering different aspects including (1) typical MRC tasks: their definitions, differences, and representative datasets; (2) the general architecture of neural MRC: the main modules and prevalent approaches to each; and (3) new trends: some emerging areas in neural MRC as well as the corresponding challenges. Finally, considering what has been achieved so far, the survey also envisages what the future may hold by discussing the open issues left to be addressed.
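To make the "general architecture of neural MRC" the survey describes more concrete, here is a toy, untrained PyTorch sketch of that module sequence: word embedding, contextual encoding, context–question attention, and answer-span prediction. All layer sizes, names, and inputs are illustrative assumptions rather than any specific published model.

```python
# A compact, untrained sketch of the canonical neural MRC reader:
# embedding -> contextual encoding -> context-question attention ->
# answer-span prediction. Dimensions and names are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyReader(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)      # embedding module
        self.enc = nn.LSTM(emb_dim, hidden, batch_first=True,
                           bidirectional=True)               # contextual encoder
        self.start = nn.Linear(4 * hidden, 1)                # span-start head
        self.end = nn.Linear(4 * hidden, 1)                  # span-end head

    def forward(self, context_ids, question_ids):
        c, _ = self.enc(self.embed(context_ids))    # (B, Tc, 2H)
        q, _ = self.enc(self.embed(question_ids))   # (B, Tq, 2H)
        # Context-to-question attention: each context token attends over
        # the question and collects a question-aware summary.
        scores = torch.bmm(c, q.transpose(1, 2))              # (B, Tc, Tq)
        attended_q = torch.bmm(F.softmax(scores, dim=-1), q)  # (B, Tc, 2H)
        fused = torch.cat([c, attended_q], dim=-1)            # (B, Tc, 4H)
        return self.start(fused).squeeze(-1), self.end(fused).squeeze(-1)

reader = TinyReader()
ctx = torch.randint(0, 1000, (1, 30))    # toy token ids for a passage
qst = torch.randint(0, 1000, (1, 8))     # toy token ids for a question
start_logits, end_logits = reader(ctx, qst)
print(start_logits.argmax(-1).item(), end_logits.argmax(-1).item())
```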
A Survey on Machine Reading Comprehension Systems
Machine Reading Comprehension (MRC) is a challenging task and a hot topic in Natural Language Processing. The goal of this field is to develop systems for answering questions about a given context. In this paper, we present a comprehensive survey on diverse aspects of MRC systems, including their approaches, structures, inputs/outputs, and research novelties. We illustrate the recent trends in this field based on a review of 241 papers published during 2016–2020. Our investigation demonstrates that the focus of research has shifted in recent years from answer extraction to answer generation, from single- to multi-document reading comprehension, and from learning from scratch to using pre-trained word vectors. Moreover, we discuss the popular datasets and the evaluation metrics in this field. The paper ends with an investigation of the most-cited papers and their contributions.
Exploring unanswerability in machine reading comprehension: approaches, benchmarks, and open challenges
The challenge of unanswerable questions in Machine Reading Comprehension (MRC) has drawn considerable attention, as current MRC systems are typically designed under the assumption that every question has a valid answer within the provided context. However, these systems often encounter real-world situations where no valid answer is available. This paper provides a comprehensive review of existing methods for addressing unanswerable questions in MRC systems, categorizing them into model-agnostic and model-specific approaches. It explores key strategies, examines relevant datasets, and evaluates commonly used metrics. This work aims to provide a comprehensive understanding of current techniques and identify critical gaps in the field, offering insights and key challenges to direct future research toward developing more robust MRC systems capable of handling unanswerable questions.
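One common model-agnostic strategy in this space is to score a "no answer" option against the best span and abstain when the null option wins, as SQuAD 2.0-style systems do. The sketch below illustrates that idea with the Hugging Face pipeline's handle_impossible_answer flag; the checkpoint name and questions are illustrative assumptions, not material from the paper.

```python
# A minimal sketch of abstaining on unanswerable questions: a SQuAD 2.0
# style checkpoint compares the best span score with a null ("no answer")
# score and returns an empty answer when the null option wins.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = "The survey categorizes approaches as model-agnostic or model-specific."
for question in ("How does the survey categorize approaches?",
                 "What year was the Eiffel Tower built?"):
    out = qa(question=question, context=context, handle_impossible_answer=True)
    # An empty answer string signals that the model abstained.
    print(question, "->", repr(out["answer"]), round(out["score"], 3))
```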