Catalogue Search | MBRL
99 result(s) for "automated error detection"
Discovering the Potential of Automated Phraseological Interference Error Detection: A Transformer-Based Approach
2026
Formulaic language … may comprise strings of letters, words, sounds, or other elements, contiguous or non-contiguous, of any length, size, frequency, degree of compositionality, literality/figurativeness, abstractness and complexity, not necessarily assumed to be stored, retrieved or processed whole, but that necessarily enjoy a degree of conventionality or familiarity among … speakers of a language community … and that hold a strong relationship in communicating meaning.

Errors of this type are typically addressed by EFL teachers, but due to globalization and ever-increasing intercultural and international cooperation, the number of EFL students is growing exponentially, with many relying on self-learning. [...] they would benefit greatly from a tool that could highlight errors resulting from L1 interference.

Transformer fine-tuning principles and capabilities in error detection: Before examining the data collected and the experiments conducted, we explain the basics of the mechanisms underlying neural networks, training, fine-tuning and the conditions required for this process to work, in addition to referring to the existing research on grammatical error detection and the correction process. [...] the numerical representation made by the last layer is converted into human-readable data which is compared to the expected result (e.g., a correct class, a correct answer to the question, etc.).
Journal Article
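The abstract above describes the final step of fine-tuning evaluation: the model's last-layer output is converted into a human-readable label and compared to the expected result. A minimal sketch of that conversion, with invented logits and label names (the paper's actual classes are not given here):

```python
import math

# Hypothetical two-class setup for interference-error detection;
# the label names and logit values are invented for illustration.
LABELS = ["correct", "interference_error"]

def softmax(logits):
    """Turn raw final-layer logits into probabilities."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    """Map the highest-probability index back to its label name."""
    probs = softmax(logits)
    return LABELS[probs.index(max(probs))]

# During fine-tuning this prediction would be compared to the gold
# label and the loss backpropagated; here we only show the comparison.
prediction = predict([0.3, 2.1])
expected = "interference_error"
print(prediction == expected)  # True: the model got this example right
```

In a real fine-tuning loop the comparison feeds a cross-entropy loss rather than a boolean, but the label-decoding step is the same.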
NxRepair: error correction in de novo sequence assembly using Nextera mate pairs
by
O’Connell, Jared
,
Cox, Anthony J.
,
Schulz-Trieglaff, Ole
in
Algorithms
,
Bioinformatics
,
Computational Science
2015
Scaffolding errors and incorrect repeat disambiguation during de novo assembly can result in large scale misassemblies in draft genomes. Nextera mate pair sequencing data provide additional information to resolve assembly ambiguities during scaffolding. Here, we introduce NxRepair, an open source toolkit for error correction in de novo assemblies that uses Nextera mate pair libraries to identify and correct large-scale errors. We show that NxRepair can identify and correct large scaffolding errors, without use of a reference sequence, resulting in quantitative improvements in the assembly quality. NxRepair can be downloaded from GitHub or PyPI, the Python Package Index; a tutorial and user documentation are also available.
Journal Article
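The core signal NxRepair exploits is that mate pairs spanning a correctly assembled region should have insert sizes near the library mean, while a misjoin distorts them. The following is a simplified sketch of that idea, not NxRepair's actual algorithm; the window layout, function names, and numbers are invented:

```python
import statistics

def flag_windows(insert_sizes_per_window, lib_mean, lib_sd, z_cutoff=3.0):
    """Flag assembly windows whose spanning mate-pair insert sizes
    deviate strongly from the library distribution (z-test on the
    window mean). Returns the start coordinates of flagged windows."""
    flagged = []
    for start, sizes in insert_sizes_per_window.items():
        window_mean = statistics.mean(sizes)
        z = (window_mean - lib_mean) / (lib_sd / len(sizes) ** 0.5)
        if abs(z) > z_cutoff:
            flagged.append(start)  # candidate scaffolding error
    return flagged

windows = {
    0:     [2950, 3050, 3000, 2980],   # consistent with the library
    10000: [8100, 7900, 8050, 7950],   # far too large: likely misjoin
}
print(flag_windows(windows, lib_mean=3000, lib_sd=300))  # [10000]
```

The real toolkit additionally handles read orientation, coverage, and correction of the flagged regions.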
Investigating Co-Occurrence Patterns of Learners' Grammatical Errors Across Proficiency Levels and Essay Topics Based on Association Analysis
by
Ishii, Yutaka
in
automated grammatical error detection
,
educational data mining
,
error analysis
2016
This chapter is based on the framework of educational data mining (EDM). It focuses on the relationship between essay topics and co-occurrence patterns of learners' grammatical errors. Data mining techniques shed light on learners' hidden patterns of associated grammatical errors. Investigating learners' grammatical errors is an important area in language teaching. In the past, this research was conducted only within language teaching; in recent years, however, it has also been conducted in natural language processing, for example in research on automated scoring of learners' writing or speaking and on automated grammatical error detection. The subject 'English writing' was introduced to Japanese high schools after the curriculum was revised in 1989. Error analysis (EA) has been conducted since the 1950s because learners' errors can be considered a benchmark for proficiency in a language.
Book Chapter
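Association analysis of error co-occurrence, as described above, rests on two standard measures: the support of an error set and the confidence of a rule "error A ⇒ error B". A minimal sketch with invented error tags and essays (not the chapter's data):

```python
# Each essay is represented as the set of grammatical error types it
# contains; the tags and essays below are illustrative only.
essays = [
    {"article", "preposition"},
    {"article", "preposition", "agreement"},
    {"article"},
    {"preposition", "agreement"},
]

def support(itemset, transactions):
    """Fraction of essays containing every error in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Of essays with the antecedent errors, the fraction that also
    contain the consequent errors."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

print(support({"article", "preposition"}, essays))        # 0.5
print(confidence({"article"}, {"preposition"}, essays))   # ~0.667
```

Rules with high support and confidence across essays at a given proficiency level are the "hidden association patterns" the chapter mines.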
The ALICE System: A Workbench for Learning and Using Language
by
Levin, Lori S.
,
Gates, Donna M.
,
Evans, David A.
in
ALICE Project
,
Allomorphs
,
Carnegie Mellon University PA
1991
ALICE is a multi-media framework for ICALI programs that is being developed at Carnegie Mellon University. It is not a single instructional program, but rather a set of tools for building a number of different types of ICALI programs in any language. The central components of ALICE are (1) a set of Natural Language Processing (NLP) tools for syntactic error detection, morphological analysis, and generation of morphological paradigms, (2) a set of on-line text, video, and audio corpora that serve as sources of realistic, in-context examples, and (3) an authoring language that allows teachers to configure the NLP tools and excerpts from the corpora into ICALI programs. This paper describes the NLP components of ALICE and the role of excerpts from corpora in treating student errors.
Journal Article
Making inference with messy (citizen science) data
by
Townsend, Philip A.
,
Martin, Karl J.
,
Anhalt-Depies, Christine
in
Accuracy
,
Algorithms
,
automated classification
2019
Measurement or observation error is common in ecological data: as citizen scientists and automated algorithms play larger roles processing growing volumes of data to address problems at large scales, concerns about data quality and strategies for improving it have received greater focus. However, practical guidance pertaining to fundamental data quality questions for data users or managers—how accurate do data need to be and what is the best or most efficient way to improve it?—remains limited. We present a generalizable framework for evaluating data quality and identifying remediation practices, and demonstrate the framework using trail camera images classified using crowdsourcing to determine acceptable rates of misclassification and identify optimal remediation strategies for analysis using occupancy models. We used expert validation to estimate baseline classification accuracy and simulation to determine the sensitivity of two occupancy estimators (standard and false-positive extensions) to different empirical misclassification rates. We used regression techniques to identify important predictors of misclassification and prioritize remediation strategies. More than 93% of images were accurately classified, but simulation results suggested that most species were not identified accurately enough to permit distribution estimation at our predefined threshold for accuracy (<5% absolute bias). A model developed to screen incorrect classifications predicted misclassified images with >97% accuracy: enough to meet our accuracy threshold. Occupancy models that accounted for false-positive error provided even more accurate inference even at high rates of misclassification (30%). As simulation suggested occupancy models were less sensitive to additional false-negative error, screening models or fitting occupancy models accounting for false-positive error emerged as efficient data remediation solutions. 
Combining simulation-based sensitivity analysis with empirical estimation of baseline error and its variability allows users and managers of potentially error-prone data to identify and fix problematic data more efficiently. It may be particularly helpful for “big data” efforts dependent upon citizen scientists or automated classification algorithms with many downstream users, but given the ubiquity of observation or measurement error, even conventional studies may benefit from focusing more attention upon data quality.
Journal Article
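The sensitivity result above — that false-positive misclassification biases naive occupancy upward — can be illustrated with a back-of-the-envelope calculation. This is a simplified sketch in the spirit of the paper's simulations, not its actual estimator; all rates below are invented:

```python
def naive_occupancy(psi_true, p_detect, p_false, n_surveys):
    """Expected fraction of sites that *appear* occupied (>= 1 detection
    across n_surveys), given true occupancy psi_true, per-survey true
    detection probability p_detect, and per-survey false-positive rate
    p_false. Simplification: false positives at occupied sites ignored."""
    p_occ_seen = 1 - (1 - p_detect) ** n_surveys
    # An unoccupied site looks occupied if any survey yields a false positive.
    p_unocc_seen = 1 - (1 - p_false) ** n_surveys
    return psi_true * p_occ_seen + (1 - psi_true) * p_unocc_seen

unbiased = naive_occupancy(0.4, 0.5, 0.0, 4)   # no misclassification
biased = naive_occupancy(0.4, 0.5, 0.3, 4)     # 30% false-positive rate
print(round(unbiased, 3), round(biased, 3))    # 0.375 vs 0.831
```

Even moderate false-positive rates inflate apparent occupancy dramatically, which is why the paper's false-positive occupancy extensions and screening models matter.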
Aircraft Skin Damage Visual Testing System Using Lightweight Devices with YOLO: An Automated Real-Time Material Evaluation System
2024
Inspection and material evaluation are some of the critical factors to ensure the structural integrity and safety of an aircraft in the aviation industry. These inspections are carried out by trained personnel, and while effective, they are prone to human error, where even a minute error could result in a large-scale negative impact. Automated detection devices designed to improve the reliability of inspections could help the industry reduce the potential effects caused by human error. This study aims to develop a system that can automatically detect and identify defects on aircraft skin using relatively lightweight devices, including mobile phones and unmanned aerial vehicles (UAVs). The study combines an internet of things (IoT) network, allowing the results to be reviewed in real time, regardless of distance. The experimental results confirmed the effective recognition of defects with the mean average precision (mAP@0.5) at 0.853 for YOLOv9c for all classes. However, despite the effective detection, the test device (mobile phone) was prone to overheating, significantly reducing its performance. While there is still room for further enhancements, this study demonstrates the potential of introducing automated image detection technology to assist the inspection process in the aviation industry.
Journal Article
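The mAP@0.5 figure cited above counts a predicted box as a true positive when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal IoU sketch with invented box coordinates (x1, y1, x2, y2); this is the standard definition, not code from the study:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

pred  = (10, 10, 50, 50)   # hypothetical detected skin-defect region
truth = (12, 12, 48, 52)   # hypothetical annotated ground truth
print(iou(pred, truth) >= 0.5)  # True: counts toward mAP@0.5
```

Averaging precision over recall levels per class, then over classes, yields the reported mAP.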
Protection and Analysis of Intangible Cultural Heritage Videos Based on Keyframe Extraction and Adaptive Weight Assignment
2025
To preserve intangible cultural heritage digitally and effectively manage and analyze intangible cultural heritage video data, the research employs target recognition algorithms and keyframe extraction to perform video extraction and analysis. The keyframe extraction and target detection model is constructed with the help of shot boundary detection, a feature pyramid network, and an attention mechanism. The experimental results revealed that the designed keyframe extraction model outperformed all the other methods, achieving an accuracy rate of 0.996, a recall rate of 0.984, and an F1 score of 0.936 on the dataset used in the study. The model's average keyframe redundancy was 0.02, and the missed and false detection rates were both below 0.25, indicating a strong ability to recognize key content in videos. Meanwhile, the model's performance changed little when tested with added random noise perturbation, demonstrating good robustness and generalization ability. The detection error converged to a minimum of 0.126, and prediction box generation accuracy reached up to 0.834, a 41.57% improvement. In processing intangible cultural heritage videos, the miss rate and false positive rate for target objects were as low as 0.20. Through keyframe extraction and target detection, the study realizes effective protection and analysis of intangible cultural heritage videos and promotes their inheritance and dissemination.
Journal Article
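Shot boundary detection, the first stage of the keyframe pipeline described above, can be illustrated in its simplest form: a large jump in frame-to-frame difference suggests a cut, and the first frame of the new shot is a keyframe candidate. A toy sketch (the real model uses a feature pyramid network and attention, not raw pixel differences); the "frames" here are invented 1-D intensity vectors standing in for images:

```python
def shot_boundaries(frames, threshold):
    """Return indices where the absolute frame-to-frame difference
    exceeds the threshold, i.e. candidate shot cuts."""
    boundaries = []
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i], frames[i - 1]))
        if diff > threshold:
            boundaries.append(i)
    return boundaries

frames = [
    [10, 10, 10], [11, 10, 9],    # same shot, tiny change
    [90, 85, 88],                 # abrupt cut: new shot starts here
    [91, 86, 87],
]
print(shot_boundaries(frames, threshold=50))  # [2]
```

Learned features replace the raw difference in practice, but the thresholding structure is the same.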
The Automated Assessment of Postural Stability: Balance Detection Algorithm
2017
Impaired balance is a common indicator of mild traumatic brain injury, concussion and musculoskeletal injury. Given the clinical relevance of such injuries, especially in military settings, it is paramount to develop more accurate and reliable on-field evaluation tools. This work presents the design and implementation of the automated assessment of postural stability (AAPS) system, for on-field evaluations following concussion. The AAPS is a computer system, based on inexpensive off-the-shelf components and custom software, that aims to automatically and reliably evaluate balance deficits, by replicating a known on-field clinical test, namely, the Balance Error Scoring System (BESS). The AAPS main innovation is its balance error detection algorithm that has been designed to acquire data from a Microsoft Kinect® sensor and convert them into clinically-relevant BESS scores, using the same detection criteria defined by the original BESS test. In order to assess the AAPS balance evaluation capability, a total of 15 healthy subjects (7 male, 8 female) were required to perform the BESS test, while simultaneously being tracked by a Kinect 2.0 sensor and a professional-grade motion capture system (Qualisys AB, Gothenburg, Sweden). High definition videos with BESS trials were scored off-line by three experienced observers for reference scores. AAPS performance was assessed by comparing the AAPS automated scores to those derived by three experienced observers. Our results show that the AAPS error detection algorithm presented here can accurately and precisely detect balance deficits with performance levels that are comparable to those of experienced medical personnel. Specifically, agreement levels between the AAPS algorithm and the human average BESS scores ranging between 87.9% (single-leg on foam) and 99.8% (double-leg on firm ground) were detected. Moreover, statistically significant differences in balance scores were not detected by an ANOVA test with alpha equal to 0.05. Despite some level of disagreement between human and AAPS-generated scores, the use of an automated system yields important advantages over currently available human-based alternatives. These results underscore the value of using the AAPS, which can be quickly deployed in the field and/or in outdoor settings with minimal set-up time. Finally, the AAPS can record multiple error types and their time course with extremely high temporal resolution. These features are not achievable by humans, who cannot keep track of multiple balance errors with such a high resolution. Together, these results suggest that computerized BESS calculation may provide more accurate and consistent measures of balance than those derived from human experts.
Journal Article
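One simple way to quantify agreement between automated and human BESS error counts — not necessarily the exact statistic used in the study above — is per-trial agreement of 1 − |auto − human| / max(auto, human), averaged over trials. A sketch with invented per-trial counts:

```python
def mean_agreement(auto_scores, human_scores):
    """Average per-trial agreement between two lists of BESS error
    counts; identical counts (including 0 vs 0) score 1.0."""
    total = 0.0
    for a, h in zip(auto_scores, human_scores):
        total += 1.0 if a == h == 0 else 1 - abs(a - h) / max(a, h)
    return total / len(auto_scores)

auto  = [3, 5, 0, 8]   # hypothetical automated per-trial error counts
human = [3, 4, 0, 8]   # hypothetical averaged human reference counts
print(round(mean_agreement(auto, human), 3))  # 0.95
```

Reported agreement in the paper varies by stance (87.9% single-leg on foam to 99.8% double-leg on firm ground), consistent with harder stances producing more errors to disagree on.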
Application of BERT-Based Japanese Writing Intelligent Grading System in Blended Teaching
2026
Japanese writing instruction in foreign language education continues to face challenges such as low correction efficiency, limited error identification, and insufficient personalized feedback. This study examines the application of a BERT-based intelligent grading system within a blended teaching framework to address these issues. The research explores three key questions: (1) how BERT can be leveraged for automatic detection of grammatical, spelling, and sentence structure errors in Japanese writing; (2) how the system can be integrated into blended teaching; and (3) what measurable impact it has on student writing outcomes. We developed a BERT-based encoder–decoder model and conducted a controlled experiment involving an experimental group (n = 150) using the system and a control group (n = 150) relying on manual grading. The results showed that the experimental group achieved higher writing accuracy (89.3 vs. 79.5), improved logical coherence (4.4 vs. 3.7 on a 5-point rubric), and faster feedback (average 4.8 minutes vs. 26 minutes). The system also achieved a grammar error detection F1-score of 84.4%, outperforming traditional RNN and Transformer models. Despite its strengths, limitations persist in addressing discourse-level coherence and context-sensitive semantics. This study offers empirical evidence for integrating deep learning with pedagogy, providing a scalable and effective approach to enhancing writing instruction in second language education.
Journal Article
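The 84.4% grammar-error-detection F1-score reported above is the harmonic mean of precision and recall. A minimal sketch of the computation from invented true-positive, false-positive, and false-negative counts (the study's actual counts are not given here):

```python
def f1(tp, fp, fn):
    """F1-score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)   # flagged errors that were real
    recall = tp / (tp + fn)      # real errors that were flagged
    return 2 * precision * recall / (precision + recall)

# e.g. 840 correctly flagged errors, 170 spurious flags, 140 missed errors
print(round(f1(840, 170, 140), 3))  # 0.844
```

F1 is preferred over raw accuracy for error detection because correct tokens vastly outnumber erroneous ones, so a model that flags nothing would still score high on accuracy.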