2,441 result(s) for "Computer aided testing"
Standardized evaluation of algorithms for computer-aided diagnosis of dementia based on structural MRI: The CADDementia challenge
Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n=30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org. 
Highlights: • We objectively compared 29 algorithms for computer-aided diagnosis of dementia. • 15 international teams tested their algorithms on a blinded multicenter dataset. • Algorithms combining types of features performed best: the highest AUC was 78.8%.
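The challenge above ranks algorithms by classification accuracy and AUC. Both can be computed from first principles; below is a minimal pure-Python sketch of accuracy and of the rank-statistic definition of binary AUC, the building block behind multi-class (one-vs-rest) AUC. Function names are ours, not from the challenge framework:

```python
def accuracy(y_true, y_pred):
    # fraction of predictions that match the reference diagnosis
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_auc(y_true, scores):
    # AUC as a rank statistic: the probability that a randomly drawn
    # positive case receives a higher score than a randomly drawn
    # negative case (ties count as 0.5)
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separating scorer yields an AUC of 1.0; a scorer that ranks every negative above every positive yields 0.0.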
Two Decades of Artificial Intelligence in Education: Contributors, Collaborations, Research Topics, Challenges, and Future Directions
With the increasing use of Artificial Intelligence (AI) technologies in education, the number of published studies in the field has increased. However, no large-scale reviews have been conducted to comprehensively investigate the various aspects of this field. Based on 4,519 publications from 2000 to 2019, we attempt to fill this gap and identify trends and topics related to AI applications in education (AIEd) using topic-based bibliometrics. Results of the review reveal an increasing interest in using AI for educational purposes from the academic community. The main research topics include intelligent tutoring systems for special education; natural language processing for language education; educational robots for AI education; educational data mining for performance prediction; discourse analysis in computer-supported collaborative learning; neural networks for teaching evaluation; affective computing for learner emotion detection; and recommender systems for personalized learning. We also discuss the challenges and future directions of AIEd.
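At its simplest, the topic-based bibliometrics used in the review above tracks how often topic keywords occur across publication years. A toy sketch of that counting step (the review's actual pipeline is far richer; the function name, record layout, and keywords are illustrative assumptions):

```python
from collections import Counter, defaultdict

def topic_trends(records, keywords):
    # records: list of (year, title) pairs; returns per-year keyword counts
    trends = defaultdict(Counter)
    for year, title in records:
        text = title.lower()
        for kw in keywords:
            if kw in text:
                trends[year][kw] += 1
    return trends
```

Plotting each keyword's yearly counts from such a table is one simple way to surface rising and declining research topics.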
Computer-aided diagnosis of external and middle ear conditions: A machine learning approach
In medicine, a misdiagnosis or the absence of specialists can affect the patient's health, leading to unnecessary tests and increasing the costs of healthcare. In particular, the lack of otolaryngology specialists in third-world countries forces patients to seek medical attention from general practitioners, who might not have enough training and experience to make a correct diagnosis in this field. To tackle this problem, we propose and test a computer-aided system based on machine learning models and image processing techniques for otoscopic examination, as support for a more accurate diagnosis of ear conditions in primary care before specialist referral; in particular, for myringosclerosis, earwax plug, and chronic otitis media. To characterize the tympanic membrane and ear canal for each condition, we implemented three different feature extraction methods: color coherence vector, discrete cosine transform, and filter bank. We also considered three machine learning algorithms to develop the ear condition predictor model: support vector machine (SVM), k-nearest neighbor (k-NN), and decision trees. Our database of 180 patients comprised 720 images as training and validation sets and 160 images as a testing set. We repeatedly trained the learning models on the training dataset and evaluated them on the validation dataset to identify the feature extraction method and learning model that produced the highest validation accuracy. The results showed that the SVM and k-NN presented the best performance, followed by the decision-tree model. Finally, we performed a classification stage (i.e., diagnosis) using the testing data, where the SVM model achieved an average classification accuracy of 93.9%, average sensitivity of 87.8%, average specificity of 95.9%, and average positive predictive value of 87.7%.
These results show that this system could serve general practitioners as a reference for making better decisions in the diagnosis of ear pathologies.
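Of the classifiers compared above, k-NN is the simplest to sketch. A minimal pure-Python version of its prediction step, assuming feature vectors (e.g., DCT or color-coherence features) have already been extracted from the otoscopic images; the toy data and names are illustrative, not from the paper:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label) pairs
    # pick the k training points closest to the query (Euclidean distance)
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    # majority vote among the k nearest labels
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

In a real CAD pipeline the feature vectors would be high-dimensional and k would be tuned on the validation set, but the voting logic is exactly this.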
Deep learning for diabetic retinopathy assessments: a literature review
Diabetic retinopathy (DR) is the most important complication of diabetes. Early diagnosis by performing retinal image analysis helps avoid visual loss or blindness. A computer-aided diagnosis (CAD) system that uses images of the retinal fundus is an effective and efficient technique for the early diagnosis of diabetic retinopathy and helps specialists assess the disease. Many CAD systems have been developed to help in various stages such as segmentation, detection and classification of lesions in fundus images. Initially, researchers applied traditional machine learning (ML) techniques based on feature extraction and selection, followed by classification algorithms. The revolution of deep learning (DL) and its decisive victory over traditional ML methods for various applications motivated researchers to employ it for the diagnosis of DR, and many deep learning-based methods have been introduced. In this article, we review these methods and highlight their pros and cons. We also discuss the difficulties of developing deep learning methods that perform well at diagnosing DR. Our primary goal is to collaborate with experts to develop computer-aided diagnosis systems and test them in various hospital settings with varying image quality. Finally, we highlight the remaining gaps and future research avenues to pursue.
Quantitative analysis of patients with celiac disease by video capsule endoscopy: A deep learning method
Background. Celiac disease is one of the most common diseases in the world. Capsule endoscopy is an alternative way to visualize the entire small intestine without invasive procedures for the patient. It is useful for characterizing celiac disease, but hours are needed to manually analyze the retrospective data of a single patient. Computer-aided quantitative analysis by a deep learning method helps alleviate the workload during analysis of the retrospective videos. Method. Capsule endoscopy clips from 6 celiac disease patients and 5 controls were preprocessed for training. Frames with a large field of opaque extraluminal fluid or air bubbles were removed automatically using a pre-selection algorithm. The frames were then cropped and intensity-corrected prior to frame rotation in the proposed new method. GoogLeNet was trained with these frames. Capsule endoscopy clips from 5 additional celiac disease patients and 5 additional control patients were then used for testing. The trained GoogLeNet was able to distinguish frames from capsule endoscopy clips of celiac disease patients vs. controls. A quantitative measurement with evaluation of the confidence was developed to assess the severity level of pathology in the subjects. Results. Relying on the evaluation confidence, GoogLeNet achieved 100% sensitivity and specificity for the testing set. A t-test confirmed that the evaluation confidence significantly distinguishes celiac disease patients from controls. Furthermore, we found that the evaluation confidence may also relate to the severity level of small bowel mucosal lesions. Conclusions. A deep convolutional neural network was established for quantitative measurement of the existence and degree of pathology throughout the small intestine, which may improve computer-aided clinical techniques to assess mucosal atrophy and other etiologies in real time with videocapsule endoscopy.
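The pre-selection step described above, automatically dropping frames obscured by opaque fluid or air bubbles, can be sketched as a simple brightness-fraction filter. The threshold values and function names below are illustrative assumptions; the paper's actual criteria are not specified here:

```python
def opaque_fraction(frame, threshold=230):
    # frame: 2-D grid of grayscale pixel values (0-255); pixels at or
    # above the threshold are treated as saturated bubble/fluid regions
    pixels = [p for row in frame for p in row]
    return sum(p >= threshold for p in pixels) / len(pixels)

def preselect(frames, max_opaque=0.3):
    # keep only frames whose obscured area is acceptably small
    return [f for f in frames if opaque_fraction(f) <= max_opaque]
```

Frames that survive this filter would then proceed to cropping, intensity correction, and rotation before being fed to the network.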
A novel systematic method to evaluate computer-supported collaborative design technologies
Selection of suitable computer-supported collaborative design (CSCD) technologies is crucial to facilitate successful projects. This paper presents the first systematic method for engineering design teams to evaluate and select the most suitable CSCD technologies by comparing technology functionality against project requirements established in peer-reviewed literature. The paper first presents 220 factors that influence successful CSCD. These factors were then systematically mapped and categorised to create CSCD requirement statements. The novel evaluation and selection method incorporates these requirement statements within a matrix and develops a discourse analysis text processing algorithm, applied to data from collaborative projects, to automate the population of how technologies impact the success of CSCD in engineering design teams. This method was validated using data collected across 3 years of a student global design project. The impact of this method is the potential to change the way engineering design teams consider the technology they use and how the selection of appropriate tools impacts the success of their CSCD projects. The CSCD evaluation matrix is the first of its kind, enabling a systematic and justifiable comparison and technology selection, with the aim of best supporting engineering designers' collaborative design activity.
Computer-aided detection of brain metastasis on 3D MR imaging: Observer performance study
To assess the effect of computer-aided detection (CAD) of brain metastasis (BM) on radiologists' diagnostic performance in interpreting three-dimensional brain magnetic resonance (MR) imaging using follow-up imaging and consensus as the reference standard. The institutional review board approved this retrospective study. The study cohort consisted of 110 consecutive patients with BM and 30 patients without BM. The training data set included MR images of 80 patients with 450 BM nodules. The test set included MR images of 30 patients with 134 BM nodules and 30 patients without BM. We developed a CAD system for BM detection using template-matching and K-means clustering algorithms for candidate detection and an artificial neural network for false-positive reduction. Four reviewers (two neuroradiologists and two radiology residents) interpreted the test set images before and after the use of CAD in a sequential manner. The sensitivity, false positive (FP) per case, and reading time were analyzed. A jackknife free-response receiver operating characteristic (JAFROC) method was used to determine the improvement in the diagnostic accuracy. The sensitivity of CAD was 87.3% with an FP per case of 302.4. CAD significantly improved the diagnostic performance of the four reviewers with a figure-of-merit (FOM) of 0.874 (without CAD) vs. 0.898 (with CAD) according to JAFROC analysis (p < 0.01). Statistically significant improvement was noted only for less-experienced reviewers (FOM without vs. with CAD, 0.834 vs. 0.877, p < 0.01). The additional time required to review the CAD results was approximately 72 sec (40% of the total review time). CAD as a second reader helps radiologists improve their diagnostic performance in the detection of BM on MR imaging, particularly for less-experienced reviewers.
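The candidate-detection stage above relies on template matching. The sketch below is a minimal pure-Python sum-of-squared-differences (SSD) variant on a 2-D toy image; a real CAD system would use normalized cross-correlation over 3-D MR volumes and many templates, so this only illustrates the core search:

```python
def match_template(image, template):
    # Exhaustive sliding-window search: return the (row, col) offset
    # where the sum of squared differences (SSD) to the template is
    # smallest, i.e., the best-matching candidate location.
    th, tw = len(template), len(template[0])
    best_ssd, best_pos = float("inf"), None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    return best_pos
```

Candidates found this way would then pass through the K-means clustering and the neural-network false-positive reduction steps described in the abstract.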
The Effects of Liquid Crystal Display 3D Printers, Storage Time and Steam Sterilization on the Dimensional Stability of a Photopolymer Resin for Surgical Guides: An In Vitro Study
Background: Implant surgical guides manufactured in-house using 3D printing technology are widely used in clinical practice to translate virtual planning to the operative field. Aim: The present in vitro study investigated the dimensional changes of 3D surgical guides printed in-house using Shining 3D surgical guide resin (SG01). Materials and methods: Five test bodies, varying in shape and dimensions, were designed using computer-aided design (CAD) software and manufactured using three different Liquid Crystal Display (LCD) 3D printers (AccuFab-L4D, Elegoo Mars Pro 3, and Zortrax Inspire). Specific printing and post-processing parameters for the SG01 resin were set to produce 25 test bodies (5 of each shape) from each of the three printers, resulting in a total of 75 samples. The dimensional changes were evaluated using a digital calliper at four different time points: immediately after printing (T0), one month after storage (T1), immediately after sterilization (T2), and one month after sterilization (T3). Results: All the test bodies showed deviations from the overall CAD reference value of 12.25 mm after printing and post-processing (T0) and following steam sterilization (T2). Similar trends were observed for the effect of storage times at T1 and T3. The AccuFab prints demonstrated better dimensional stability than the Elegoo and Zortrax samples. Conclusions: The LCD 3D printers, sterilization, and storage times influenced the dimensional stability of the test bodies made with SG01 resin.
A three-stage novel framework for efficient and automatic glaucoma classification from retinal fundus images
Glaucoma is one of the leading causes of visual impairment worldwide. If diagnosed too late, the disease can irreversibly cause severe damage to the optic nerve, resulting in permanent loss of central vision and blindness. Therefore, early diagnosis of the disease is critical. Recent advancements in machine learning techniques have greatly aided ophthalmologists in timely and efficient diagnosis through the use of automated systems. Training machine learning models with the most informative features can significantly enhance their performance. However, selecting the most informative feature subset is a real challenge because there are 2^n potential feature subsets for a dataset with n features, and conventional feature selection techniques are not very efficient. Thus, extracting relevant features from medical images and selecting the most informative ones is a challenging task. A considerable field of study has evolved around the discovery and selection of highly influential features (characteristics) from a large feature set. Including only the most informative features can improve machine learning classifiers by enhancing their classification performance, reducing training and testing time, and lowering system diagnostic costs. This work proposes a novel and highly efficient feature selection (FS) approach using the Whale Optimization Algorithm (WOA), the Grey Wolf Optimization Algorithm (GWO), and a hybridized version of these two metaheuristics. To the best of our knowledge, the use of these two algorithms and their amalgamated version for FS in human disease prediction, particularly glaucoma prediction, has been rare. The objective is to create a highly influential subset of characteristics using this approach.
The suggested FS strategy seeks to maximize classification accuracy while reducing the total number of features used. We evaluated the efficacy of the proposed approach in classifying glaucoma-related eye disease. In this study, we aim to assist professionals in identifying glaucoma by utilizing a proposed clinical decision support system that integrates image processing, soft-computing algorithms, and machine learning, and we validate it on benchmark fundus images. Initially, we extract 65 features from the 646 retinal fundus images in the ORIGA benchmark dataset, from which a subset of features is created. For two-class classification, different machine learning classifiers receive the selected features. Employing 5-fold and 10-fold stratified cross-validation enhanced the generalized performance of the proposed model. We assess performance using several well-established statistical criteria. The tests show that the suggested computer-aided diagnosis (CAD) model achieves an F1-score of 97.50%, an accuracy score of 96.50%, a precision score of 97%, a sensitivity score of 98.10%, a specificity score of 93.30%, and an AUC score of 94.2% on the ORIGA dataset. To demonstrate its merits, we compared the suggested approach's performance with other current state-of-the-art models. The suggested approach shows promising results in predicting glaucoma, potentially aiding in the early diagnosis and treatment of the disease. Furthermore, real-time applications showcase the proposed approach's suitability, enabling its deployment in areas lacking expert medical practitioners. Overburdened expert ophthalmologists can use this approach as a second opinion, as it requires very little time to process the retinal fundus images. With the required modifications, the proposed model could also aid clinical decision-making for other diseases, such as lung infection and diabetic retinopathy.
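Wrapper-style feature selection of the kind described above typically scores a candidate feature subset by balancing classification accuracy against subset size. The sketch below implements such a fitness function with a random-search stand-in for the metaheuristic; the actual WOA/GWO position-update rules are omitted, and `alpha`, the loop, and all names are illustrative assumptions, not the authors' code:

```python
import random

def fitness(mask, evaluate, alpha=0.99):
    # Weighted objective common in WOA/GWO-based wrapper FS: reward the
    # subset's classification accuracy, lightly penalise large subsets.
    n_selected = sum(mask)
    if n_selected == 0:
        return 0.0
    return alpha * evaluate(mask) + (1 - alpha) * (1 - n_selected / len(mask))

def random_search_fs(n_features, evaluate, iters=200, seed=0):
    # Stand-in for the metaheuristic loop: WOA/GWO would instead update
    # candidate masks via encircling/hunting position rules.
    rng = random.Random(seed)
    best_mask, best_fit = [True] * n_features, -1.0
    for _ in range(iters):
        mask = [rng.random() < 0.5 for _ in range(n_features)]
        f = fitness(mask, evaluate)
        if f > best_fit:
            best_mask, best_fit = mask, f
    return best_mask, best_fit
```

Here `evaluate(mask)` would train and cross-validate a classifier on the selected feature columns and return its accuracy; swapping the random sampler for WOA or GWO updates changes only how candidate masks are proposed.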
Deep Learning for Describing Breast Ultrasound Images with BI-RADS Terms
Breast cancer is the most common cancer in women. Ultrasound is one of the most used techniques for diagnosis, but an expert in the field is necessary to interpret the test. Computer-aided diagnosis (CAD) systems aim to help physicians during this process. Experts use the Breast Imaging-Reporting and Data System (BI-RADS) to describe tumors according to several features (shape, margin, orientation, etc.) and estimate their malignancy, with a common language. To aid in tumor diagnosis with BI-RADS explanations, this paper presents a deep neural network for tumor detection, description, and classification. An expert radiologist described 749 nodules taken from public datasets using BI-RADS terms. The YOLO detection algorithm is used to obtain Regions of Interest (ROIs), and then a model, based on a multi-class classification architecture, receives each ROI as input and outputs the BI-RADS descriptors, the BI-RADS classification (with 6 categories), and a Boolean classification of malignancy. Six hundred of the nodules were used for 10-fold cross-validation (CV) and 149 for testing. The accuracy of this model was compared with state-of-the-art CNNs for the same task. This model outperforms plain classifiers in agreement with the expert (Cohen's kappa), with a mean over the descriptors of 0.58 in CV and 0.64 in testing, while the second-best model yielded kappas of 0.55 and 0.59, respectively. Adding YOLO to the model significantly enhances the performance (by 0.16 in CV and 0.09 in testing). More importantly, training the model with BI-RADS descriptors enables the explainability of the Boolean malignancy classification without reducing accuracy.
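Agreement with the expert is reported above as Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal pure-Python sketch of its textbook definition (not the authors' code; it assumes the two raters use at least two distinct labels overall, otherwise the chance term equals 1):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # observed agreement: fraction of items both raters labelled identically
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement: dot product of the raters' label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum(counts_a[k] * counts_b[k]
                   for k in set(counts_a) | set(counts_b)) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance)
```

A kappa of 1.0 means perfect agreement, 0 means chance-level agreement; values around 0.6, as reported for the model above, are conventionally read as moderate-to-substantial agreement.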