Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
408 result(s) for "692/700/139/1735"
Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma
by Krieghoff-Henning, Eva; Crnaric, Iva; Peternel, Sandra
in 692/308/53/2421; 692/700/139/1735; 692/700/459/1748
2024
Artificial intelligence (AI) systems have been shown to help dermatologists diagnose melanoma more accurately; however, they lack transparency, hindering user acceptance. Explainable AI (XAI) methods can help to increase transparency, yet often lack precise, domain-specific explanations. Moreover, the impact of XAI methods on dermatologists’ decisions has not yet been evaluated. Building upon previous research, we introduce an XAI system that provides precise and domain-specific explanations alongside its differential diagnoses of melanomas and nevi. Through a three-phase study, we assess its impact on dermatologists’ diagnostic accuracy, diagnostic confidence, and trust in the XAI support. Our results show strong alignment between XAI and dermatologist explanations. We also show that dermatologists’ confidence in their diagnoses and their trust in the support system significantly increase with XAI compared to conventional AI. This study highlights dermatologists’ willingness to adopt such XAI systems, promoting future use in the clinic.
Artificial intelligence has become popular as a cancer classification tool, but there is distrust of such systems due to their lack of transparency. Here, the authors develop an explainable AI system which produces text- and region-based explanations alongside its classifications which was assessed using clinicians’ diagnostic accuracy, diagnostic confidence, and their trust in the system.
Journal Article
AI co-pilot bronchoscope robot
2024
The unequal distribution of medical resources and scarcity of experienced practitioners confine access to bronchoscopy primarily to well-equipped hospitals in developed regions, contributing to the unavailability of bronchoscopic services in underdeveloped areas. Here, we present an artificial intelligence (AI) co-pilot bronchoscope robot that empowers novice doctors to conduct lung examinations as safely and adeptly as experienced colleagues. The system features a user-friendly, plug-and-play catheter, devised for robot-assisted steering, facilitating access to bronchi beyond the fifth generation in average adult patients. Drawing upon historical bronchoscopic videos and expert imitation, our AI–human shared control algorithm enables novice doctors to achieve safe steering in the lung, mitigating misoperations. Both in vitro and in vivo results underscore that our system equips novice doctors with the skills to perform lung examinations as expertly as seasoned practitioners. This study offers innovative strategies to address the pressing issue of medical resource disparities through AI assistance.
The unequal distribution of medical resources means that bronchoscopic services are often unavailable in underdeveloped areas. Here, the authors present an AI co-pilot bronchoscope robot that features a user-friendly plug-and-play catheter and an AI-human shared control algorithm, to enable novice doctors to conduct lung examinations safely.
Journal Article
Deep learning-based automated detection for diabetic retinopathy and diabetic macular oedema in retinal fundus photographs
2022
Objectives: To present and validate a deep ensemble algorithm to detect diabetic retinopathy (DR) and diabetic macular oedema (DMO) using retinal fundus images. Methods: A total of 8739 retinal fundus images were collected from a retrospective cohort of 3285 patients. For detecting DR and DMO, a multiple improved Inception-v4 ensembling approach was developed. We measured the algorithm’s performance and compared it with that of human experts on our primary dataset, while its generalization was assessed on the publicly available Messidor-2 dataset. We also systematically investigated the impact of the size and number of input images used in training on model performance, and analyzed the time budget of training/inference versus model performance. Results: On our primary test dataset, the model achieved an AUC of 0.992 (95% CI, 0.989–0.995), corresponding to a sensitivity of 0.925 (95% CI, 0.916–0.936) and a specificity of 0.961 (95% CI, 0.950–0.972) for referable DR, while the sensitivity and specificity for ophthalmologists ranged from 0.845 to 0.936 and from 0.912 to 0.971, respectively. For referable DMO, our model generated an AUC of 0.994 (95% CI, 0.992–0.996) with a sensitivity of 0.930 (95% CI, 0.919–0.941) and a specificity of 0.971 (95% CI, 0.965–0.978), whereas ophthalmologists obtained sensitivities ranging between 0.852 and 0.946 and specificities ranging between 0.926 and 0.985. Conclusion: The deep ensemble model exhibited excellent performance in detecting DR and DMO, with good robustness and generalization, and could potentially help support and expand DR/DMO screening programs.
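The sensitivity and specificity figures reported above come from a standard confusion-matrix calculation; a minimal stdlib sketch (the ten labels and predictions below are made-up referral decisions, not data from the study):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = recall on positives; specificity = recall on negatives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

# Hypothetical referable-DR decisions for ten fundus images
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
```

With one missed positive out of four and one false alarm out of six, this toy example gives a sensitivity of 0.75 and a specificity of about 0.83.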
Journal Article
A reinforcement learning model for AI-based decision support in skin cancer
by Barata, Catarina; Rosendahl, Cliff; Codella, Noel C. F.
in 692/1807/1812; 692/699/67/2321; 692/700/139/1735
2023
We investigated whether human preferences hold the potential to improve diagnostic artificial intelligence (AI)-based decision support using skin cancer diagnosis as a use case. We utilized nonuniform rewards and penalties based on expert-generated tables, balancing the benefits and harms of various diagnostic errors, which were applied using reinforcement learning. Compared with supervised learning, the reinforcement learning model improved the sensitivity for melanoma from 61.4% to 79.5% (95% confidence interval (CI): 73.5–85.6%) and for basal cell carcinoma from 79.4% to 87.1% (95% CI: 80.3–93.9%). AI overconfidence was also reduced while simultaneously maintaining accuracy. Reinforcement learning increased the rate of correct diagnoses made by dermatologists by 12.0% (95% CI: 8.8–15.1%) and improved the rate of optimal management decisions from 57.4% to 65.3% (95% CI: 61.7–68.9%). We further demonstrated that the reward-adjusted reinforcement learning model and a threshold-based model outperformed naïve supervised learning in various clinical scenarios. Our findings suggest the potential for incorporating human preferences into image-based diagnostic algorithms.
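The core idea of applying expert-generated, nonuniform rewards and penalties can be sketched as an expected-reward decision rule over a classifier's output probabilities. The reward values below are invented for illustration only (the study's actual expert tables are not reproduced here):

```python
# Hypothetical reward table reward[true][predicted] for three diagnoses:
# melanoma (MEL), nevus (NV), basal cell carcinoma (BCC).
# Missing a melanoma is penalized far more heavily than a false alarm.
classes = ["MEL", "NV", "BCC"]
reward = {
    "MEL": {"MEL": 10, "NV": -20, "BCC": -15},
    "NV":  {"MEL": -2, "NV": 5,   "BCC": -3},
    "BCC": {"MEL": -1, "NV": -10, "BCC": 8},
}

def expected_reward_decision(probs):
    """Pick the diagnosis maximizing expected reward under the model's
    class probabilities `probs` (a dict mapping class -> probability)."""
    def exp_reward(pred):
        return sum(probs[t] * reward[t][pred] for t in classes)
    return max(classes, key=exp_reward)

# A borderline lesion: the argmax probability says nevus, but the heavy
# penalty for missing melanoma tips the decision toward melanoma.
probs = {"MEL": 0.35, "NV": 0.55, "BCC": 0.10}
decision = expected_reward_decision(probs)
```

This is how an asymmetric reward table can raise melanoma sensitivity relative to plain argmax prediction: borderline cases are pushed toward the diagnosis whose miss would be most harmful.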
A reinforcement learning model developed to adapt artificial intelligence (AI) predictions to human preferences showed better sensitivity for skin cancer diagnoses and improved management decisions compared to a supervised learning model.
Journal Article
A 12-lead electrocardiogram database for arrhythmia research covering more than 10,000 patients
2020
This newly inaugurated research database for 12-lead electrocardiogram signals was created under the auspices of Chapman University and Shaoxing People’s Hospital (Shaoxing Hospital Zhejiang University School of Medicine) and aims to enable the scientific community to conduct new studies on arrhythmia and other cardiovascular conditions. Certain types of arrhythmias, such as atrial fibrillation, have a pronounced negative impact on public health, quality of life, and medical expenditures. As a non-invasive test, long-term ECG monitoring is a major and vital diagnostic tool for detecting these conditions. This practice, however, generates large amounts of data, the analysis of which requires considerable time and effort by human experts. Modern machine learning and statistical tools can be trained on high-quality, large datasets to achieve exceptional levels of automated diagnostic accuracy. Thus, we collected and disseminated this novel database, which contains 12-lead ECGs of 10,646 patients at a 500 Hz sampling rate, featuring 11 common rhythms and 67 additional cardiovascular conditions, all labeled by professional experts. The dataset consists of 10-second, 12-dimension ECGs and labels for rhythms and other conditions for each subject. The dataset can be used to design, compare, and fine-tune new and classical statistical and machine learning techniques in studies focused on arrhythmia and other cardiovascular conditions. Measurement(s): cardiac arrhythmia. Technology Type(s): 12-lead electrocardiography; digital curation. Factor Type(s): sex; experimental condition; age group. Sample Characteristic (Organism): Homo sapiens. Machine-accessible metadata file describing the reported data: 10.6084/m9.figshare.11698521
Journal Article
Gradient boosting decision tree becomes more reliable than logistic regression in predicting probability for diabetes with big data
2022
We sought to verify the reliability of machine learning (ML) in developing diabetes prediction models by utilizing big data. To this end, we compared the reliability of gradient boosting decision tree (GBDT) and logistic regression (LR) models using data obtained from the Kokuho-database of the Osaka prefecture, Japan. To develop the models, we focused on 16 predictors from health checkup data from April 2013 to December 2014. A total of 277,651 eligible participants were studied. The prediction models were developed using a light gradient boosting machine (LightGBM), an effective GBDT implementation algorithm, and LR. Their reliabilities were measured based on expected calibration error (ECE), negative log-likelihood (Logloss), and reliability diagrams; their classification accuracies were measured by the area under the curve (AUC). We further analyzed their reliabilities while changing the sample size for training. Among the 277,651 participants, 15,900 (7978 males and 7922 females) were newly diagnosed with diabetes within 3 years. LightGBM (LR) achieved an ECE of 0.0018 ± 0.00033 (0.0048 ± 0.00058), a Logloss of 0.167 ± 0.00062 (0.172 ± 0.00090), and an AUC of 0.844 ± 0.0025 (0.826 ± 0.0035). From the sample size analysis, the reliability of LightGBM became higher than that of LR when the training sample size exceeded 10^4. Thus, we confirmed that GBDT provides a more reliable model than LR in the development of diabetes prediction models using big data. ML could potentially produce a highly reliable diabetes prediction model, a helpful tool for improving lifestyle and preventing diabetes.
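The expected calibration error (ECE) used above is conventionally computed by binning predictions by confidence and averaging the gap between predicted probability and observed frequency; a minimal binned-ECE sketch with made-up toy scores (not data from the study):

```python
def expected_calibration_error(probs, labels, n_bins=10):
    """Bin predicted probabilities, then take the sample-weighted average
    of |mean predicted probability - observed positive rate| per bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    n = len(probs)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(p for p, _ in b) / len(b)
        avg_acc = sum(y for _, y in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - avg_acc)
    return ece

# Toy predicted risks of "diabetes within 3 years" (hypothetical values)
probs = [0.05, 0.12, 0.18, 0.45, 0.52, 0.88]
labels = [0, 0, 1, 0, 1, 1]
ece = expected_calibration_error(probs, labels, n_bins=5)
```

A perfectly calibrated model scores 0; the lower ECE reported for LightGBM means its predicted risks track observed diabetes incidence more closely than LR's.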
Journal Article
Predicting female pelvic tilt and lumbar angle using machine learning in case of urinary incontinence and sexual dysfunction
2023
Urinary incontinence (UI) is defined as any uncontrolled urine leakage. Pelvic floor muscles (PFM) appear to be a crucial aspect of trunk and lumbo-pelvic stability, and UI is one indication of pelvic floor dysfunction. The evaluation of pelvic tilt and lumbar angle is critical in assessing the alignment and posture of the spine in the lower back region and pelvis, and both of these variables are directly related to female pelvic floor dysfunction. UI affects a significant number of women worldwide and can have a major impact on their quality of life. However, traditional methods of assessing these parameters involve manual measurements, which are time-consuming and prone to variability. Rehabilitation programs for female sexual dysfunction (FSD) in physical therapy often focus on pelvic floor muscles, while other core muscles are overlooked. Therefore, this study aimed to predict the activity of various core muscles in multiparous women with FSD using multiple scales instead of relying on ultrasound imaging. Decision tree, SVM, random forest, and AdaBoost models were applied to predict pelvic tilt and lumbar angle using the training set. Performance was evaluated on the test set using MSE, RMSE, MAE, and R². Pelvic tilt prediction achieved R² values > 0.9, with AdaBoost (R² = 0.944) performing best. Lumbar angle prediction performed slightly lower, with the decision tree achieving the highest R² of 0.976. Developing a machine learning model to predict pelvic tilt and lumbar angle has the potential to revolutionize the assessment and management of this condition, providing faster, more accurate, and more objective assessments than traditional methods.
Journal Article
Spaceflight associated neuro-ocular syndrome (SANS) and the neuro-ophthalmologic effects of microgravity: a review and an update
2020
Prolonged microgravity exposure during long-duration spaceflight (LDSF) produces unusual physiologic and pathologic neuro-ophthalmic findings in astronauts. These microgravity associated findings collectively define the “Spaceflight Associated Neuro-ocular Syndrome” (SANS). We compare and contrast prior published work on SANS by the National Aeronautics and Space Administration’s (NASA) Space Medicine Operations Division with retrospective and prospective studies from other research groups. In this manuscript, we update and review the clinical manifestations of SANS including: unilateral and bilateral optic disc edema, globe flattening, choroidal and retinal folds, hyperopic refractive error shifts, and focal areas of ischemic retina (i.e., cotton wool spots). We also discuss the knowledge gaps for in-flight and terrestrial human research including potential countermeasures for future study. We recommend that NASA and its research partners continue to study SANS in preparation for future longer duration manned space missions.
Journal Article
Integration of force and IMU sensors for developing low-cost portable gait measurement system in lower extremities
by Kaimuk, Panya; Tanthuwapathom, Ratikanlaya; Charoensuk, Warakorn
in 639/166/985; 692/700/139/1735; Acceleration
2023
Gait analysis is a method for accumulating walking data. It is useful for diagnosing diseases, following up on symptoms, and monitoring rehabilitation post-treatment. Several techniques have been developed to assess human gait. In the laboratory, gait parameters are analyzed using camera capture and a force plate. However, there are several limitations, such as high operating costs, the need for a laboratory and a specialist to operate the system, and long preparation times. This paper presents the development of a low-cost portable gait measurement system that integrates flexible force sensors and IMU sensors for outdoor applications, enabling early detection of abnormal gait in daily living. The developed device is designed to measure ground reaction force, acceleration, angular velocity, and joint angles of the lower extremities. Commercial devices, namely a motion capture system (Motive-OptiTrack) and a force platform (MatScan), are used as the reference system to validate the performance of the developed system. The results show that the system measures gait parameters such as ground reaction force and lower-limb joint angles with high accuracy, and the developed device shows a strong correlation coefficient compared with the commercial system: the percent error of the motion sensor is below 8%, and that of the force sensor is below 3%. The low-cost portable device with a user interface was successfully developed to measure gait parameters in non-laboratory settings to support healthcare applications.
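Validation against a reference system of the kind described above typically reports mean percent error and a correlation coefficient; a stdlib sketch with invented ground-reaction-force samples (illustrative values, not the paper's measurements):

```python
import math

def mean_percent_error(reference, measured):
    """Mean absolute percent error of the device against the reference."""
    return 100.0 * sum(abs(r - m) / abs(r)
                       for r, m in zip(reference, measured)) / len(reference)

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical vertical ground-reaction-force samples (newtons):
# force platform (reference) vs the portable force sensors.
ref = [400.0, 500.0, 650.0, 620.0, 480.0]
dev = [396.0, 510.0, 640.0, 630.0, 470.0]
err = mean_percent_error(ref, dev)
r = pearson_r(ref, dev)
```

In this toy comparison the percent error lands under 3% with a correlation near 1, the kind of agreement the abstract reports for the force sensors.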
Journal Article
Medical multimodal multitask foundation model for lung cancer screening
2025
Lung cancer screening (LCS) reduces mortality and involves vast multimodal data such as text, tables, and images. Fully mining such big data requires multitasking; otherwise, occult but important features may be overlooked, adversely affecting clinical management and healthcare quality. Here we propose a medical multimodal-multitask foundation model (M3FM) for three-dimensional low-dose computed tomography (CT) LCS. After curating a multimodal multitask dataset of 49 clinical data types, 163,725 chest CT series, and 17 tasks involved in LCS, we develop a scalable multimodal question-answering model architecture for synergistic multimodal multitasking. M3FM consistently outperforms the state-of-the-art models, improving lung cancer risk and cardiovascular disease mortality risk prediction by up to 20% and 10% respectively. M3FM processes multiscale high-dimensional images, handles various combinations of multimodal data, identifies informative data elements, and adapts to out-of-distribution tasks with minimal data. In this work, we show that M3FM advances various LCS tasks through large-scale multimodal and multitask learning.
Lung cancer screening (LCS) requires effectively and efficiently mining big, multimodal datasets. Here, the authors develop a medical multimodal-multitask foundation model (M3FM) for LCS from 3D low-dose computed tomography and medical multimodal data, outperforming state-of-the-art methods and allowing the identification of informative data elements.
Journal Article