Catalogue Search | MBRL
Explore the vast range of titles available.
41,389 result(s) for "Deep learning model"
Brain Tumor/Mass Classification Framework Using Magnetic-Resonance-Imaging-Based Isolated and Developed Transfer Deep-Learning Model
2022
With advances in technology, machine learning can be applied to diagnose brain masses/tumors using magnetic resonance imaging (MRI). This work proposes a developed transfer deep-learning model for the early classification of brain tumors into subclasses such as pituitary, meningioma, and glioma. First, isolated convolutional-neural-network (CNN) models with various numbers of layers are built from scratch and their performance on brain MRI images is evaluated. Then, the 22-layer binary-classification (tumor or no tumor) isolated-CNN model is reused, and its neuron weights are re-adjusted to classify brain MRI images into tumor subclasses using the transfer-learning concept. The resulting transfer-learned model achieves an accuracy of 95.75% on MRI images from the same MRI machine. It has also been tested on brain MRI images from another machine to validate its adaptability, generalization capability, and reliability for future real-time application. The results show that the proposed model reaches an accuracy of 96.89% on this unseen brain MRI dataset. The proposed deep-learning framework can thus help doctors and radiologists diagnose brain tumors early.
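As a rough illustration of the transfer-learning step this abstract describes (reusing a binary tumor/no-tumor CNN to classify tumor subclasses), the sketch below uses a hypothetical `IsolatedCNN` with made-up layer sizes and checkpoint name; it is not the paper's 22-layer architecture.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the paper's isolated CNN: a small convolutional
# feature extractor followed by a linear classification head.
class IsolatedCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Step 1: the binary (tumor / no-tumor) model, assumed trained elsewhere.
binary_model = IsolatedCNN(num_classes=2)
# binary_model.load_state_dict(torch.load("binary_mri_cnn.pt"))  # hypothetical checkpoint

# Step 2: transfer learning -- reuse the convolutional weights, replace the head
# with a three-class output (glioma, meningioma, pituitary), and fine-tune.
transfer_model = IsolatedCNN(num_classes=3)
transfer_model.features.load_state_dict(binary_model.features.state_dict())
for p in transfer_model.features.parameters():
    p.requires_grad = False  # optionally freeze the transferred layers
optimizer = torch.optim.Adam(transfer_model.classifier.parameters(), lr=1e-3)
```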
Journal Article
Modeling the fluctuations of groundwater level by employing ensemble deep learning techniques
by Huang, Yuk Feng; Ibrahem Ahmed Osman, Ahmedbahaaaldin; Essam, Yusuf
in Deep learning; deep learning model; ensemble deep learning model
2021
This study proposes two techniques, Deep Learning (DL) and Ensemble Deep Learning (EDL), to predict the groundwater level (GWL) at five wells in Malaysia. Two scenarios were considered: in scenario 1 (S1), the GWL from four wells was used as input to predict the GWL at the fifth well; in scenario 2 (S2), time series with lag times of up to 20 days were used for all five wells. The S1 results show that EDL generally outperforms DL in estimating the GWL at each station from the data of the remaining four wells, except at the Paya Indah Wetland, where DL provides better estimates than EDL. For S2, EDL also outperforms DL in predicting the daily GWL at all five stations. Implementing EDL decreased the RMSE, NAE and RRMSE by 11.6%, 27.3% and 22.3% and increased the R, Spearman rho and Kendall tau by 0.4%, 1.1% and 3.5%, respectively. Moreover, EDL for S2 achieves high precision at shorter lag times, ranging between 2 and 4 days, compared with DL. The EDL model therefore has potential for managing the sustainability of groundwater in Malaysia.
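A minimal sketch of the ensemble idea described above, averaging the predictions of several independently trained deep regressors and scoring them with RMSE; the member architecture, input dimension, and data are illustrative assumptions rather than the study's actual DL/EDL setup.

```python
import numpy as np
import torch
import torch.nn as nn

def make_member(n_inputs: int) -> nn.Module:
    # One ensemble member: a small feed-forward GWL regressor.
    return nn.Sequential(nn.Linear(n_inputs, 32), nn.ReLU(), nn.Linear(32, 1))

n_inputs, n_members = 4, 5            # S1: four neighbouring wells as inputs; five members assumed
members = [make_member(n_inputs) for _ in range(n_members)]
# ... each member would be trained independently (e.g. with different seeds) ...

x = torch.randn(16, n_inputs)         # placeholder batch of input GWL values
with torch.no_grad():
    ensemble_pred = torch.stack([m(x) for m in members]).mean(dim=0)

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # One of the error metrics reported in the study.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```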
Journal Article
A Transformer‐Based Deep Learning Model for Successful Predictions of the 2021 Second‐Year La Niña Condition
by Zhou, Lu; Gao, Chuan; Zhang, Rong‐Hua
in 3D multivariate prediction; a transformer‐based deep learning model; Anomalies
2023
A purely data‐driven and transformer‐based model with a novel self‐attention mechanism (3D‐Geoformer) is used to make predictions by adopting a rolling predictive manner similar to that in dynamical coupled models. The 3D‐Geoformer yields a successful prediction of the 2021 second‐year cooling conditions that followed the 2020 La Niña event, including covarying anomalies of surface wind stress and three‐dimensional (3D) upper‐ocean temperature, the reoccurrence of negative subsurface temperature anomalies in the eastern equatorial Pacific and a corresponding turning point of sea surface temperature (SST) evolution in mid‐2021. The reasons for the successful prediction with interpretability are explored comprehensively by performing sensitivity experiments with modulating effects on SST due to wind and subsurface thermal forcings being separately considered in the input predictors for prediction. A comparison is also conducted with physics‐based modeling, illustrating the suitability and effectiveness of 3D‐Geoformer as a new platform for El Niño and Southern Oscillation studies.
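The "rolling predictive manner" mentioned in this abstract can be sketched as an autoregressive loop in which each forecast is fed back into the input window; `model`, the tensor layout, and the window handling are assumptions for illustration, not the actual 3D‐Geoformer interface.

```python
import torch

def rolling_forecast(model, history: torch.Tensor, n_steps: int) -> torch.Tensor:
    """Autoregressive (rolling) prediction: each forecast is appended to the
    input window and the oldest step is dropped, mimicking how dynamical
    coupled models are advanced in time.

    history: (batch, time, channels, lat, lon) window of past anomaly fields.
    The model is assumed to map a window to the next field (batch, channels, lat, lon).
    """
    window = history.clone()
    outputs = []
    for _ in range(n_steps):
        with torch.no_grad():
            next_state = model(window)
        outputs.append(next_state)
        window = torch.cat([window[:, 1:], next_state.unsqueeze(1)], dim=1)
    return torch.stack(outputs, dim=1)   # (batch, n_steps, channels, lat, lon)
```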
Plain Language Summary
The tropical Pacific experienced prolonged cooling conditions during 2020–2022 (often called a triple La Niña), which had great impacts on weather and climate globally. However, physics-derived coupled models still have difficulty making accurate long-lead real-time predictions of sea surface temperature (SST) evolution in the tropical Pacific. With the rapid development of deep learning-based modeling, purely data-driven models provide an innovative way to predict SST. Here, a transformer-based deep learning model is evaluated for its performance in predicting the evolution of SST in the tropical Pacific during 2020–2022 and used to explore process representations that are important for SST evolution during 2021, including the subsurface thermal effect and surface wind forcing on SST, the crucial factors determining the second-year prolonged La Niña conditions and the turning point of SST evolution. A comparison is made between the physics-derived dynamical coupled model and the purely data-driven deep learning model, which are constructed completely differently, showing that both can be used to predict SST evolution during the 2021 second-year cooling conditions. This indicates that the thermocline feedback must be adequately represented in predictive models, whether dynamical coupled models or purely data-driven models, so that El Niño–Southern Oscillation predictions can be improved.
Key Points
A transformer‐based deep learning model is used for El Niño‐Southern Oscillation multivariate prediction in a rolling predictive manner
The purely data‐driven model successfully predicts the 2021 second‐year La Niña and turning point of temperature evolution in mid‐2021
Applications of the purely data-driven model for process representation and understanding are demonstrated, as in dynamical coupled models
Journal Article
Accelerating Urban Flood Inundation Simulation Under Spatio‐Temporally Varying Rainstorms Using ConvLSTM Deep Learning Model
by Liao, Yaoxing; Lai, Chengguang; Wang, Zhaoli
in Artificial neural networks; Correlation coefficient; Correlation coefficients
2025
Urban floods induced by rainstorms can lead to severe losses of lives and property, making rapid flood prediction essential for effective disaster prevention and mitigation. However, traditional deep learning (DL) models often overlook the spatial heterogeneity of rainstorms and lack interpretability. Here, we propose an end-to-end rapid prediction method for urban flood inundation under spatio-temporally varying rainstorms using a Convolutional Long Short-Term Memory Network (ConvLSTM) DL model. We compare the performance of the proposed method with that of a 3D Convolutional Neural Network (3D CNN) model and introduce the spatial visualization technique Grad-CAM to interpret the rainstorms' contributions to flood predictions. Results demonstrate that: (a) Compared to the physics-based model, the proposed ConvLSTM model achieves satisfactory accuracy in predicting flood inundation evolution under spatio-temporally varying rainstorms, with an average Pearson correlation coefficient (PCC) of 0.958 and a mean absolute error (MAE) of 0.021 m, successfully capturing the locations of observed inundation points under actual rainstorm conditions. (b) The ConvLSTM model can predict the urban rainstorm inundation process in just 2 s for a study area of 74 km², which is 170 times more efficient than the physics-based model. (c) The interpretability of the ConvLSTM model for urban flood prediction can be enhanced through Grad-CAM, revealing that the model naturally focuses on the local or upstream rainfall concentration areas most responsible for inundation, in line with hydrological understanding. Overall, the ConvLSTM model serves as a powerful surrogate for rapid urban flood simulation, providing an important reference for real-time flood early warning and mitigation.
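PyTorch has no built-in ConvLSTM, so the cell below is a generic sketch of the convolutional LSTM gating the abstract refers to; the channel sizes, grid size, and missing decoder head are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A single ConvLSTM cell: LSTM gating computed with convolutions so the
    hidden state keeps its spatial (grid) layout."""
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = gates.chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)
        h = o * torch.tanh(c)
        return h, c

# Stepping through a sequence of rainfall grids frame by frame (toy sizes).
cell = ConvLSTMCell(in_ch=1, hid_ch=16)
h = c = torch.zeros(2, 16, 64, 64)
for _ in range(10):                          # 10 time steps of rainfall input
    x = torch.randn(2, 1, 64, 64)
    h, c = cell(x, (h, c))
# A decoder head (not shown) would map h to predicted inundation depth maps.
```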
Journal Article
Chest X‐ray‐based opportunistic screening of sarcopenia using deep learning
2023
Background
Early detection and management of sarcopenia is of clinical importance. We aimed to develop a chest X‐ray‐based deep learning model to predict presence of sarcopenia.
Methods
Data from participants who visited the osteoporosis clinic at Severance Hospital, Seoul, South Korea, between January 2020 and June 2021 were used as the derivation cohort and split into training, validation and test sets (65:15:20). A community-based older adult cohort (KURE) was used as the external test set. Sarcopenia was defined based on the Asian Working Group 2019 guideline. A deep learning model was trained to predict appendicular lean mass (ALM), handgrip strength (HGS) and chair rise test performance from chest X-ray images; a machine learning model (the SARC-CXR score) was then built using age, sex, body mass index and the chest X-ray-predicted muscle parameters along with their estimation uncertainty values.
Results
The mean age of the derivation cohort (n = 926; women n = 700, 76%; sarcopenia n = 141, 15%) and the external test cohort (n = 149; women n = 95, 64%; sarcopenia n = 18, 12%) was 61.4 and 71.6 years, respectively. In the internal test set (a hold-out set, n = 189, from the derivation cohort) and the external test set (n = 149), the concordance correlation coefficient for ALM prediction was 0.80 and 0.76, with an average difference of 0.18 ± 2.71 and 0.21 ± 2.28, respectively. Gradient-weighted class activation mapping for the deep neural network models predicting ALM and HGS commonly showed highly weighted pixels in the bilateral lung fields and part of the cardiac contour. The SARC-CXR score showed good discriminatory performance for sarcopenia in both the internal test set [area under the receiver-operating characteristic curve (AUROC) 0.813, area under the precision-recall curve (AUPRC) 0.380, sensitivity 0.844, specificity 0.739, F1-score 0.540] and the external test set (AUROC 0.780, AUPRC 0.440, sensitivity 0.611, specificity 0.855, F1-score 0.458). Among SARC-CXR model features, low ALM predicted from the chest X-ray was the most important predictor of sarcopenia based on SHapley Additive exPlanations values. Higher estimation uncertainty of HGS contributed to an elevated predicted risk of sarcopenia. In the internal test set, the SARC-CXR score showed better discriminatory performance than the SARC-F score (AUROC 0.813 vs. 0.691, P = 0.029).
Conclusions
A chest X-ray-based deep learning model improved the detection of sarcopenia, which merits further investigation.
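The two-stage pipeline in the Methods above, a network that regresses muscle parameters from the radiograph and a tabular model that combines them with age, sex, and BMI, might look roughly like the sketch below; the feature layout, dummy data, and use of scikit-learn's GradientBoostingClassifier are assumptions, as the paper's exact SARC-CXR model is not specified here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Stage 1 (assumed trained separately): a CNN regressing muscle parameters
# (ALM, HGS, chair-rise performance) and uncertainty estimates from a chest X-ray.
def predict_muscle_parameters(xray_image: np.ndarray) -> np.ndarray:
    ...  # placeholder: would return e.g. [alm, hgs, crt, alm_unc, hgs_unc]

# Stage 2: a tabular classifier combining demographics with the stage-1 outputs,
# standing in for the SARC-CXR score.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))       # columns: age, sex, BMI, ALM, HGS, CRT, ALM unc., HGS unc. (dummy)
y = rng.integers(0, 2, size=200)    # 1 = sarcopenia (dummy labels for illustration)
clf = GradientBoostingClassifier().fit(X, y)
risk = clf.predict_proba(X[:5])[:, 1]   # predicted sarcopenia probability
```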
Journal Article
LightRoseTTA: High‐Efficient and Accurate Protein Structure Prediction Using a Light‐Weight Deep Graph Model
2025
Accurately predicting protein structure, from sequence to 3D structure, is of great significance in biological research. To tackle this problem, a representative large deep model, RoseTTAFold, has been proposed with promising success. Here, a light-weight deep graph network named LightRoseTTA is reported to achieve accurate and highly efficient structure prediction for proteins. LightRoseTTA has three highlights: i) highly accurate structure prediction, competitive with RoseTTAFold on multiple popular datasets including CASP14 and CAMEO; ii) highly efficient training and inference with a light-weight model, costing only 1 week on a single NVIDIA 3090 GPU for model training (vs 30 days on 8 NVIDIA V100 GPUs for RoseTTAFold) and containing only 1.4M parameters (vs 130M in RoseTTAFold); and iii) low dependency on multiple sequence alignment (MSA), achieving the best performance on three MSA-insufficient datasets: Orphan, De novo, and Orphan25. In addition, LightRoseTTA is transferable from general proteins to antibody data, as verified in the experiments. The time and resource costs of LightRoseTTA and RoseTTAFold are further discussed to demonstrate the feasibility of light-weight models for protein structure prediction, which may be crucial for resource-limited research at universities and academic institutions. The code and model are released to accelerate biological research (https://github.com/psp3dcg/LightRoseTTA).
Accurately predicting protein structure is of great significance in biological research. LightRoseTTA, a light-weight deep graph network for protein structure prediction, is presented. It has three highlights: i) highly accurate structure prediction for proteins; ii) highly efficient training and inference; and iii) low dependency on multiple sequence alignment (MSA). Finally, LightRoseTTA is evaluated on several benchmarks and outperforms alternative methods.
Journal Article
Explainable artificial intelligence for heart rate variability in ECG signal
by E.A, Gopalakrishnan; V, Sowmya; K, Sanjana
in Accuracy; Artificial intelligence; atrial fibrillation
2020
The electrocardiogram (ECG) signal is one of the most reliable ways to analyse the cardiovascular system. In the literature, different deep learning architectures have been proposed to detect various types of tachycardia, such as atrial fibrillation, ventricular fibrillation, and sinus tachycardia. Even though all types of tachycardia share a fast beat rhythm as their common characteristic feature, existing deep learning architectures are trained with disease-specific features. Most of the proposed works lack interpretation and understanding of the results obtained. Hence, the objective of this letter is to explore the features learned by deep learning models. For the detection of the different types of tachycardia, the authors used a transfer learning approach: the model is trained with one tachycardia, atrial fibrillation, and tested with other tachycardias, such as ventricular fibrillation and sinus tachycardia. The analysis was done using different deep learning models: RNN, LSTM, GRU, CNN, and RSCNN. The RNN achieved an accuracy of 96.47% on the atrial fibrillation data set, 90.88% on the CU ventricular tachycardia data set, and 94.71% and 94.18% on the MIT-BIH malignant ventricular ectopy database for ECG leads I and II, respectively. However, the RNN model could only achieve an accuracy of 23.73% on the sinus tachycardia data set, and a similar trend was shown by the other models. From the analysis, it was evident that even though the tachycardias share a fast beat rhythm as their common feature, the models could not detect all of them: they detected atrial fibrillation and ventricular fibrillation but failed on sinus tachycardia. The authors were able to interpret that, along with the fast beat rhythm, the models had learned the absence of the P-wave, which is common to ventricular fibrillation and atrial fibrillation, whereas sinus tachycardia has an upright positive P-wave. A time-based analysis was conducted to determine the time complexity of the models; it showed that the RNN and RSCNN models could achieve better performance with lower time complexity.
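The cross-condition evaluation described above (train on atrial fibrillation, test the frozen model on other tachycardias) can be sketched with a generic recurrent classifier; the architecture, hidden size, and dummy data are stand-ins, not the letter's exact RNN/LSTM/GRU/CNN/RSCNN configurations.

```python
import torch
import torch.nn as nn

class ECGRNN(nn.Module):
    """A simple recurrent classifier over single-lead ECG sequences."""
    def __init__(self, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                       # x: (batch, time, 1)
        _, (h, _) = self.rnn(x)
        return self.head(h[-1])

def accuracy(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

# Train on atrial-fibrillation recordings only, then evaluate the frozen model on
# ventricular-fibrillation and sinus-tachycardia sets to see which learned
# features (e.g. absence of the P-wave) actually transfer.
model = ECGRNN()
x_vf, y_vf = torch.randn(8, 500, 1), torch.randint(0, 2, (8,))   # dummy test batch
print(accuracy(model, x_vf, y_vf))
```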
Journal Article
Extraction and Calculation of Roadway Area from Satellite Images Using Improved Deep Learning Model and Post-Processing
2022
Roadway area calculation is a novel problem in remote sensing and urban planning. This paper models it as a two-step problem: roadway extraction and area calculation. Roadway extraction from satellite images has been tackled many times before; this paper proposes a method that uses the pixel resolution to calculate the area of the roads covered in satellite images. The proposed approach uses U-Net and ResNet variants, U-Net++ and ResNeXt. The state-of-the-art model is combined with the proposed efficient post-processing approach to improve the overlap with ground-truth labels. The performance of the proposed road extraction algorithm is evaluated on the Massachusetts dataset, and the proposed approach is shown to outperform existing solutions that use models from the U-Net family.
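The area-calculation step described above reduces to counting road pixels in the predicted mask and scaling by the ground resolution; the sketch below assumes a binary mask (such as a thresholded U-Net++ output) and an illustrative 1 m pixel size, not the paper's post-processing.

```python
import numpy as np

def roadway_area_m2(mask: np.ndarray, pixel_size_m: float) -> float:
    """Area covered by road pixels in a binary segmentation mask.

    mask: 2-D array, 1 for road pixels (e.g. a thresholded U-Net++ output), 0 otherwise.
    pixel_size_m: ground resolution of one pixel in metres.
    """
    return float(mask.sum()) * pixel_size_m ** 2

# Example: a 1 m/pixel mask with 12,500 road pixels covers 12,500 m².
mask = np.zeros((500, 500), dtype=np.uint8)
mask[240:265, :] = 1                              # a 25-pixel-wide horizontal road
print(roadway_area_m2(mask, pixel_size_m=1.0))    # 12500.0
```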
Journal Article
Deep Learning in Medical Imaging: A Case Study on Lung Tissue Classification
by Naga Ramesh, Janjhyam Venkata; Panda, Sandeep Kumar; Sobur, Abdus
in Accuracy; Cancer; Classification
2024
INTRODUCTION: In the field of medical imaging, accurate categorization of lung tissue is essential for timely diagnosis and management of lung-related conditions, including cancer. Deep Learning (DL) methodologies have revolutionized this domain, promising improved precision and effectiveness in diagnosing ailments based on image analysis. This research delves into the application of DL models for classifying lung tissue, particularly focusing on histopathological imagery.
OBJECTIVES: The primary objective of this study is to explore the deployment of DL models for the classification of lung tissue, emphasizing histopathological images. The research aims to assess the performance of various DL models in accurately distinguishing between different classes of lung tissue, including benign tissue, lung adenocarcinoma, and lung squamous cell carcinoma.
METHODS: A dataset comprising 9,000 histopathological images of lung tissue was utilized, sourced from HIPAA compliant and validated sources. The dataset underwent augmentation to ensure diversity and robustness. The images were categorized into three distinct classes and balanced before being split into training, validation, and testing sets. Six DL models - DenseNet201, EfficientNetB7, EfficientNetB5, Vgg19, Vgg16, and Alexnet - were trained and evaluated on this dataset. Performance assessment was conducted based on precision, recall, F1-score for each class, and overall accuracy.
RESULTS: The results revealed varying performance levels among the DL models, with EfficientNetB5 achieving perfect scores across all metrics. This highlights the capability of DL in improving the accuracy of lung tissue classification, which holds promise for enhancing diagnosis and treatment outcomes in lung-related conditions.
CONCLUSION: This research significantly contributes to understanding the effective utilization of DL models in medical imaging, particularly for lung tissue classification. It emphasizes the critical role of a diverse and balanced dataset in developing robust and accurate models. The insights gained from this study lay the groundwork for further exploration into refining DL methodologies for medical imaging applications, with a focus on improving diagnostic accuracy and ultimately, patient outcomes.
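The per-class precision, recall, and F1-score used for performance assessment in the Methods above can be computed with scikit-learn's classification_report; the class labels and toy predictions below are illustrative only.

```python
from sklearn.metrics import classification_report

classes = ["benign", "adenocarcinoma", "squamous_cell_carcinoma"]
# Toy ground truth and predictions standing in for a trained model's test output.
y_true = ["benign", "adenocarcinoma", "squamous_cell_carcinoma", "benign", "adenocarcinoma"]
y_pred = ["benign", "adenocarcinoma", "adenocarcinoma", "benign", "adenocarcinoma"]
print(classification_report(y_true, y_pred, labels=classes, zero_division=0))
```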
Journal Article
An Explainable Deep Learning Model to Prediction Dental Caries Using Panoramic Radiograph Images
by Zeynep Ozpolat; Ozal Yildirim; U. Rajendra Acharya
in Accuracy; caries; dental health; explainable deep models; deep learning; Grad-CAM
2023
Dental caries is the most frequent dental health issue in the general population. Dental caries can result in extreme pain or infections, lowering people’s quality of life. Applying machine learning models to automatically identify dental caries can lead to earlier treatment. However, physicians frequently find the model results unsatisfactory due to a lack of explainability. Our study attempts to address this issue with an explainable deep learning model for detecting dental caries. We tested three prominent pre-trained models, EfficientNet-B0, DenseNet-121, and ResNet-50, to determine which is best for the caries detection task. These models take panoramic images as the input, producing a caries–non-caries classification result and a heat map, which visualizes areas of interest on the tooth. The model performance was evaluated using whole panoramic images of 562 subjects. All three models produced remarkably similar results. However, the ResNet-50 model exhibited a slightly better performance when compared to EfficientNet-B0 and DenseNet-121. This model obtained an accuracy of 92.00%, a sensitivity of 87.33%, and an F1-score of 91.61%. Visual inspection showed us that the heat maps were also located in the areas with caries. The proposed explainable deep learning model diagnosed dental caries with high accuracy and reliability. The heat maps help to explain the classification results by indicating a region of suspected caries on the teeth. Dentists could use these heat maps to validate the classification results and reduce misclassification.
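The heat maps described in this abstract come from Grad-CAM, which weights the last convolutional feature maps by the pooled gradients of the target class; the sketch below uses a torchvision ResNet-50 as a stand-in backbone, and the layer choice and dummy input are assumptions rather than the paper's fine-tuned model.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

# A stand-in backbone for the fine-tuned caries / non-caries classifier.
model = resnet50(weights=None).eval()
target_layer = model.layer4[-1]                  # last convolutional block

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

def grad_cam(image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """image: (1, 3, H, W) preprocessed radiograph; returns an (H, W) heat map."""
    logits = model(image)
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)   # pooled gradients per channel
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()

heatmap = grad_cam(torch.randn(1, 3, 224, 224), class_idx=1)   # dummy input for illustration
```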
Journal Article