Catalogue Search | MBRL
82 result(s) for "SegNet"
COVID-19 lung CT image segmentation using deep learning methods: U-Net versus SegNet
2021
Background
Currently, there is an urgent need for efficient tools to assess the diagnosis of COVID-19 patients. In this paper, we present feasible solutions for detecting and labeling infected tissues on CT lung images of such patients. Two structurally different deep learning techniques, SegNet and U-NET, are investigated for semantically segmenting infected tissue regions in CT lung images.
Methods
We propose to use two known deep learning networks, SegNet and U-NET, for image tissue classification. SegNet is characterized as a scene segmentation network and U-NET as a medical segmentation tool. Both networks were exploited as binary segmentors to discriminate between infected and healthy lung tissue, as well as multi-class segmentors to learn the infection type on the lung. Each network was trained on seventy-two images, validated on ten, and tested on the remaining eighteen. Several statistical scores were calculated for the results and tabulated accordingly.
Results
The results show the superior ability of SegNet in classifying infected/non-infected tissues compared to the other methods (with 0.95 mean accuracy), while U-NET shows better results as a multi-class segmentor (with 0.91 mean accuracy).
Conclusion
Semantically segmenting CT scan images of COVID-19 patients is a crucial goal because it would not only assist in disease diagnosis but also help quantify the severity of the illness, and hence prioritize treatment of the population accordingly. We propose computer-based techniques that prove to be reliable detectors of infected tissue in lung CT scans. The availability of such a method in today's pandemic would help automate, prioritize, accelerate, and broaden the treatment of COVID-19 patients globally.
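The mean accuracy scores reported for the binary (infected vs. non-infected) setting are standard mask-overlap statistics. As an illustration of how such per-mask scores can be computed, here is a minimal NumPy sketch on toy masks; the arrays and values are invented, not the paper's data.

```python
import numpy as np

def binary_segmentation_scores(pred, truth):
    """Pixel accuracy and Dice coefficient for two binary masks (illustrative)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()      # infected predicted as infected
    tn = np.logical_and(~pred, ~truth).sum()    # healthy predicted as healthy
    accuracy = (tp + tn) / pred.size
    dice = 2 * tp / (pred.sum() + truth.sum())
    return accuracy, dice

# Toy 4x4 masks standing in for infected-tissue predictions
pred  = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
acc, dice = binary_segmentation_scores(pred, truth)
```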
Journal Article
VGG19 Network Assisted Joint Segmentation and Classification of Lung Nodules in CT Images
by Mohanty, Jnyana Ranjan; Satapathy, Suresh Chandra; Tariq, Usman
in Accuracy, Automation, Classification
2021
A pulmonary nodule is a lung disease, and its early diagnosis and treatment are essential to cure the patient. This paper introduces a deep learning framework to support the automated detection of lung nodules in computed tomography (CT) images. The proposed framework employs VGG-SegNet-supported nodule mining and pre-trained DL-based classification to support automated lung nodule detection. The classification of lung CT images is implemented using the attained deep features, which are then serially concatenated with handcrafted features, such as the Grey Level Co-Occurrence Matrix (GLCM), Local Binary Pattern (LBP), and Pyramid Histogram of Oriented Gradients (PHOG), to enhance the disease detection accuracy. The images used for experiments are collected from the LIDC-IDRI and Lung-PET-CT-Dx datasets. The experimental results show that the VGG19 architecture with concatenated deep and handcrafted features can achieve an accuracy of 97.83% with the SVM-RBF classifier.
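The serial concatenation of deep and handcrafted features described above can be sketched in a few lines; the feature dimensions below are illustrative placeholders, not the paper's actual dimensionalities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for per-image feature vectors (dimensions are illustrative):
deep_features = rng.random((10, 512))   # e.g. pooled VGG19 activations
glcm = rng.random((10, 16))             # Grey Level Co-Occurrence Matrix stats
lbp = rng.random((10, 59))              # Local Binary Pattern histogram
phog = rng.random((10, 168))            # Pyramid Histogram of Oriented Gradients

# Serial concatenation: one combined feature vector per image,
# which would then be fed to an SVM-RBF classifier.
combined = np.concatenate([deep_features, glcm, lbp, phog], axis=1)
```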
Journal Article
Evaluation of Deep Neural Networks for Semantic Segmentation of Prostate in T2W MRI
by Khan, Zia; Alsaih, Khaled; Meriaudeau, Fabrice
in CNNs, encoder–decoder, Engineering Sciences
2020
In this paper, we present an evaluation of four encoder–decoder CNNs for segmentation of the prostate gland in T2W magnetic resonance imaging (MRI) images. The four selected CNNs are FCN, SegNet, U-Net, and DeepLabV3+, which were originally proposed for the segmentation of road scenes, biomedical images, and natural images. Segmentation of the prostate in T2W MRI images is an important step in the automatic diagnosis of prostate cancer, enabling better lesion detection and staging. Therefore, many research efforts have been conducted to improve the segmentation of the prostate gland in MRI images. The main challenges of prostate gland segmentation are the blurry prostate boundary and variability in prostate anatomical structure. In this work, we investigated the performance of encoder–decoder CNNs for segmentation of the prostate gland in T2W MRI. Image pre-processing techniques, including image resizing, center-cropping, and intensity normalization, are applied to address the issues of inter-patient and inter-scanner variability as well as the issue of background pixels dominating prostate pixels. In addition, to enrich the network with more data, increase data variation, and improve accuracy, patch extraction and data augmentation are applied prior to training the networks. Furthermore, class weight balancing is used to avoid biased networks, since the number of background pixels is much higher than the number of prostate pixels. The class imbalance problem is addressed by using a weighted cross-entropy loss function during training of the CNN model. The performance of the CNNs is evaluated in terms of the Dice similarity coefficient (DSC), and our experimental results show that patch-wise DeepLabV3+ gives the best performance with a DSC of 92.8%. This is the highest DSC score among the FCN, SegNet, and U-Net, and it also exceeds the recently published state-of-the-art method of prostate segmentation.
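The class-imbalance remedy named above, a weighted cross-entropy loss, can be sketched as follows; the probabilities and weights are toy values, and this is not the authors' implementation.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Weighted cross-entropy over pixels (illustrative).

    probs: (N, C) predicted class probabilities per pixel
    labels: (N,) integer ground-truth class per pixel
    class_weights: (C,) per-class weight (higher for rare classes)
    """
    eps = 1e-12
    picked = probs[np.arange(len(labels)), labels]  # prob of the true class
    weights = class_weights[labels]                 # weight of each pixel
    return -(weights * np.log(picked + eps)).sum() / weights.sum()

# Two classes: background (frequent) vs prostate (rare); toy pixels
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7]])
labels = np.array([0, 0, 1])
w = np.array([0.5, 2.0])  # inverse-frequency-style weights (illustrative)
loss = weighted_cross_entropy(probs, labels, w)
```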
Journal Article
Early yield prediction in different grapevine varieties using computer vision and machine learning
by Palacios, Fernando; Melo-Pinto, Pedro; Diago, Maria P.
in Algorithms, Berries, Computer vision
2023
Yield assessment is a highly relevant task for the wine industry. The goal of this work was to develop a new algorithm for early yield prediction in different grapevine varieties using computer vision and machine learning. Vines from six grapevine (Vitis vinifera L.) varieties were photographed using a mobile platform in a commercial vineyard at the pea-size berry stage. A SegNet architecture was employed to detect the visible berries and canopy features. All features were used to train support vector regression (SVR) models for predicting the number of actual berries and yield. In the berry-detection step, an average F1-score of 0.72 and coefficients of determination (R²) above 0.92 were achieved for all varieties between the number of estimated and the number of actual visible berries. The method yielded average values of root mean squared error (RMSE) of 195 berries, normalized RMSE (NRMSE) of 23.83%, and R² of 0.79 between the number of estimated and the number of actual berries per vine using leave-one-out cross validation. In terms of yield forecast, the correlation between the actual yield and its estimated value yielded R² between 0.54 and 0.87 among varieties and NRMSE between 16.47% and 39.17%, while the global model (including all varieties) had an R² of 0.83 and an NRMSE of 29.77%. The number of actual berries and yield per vine can be predicted up to 60 days prior to harvest in several grapevine varieties using the new algorithm.
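The error measures quoted above (RMSE, NRMSE, R²) can be computed as in this minimal sketch; the berry counts are invented for illustration, and NRMSE is normalized here by the mean of the actual values, one common convention (the paper does not state which it uses).

```python
import numpy as np

def regression_scores(actual, predicted):
    """RMSE, NRMSE (as % of the mean of `actual`), and R^2 (illustrative)."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    nrmse = 100 * rmse / actual.mean()
    ss_res = np.sum((actual - predicted) ** 2)       # residual sum of squares
    ss_tot = np.sum((actual - actual.mean()) ** 2)   # total sum of squares
    r2 = 1 - ss_res / ss_tot
    return rmse, nrmse, r2

# Toy per-vine berry counts (actual vs. model estimate)
actual = [800, 950, 700, 1200, 1000]
predicted = [780, 990, 650, 1150, 1080]
rmse, nrmse, r2 = regression_scores(actual, predicted)
```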
Journal Article
Convolutional neural network for automated mass segmentation in mammography
by Bi, Jinbo; Abdelhafiz, Dina; Nabavi, Sheida
in Algorithms, Artificial neural networks, Automation
2020
Background
Automatic segmentation and localization of lesions in mammogram (MG) images are challenging even with advanced methods such as deep learning (DL). We developed a new model based on the architecture of the semantic segmentation U-Net model to precisely segment mass lesions in MG images. The proposed end-to-end convolutional neural network (CNN) based model extracts contextual information by combining low-level and high-level features. We trained the proposed model using large publicly available databases (CBIS-DDSM, BCDR-01, and INbreast) and a private database from the University of Connecticut Health Center (UCHC).
Results
We compared the performance of the proposed model with those of the state-of-the-art DL models including the fully convolutional network (FCN), SegNet, Dilated-Net, original U-Net, and Faster R-CNN models and the conventional region growing (RG) method. The proposed Vanilla U-Net model outperforms the Faster R-CNN model significantly in terms of the runtime and the Intersection over Union metric (IOU). Training with digitized film-based and fully digitized MG images, the proposed Vanilla U-Net model achieves a mean test accuracy of 92.6%. The proposed model achieves a mean Dice coefficient index (DI) of 0.951 and a mean IOU of 0.909 that show how close the output segments are to the corresponding lesions in the ground truth maps. Data augmentation has been very effective in our experiments resulting in an increase in the mean DI and the mean IOU from 0.922 to 0.951 and 0.856 to 0.909, respectively.
Conclusions
The proposed Vanilla U-Net based model can be used for precise segmentation of masses in MG images. This is because the segmentation process incorporates more multi-scale spatial context and captures more local and global context to predict a precise pixel-wise segmentation map of an input full MG image. These detected maps can help radiologists differentiate benign and malignant lesions depending on the lesion shapes. We show that using transfer learning, introducing augmentation, and modifying the architecture of the original model results in better performance in terms of the mean accuracy, the mean DI, and the mean IOU in detecting mass lesions compared to the other DL and conventional models.
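The abstract credits data augmentation with a sizeable gain in the mean DI and IOU. A minimal sketch of simple geometric augmentation, applied identically to an image patch and its lesion mask, might look like the following; the transforms are illustrative only, as the paper does not specify its augmentation set.

```python
import numpy as np

def augment(image, mask):
    """Flip/rotation augmentation applied identically to an image patch
    and its segmentation mask (illustrative)."""
    out = [(image, mask)]
    out.append((np.fliplr(image), np.fliplr(mask)))  # horizontal flip
    out.append((np.flipud(image), np.flipud(mask)))  # vertical flip
    for k in (1, 2, 3):                              # 90-degree rotations
        out.append((np.rot90(image, k), np.rot90(mask, k)))
    return out

# Toy 4x4 patch standing in for a mammogram crop, with a binary mask
img = np.arange(16).reshape(4, 4)
mask = (img > 10).astype(np.uint8)
pairs = augment(img, mask)
```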
Journal Article
OD-XAI: Explainable AI-Based Semantic Object Detection for Autonomous Vehicles
by Hong, Wei-Chiang; Jadav, Dhairya; Sharma, Ravi
in autonomous vehicles, Decision making, explainable AI
2022
In recent years, artificial intelligence (AI) has become one of the most prominent fields in autonomous vehicles (AVs). With the help of AI, the stress levels of drivers have been reduced, as most of the work is executed by the AV itself. As models grow more complex, explainable artificial intelligence (XAI) techniques work as handy tools that allow non-experts and developers alike to understand the intricate workings of deep learning models. These techniques can be applied alongside AI models to increase their interpretability. One essential task of AVs is to be able to follow the road. This paper attempts to justify how AVs can detect and segment the road on which they are moving using deep learning (DL) models. We trained and compared three semantic segmentation architectures for the task of pixel-wise road detection. Max IoU scores of 0.9459 and 0.9621 were obtained on the train and test sets. Such DL algorithms are called "black box models" as they are hard to interpret due to their highly complex structures. Integrating XAI enables us to interpret and comprehend the predictions of these abstract models. We applied various XAI methods and generated explanations for the proposed segmentation model for road detection in AVs.
Journal Article
SegNet-Based Extraction of Wetland Vegetation Information from UAV Images
2020
This study takes Guangxi Huixian National Wetland Park as the research area and uses UAV imagery and ground-measured label data as the data sources. A SegNet model is used to extract wetland vegetation information in the study area, and the classification precision of two approaches to extracting karst wetland vegetation information is verified: a single multi-class SegNet model, and a fusion of multiple single/double-class SegNet models. The experimental results show that the Kappa coefficient of the fused single/double-class SegNet models is 0.68, while the multi-class SegNet model achieves 0.59. The fused single/double-class SegNet models therefore extract karst wetland vegetation information with higher precision than the multi-class model.
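The Kappa coefficients quoted above measure classification agreement beyond chance between classified and reference pixels. A minimal sketch of Cohen's kappa computed from a confusion matrix (toy counts, not the study's data):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    confusion = np.asarray(confusion, float)
    n = confusion.sum()
    observed = np.trace(confusion) / n  # observed agreement
    # Chance agreement from the row/column marginals
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
    return (observed - expected) / (1 - expected)

# Toy 2-class confusion matrix (vegetation vs. non-vegetation pixels)
conf = [[40, 10],
        [5, 45]]
kappa = cohens_kappa(conf)
```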
Journal Article
A Novel Bio-Inspired Deep Learning Approach for Liver Cancer Diagnosis
Current research on computer-aided diagnosis (CAD) of liver cancer is based on traditional feature engineering methods, which have several drawbacks including redundant features and high computational cost. Recent deep learning models overcome these problems by implicitly capturing intricate structures from large-scale medical image data. However, they are still affected by network hyperparameters and topology. Hence, the state of the art in this area can be further optimized by integrating bio-inspired concepts into deep learning models. This work proposes a novel bio-inspired deep learning approach for optimizing predictive results of liver cancer. This approach contributes to the literature in two ways. Firstly, a novel hybrid segmentation algorithm is proposed to extract liver lesions from computed tomography (CT) images using SegNet network, UNet network, and artificial bee colony optimization (ABC), namely, SegNet-UNet-ABC. This algorithm uses the SegNet for separating liver from the abdominal CT scan, then the UNet is used to extract lesions from the liver. In parallel, the ABC algorithm is hybridized with each network to tune its hyperparameters, as they highly affect the segmentation performance. Secondly, a hybrid algorithm of the LeNet-5 model and ABC algorithm, namely, LeNet-5/ABC, is proposed as feature extractor and classifier of liver lesions. The LeNet-5/ABC algorithm uses the ABC to select the optimal topology for constructing the LeNet-5 network, as network structure affects learning time and classification accuracy. For assessing performance of the two proposed algorithms, comparisons have been made to the state-of-the-art algorithms on liver lesion segmentation and classification. The results reveal that the SegNet-UNet-ABC is superior to other compared algorithms regarding Jaccard index, Dice index, correlation coefficient, and convergence time. Moreover, the LeNet-5/ABC algorithm outperforms other algorithms regarding specificity, F1-score, accuracy, and computational time.
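The abstract pairs each network with artificial bee colony (ABC) optimization for hyperparameter tuning. A minimal ABC sketch over a box-bounded search space is shown below; it omits the onlooker phase for brevity, uses a toy quadratic objective in place of segmentation error, and is not the authors' implementation.

```python
import numpy as np

def abc_minimize(objective, bounds, n_bees=10, n_iter=50, limit=10, seed=0):
    """Simplified artificial bee colony minimizer (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    dim = len(lo)
    food = rng.uniform(lo, hi, (n_bees, dim))  # candidate solutions
    fitness = np.array([objective(x) for x in food])
    trials = np.zeros(n_bees, int)             # stagnation counters
    for _ in range(n_iter):
        # Employed-bee phase: perturb each source toward a random partner
        for i in range(n_bees):
            k = rng.integers(n_bees - 1)
            k += k >= i                        # ensure partner != i
            j = rng.integers(dim)
            cand = food[i].copy()
            cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
            cand = np.clip(cand, lo, hi)
            f = objective(cand)
            if f < fitness[i]:                 # greedy replacement
                food[i], fitness[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
        # Scout phase: abandon sources that stopped improving
        for i in np.where(trials > limit)[0]:
            food[i] = rng.uniform(lo, hi)
            fitness[i] = objective(food[i])
            trials[i] = 0
    best = fitness.argmin()
    return food[best], fitness[best]

# Toy objective standing in for segmentation error vs. two hyperparameters
best_x, best_f = abc_minimize(lambda x: ((x - 0.3) ** 2).sum(),
                              bounds=[(0, 1), (0, 1)])
```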
Journal Article
Deep Learning Measurement Model to Segment the Nuchal Translucency Region for the Early Identification of Down Syndrome
by Thomas, Mary Christeena; Arjunan, Sridhar P.
in Artificial neural networks, convolution, Deep learning
2022
Down syndrome (DS), or Trisomy 21, is a genetic disorder that causes intellectual and mental disability in fetuses. The most essential marker for detecting DS during the first trimester of pregnancy is nuchal translucency (NT). Effective segmentation of the NT contour from ultrasound (US) images is challenging due to the presence of speckle noise and weak edges. This study presents a Convolutional Neural Network (CNN) based SegNet model using a Visual Geometry Group (VGG-16) backbone for semantically segmenting the NT region from US fetal images, providing a fast and affordable diagnosis during the early stages of gestation. A transfer learning approach using AlexNet is implemented to train on the NT segmented regions for the identification of DS. The proposed model achieved a Jaccard index of 0.96, a classification accuracy of 91.7%, a sensitivity of 85.7%, and a receiver operating characteristic (ROC) value of 0.95.
Journal Article
Autosegmentation of Prostate Zones and Cancer Regions from Biparametric Magnetic Resonance Images by Using Deep-Learning-Based Neural Networks
2021
The accuracy in diagnosing prostate cancer (PCa) has increased with the development of multiparametric magnetic resonance imaging (mpMRI). Biparametric magnetic resonance imaging (bpMRI) was found to have a diagnostic accuracy comparable to mpMRI in detecting PCa. However, prostate MRI assessment relies on human experts and specialized training, with considerable inter-reader variability. Deep learning may be a more robust approach for prostate MRI assessment. Here we present a method for autosegmenting the prostate zones and cancer region by using SegNet, a deep convolutional neural network (DCNN) model. We used the PROSTATEx dataset to train the model and combined different sequences into the three channels of a single image. For each subject, all slices that contained the transition zone (TZ), peripheral zone (PZ), and PCa region were selected. The datasets were produced using different combinations of images, including T2-weighted (T2W) images, diffusion-weighted images (DWI), and apparent diffusion coefficient (ADC) images. Among these groups, the T2W + DWI + ADC images exhibited the best performance with a Dice similarity coefficient of 90.45% for the TZ, 70.04% for the PZ, and 52.73% for the PCa region. Image sequence analysis with a DCNN model has the potential to assist PCa diagnosis.
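The channel-combination step described above, stacking T2W, DWI, and ADC slices into one three-channel image, can be sketched as follows; the min-max normalization and array sizes are illustrative assumptions, not the paper's exact pre-processing.

```python
import numpy as np

def stack_sequences(t2w, dwi, adc):
    """Combine three co-registered MRI sequences into the channels of a
    single image, normalizing each to [0, 1] first (illustrative)."""
    def norm(x):
        x = x.astype(float)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    return np.stack([norm(t2w), norm(dwi), norm(adc)], axis=-1)

# Toy 8x8 slices standing in for co-registered T2W, DWI and ADC images
rng = np.random.default_rng(42)
t2w, dwi, adc = (rng.integers(0, 4096, (8, 8)) for _ in range(3))
image = stack_sequences(t2w, dwi, adc)
```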
Journal Article