251 result(s) for "MobileNetV2"
COVID-19 Detection Using Deep Learning Algorithm on Chest X-ray Images
COVID-19, regarded as the deadliest virus of the 21st century, has claimed the lives of millions of people around the globe in less than two years. Since the virus initially affects the lungs of patients, chest X-ray imaging is helpful for effective diagnosis. Any method for automatic, reliable, and accurate screening of COVID-19 infection would be beneficial for rapid detection and for reducing the exposure of healthcare professionals to the virus. In the past, Convolutional Neural Networks (CNNs) have proved quite successful in the classification of medical images. In this study, an automatic deep learning classification method for detecting COVID-19 from chest X-ray images is proposed using a CNN. A dataset consisting of 3616 COVID-19 chest X-ray images and 10,192 healthy chest X-ray images was used. The original data were augmented to increase the sample to 26,000 COVID-19 and 26,000 healthy X-ray images. The dataset was enhanced using histogram equalization and spectrum, grayscale, and cyan transformations, and normalized with NCLAHE before being fed to the CNN models. Initially, the symptoms of COVID-19 were detected by employing eleven existing CNN models: VGG16, VGG19, MobileNetV2, InceptionV3, NFNet, ResNet50, ResNet101, DenseNet, EfficientNetB7, AlexNet, and GoogLeNet. From these, MobileNetV2 was selected for further modification to obtain higher COVID-19 detection accuracy. Performance of the models was evaluated using a confusion matrix. The modified MobileNetV2 model proposed in the study gave the highest accuracy, 98%, in classifying COVID-19 and healthy chest X-rays among all the implemented CNN models. The second-best performance was achieved by the pre-trained MobileNetV2 with an accuracy of 97%, followed by VGG19 and ResNet101 at 95% each. The study also compares the compilation times of the models; the proposed model required the least, at 2 h, 50 min and 21 s. Finally, the Wilcoxon signed-rank test was performed to test statistical significance. The results suggest that the proposed method can identify the symptoms of infection from chest X-ray images more efficiently than existing methods.
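The accuracy figures above are derived from a confusion matrix. A minimal sketch of that derivation, with made-up counts rather than the paper's actual numbers:

```python
# Deriving accuracy, precision and recall for the positive (COVID-19)
# class from a 2x2 confusion matrix. Counts are illustrative only.

def metrics_from_confusion(tp, fp, fn, tn):
    """Return accuracy, precision and recall for the positive class."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# hypothetical test-set counts for a 98%-accurate binary classifier
acc, prec, rec = metrics_from_confusion(tp=980, fp=20, fn=20, tn=980)
```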
Fruit Image Classification Model Based on MobileNetV2 with Deep Transfer Learning Technique
Due to the rapid emergence and evolution of AI applications, the use of smart imaging devices has increased significantly. Researchers have started using deep learning models, such as CNNs, for image classification. Unlike traditional models, which require many handcrafted features to perform well, a CNN extracts the required features from images automatically through its numerous filters. One of the issues in the horticulture industry is fruit classification, which normally requires an experienced expert; an automated system that can classify different types of fruit without human effort would overcome this. In this study, a dataset of 26,149 images of 40 different types of fruit was used for experimentation. The training and test sets were randomly recreated and split in a 3:1 ratio. The experiment introduces a customized head of five different layers into the MobileNetV2 architecture: the classification layer of the MobileNetV2 model is replaced by the customized head, producing a modified version of MobileNetV2 called TL-MobileNetV2. In addition, transfer learning is used to retain the pre-trained weights. TL-MobileNetV2 achieves an accuracy of 99%, which is 3% higher than MobileNetV2, and its equal error rate is just 1%. Compared to AlexNet, VGG16, InceptionV3, and ResNet, the accuracy is better by 8%, 11%, 6%, and 10%, respectively. Furthermore, the TL-MobileNetV2 model obtained 99% precision, 99% recall, and a 99% F1-score. It can be concluded that transfer learning plays a big part in achieving better results, and that the dropout technique helps to reduce overfitting in transfer learning.
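The randomly recreated 3:1 split described above can be sketched as follows; the seed and the use of index lists (rather than image files) are assumptions for illustration:

```python
import random

def split_3_to_1(items, seed=42):
    """Shuffle the items and split them 3:1 into train and test sets."""
    items = list(items)
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    rng.shuffle(items)
    cut = (3 * len(items)) // 4  # 75% of the data goes to training
    return items[:cut], items[cut:]

# 26,149 image indices, mirroring the dataset size in the abstract
train, test = split_3_to_1(range(26149))
```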
Attention 3D U-Net with Multiple Skip Connections for Segmentation of Brain Tumor Images
2D medical image segmentation models are popular among researchers using both traditional and newer machine learning and deep learning techniques. 3D volumetric data have also recently become more accessible, owing to the many studies conducted in recent years on creating 3D volumes. Using these 3D data, researchers have begun building 3D segmentation models for tasks such as brain tumor segmentation and classification. Since more crucial features can be extracted from 3D data than from 2D data, 3D brain tumor detection models have grown in popularity. To date, significant research has focused on 3D versions of popular architectures such as 3D U-Net and V-Net. In this study, we used 3D brain image data and created a new architecture based on the 3D U-Net that uses multiple skip connections with cost-efficient pretrained 3D MobileNetV2 blocks and attention modules. The pretrained MobileNetV2 blocks keep the parameter count small, maintaining an operable model size for our computational capability and helping the model converge faster. We added additional skip connections between the encoder and decoder blocks to ease the exchange of extracted features between them, making maximum use of the features. We also used attention modules to filter out irrelevant features coming through the skip connections and thus preserved more computational power while achieving improved accuracy.
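The idea of filtering skip-connection features with attention can be sketched in a simplified, framework-free form: encoder features are scaled by a sigmoid gate computed from the decoder signal, so irrelevant responses are suppressed before being passed on. The shapes, identity projection weights, and additive-attention form here are illustrative assumptions, not the paper's exact module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(skip, gating, w_skip, w_gate):
    """skip, gating: (channels, voxels); w_*: (channels, channels)."""
    # additive attention: project both inputs, combine, squash to (0, 1)
    alpha = sigmoid(w_skip @ skip + w_gate @ gating)
    return alpha * skip  # gated features sent through the skip path

rng = np.random.default_rng(0)
skip = rng.standard_normal((4, 10))   # toy encoder features
gate = rng.standard_normal((4, 10))   # toy decoder gating signal
out = attention_gate(skip, gate, np.eye(4), np.eye(4))
```

Because the gate lies in (0, 1), every feature is attenuated rather than amplified, which is the filtering behaviour the abstract describes.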
Classification of tea leaf disease using convolutional neural network approach
Leaf diseases on tea plants affect the quality of tea. This issue must be overcome, since preparing tea drinks requires high-quality tea leaves. Various automatic models for identifying disease in tea leaves have been developed; however, their performance is typically low, since the extracted features are not selective enough. This work presents a classification model for tea leaf disease that distinguishes six leaf classes: algal spot, brown blight, grey blight, helopeltis, red spot, and healthy. Deep learning with a convolutional neural network (CNN) is used to build an effective model for detecting tea leaf disease. The Kaggle public dataset contains 5,980 tea leaf images on a white background. Pre-processing was performed to reduce computing time, which involved shrinking and normalizing the images prior to augmentation. Augmentation techniques included rotation, shear, horizontal flip, and vertical flip. The CNN model classified tea leaf disease using the MobileNetV2 backbone, the Adam optimizer, and the rectified linear unit (ReLU) activation function with 224×224 input data. The proposed model attained the highest performance, with an accuracy of 0.9455.
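The rotation-and-flip part of the augmentation step can be sketched with plain array operations; shear is omitted for brevity, and the tiny 4×4 "image" stands in for a normalized 224×224 input:

```python
import numpy as np

def augment(image):
    """Return rotation and flip variants of a single image array."""
    return [
        np.rot90(image, k=1),   # 90-degree rotation
        np.rot90(image, k=2),   # 180-degree rotation
        np.fliplr(image),       # horizontal flip
        np.flipud(image),       # vertical flip
    ]

# a toy image, already shrunk and normalized to [0, 1] as in the paper
img = np.arange(16, dtype=float).reshape(4, 4) / 15.0
variants = augment(img)
```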
Dimensionality Reduction and Classification of Dermatological Images using PCA and Machine Learning
Skin diseases pose serious diagnostic challenges, since they appear highly similar across classes and vary in pattern across skin tones, particularly in Indian subjects. The current work proposes a mixed strategy using transfer learning-based feature extraction, dimensionality reduction, and traditional machine learning classification to detect skin diseases effectively. In an experiment conducted on a database of 9478 images spanning five dermatological classes, features were extracted from a pre-trained MobileNetV2 network. Principal Component Analysis (PCA) was used to reduce feature dimensionality, facilitating effective visualization (3D PCA plots) and computational performance. Support Vector Machine (SVM) classifiers trained on the PCA-reduced features were highly accurate, with clear class separability illustrated by confusion matrices and performance metrics. The suggested framework highlights the promise of explainable PCA-based pipelines for skin disease analysis and presents a scalable solution for dermatological AI systems in resource-limited clinical environments.
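The PCA reduction step can be sketched with a plain SVD; the feature dimensions here (200 samples × 50 features reduced to 3 components, matching the 3D plots mentioned above) are illustrative, not the MobileNetV2 feature size:

```python
import numpy as np

def pca_reduce(features, n_components=3):
    """Project centred features onto their top principal components."""
    centred = features - features.mean(axis=0)
    # SVD of the centred data matrix: rows of vt are principal directions,
    # ordered by decreasing singular value (i.e. explained variance)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T

rng = np.random.default_rng(1)
feats = rng.standard_normal((200, 50))   # stand-in for extracted features
reduced = pca_reduce(feats, n_components=3)
```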
A study on expression recognition based on improved mobilenetV2 network
This paper proposes an improved version of the MobileNetV2 neural network (I-MobileNetV2) in response to the large parameter counts of existing deep convolutional neural networks and the shortcomings of the lightweight MobileNetV2 in facial emotion recognition tasks, such as easy loss of feature information, poor real-time performance, and low accuracy. The network inherits MobileNetV2's depthwise separable convolutions, reducing computational load while maintaining a lightweight profile. It utilizes a reverse fusion mechanism to retain negative features, making information less likely to be lost. The SELU activation function replaces the ReLU6 activation function to avoid vanishing gradients. Meanwhile, to improve feature recognition capability, a channel attention mechanism (Squeeze-and-Excitation Networks, SE-Net) is integrated into the MobileNetV2 network. Experiments conducted on the facial expression datasets FER2013 and CK+ showed that the proposed network model achieved facial expression recognition accuracies of 68.62% and 95.96%, improving upon the MobileNetV2 model by 0.72% and 6.14% respectively, while the parameter count decreased by 83.8%. These results empirically verify the effectiveness of the improvements made to the network model.
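The activation swap above can be made concrete: ReLU6 zeroes every negative input, while SELU keeps a scaled exponential response for them, so negative feature information is not discarded outright. A minimal scalar sketch of both functions:

```python
import math

# SELU constants as defined by Klambauer et al. (self-normalizing nets)
SELU_ALPHA = 1.6732632423543772
SELU_SCALE = 1.0507009873554805

def relu6(x):
    """ReLU6: clamp to [0, 6]; all negatives collapse to zero."""
    return min(max(x, 0.0), 6.0)

def selu(x):
    """SELU: linear for positives, scaled exponential for negatives."""
    if x > 0:
        return SELU_SCALE * x
    return SELU_SCALE * SELU_ALPHA * (math.exp(x) - 1.0)
```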
Investigating the Impact of Train / Test Split Ratio on the Performance of Pre-Trained Models with Custom Datasets
The proper allocation of data between training and testing is a critical factor influencing the performance of deep learning models, especially those built upon pre-trained architectures. Choosing a suitable training set size is important for a classification model's generalization performance. The main goal of this study is to find the appropriate training set size for three pre-trained networks using different custom datasets. To this end, the study explores the effect of varying the train/test split ratio on the performance of three popular pre-trained models, namely MobileNetV2, ResNet50V2, and VGG19, with a focus on the image classification task. Three balanced datasets never seen by the models were used, each containing 1000 images divided into two classes. The train/test split ratios studied are 60-40, 70-30, 80-20, and 90-10. The focus was on the critical metrics of sensitivity, specificity, and overall accuracy to evaluate the classifiers under the different ratios. Experimental results show that the performance of the classifiers is affected by varying the train/test split ratio for all three custom datasets. Moreover, for the three pre-trained models, using more than 70% of the dataset images for training yields better performance.
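For a 1000-image dataset, the four ratios studied translate directly into image counts; a small sketch of that experimental grid:

```python
def split_counts(n_images, train_pct):
    """Number of training and test images for a given split percentage."""
    n_train = n_images * train_pct // 100
    return n_train, n_images - n_train

# the four train/test ratios evaluated in the study, on 1000 images
grid = {f"{p}-{100 - p}": split_counts(1000, p) for p in (60, 70, 80, 90)}
```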
Skin Cancer Disease Detection Using Transfer Learning Technique
Melanoma is a fatal type of skin cancer; its rapid spread results in a high fatality rate when the malignancy is not treated at an early stage. Patients' lives can be saved by accurately detecting skin cancer early, and a quick, precise diagnosis can increase the survival rate. This necessitates the development of a computer-assisted diagnostic support system. This research proposes a novel deep transfer learning model for melanoma classification using MobileNetV2, a deep convolutional neural network that classifies sample skin lesions as malignant or benign. The performance of the proposed deep learning model is evaluated on the ISIC 2020 dataset. The dataset contains fewer than 2% malignant samples, raising a class imbalance problem. Various data augmentation techniques were applied to tackle the class imbalance and add diversity to the dataset. The experimental results demonstrate that the proposed deep learning technique outperforms state-of-the-art deep learning techniques in terms of accuracy and computational cost.
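The class-imbalance arithmetic behind the augmentation step can be sketched as follows; the counts are illustrative round numbers for a 2% minority class, not the exact ISIC 2020 figures:

```python
def copies_needed(n_minority, n_majority):
    """Augmented variants per minority image to balance the classes."""
    return -(-n_majority // n_minority)  # ceiling division

# hypothetical: 200 malignant vs 9800 benign images (2% minority)
factor = copies_needed(n_minority=200, n_majority=9800)
```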
A Lightweight CNN Based on Transfer Learning for COVID-19 Diagnosis
The key to preventing COVID-19 is to diagnose patients quickly and accurately. Studies have shown that using Convolutional Neural Networks (CNNs) to analyze chest Computed Tomography (CT) images is helpful for timely COVID-19 diagnosis. However, owing to personal privacy concerns, public chest CT datasets are relatively scarce, which has limited the application of CNNs to COVID-19 diagnosis. Also, many CNNs have complex structures and massive parameter counts; even when equipped with a dedicated Graphics Processing Unit (GPU) for acceleration, they still take a long time to run, which is not conducive to widespread application. To solve the above problems, this paper proposes a lightweight CNN classification model based on transfer learning. The lightweight MobileNetV2 is used as the model's backbone to cope with limited hardware resources and computing power. To alleviate overfitting caused by the insufficient dataset, transfer learning is used to train the model: the study first exploits the weight parameters trained on the ImageNet database to initialize the MobileNetV2 network, and then retrains the model on the CT image dataset provided by Kaggle. Experimental results on a computer equipped only with a Central Processing Unit (CPU) show that it takes only 1.06 s on average to diagnose a chest CT image. Compared to other lightweight models, the proposed model has higher classification accuracy and reliability while having a lightweight architecture and few parameters, so it can easily be run on computers without GPU acceleration. Code: github.com/ZhouJie-520/paper-codes.
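The recipe above — initialize from pretrained weights, then retrain on the new dataset — can be shown in a framework-free miniature: a fixed "backbone" (standing in for the ImageNet-initialized layers) maps inputs to features, and only a small linear head is trained on the new task. All shapes, data, and the learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
backbone = rng.standard_normal((8, 4))         # frozen "pretrained" weights
head = np.zeros(8)                             # trainable classifier head

x = rng.standard_normal((32, 4))               # new-task inputs
y = rng.integers(0, 2, size=32).astype(float)  # binary labels

backbone_before = backbone.copy()
for _ in range(100):
    feats = np.maximum(x @ backbone.T, 0.0)    # backbone forward pass (ReLU)
    logits = feats @ head
    probs = 1.0 / (1.0 + np.exp(-logits))      # sigmoid output
    grad = feats.T @ (probs - y) / len(y)      # gradient w.r.t. head only
    head -= 0.1 * grad                         # backbone is never updated
```

Freezing the backbone is what keeps retraining cheap on small datasets; the real model retrains MobileNetV2 layers as well, which this miniature deliberately omits.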
Detection of Oil Spill in SAR Image Using an Improved DeepLabV3
Oil spill SAR images are characterized by high noise, low contrast, and irregular boundaries, which lead current methods to overfit and to capture the detailed features of the oil spill region insufficiently. An improved DeepLabV3+ model is proposed to address these problems. First, the original backbone network, Xception, is replaced by the lightweight MobileNetV2, which significantly improves the generalization ability of the model while drastically reducing the number of model parameters, effectively addressing the overfitting problem. Further, the spatial and channel Squeeze and Excitation module (scSE) is introduced and a joint BCE + Dice loss function is adopted to enhance the model's sensitivity to the detailed parts of the oil spill area, effectively solving the problem of insufficient capture of detailed features. The experimental results show that the mIoU and F1-score of the improved model on an oil spill region in the Gulf of Mexico reach 80.26% and 88.66%, respectively; on an oil spill region in the Persian Gulf, they reach 81.34% and 89.62%, respectively, both better than the metrics of the control model.
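The reported mIoU and F1-score can be sketched for a binary (oil / background) segmentation; the tiny masks below are illustrative, not real SAR outputs:

```python
import numpy as np

def miou_and_f1(pred, truth):
    """Mean IoU over both classes and F1-score of the positive class."""
    ious = []
    for cls in (0, 1):
        inter = np.sum((pred == cls) & (truth == cls))
        union = np.sum((pred == cls) | (truth == cls))
        ious.append(inter / union)
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    f1 = 2 * tp / (2 * tp + fp + fn)
    return sum(ious) / len(ious), f1

truth = np.array([[1, 1, 0], [0, 1, 0]])  # toy ground-truth oil mask
pred  = np.array([[1, 0, 0], [0, 1, 0]])  # toy predicted mask
miou, f1 = miou_and_f1(pred, truth)
```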