52 result(s) for "DenseNet201"
LDDNet: A Deep Learning Framework for the Diagnosis of Infectious Lung Diseases
This paper proposes a new deep learning (DL) framework for the analysis of lung diseases, including COVID-19 and pneumonia, from chest CT scans and X-ray (CXR) images. The framework, termed optimized DenseNet201 for lung diseases (LDDNet), was developed by adding 2D global average pooling, dense, dropout, and batch normalization layers to the base DenseNet201 model. The added dense layers comprise 1024 ReLU-activated units and 256 sigmoid-activated units. The hyper-parameters of the model, including the learning rate, batch size, number of epochs, and dropout rate, were tuned. Three lung-disease datasets were then assembled from separate open-access sources: a CT scan dataset containing 1043 images, and two X-ray datasets comprising images of COVID-19-affected, pneumonia-affected, and healthy lungs, one imbalanced with 5935 images and one balanced with 5002 images. The performance of each model was analyzed using the Adam, Nadam, and SGD optimizers; the best results for both the CT scan and CXR datasets were obtained with Nadam. For the CT scan images, LDDNet achieved a COVID-19-positive classification accuracy of 99.36%, 100% precision, 98% recall, and a 99% F1 score. For the imbalanced X-ray dataset of 5935 images, LDDNet achieved 99.55% accuracy, 100% precision, 73% recall, and an 85% F1 score in detecting COVID-19-affected patients with the Nadam optimizer. For the balanced X-ray dataset, LDDNet achieved a 97.07% classification accuracy. For a given set of parameters, LDDNet outperforms the existing ResNet152V2 and XceptionNet models.
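Based on the layer description in the abstract, a minimal Keras sketch of how such a classification head might be attached to DenseNet201 (layer order, dropout rate, input resolution, and the three-class output are assumptions, not the authors' exact configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pre-trained DenseNet201 backbone without its ImageNet classifier head.
base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))

x = layers.GlobalAveragePooling2D()(base.output)   # 2D global average pooling
x = layers.BatchNormalization()(x)                 # batch normalization
x = layers.Dense(1024, activation="relu")(x)       # 1024 ReLU-activated units
x = layers.Dropout(0.5)(x)                         # dropout rate is tunable
x = layers.Dense(256, activation="sigmoid")(x)     # 256 sigmoid-activated units
out = layers.Dense(3, activation="softmax")(x)     # e.g. COVID-19 / pneumonia / healthy

model = models.Model(base.input, out)
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```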
IoMT-Enabled Computer-Aided Diagnosis of Pulmonary Embolism from Computed Tomography Scans Using Deep Learning
The Internet of Medical Things (IoMT) has revolutionized Ambient Assisted Living (AAL) by interconnecting smart medical devices. These devices generate large amounts of data without human intervention, and sophisticated learning-based models are required to extract meaningful information from this surge of data. In this context, Deep Neural Networks (DNNs) have proven to be powerful tools for disease detection. Pulmonary Embolism (PE) is considered one of the leading causes of death, with a toll of 180,000 per year in the US alone. It arises when a blood clot in the pulmonary arteries blocks the blood supply to the lungs or a part of the lung. Early diagnosis and treatment of PE could reduce the mortality rate. Doctors and radiologists prefer Computed Tomography (CT) scans as a first-hand diagnostic tool, where a single study contains 200 to 300 images. It is often difficult for a doctor or radiologist to maintain concentration through all of these scans, which can result in misdiagnosis. There is therefore a need for an automatic Computer-Aided Diagnosis (CAD) system to assist doctors and radiologists in decision-making. To develop such a system, this paper proposes a deep learning framework based on DenseNet201 that classifies PE into nine classes in CT scans. DenseNet201 is used as a feature extractor with customized fully connected decision-making layers. The model was trained on the Radiological Society of North America (RSNA) Pulmonary Embolism Detection Challenge (2020) Kaggle dataset and achieved promising results of 88%, 88%, 89%, and 90% in terms of accuracy, sensitivity, specificity, and Area Under the Curve (AUC), respectively.
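A minimal sketch of the feature-extractor pattern described above (freezing the backbone, the sizes of the decision layers, and the input resolution are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet",
    input_shape=(256, 256, 3), pooling="avg")
base.trainable = False  # use DenseNet201 purely as a feature extractor

# Customized fully connected decision-making layers (sizes are illustrative).
x = layers.Dense(512, activation="relu")(base.output)
x = layers.Dropout(0.3)(x)
out = layers.Dense(9, activation="softmax")(x)  # nine PE-related classes

model = models.Model(base.input, out)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
```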
A novel deep transfer learning models for recognition of birds sounds in different environment
Automatic detection of calling bird species is advantageous for monitoring the environment on a broad scale, both temporally and spatially. Numerous investigations have been influenced by feature representations employed in the field of automatic voice recognition. In this study, we investigated deep neural networks on a dataset of 12,061 audio files covering 22 bird species. The methodology deviates from existing approaches by integrating transfer learning. Multiple feature extraction techniques were used to analyze the bird sounds, including the Fourier transform, Mel-spectrogram, and Mel Frequency Cepstral Coefficients (MFCCs). The study's main objective is to develop intelligent systems that can predict the bird species present in the collected audio recordings. The work verifies that deep transfer learning models such as ResNet50, DenseNet201, InceptionV3, Xception, and EfficientNet can effectively extract and recognize the audio signals of different bird species with significant prediction accuracy. The best classification accuracy, 97.43%, was attained by the DenseNet201 and ResNet50 models on the validation set, with DenseNet201 also incurring the lowest validation loss (0.1080). The Xception model performed best on the training data, achieving 100% training accuracy with the lowest training loss (0.0011). Our study thus offers a way to appropriately quantify and test deep learning models for this task.
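A minimal sketch of turning an audio clip into the Mel-spectrogram and MFCC features mentioned above, using librosa (the sampling rate, Mel-band count, and dB conversion are assumed defaults, not the paper's settings):

```python
import numpy as np
import librosa

def audio_features(path, sr=22050, n_mels=128, n_mfcc=40):
    """Load a recording and compute Mel-spectrogram and MFCC features."""
    y, sr = librosa.load(path, sr=sr)

    # Mel-spectrogram in dB, typically rendered as an image for CNN input.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)

    # MFCCs summarize the spectral envelope; often averaged over time.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mel_db, mfcc.mean(axis=1)
```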
Deep learning based classification of facial dermatological disorders
Common to most dermatological diseases are lesions with abnormal patterns and skin color (usually redness). Dermatology is therefore one of the most appropriate areas of medicine for automated image-based diagnosis using pattern recognition techniques, which can provide accurate, objective, and early diagnosis and interventions. Automated techniques also provide diagnosis independent of location and time, and can reduce both the number of patients in dermatology departments and the cost of dermatologist visits. In this work, an automated method is proposed to classify dermatological diseases from color digital photographs. The approach proceeds in two stages. In the first stage, lesions are detected and extracted using a variational level set technique after noise reduction and intensity normalization. In the second stage, lesions are classified using a pre-trained DenseNet201 architecture with an efficient loss function. Five common facial dermatological diseases are considered, as they can also cause anxiety, depression, and even suicide. The main contributions of this work are: (i) a comprehensive survey of state-of-the-art work on classifying dermatological diseases with deep learning; (ii) a new fully automated lesion detection and segmentation method based on level sets; (iii) a new adaptive, hybrid, non-symmetric loss function; (iv) the use of a pre-trained DenseNet201 with the new loss function to classify skin lesions; and (v) a comparative evaluation of ten convolutional networks for skin lesion classification. Experimental results indicate that the proposed approach classifies lesions with high performance (95.24% accuracy).
Highlights:
• A new fully automated segmentation method was proposed to detect and extract skin lesions.
• A new adaptive, hybrid and non-symmetric loss function was proposed.
• A pre-trained DenseNet201 model was used for lesion classification from photographs.
• Comparative performance analysis was performed for 10 CNN-based networks.
• High classification accuracy (95.24%) was obtained with the proposed DenseNet201.
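The paper's exact loss is not reproduced in the abstract; as an illustration of plugging a custom, non-symmetric loss into a pre-trained DenseNet201 classifier, the sketch below uses a class-weighted focal-style cross-entropy (the weighting scheme and gamma are stand-ins, not the authors' formulation):

```python
import tensorflow as tf

def asymmetric_focal_loss(class_weights, gamma=2.0):
    """Weighted focal-style cross-entropy: a stand-in for the paper's
    adaptive, hybrid, non-symmetric loss (not the authors' formulation)."""
    w = tf.constant(class_weights, dtype=tf.float32)

    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        ce = -y_true * tf.math.log(y_pred)          # per-class cross-entropy
        focal = tf.pow(1.0 - y_pred, gamma) * ce    # down-weight easy examples
        return tf.reduce_sum(w * focal, axis=-1)    # asymmetric class weights

    return loss

# Usage with a DenseNet201-based classifier over five disease classes
# (the weights below are purely illustrative):
# model.compile(optimizer="adam",
#               loss=asymmetric_focal_loss([1.0, 2.0, 1.5, 1.0, 3.0]))
```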
Dilated Semantic Segmentation for Breast Ultrasonic Lesion Detection Using Parallel Feature Fusion
Breast cancer is becoming more dangerous by the day, and the death rate in developing countries is rising rapidly. Early detection of breast cancer is therefore critical for lowering the death rate. Several researchers have worked on breast cancer segmentation and classification using various imaging modalities. Ultrasonic imaging is one of the most cost-effective modalities and offers high sensitivity for diagnosis. The proposed study segments ultrasonic breast lesion images using a Dilated Semantic Segmentation Network (Di-CNN) combined with a morphological erosion operation. For feature extraction, the deep neural network DenseNet201 was used with transfer learning. We also propose a 24-layer CNN that uses transfer-learning-based feature extraction to further validate and enrich the features with the target intensity. To classify the nodules, the feature vectors obtained from DenseNet201 and the 24-layer CNN were fused using parallel fusion. The proposed methods were evaluated using 10-fold cross-validation on various vector combinations. The CNN-activated feature vectors and the DenseNet201-activated feature vectors combined with a Support Vector Machine (SVM) classifier achieved accuracies of 90.11% and 98.45%, respectively. The fused feature vector with SVM outperformed the other combinations at 98.9% accuracy. Compared with recent algorithms, the proposed algorithm achieves a better breast cancer diagnosis rate.
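A minimal sketch of the fuse-then-classify pattern described above (the saved feature files and their dimensions are assumptions, and parallel fusion is modeled here as simple column-wise concatenation, which may differ from the paper's fusion rule):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Assume features were exported from the two networks beforehand, e.g.
# DenseNet201 global-average-pooled activations and the custom 24-layer
# CNN's penultimate layer, one row per ultrasound image.
dense_feats = np.load("densenet201_features.npy")  # shape (n_samples, 1920)
cnn_feats = np.load("custom_cnn_features.npy")     # shape (n_samples, d)
labels = np.load("labels.npy")                     # benign / malignant

# Fusion modeled as column-wise concatenation of the two vectors.
fused = np.concatenate([dense_feats, cnn_feats], axis=1)

svm = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(svm, fused, labels, cv=10)  # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.4f}")
```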
Optimizing CNN Architectures for Face Liveness Detection: Performance, Efficiency, and Generalization across Datasets
Face liveness detection is essential for securing biometric authentication systems against spoofing attacks, including printed photos, replay videos, and 3D masks. This study systematically evaluates pre-trained CNN models (DenseNet201, VGG16, InceptionV3, ResNet50, VGG19, MobileNetV2, Xception, and InceptionResNetV2), leveraging transfer learning and fine-tuning to enhance liveness detection performance. The models were trained and tested on the NUAA and Replay-Attack datasets, with cross-dataset generalization validated on SiW-MV2 to assess real-world adaptability. Performance was evaluated using accuracy, precision, recall, FAR, FRR, HTER, and specialized spoof detection metrics (APCER, NPCER, ACER). Fine-tuning significantly improved detection accuracy, with DenseNet201 achieving the highest performance (98.5% on NUAA, 97.71% on Replay-Attack), while MobileNetV2 proved the most efficient model for real-time applications (latency: 15 ms, memory usage: 45 MB, energy consumption: 30 mJ). A statistical significance analysis (paired t-tests, confidence intervals) validated these improvements. Cross-dataset experiments identified DenseNet201 and MobileNetV2 as the most generalizable architectures, with DenseNet201 achieving 86.4% accuracy on Replay-Attack when trained on NUAA, demonstrating robust feature extraction and adaptability. In contrast, ResNet50 showed lower generalization capabilities, struggling with dataset variability and complex spoofing attacks. These findings suggest that MobileNetV2 is well-suited for low-power applications, while DenseNet201 is ideal for high-security environments requiring superior accuracy. This research provides a framework for improving real-time face liveness detection, enhancing biometric security, and guiding future advancements in AI-driven anti-spoofing techniques.
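A minimal sketch of the presentation-attack metrics named above, computed from binary decisions (the label convention here, 1 = spoof/attack and 0 = live, is an assumption):

```python
import numpy as np

def spoof_metrics(y_true, y_pred):
    """APCER, NPCER, ACER, and HTER from binary liveness decisions.
    Convention assumed: 1 = spoof/attack presentation, 0 = bona fide (live)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    attacks = y_true == 1
    bona_fide = y_true == 0

    apcer = np.mean(y_pred[attacks] == 0)    # attacks accepted as live
    npcer = np.mean(y_pred[bona_fide] == 1)  # live users rejected as spoof
    acer = (apcer + npcer) / 2
    # Under this convention FAR = APCER and FRR = NPCER, so HTER equals ACER.
    return {"APCER": apcer, "NPCER": npcer, "ACER": acer, "HTER": acer}

print(spoof_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))
```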
OralNet: Fused Optimal Deep Features Framework for Oral Squamous Cell Carcinoma Detection
Humankind is witnessing a gradual increase in cancer incidence, emphasizing the importance of early diagnosis, treatment, and follow-up clinical protocols. Oral or mouth cancer, categorized under head and neck cancers, requires effective screening for timely detection. This study proposes OralNet, a framework for oral cancer detection using histopathology images. The research encompasses four stages: (i) image collection and preprocessing, gathering and preparing histopathology images for analysis; (ii) feature extraction using deep and handcrafted schemes, extracting relevant features with deep learning techniques and traditional methods; (iii) feature reduction and concatenation, reducing feature dimensionality using the artificial hummingbird algorithm (AHA) and concatenating the features serially; and (iv) binary classification and performance validation, classifying images as healthy or oral squamous cell carcinoma and evaluating the framework with three-fold cross-validation. The study examined whole slide biopsy images at 100× and 400× magnifications. To establish OralNet's validity, 3000 cropped and resized images were reviewed, comprising 1500 healthy and 1500 oral squamous cell carcinoma images. Experimental results show that OralNet achieves an oral cancer detection accuracy exceeding 99.5%. These findings confirm the clinical significance of the proposed technique for detecting oral cancer in histology slides.
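A minimal sketch of stages (iii) and (iv): serial concatenation of deep and handcrafted feature vectors, a feature-reduction step (SelectKBest is a simple stand-in for the AHA optimizer, which is not sketched here), and three-fold cross-validation:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

deep_feats = np.load("deep_features.npy")          # e.g. CNN activations
handcrafted = np.load("handcrafted_features.npy")  # e.g. texture descriptors
labels = np.load("labels.npy")                     # 0 = healthy, 1 = OSCC

# Stage (iii): serial (column-wise) concatenation, then feature reduction.
# SelectKBest stands in for the artificial hummingbird algorithm (AHA).
features = np.concatenate([deep_feats, handcrafted], axis=1)
clf = make_pipeline(SelectKBest(f_classif, k=256), SVC())

# Stage (iv): binary classification with three-fold cross-validation.
scores = cross_val_score(clf, features, labels, cv=3)
print(f"3-fold accuracy: {scores.mean():.4f}")
```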
Enhancing traditional machine learning methods using concatenation two transfer learning for classification desert regions
Deserts cover a significant portion of the Earth and present environmental and economic difficulties owing to their harsh conditions. Satellite remote sensing images (SRSI) have evolved into an important tool for monitoring and studying these regions as technology has advanced. Machine learning (ML) is critical for evaluating these images and extracting valuable information from them, leading to a better understanding of these harsh settings and supporting sustainable development in desert regions. In this study, four ML approaches were enhanced by hybridizing them with pre-trained models to achieve multi-model learning. Two pre-trained networks (Xception and DenseNet201) were used to extract features, which were concatenated and fed into the ML algorithms: the light gradient boosting machine (LGBM), decision tree (DT), k-nearest neighbors (KNN), and naïve Bayes (NB). In addition, ensemble voting was used to improve the outcomes of the DT, NB, and KNN algorithms and overcome their individual weaknesses. The models were tested on two datasets, and the hybrid LGBM outperformed the other traditional ML methods, reaching 99% accuracy, precision, recall, and F1 score, and 100% area under the receiver operating characteristic curve (AUC-ROC).
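A minimal sketch of the voting ensemble over the concatenated Xception and DenseNet201 features (hard voting and default hyperparameters are assumptions, as are the saved feature files):

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

xcep_feats = np.load("xception_features.npy")
dense_feats = np.load("densenet201_features.npy")
labels = np.load("labels.npy")

# Concatenated deep features serve as the input to the classical models.
features = np.concatenate([xcep_feats, dense_feats], axis=1)

# Ensemble voting over DT, KNN, and NB to offset their individual weaknesses.
ensemble = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier()),
                ("knn", KNeighborsClassifier()),
                ("nb", GaussianNB())],
    voting="hard")
ensemble.fit(features, labels)
```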
Hybrid transfer learning and self-attention framework for robust MRI-based brain tumor classification
Brain tumors are a significant contributor to cancer-related deaths worldwide. Accurate and prompt detection is crucial to reduce mortality rates and improve patient survival prospects. Magnetic Resonance Imaging (MRI) is central to diagnosis, but manual analysis is resource-intensive and error-prone, highlighting the need for robust Computer-Aided Diagnosis (CAD) systems. This paper proposes a novel hybrid model combining Transfer Learning (TL) and attention mechanisms to enhance brain tumor classification accuracy. By leveraging features from the pre-trained DenseNet201 Convolutional Neural Network (CNN) and integrating a Transformer-based architecture, the approach addresses challenges such as computational intensity, fine-detail detection, and noise sensitivity. Five additional pre-trained models (VGG19, InceptionV3, Xception, MobileNetV2, and ResNet50V2) were also evaluated, with Multi-Head Self-Attention (MHSA) and Squeeze-and-Excitation Attention (SEA) blocks incorporated individually to improve feature representation. On the Br35H dataset of 3,000 MRI images, the proposed DenseTransformer model achieved a consistent accuracy of 99.41%, demonstrating its reliability as a diagnostic tool. Statistical analyses (a Z-test on Cohen's Kappa score, DeLong's test on the AUC, and McNemar's test on the F1-score) confirm the model's reliability. Additionally, Explainable AI (XAI) techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-agnostic Explanations (LIME) enhance model transparency and interpretability. This study underscores the potential of hybrid Deep Learning (DL) models in advancing brain tumor diagnosis and improving patient outcomes.
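A minimal sketch of attaching a Multi-Head Self-Attention block to DenseNet201 feature maps in Keras, treating each spatial location as a token (the head count, key dimension, and single-block layout are assumptions, not the paper's DenseTransformer configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))

# Flatten the 7x7x1920 feature map into a sequence of 49 spatial tokens.
h, w, c = base.output.shape[1:]
tokens = layers.Reshape((h * w, c))(base.output)

# Multi-Head Self-Attention with a residual connection and layer norm.
attn = layers.MultiHeadAttention(num_heads=4, key_dim=64)(tokens, tokens)
tokens = layers.LayerNormalization()(tokens + attn)

x = layers.GlobalAveragePooling1D()(tokens)
out = layers.Dense(1, activation="sigmoid")(x)  # tumor vs. no tumor (Br35H)

model = models.Model(base.input, out)
```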