128 result(s) for "Fadhel, Mohammed A."
A survey on deep learning tools dealing with data scarcity: definitions, challenges, solutions, tips, and applications
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have only small or inadequate datasets for training DL frameworks. Labeled data usually requires manual annotation by humans with extensive domain knowledge, a process that is costly, time-consuming, and error-prone. Every DL framework must be fed a significant amount of labeled data to learn representations automatically, and, broadly, more data yields a better model, although performance also remains application dependent. This issue is the main barrier that keeps many applications from adopting DL; having sufficient data is the first step toward any successful and trustworthy DL application. This paper presents a holistic survey of state-of-the-art techniques for training DL models under three challenges: small datasets, imbalanced datasets, and lack of generalization. The survey starts by listing the learning techniques, then introduces the types of DL architectures. Next, state-of-the-art solutions to the lack of training data are listed, such as Transfer Learning (TL), Self-Supervised Learning (SSL), Generative Adversarial Networks (GANs), Model Architecture (MA), Physics-Informed Neural Networks (PINN), and the Deep Synthetic Minority Oversampling Technique (DeepSMOTE). These solutions are followed by tips on data acquisition prior to training, as well as recommendations for ensuring the trustworthiness of the training dataset. The survey ends with a list of applications that suffer from data scarcity, proposing several alternatives for generating more data in each, including Electromagnetic Imaging (EMI), Civil Structural Health Monitoring, Medical Imaging, Meteorology, Wireless Communications, Fluid Mechanics, Microelectromechanical Systems, and Cybersecurity. To the best of the authors' knowledge, this is the first review to offer a comprehensive overview of strategies for tackling data scarcity in DL.
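
As a concrete illustration of the oversampling family of remedies surveyed here, the sketch below interpolates synthetic minority-class samples between nearest neighbours in feature space, in the classic SMOTE spirit; DeepSMOTE performs the same interpolation inside a learned latent space. The function name, array shapes, and class setup are illustrative assumptions, not code from the paper.

    # Minimal SMOTE-style oversampling sketch (illustrative, not the paper's code).
    # DeepSMOTE applies the same interpolation idea inside an encoder's latent space.
    import numpy as np

    def smote_like_oversample(X_minority, n_new, k=5, rng=None):
        """Create n_new synthetic samples by interpolating each minority point
        toward one of its k nearest minority-class neighbours."""
        rng = np.random.default_rng(rng)
        n = len(X_minority)
        new_samples = []
        for _ in range(n_new):
            i = rng.integers(n)
            # distances from sample i to all other minority samples
            d = np.linalg.norm(X_minority - X_minority[i], axis=1)
            neighbours = np.argsort(d)[1:k + 1]           # skip the point itself
            j = rng.choice(neighbours)
            lam = rng.random()                            # interpolation factor in [0, 1)
            new_samples.append(X_minority[i] + lam * (X_minority[j] - X_minority[i]))
        return np.vstack(new_samples)

    # Example: generate 80 synthetic samples from 20 minority-class feature vectors.
    X_min = np.random.rand(20, 64)
    X_syn = smote_like_oversample(X_min, n_new=80, k=5, rng=0)
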
Review of deep learning: concepts, CNN architectures, challenges, applications, future directions
In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. It has gradually become the most widely used computational approach in ML, achieving outstanding results on several complex cognitive tasks, matching or even beating human performance. One of the benefits of DL is its ability to learn from massive amounts of data. The DL field has grown fast in the last few years and has been used extensively to address a wide range of traditional applications successfully. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each of them tackles only one aspect of it, which leaves an overall lack of a complete picture. Therefore, this contribution adopts a more holistic approach in order to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, this review attempts to provide a comprehensive survey of the most important aspects of DL, including the enhancements recently added to the field. In particular, the paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution network (HR.Net). Finally, we present the challenges and suggested solutions to help researchers understand the existing research gaps, followed by a list of the major DL applications. Computational tools including FPGAs, GPUs, and CPUs are summarized along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.
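
For readers unfamiliar with the residual connections that run through the CNN lineage reviewed here (from AlexNet onward), the following minimal PyTorch block is a generic illustration of the idea, not code from any surveyed architecture.

    # Minimal residual block sketch in PyTorch (illustrative only; not tied to a
    # specific architecture reviewed in the paper).
    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            identity = x                               # skip connection
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + identity)           # add the input back before activation

    x = torch.randn(1, 64, 56, 56)
    print(ResidualBlock(64)(x).shape)                  # torch.Size([1, 64, 56, 56])
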
Reinforcement Learning Algorithms and Applications in Healthcare and Robotics: A Comprehensive and Systematic Review
Reinforcement learning (RL) has emerged as a dynamic and transformative paradigm in artificial intelligence, offering the promise of intelligent decision-making in complex and dynamic environments. This unique feature enables RL to address sequential decision-making problems with simultaneous sampling, evaluation, and feedback. As a result, RL techniques have become suitable candidates for developing powerful solutions in various domains. In this study, we present a comprehensive and systematic review of RL algorithms and applications. The review commences with an exploration of the foundations of RL and proceeds to examine each algorithm in detail, concluding with a comparative analysis of RL algorithms based on several criteria. The review then extends to two key applications of RL: robotics and healthcare. In robotic manipulation, RL enhances precision and adaptability in tasks such as object grasping and autonomous learning. In healthcare, the review turns to cell growth problems, clarifying how RL has provided a data-driven approach to optimizing the growth of cell cultures and the development of therapeutic solutions. This review offers a comprehensive overview, shedding light on the evolving landscape of RL and its potential in two diverse yet interconnected fields.
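
As a generic illustration of the sample-evaluate-feedback loop underlying many of the algorithms such reviews compare, here is a minimal tabular Q-learning sketch on a hypothetical toy chain task; it is not drawn from the paper.

    # Tabular Q-learning sketch (generic illustration of the RL value-update loop;
    # the toy "chain" environment below is hypothetical, not from the paper).
    import random

    N_STATES, N_ACTIONS = 5, 2          # small chain: move left (0) or right (1)
    alpha, gamma, eps = 0.1, 0.95, 0.1
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

    def step(s, a):
        """Toy dynamics: reaching the right end of the chain gives reward 1 and resets."""
        s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        done = s_next == N_STATES - 1
        return s_next, reward, done

    state = 0
    for _ in range(5000):
        # epsilon-greedy action selection
        action = random.randrange(N_ACTIONS) if random.random() < eps else Q[state].index(max(Q[state]))
        nxt, r, done = step(state, action)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[state][action] += alpha * (r + gamma * max(Q[nxt]) - Q[state][action])
        state = 0 if done else nxt

    print(Q)
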
Towards a Better Understanding of Transfer Learning for Medical Imaging: A Case Study
One of the main challenges of employing deep learning models in medicine is the lack of training data, because collecting and labeling data must be performed by experts and is therefore difficult. To overcome this drawback, transfer learning (TL) has been utilized to solve several medical imaging tasks using state-of-the-art models pre-trained on the ImageNet dataset. However, there are major differences in data features, sizes, and task characteristics between natural image classification and the targeted medical imaging tasks, so TL may only slightly improve performance when the source domain is completely different from the target domain. In this paper, we explore the benefit of TL from the same domain as the target task versus a different domain. To do so, we designed a deep convolutional neural network (DCNN) model that integrates three ideas: traditional and parallel convolutional layers and residual connections, along with global average pooling. We trained the proposed model under several scenarios, utilizing same-domain and different-domain TL with the diabetic foot ulcer (DFU) classification task and with an animal classification task. We have empirically shown that TL from the same domain as the target can significantly improve performance, even with a reduced number of in-domain images. The proposed model achieved an F1-score of 86.6% on the DFU dataset when trained from scratch, 89.4% with TL from a different domain than the target, and 97.6% with TL from the same domain as the target.
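
The general fine-tuning recipe behind this comparison, swapping the classifier head and continuing training from either an ImageNet backbone (different domain) or an in-domain checkpoint (same domain), can be sketched as follows. This is an illustrative PyTorch sketch, not the authors' DCNN; the backbone choice, checkpoint path, and class count are assumptions.

    # Transfer-learning sketch (illustrative; not the authors' DCNN). The two branches
    # mirror the paper's comparison: different-domain TL (ImageNet weights) versus
    # same-domain TL (a hypothetical checkpoint pretrained on related medical images).
    import torch
    import torch.nn as nn
    from torchvision import models

    def build_finetune_model(num_classes, same_domain_ckpt=None):
        if same_domain_ckpt is None:
            # Different-domain source: ImageNet-pretrained backbone.
            net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        else:
            # Same-domain source: weights pretrained on a related medical dataset.
            net = models.resnet18(weights=None)
            net.load_state_dict(torch.load(same_domain_ckpt))     # hypothetical file
        net.fc = nn.Linear(net.fc.in_features, num_classes)       # new task-specific head
        return net

    model = build_finetune_model(num_classes=2)                    # e.g. DFU vs. normal
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)      # small LR for fine-tuning
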
IoT and Cloud Computing in Health-Care: A New Wearable Device and Cloud-Based Deep Learning Algorithm for Monitoring of Diabetes
Diabetes is a chronic disease that can negatively affect human health when blood glucose levels are elevated above a certain range, a condition called hyperglycemia. Current devices for continuous glucose monitoring (CGM) supervise the blood glucose level and alert the type-1 diabetes patient once a certain critical level is surpassed. By that point the patient's body is already operating at critical levels until medication is taken to reduce the glucose level, and any delay in intake increases the risk of considerable health damage. To overcome this, a new approach based on cutting-edge software and hardware technologies is proposed in this paper. Specifically, an artificial intelligence deep learning (DL) model is proposed to predict glucose levels over a 30-minute horizon. Moreover, cloud computing and IoT technologies are used to implement the prediction model and combine it with the existing wearable CGM model, providing patients with predictions of future glucose levels. Among the many DL methods in the state of the art (SoTA), a cascaded RNN-RBM model, based on recurrent neural networks (RNNs) and restricted Boltzmann machines (RBMs), was chosen for its superior prediction accuracy. The experimental results show that the proposed cloud- and DL-based wearable approach achieves an average RMSE of 15.589, outperforming similar existing blood glucose prediction methods in the SoTA.
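
A minimal sketch of the forecasting setup is shown below with a plain LSTM regressor; it is not the paper's cascaded RNN-RBM, and the window length (12 past CGM readings at 5-minute intervals predicting the value 30 minutes ahead) is an assumption for illustration.

    # Glucose-forecasting sketch (illustrative; a plain LSTM, not the paper's
    # cascaded RNN-RBM). Window length and sampling rate are assumptions.
    import torch
    import torch.nn as nn

    class GlucoseForecaster(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                  # x: (batch, 12, 1) past CGM readings
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])    # regression target: glucose at t + 30 min

    model = GlucoseForecaster()
    window = torch.randn(8, 12, 1)             # batch of 8 dummy CGM windows
    pred = model(window)                       # shape: (8, 1)
    loss = nn.functional.mse_loss(pred, torch.randn(8, 1))   # RMSE = sqrt(MSE) at evaluation
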
Towards unbiased skin cancer classification using deep feature fusion
This paper introduces SkinWiseNet (SWNet), a deep convolutional neural network designed for the detection and automatic classification of potentially malignant skin cancer conditions. SWNet optimizes feature extraction through multiple pathways, emphasizing network width augmentation to enhance efficiency. The proposed model addresses potential biases associated with skin conditions, particularly in individuals with darker skin tones or excessive hair, by incorporating feature fusion to assimilate insights from diverse datasets. Extensive experiments were conducted using publicly accessible datasets to evaluate SWNet's effectiveness. The study utilized four datasets (MNIST-HAM10000, ISIC2019, ISIC2020, and Melanoma Skin Cancer) comprising skin cancer images categorized into benign and malignant classes. Explainable Artificial Intelligence (XAI) techniques, specifically Grad-CAM, were employed to enhance the interpretability of the model's decisions. Comparative analysis was performed against three pre-existing deep learning networks: EfficientNet, MobileNet, and Darknet. The results demonstrate SWNet's superiority, achieving an accuracy of 99.86% and an F1-score of 99.95%, underscoring its efficacy in gradient propagation and feature capture across various levels. This research highlights the significant potential of SWNet in advancing skin cancer detection and classification, providing a robust tool for accurate and early diagnosis. The integration of feature fusion enhances accuracy and mitigates biases associated with hair and skin tones. The outcomes of this study contribute to improved patient outcomes and healthcare practices, showcasing SWNet's exceptional capabilities in skin cancer detection and classification.
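
The feature-fusion idea, pooling features from more than one backbone and concatenating them before a shared classifier, can be sketched as below. This is an illustrative PyTorch sketch, not the SWNet implementation; the choice of backbones and feature sizes is arbitrary.

    # Feature-fusion sketch (illustrative; not the SWNet implementation). Pooled
    # features from two backbones are concatenated before a shared classifier.
    import torch
    import torch.nn as nn
    from torchvision import models

    class FusionClassifier(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            self.branch_a = models.mobilenet_v2(weights=None).features   # backbone 1
            self.branch_b = models.resnet18(weights=None)                # backbone 2
            self.branch_b.fc = nn.Identity()                             # keep 512-d features
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.classifier = nn.Linear(1280 + 512, num_classes)         # fused feature vector

        def forward(self, x):
            fa = self.pool(self.branch_a(x)).flatten(1)    # (B, 1280)
            fb = self.branch_b(x)                          # (B, 512)
            return self.classifier(torch.cat([fa, fb], dim=1))

    logits = FusionClassifier()(torch.randn(2, 3, 224, 224))   # (2, num_classes)
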
Energy Efficiency for Green Internet of Things (IoT) Networks: A Survey
The last decade has witnessed the proliferation of Internet-enabled devices. The Internet of Things (IoT) is becoming ever more pervasive in everyday life, connecting an ever-greater array of diverse physical objects. The key vision of the IoT is to bring a massive number of smart devices together in integrated and interconnected heterogeneous networks, making the Internet even more useful. This paper therefore begins with a brief introduction to the history and evolution of the Internet. It then presents the IoT, followed by a list of application domains and enabling technologies. The wireless sensor network (WSN) is identified as one of the important elements in IoT applications, and the paper describes the relationship between WSNs and the IoT. This research is concerned with developing energy-efficiency techniques for WSNs that enable the IoT. After identifying the sources of energy wastage, the paper reviews the literature on the most relevant methods for minimizing the energy exhaustion of IoT networks and WSNs. We also identify gaps in the existing literature regarding energy-preservation measures that could be researched in future work. The survey gives a near-complete and up-to-date view of the IoT in the energy field and provides a summary of, and recommendations on, a large range of energy-efficiency methods proposed in the literature that will help and support future researchers. Note that this manuscript is an extended version based on the summary of the author's Ph.D. thesis; it gives researchers an introduction to what they need to know about networks, WSNs, and IoT applications from scratch. Thus, the fundamental purpose of this paper is to introduce research trends and recent work on the use of IoT technology, and the conclusions reached as a result of the Ph.D. study.
Deepening into the suitability of using pre-trained models of ImageNet against a lightweight convolutional neural network in medical imaging: an experimental study
Transfer learning (TL) has been widely utilized to address the lack of training data for deep learning models. Specifically, one of the most popular uses of TL has been with models pre-trained on the ImageNet dataset. Nevertheless, although these pre-trained models have performed effectively in several application domains, they may not offer significant benefits in all medical imaging scenarios. Such models were designed to classify a thousand classes of natural images, and there are fundamental differences in learned features between these models and those dealing with medical imaging tasks. Most medical imaging applications involve only two to ten classes, for which we suspect it is not necessary to employ such deep models. This paper investigates this hypothesis and develops an experimental study to examine the corresponding conclusions. A lightweight convolutional neural network (CNN) model and the pre-trained models were evaluated using three different medical imaging datasets. We trained the lightweight CNN model and the pre-trained models under two scenarios: once with a small number of images and once with a large number of images. Surprisingly, the lightweight model trained from scratch achieved a more competitive performance than the pre-trained models. More importantly, the lightweight CNN model can be successfully trained and tested using basic computational tools and still provide high-quality results, specifically when using medical imaging datasets.
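
A lightweight CNN of the kind the study advocates for few-class medical tasks (a short convolutional stack followed by global average pooling) might look like the sketch below; this is an illustrative example, not the authors' exact architecture, and the channel counts are assumptions.

    # Lightweight-CNN sketch (an example of the kind of small model the study
    # advocates for few-class medical tasks; not the authors' exact architecture).
    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    class LightweightCNN(nn.Module):
        def __init__(self, num_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                conv_block(3, 16), conv_block(16, 32), conv_block(32, 64),
            )
            self.gap = nn.AdaptiveAvgPool2d(1)        # global average pooling
            self.fc = nn.Linear(64, num_classes)

        def forward(self, x):
            return self.fc(self.gap(self.features(x)).flatten(1))

    model = LightweightCNN(num_classes=3)
    print(sum(p.numel() for p in model.parameters()))   # roughly a few tens of thousands of weights
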
Face Recognition Based on Deep Learning and FPGA for Ethnicity Identification
In the last decade, there has been a surge of interest in addressing complex computer vision (CV) problems in the field of face recognition (FR). One of the most difficult of these is the accurate determination of a person's ethnicity. In this regard, a new classification method using machine learning (ML) tools is proposed in this paper. Specifically, a new deep learning (DL) approach based on a deep convolutional neural network (DCNN) model is developed that provides a reliable determination of people's ethnicity from their facial features. However, building a workable DCNN-based FR system requires specialized high-performance computing (HPC) hardware, because current central processing units (CPUs) offer too little computation power. Field-programmable gate arrays (FPGAs) have recently been shown to increase network efficiency in terms of power usage and execution time, so their usage was considered in this work. The performance of the new DCNN-based FR method on an FPGA was compared against that on graphics processing units (GPUs). The experiments used an image dataset composed of 3141 photographs of citizens from three distinct countries; to our knowledge, this is the first image collection gathered specifically to address the ethnicity identification problem, and the dataset is made publicly available as a further contribution of this work. Finally, the experimental results proved the high performance of the proposed DCNN model on FPGAs, achieving an accuracy of 96.9 percent and an F1-score of 94.6 percent while using a reasonable amount of energy and hardware resources.
Novel Transfer Learning Approach for Medical Imaging with Limited Labeled Data
Deep learning requires a large amount of data to perform well. However, the field of medical image analysis suffers from a lack of sufficient data for training deep learning models. Moreover, medical images require manual labeling, usually provided by human annotators from various backgrounds, and the annotation process is time-consuming, expensive, and prone to errors. Transfer learning was introduced to reduce the need for annotation by transferring deep learning models with knowledge from a previous task and then fine-tuning them on a relatively small dataset for the current task. Most medical image classification methods employ transfer learning from models pretrained on natural images, e.g., ImageNet, which has been shown to be ineffective because of the mismatch in learned features between natural images and medical images; it also leads to the use of unnecessarily elaborate models. In this paper, we propose a novel transfer learning approach that overcomes these drawbacks by first training the deep learning model on large unlabeled medical image datasets and then transferring the knowledge to train it on a small amount of labeled medical images. Additionally, we propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios: skin cancer and breast cancer classification. The reported results empirically show that the proposed approach can significantly improve performance in both scenarios. For skin cancer, the proposed model achieved an F1-score of 89.09% when trained from scratch and 98.53% with the proposed approach; for breast cancer, it achieved an accuracy of 85.29% when trained from scratch and 97.51% with the proposed approach. Finally, we conclude that our method can be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available and labeled image data is limited, and that it can be used to improve the performance of other medical imaging tasks in the same domain. To demonstrate this, we used the pretrained skin cancer model to classify foot-skin images into two classes, normal or abnormal (diabetic foot ulcer, DFU), achieving an F1-score of 86.0% when trained from scratch, 96.25% using transfer learning, and 99.25% using double transfer learning.
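
The double-transfer pipeline described here (pretrain on large unlabeled medical images, transfer to the small labeled task, then transfer again to DFU classification) can be outlined as in the skeleton below. The backbone choice, pretext objective, checkpoint path, and data loaders are placeholders, not the authors' pipeline.

    # Double-transfer-learning skeleton (illustrative only; the loaders and the
    # pretext objective are placeholders, not the authors' pipeline).
    import torch
    import torch.nn as nn
    from torchvision import models

    def new_backbone():
        net = models.resnet18(weights=None)
        net.fc = nn.Identity()                 # expose 512-d features
        return net

    # Stage 1: pretraining on large unlabeled medical images (pretext task left
    # abstract here; e.g. rotation prediction or contrastive learning).
    backbone = new_backbone()
    # ... train `backbone` on the large unlabeled medical dataset ...
    torch.save(backbone.state_dict(), "stage1_unlabeled_pretrain.pt")   # hypothetical path

    # Stage 2: first transfer, to the small labeled task (e.g. skin cancer, 2 classes).
    backbone.load_state_dict(torch.load("stage1_unlabeled_pretrain.pt"))
    skin_model = nn.Sequential(backbone, nn.Linear(512, 2))
    # ... fine-tune `skin_model` on the labeled skin-cancer images ...

    # Stage 3: second transfer, reusing the skin-cancer weights for DFU classification.
    dfu_model = nn.Sequential(backbone, nn.Linear(512, 2))   # normal vs. abnormal (DFU)
    # ... fine-tune `dfu_model` on the labeled DFU images ...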