Catalogue Search | MBRL
Explore the vast range of titles available.
22 results for "Alwakid, Ghadah"
Unsupervised Outlier Detection in IoT Using Deep VAE
2022
The Internet of Things (IoT) refers to a system of interconnected, internet-connected devices and sensors that allows the collection and dissemination of data. The data provided by these sensors may include outliers or exhibit anomalous behavior, for example as a result of attack activities or device failure. However, the majority of existing outlier detection algorithms rely on labeled data, which is frequently hard to obtain in the IoT domain. More crucially, the IoT’s data volume is continually increasing, necessitating methods that can predict and identify the classes of future data. In this study, we propose an unsupervised technique based on a deep Variational Auto-Encoder (VAE) to detect outliers in IoT data, leveraging the VAE’s reconstruction ability and the low-dimensional representation of the input data’s latent variables. First, the input data are standardized. Then, we employ the VAE to produce a reconstructed output from the low-dimensional representation of the latent variables of the input data. Finally, the reconstruction error between the original observation and the reconstructed one is used as an outlier score. Our model was trained only on normal, unlabeled data in an unsupervised manner and evaluated on the Statlog (Landsat Satellite) dataset. The unsupervised model achieved promising results, comparable with state-of-the-art outlier detection schemes, with a precision of ≈90% and an F1 score of 79%.
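The scoring pipeline the abstract outlines (standardize, encode to a low-dimensional latent space, reconstruct, score by reconstruction error) can be sketched as follows. This is a minimal illustration on synthetic data, using a linear projection as a stand-in for the VAE encoder/decoder rather than the paper's model or the Statlog dataset:

```python
import numpy as np

def fit_reconstructor(X_train, k=2):
    # Standardize using training statistics (the paper's first step).
    mu, sigma = X_train.mean(0), X_train.std(0) + 1e-9
    Z = (X_train - mu) / sigma
    # Stand-in for the VAE encoder/decoder: a linear k-dimensional
    # projection (top-k principal directions) and its inverse map.
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return mu, sigma, Vt[:k].T

def outlier_scores(X, mu, sigma, V):
    Z = (X - mu) / sigma
    Z_hat = Z @ V @ V.T                        # encode, then reconstruct
    return np.linalg.norm(Z - Z_hat, axis=1)   # reconstruction error

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 10))
normal[:, 2:] = normal[:, :2] @ rng.normal(size=(2, 8))  # low-dim structure
mu, sigma, V = fit_reconstructor(normal, k=2)            # train on normal data only

inlier = normal[:5]
outlier = rng.normal(size=(5, 10)) * 5   # breaks the learned structure
print(outlier_scores(inlier, mu, sigma, V).mean() <
      outlier_scores(outlier, mu, sigma, V).mean())  # True
```

A threshold on the score (e.g. a high quantile of the training scores) then flags outliers, mirroring the unsupervised setup described above.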
Journal Article
Deep learning-enhanced diabetic retinopathy image classification
2023
Objective
Diabetic retinopathy (DR) can sometimes be prevented from causing irreversible vision loss if caught and treated promptly. In this work, a deep learning (DL) model is employed to accurately identify all five stages of DR.
Methods
The suggested methodology considers two scenarios, one with and one without image augmentation; augmentation methods are then used to generate a balanced dataset meeting the same criteria in both cases. The DenseNet-121-based model performed exceptionally well on the Asia Pacific Tele-Ophthalmology Society (APTOS) and Dataset for Diabetic Retinopathy (DDR) datasets when compared with other methods for identifying the five stages of DR.
Results
Our proposed model achieved the highest test accuracy of 98.36%, top-2 accuracy of 100%, and top-3 accuracy of 100% for the APTOS dataset, and the highest test accuracy of 79.67%, top-2 accuracy of 92.76%, and top-3 accuracy of 98.94% for the DDR dataset. Additional criteria (precision, recall, and F1-score) for gauging the efficacy of the proposed model were established with the help of APTOS and DDR.
Conclusions
It was discovered that feeding a model with higher-quality photographs increased its efficiency and ability for learning, as opposed to both state-of-the-art technology and the other, non-enhanced model.
Journal Article
Transformative synergy: SSEHCET—bridging mobile edge computing and AI for enhanced eHealth security and efficiency
by Alsirhani, Amjad; Alwakid, Ghadah; Alserhani, Faeiz
in 5G mobile communication; Blockchain; Cloud computing
2024
Blockchain technologies (BCT) are utilized in healthcare to facilitate smart and secure transmission of patient data. BCT solutions, however, are unable to store data produced by IoT devices in smart healthcare applications because these applications need a quick consensus process, meticulous key management, and enhanced privacy standards. In this work, a smart and secure eHealth framework, SSEHCET (Smart and Secure EHealth Framework using Cutting-edge Technologies), is proposed that leverages the potential of modern cutting-edge technologies (IoT, 5G, mobile edge computing, and BCT). It comprises six layers: 1) The sensing layer (WBAN) consists of medical sensors, normally on or within the bodies of patients, that communicate data to smartphones. 2) The edge layer consists of elements near IoT devices that collect data. 3) The communication layer leverages 5G technology to transmit patients' data between layers efficiently. 4) The storage layer consists of cloud servers or other powerful computers. 5) The security layer uses BCT to transmit and store patients' data securely. 6) The healthcare community layer includes healthcare professionals and institutions. For the processing of medical data and to guarantee dependable, safe, and private communication, a Smart Agent (SA) program is replicated on all layers. The SA leverages BCT to protect patients' privacy when outsourcing data. The contribution is substantiated through a meticulous evaluation encompassing security, ease of use, user satisfaction, and the SSEHCET structure. Results from an in-depth case study with a prominent healthcare provider underscore SSEHCET's exceptional performance, showcasing its pivotal role in advancing security, usability, and user satisfaction in modern eHealth landscapes.
Journal Article
Enhancement of Diabetic Retinopathy Prognostication Using Deep Learning, CLAHE, and ESRGAN
2023
One of the primary causes of blindness in the diabetic population is diabetic retinopathy (DR). Many people could have their sight saved if only DR were detected and treated in time. Numerous Deep Learning (DL)-based methods have been presented to improve human analysis. Using a DL model with three scenarios, this research classified DR and its severity stages from fundus images using the “APTOS 2019 Blindness Detection” dataset. Following the adoption of the DL model, augmentation methods were implemented to generate a balanced dataset with consistent input parameters across all test scenarios. As a last step in the categorization process, the DenseNet-121 model was employed. Several methods, including Enhanced Super-resolution Generative Adversarial Networks (ESRGAN), Histogram Equalization (HIST), and Contrast Limited Adaptive HIST (CLAHE), have been used to enhance image quality in a variety of contexts. The suggested model detected the DR across all five APTOS 2019 grading process phases with the highest test accuracy of 98.36%, top-2 accuracy of 100%, and top-3 accuracy of 100%. Further evaluation criteria (precision, recall, and F1-score) for gauging the efficacy of the proposed model were established with the help of APTOS 2019. Furthermore, comparing CLAHE + ESRGAN against both state-of-the-art technology and other recommended methods, it was found that its use was more effective in DR classification.
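Of the enhancement methods named in this abstract, histogram equalization is the simplest to illustrate. The sketch below implements the global HIST variant in NumPy; CLAHE additionally tiles the image and clips each tile's histogram, which is omitted here, and the test image is synthetic rather than a fundus photograph:

```python
import numpy as np

def hist_equalize(img):
    # Global histogram equalization (the HIST baseline): remap
    # intensities so the cumulative distribution becomes uniform.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)         # lookup table
    return lut[img]

rng = np.random.default_rng(1)
low_contrast = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
out = hist_equalize(low_contrast)
print(low_contrast.min(), low_contrast.max(), "->", out.min(), out.max())
```

The narrow 100–139 intensity band is stretched across nearly the full 0–255 range, which is the contrast boost these papers exploit before classification.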
Journal Article
Integrating Edge Intelligence with Blockchain-Driven Secured IoT Healthcare Optimization Model
by Alwakid, Ghadah Naif; Alshudukhi, Khulud Salem; Humayun, Mamoona
in Blockchain; Collaboration; Data processing
2025
The Internet of Things (IoT) and edge computing have substantially contributed to the development and growth of smart cities, handling time-constrained services and mobile devices that capture the observed environment for surveillance applications. These systems are composed of wireless cameras, digital devices, and tiny sensors to facilitate the operations of crucial healthcare services. Recently, many interactive applications have been proposed, integrating intelligent systems to handle data processing and enable dynamic communication functionalities for crucial IoT services. Nonetheless, most solutions lack optimized relaying methods and impose excessive overheads for maintaining devices’ connectivity. Data integrity and trust are another vital consideration for next-generation networks. This research proposes a load-balanced trusted surveillance routing model with collaborative decisions at network edges to enhance energy management and resource balancing. It leverages graph-based optimization to enable reliable analysis of decision-making parameters. Furthermore, mobile devices integrate with the proposed model to sustain trusted routes with lightweight privacy preservation and authentication. The proposed model was evaluated in a simulation-based environment and showed exceptional improvements in packet loss ratio, energy consumption, anomaly detection, and blockchain overhead over related solutions.
Journal Article
Optimized machine learning framework for cardiovascular disease diagnosis: a novel ethical perspective
by Tariq, Noshina; Alwakid, Ghadah; Ul Haq, Farman
in Angiology; Artificial intelligence; Blood Transfusion Medicine
2025
Alignment of advanced cutting-edge technologies such as Artificial Intelligence (AI) has emerged as a significant driving force for greater precision and timeliness in identifying cardiovascular diseases (CVDs). However, it is difficult to achieve high accuracy and reliability in CVD diagnostics due to complex clinical data and the selection and modeling of useful features. Therefore, this paper studies advanced AI-based feature selection techniques and the application of AI technologies to CVD classification. It uses methodologies such as Chi-square, Info Gain, Forward Selection, and Backward Elimination to distill cardiovascular health indicators into a refined eight-feature subset. This study emphasizes ethical considerations, including transparency, interpretability, and bias mitigation, achieved by employing unbiased datasets, fair feature selection techniques, and rigorous validation metrics to ensure fairness and trustworthiness in the AI-based diagnostic process. In addition, the integration of various Machine Learning (ML) models, encompassing Random Forest (RF), XGBoost, Decision Trees (DT), and Logistic Regression (LR), facilitates a comprehensive exploration of predictive performance. Among this diverse range of models, XGBoost stands out as the top performer, achieving a 99% accuracy rate, 100% recall, 99% F1-measure, and 99% precision. Furthermore, we venture into dimensionality reduction, applying Principal Component Analysis (PCA) to the eight-feature subset, effectively refining it to a compact six-attribute feature subset. Once again, XGBoost is the model of choice, achieving accuracy, recall, F1-measure, and precision scores of 98%, 100%, 98%, and 97%, respectively, when applied to the feature subset derived from combining the Chi-square and Forward Selection methods.
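The select-then-classify pipeline described above can be sketched with scikit-learn, assuming `SelectKBest` with the chi-square score as the Chi-square stage and k=8 to mirror the eight-feature subset. The dataset and the Logistic Regression stand-in are illustrative, not the paper's CVD data or its top-performing XGBoost model:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Stand-in tabular data (not the paper's CVD dataset).
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Chi-square scoring requires non-negative features, hence the scaler;
# k=8 mirrors the paper's refined eight-feature subset.
clf = make_pipeline(MinMaxScaler(),
                    SelectKBest(chi2, k=8),
                    LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 3))
```

Swapping the final estimator for a gradient-boosted model would follow the same pipeline shape.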
Journal Article
AI-Driven Sentiment-Enhanced Secure IoT Communication Model Using Resilience Behavior Analysis
by Haseeb, Khalid; Alwakid, Ghadah Naif; Alshammeri, Menwa
in Artificial intelligence; Communications systems; Configuration management
2025
Wireless technologies and the Internet of Things (IoT) are being extensively utilized for advanced development in traditional communication systems. This evolution lowers the cost of the extensive use of sensors, changing the way devices interact and communicate in dynamic and uncertain situations. Such a constantly evolving environment presents enormous challenges to preserving a secure and lightweight IoT system, motivating the design of effective and trusted routing to support sustainable smart cities. This research study proposes a Genetic Algorithm sentiment-enhanced secured optimization model, which combines big data analytics and analysis rules to evaluate user feedback. Sentiment analysis is utilized to assess the perception of network performance, allowing the classification of device behavior as positive, neutral, or negative. By integrating sentiment-driven insights, the IoT network adjusts system configurations to enhance performance based on network behavior in terms of latency, reliability, fault tolerance, and sentiment score. According to the analysis, the proposed model categorizes device behavior accordingly, facilitating real-time monitoring for crucial applications. Experimental results revealed a significant improvement in threat prevention and network efficiency, demonstrating the model’s resilience for real-time IoT applications.
Journal Article
Computer-Vision- and Edge-Enabled Real-Time Assistance Framework for Visually Impaired Persons with LPWAN Emergency Signaling
by Alwakid, Ghadah Naif; Ahmad, Zulfiqar; Humayun, Mamoona
in Algorithms; assistive technology; Blindness
2025
In recent decades, various assistive technologies have emerged to support visually impaired individuals. However, there remains a gap in solutions that provide efficient, universal, and real-time capabilities by combining robust object detection, robust communication, continuous data processing, and emergency signaling in dynamic environments. In many existing systems, trade-offs are made in range, latency, or reliability when applied in changing outdoor or indoor scenarios. In this study, we propose a comprehensive framework specifically tailored for visually impaired people, integrating computer vision, edge computing, and a dual-channel communication architecture including low-power wide-area network (LPWAN) technology. The system utilizes the YOLOv5 deep-learning model for real-time detection of obstacles, paths, and assistive tools (such as the white cane) with high performance: precision 0.988, recall 0.969, and mAP 0.985. Edge-computing devices are introduced to offload computational load from central servers, enabling fast local processing and decision-making. The communications subsystem uses Wi-Fi as the primary link, while a LoRaWAN channel acts as a fail-safe emergency alert network. An IoT-based panic button is incorporated to transmit immediate location-tagged alerts, enabling rapid response by authorities or caregivers. The experimental results demonstrate the system’s low latency and reliable operation under varied real-world conditions, indicating significant potential to improve independent mobility and quality of life for visually impaired people. The proposed solution offers a cost-effective and scalable architecture suitable for deployment in complex and challenging environments where real-time assistance is essential.
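The dual-channel fail-safe described in this abstract (Wi-Fi as primary link, LoRaWAN for emergency alerts) reduces to a simple failover rule. The sketch below uses hypothetical stand-in transports, not the framework's actual radio interfaces:

```python
def send_alert(message, wifi_send, lora_send):
    """Try the primary channel; fall back to the LPWAN link on failure."""
    try:
        return ("wifi", wifi_send(message))
    except ConnectionError:
        return ("lora", lora_send(message))

# Simulated transports: Wi-Fi is down, LoRaWAN succeeds.
def wifi_down(msg):
    raise ConnectionError("no access point")

def lora_ok(msg):
    return f"queued:{msg}"

channel, result = send_alert("PANIC", wifi_down, lora_ok)
print(channel, result)  # lora queued:PANIC
```

A real deployment would wrap the actual Wi-Fi and LoRaWAN drivers in the same interface, keeping the panic-button path independent of primary-link availability.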
Journal Article
Diagnosing Melanomas in Dermoscopy Images Using Deep Learning
by Jhanjhi, N. Z; Alwakid, Ghadah; Gouda, Walaa
in Accuracy; Artificial intelligence; Automation
2023
When it comes to skin tumors and cancers, melanoma ranks among the most prevalent and deadly. With the advancement of deep learning and computer vision, it is now possible to determine quickly and accurately whether or not a patient has a malignancy. This is significant, since prompt identification greatly decreases the likelihood of a fatal outcome. Artificial intelligence has the potential to improve healthcare in many ways, including melanoma diagnosis. In a nutshell, this research employed Inception-V3 and InceptionResnet-V2 strategies for melanoma recognition. The previously frozen feature-extraction layers were fine-tuned after the newly added top layers were trained. This study used data from the HAM10000 dataset, which contains an imbalanced sample of seven different forms of skin cancer; data augmentation was utilized to address the imbalance. The proposed models outperformed the results of the previous investigation, with an effectiveness of 0.89 for Inception-V3 and 0.91 for InceptionResnet-V2.
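The fine-tuning recipe this abstract describes (train newly added top layers while the feature-extraction layers stay frozen) can be sketched in PyTorch. The tiny backbone below is a stand-in, not Inception-V3, and the data is random rather than HAM10000:

```python
import torch
import torch.nn as nn

# Stand-in "backbone" whose weights are frozen while a new head trains.
backbone = nn.Sequential(nn.Linear(16, 8), nn.ReLU())
head = nn.Linear(8, 7)  # seven skin-lesion classes, as in HAM10000

for p in backbone.parameters():
    p.requires_grad = False  # freeze feature-extraction layers first

model = nn.Sequential(backbone, head)
opt = torch.optim.Adam(p for p in model.parameters() if p.requires_grad)

x, y = torch.randn(4, 16), torch.randint(0, 7, (4,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()

# Only the head received gradients; the backbone stayed untouched.
print(all(p.grad is None for p in backbone.parameters()))  # True
```

The second stage of the recipe would then re-enable `requires_grad` on selected backbone layers and continue training at a lower learning rate.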
Journal Article
Enhancing diabetic retinopathy classification using deep learning
2023
Prolonged hyperglycemia can cause diabetic retinopathy (DR), which is a major contributor to blindness. Numerous incidences of DR could be avoided if it were identified and addressed promptly. Over recent years, many deep learning (DL)-based algorithms have been proposed to facilitate automated DR screening. Utilizing a DL model that encompassed four scenarios, DR and its stages were identified in this study using retinal scans from the “Asia Pacific Tele-Ophthalmology Society (APTOS) 2019 Blindness Detection” dataset. Adopting the DL model then led to the use of augmentation strategies that produced a comprehensive dataset with consistent hyperparameters across all test cases. As a further step in the classification process, we used a Convolutional Neural Network model, and different enhancement methods were applied to raise visual quality. The proposed approach detected DR with a highest test accuracy of 97.83%, a top-2 accuracy of 99.31%, and a top-3 accuracy of 99.88% across all five severity stages of the APTOS 2019 evaluation, employing CLAHE and ESRGAN techniques for image enhancement. In addition, we employed APTOS 2019 to develop a set of evaluation metrics (precision, recall, and F1-score) for analyzing the efficacy of the suggested model. The proposed approach also proved more efficient at detecting DR than both state-of-the-art technology and conventional DL approaches.
Journal Article