4,223 result(s) for "Turki, Turki"
Improving prediction of cervical cancer using KNN imputer and multi-model ensemble learning
Cervical cancer is a leading cause of women’s mortality, emphasizing the need for early diagnosis and effective treatment. In line with the imperative of early intervention, the automated identification of cervical cancer has emerged as a promising avenue, leveraging machine learning techniques to enhance both the speed and accuracy of diagnosis. However, an inherent challenge in the development of these automated systems is the presence of missing values in the datasets commonly used for cervical cancer detection. Missing data can significantly impact the performance of machine learning models, potentially leading to inaccurate or unreliable results. This study addresses a critical challenge in automated cervical cancer identification—handling missing data in datasets. We present a novel approach that combines three machine learning models into a stacked ensemble voting classifier, complemented by the use of a KNN Imputer to manage missing values. The proposed model achieves remarkable results with an accuracy of 0.9941, precision of 0.98, recall of 0.96, and an F1 score of 0.97. Three distinct scenarios are examined: one involving the deletion of missing values, another utilizing KNN imputation, and a third employing PCA for imputing missing values. This research has significant implications for the medical field, offering medical experts a powerful tool for more accurate cervical cancer therapy and enhancing the overall effectiveness of testing procedures. By addressing missing data challenges and achieving high accuracy, this work represents a valuable contribution to cervical cancer detection, ultimately aiming to reduce the impact of this disease on women’s health and healthcare systems.
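The KNN imputation step the abstract describes can be sketched in a few lines. This is a minimal pure-Python illustration of the k-nearest-neighbours idea on toy data (the study itself presumably used an off-the-shelf implementation such as scikit-learn's KNNImputer); the data values are invented for illustration:

```python
from math import sqrt

def knn_impute(rows, k=2):
    """Impute missing values (None) with the mean of the k nearest
    complete rows, measured by Euclidean distance over the features
    that the incomplete row actually has."""
    complete = [r for r in rows if None not in r]
    filled = []
    for row in rows:
        if None not in row:
            filled.append(list(row))
            continue
        observed = [i for i, v in enumerate(row) if v is not None]
        # rank complete rows by distance over the observed features only
        neighbours = sorted(
            complete,
            key=lambda c: sqrt(sum((row[i] - c[i]) ** 2 for i in observed)),
        )[:k]
        filled.append([
            v if v is not None else sum(n[j] for n in neighbours) / k
            for j, v in enumerate(row)
        ])
    return filled

data = [
    [1.0, 2.0, 3.0],
    [1.1, 2.1, 3.1],
    [9.0, 9.0, 9.0],
    [1.05, None, 3.05],  # the missing value to impute
]
print(knn_impute(data, k=2))
```

The last row's missing feature is filled with the mean of the two closest complete rows, so the imputed value stays consistent with the local neighbourhood rather than a global column mean.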
Automatic Classification of Melanoma Skin Cancer with Deep Convolutional Neural Networks
Melanoma skin cancer is one of the most dangerous types of skin cancer, which, if not diagnosed early, may lead to death. Therefore, an accurate diagnosis is needed to detect melanoma. Traditionally, a dermatologist utilizes a microscope to inspect and then provide a report on a biopsy for diagnosis; however, this diagnosis process is not easy and requires experience. Hence, there is a need to facilitate the diagnosis process while still yielding an accurate diagnosis. For this purpose, artificial intelligence techniques can assist the dermatologist in carrying out diagnosis. In this study, we considered the detection of melanoma through deep learning based on cutaneous image processing. For this purpose, we tested several convolutional neural network (CNN) architectures, including DenseNet201, MobileNetV2, ResNet50V2, ResNet152V2, Xception, VGG16, VGG19, and GoogleNet, and evaluated the associated deep learning models on graphical processing units (GPUs). A dataset consisting of 7146 images was processed using these models, and we compared the obtained results. The experimental results showed that GoogleNet can obtain the highest performance accuracy on both the training and test sets (74.91% and 76.08%, respectively).
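The comparison step in this abstract — evaluate several trained CNN architectures on a held-out test set and pick the most accurate — reduces to a small harness. The labels and per-model predictions below are invented stand-ins (in the study they would come from the trained DenseNet201, MobileNetV2, GoogleNet, etc. models):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical test labels and per-architecture predictions for illustration.
y_test = [0, 1, 1, 0, 1, 0, 1, 1]
predictions = {
    "DenseNet201": [0, 1, 0, 0, 1, 1, 1, 1],
    "MobileNetV2": [0, 0, 1, 0, 1, 0, 0, 1],
    "GoogleNet":   [0, 1, 1, 0, 1, 0, 1, 0],
}

scores = {name: accuracy(y_test, preds) for name, preds in predictions.items()}
best = max(scores, key=scores.get)
print(best, scores[best])  # GoogleNet 0.875
```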
A novel interpretable deep transfer learning combining diverse learnable parameters for improved T2D prediction based on single-cell gene regulatory networks
Accurate deep learning (DL) models to predict type 2 diabetes (T2D) are concerned not only with the discrimination task but also with learning useful feature representations. However, existing DL tools are far from perfect and do not provide appropriate interpretation to explain and promote superior performance in the target task. Therefore, we provide an interpretable approach for our presented deep transfer learning (DTL) models to overcome such drawbacks, working as follows. We utilize several pre-trained models, including SEResNet152 and SEResNeXT101. Then, we transfer knowledge from the pre-trained models by keeping the weights in the convolutional base (i.e., the feature extraction part) unchanged while retraining the classification part with the Adam optimizer to classify healthy controls and T2D based on single-cell gene regulatory network (SCGRN) images. Other DTL models work in a similar manner but keep only the weights of the bottom layers of the feature extraction part unaltered while updating the weights of the subsequent layers through training from scratch. Experimental results on the whole set of 224 SCGRN images using five-fold cross-validation show that our model (TFeSEResNeXT101) achieves the highest average balanced accuracy (BAC) of 0.97, thereby significantly outperforming the baseline, which resulted in an average BAC of 0.86. Moreover, a simulation study demonstrated that this superiority is attributable to the distributional conformance of model weight parameters obtained with the Adam optimizer when coupled with weights from a pre-trained model.
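The balanced accuracy (BAC) metric this abstract reports is the mean of per-class recall, which keeps a minority class (e.g., T2D cases among mostly healthy controls) from being drowned out by the majority class. A minimal sketch, with an invented imbalanced toy split:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall: each class contributes equally to the
    score, regardless of how many samples it has."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# 0 = healthy control, 1 = T2D; a deliberately imbalanced toy example
y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 1, 0]
print(balanced_accuracy(y_true, y_pred))  # (6/6 + 1/2) / 2 = 0.75
```

Plain accuracy on this toy example would be 7/8 = 0.875, flattering the classifier for getting the easy majority class right; BAC exposes that it misses half of the minority class.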
Automated framework for multi-domain social media text analysis for business strategy employing multilayer perceptron with Word2Vec features and LIME XAI
Sentiment analysis is a pivotal domain in Natural Language Processing (NLP), particularly for understanding opinions expressed in sequential and textual data using machine learning. It involves identifying and categorizing emotions expressed in textual reviews and messages. Social media platforms such as Twitter, Facebook, and Instagram generate extensive datasets rich in sentiments, making their analysis crucial for monitoring public opinion and informing business strategy. By uncovering customer satisfaction levels, product feedback, and service-related concerns, sentiment analysis helps organizations refine marketing efforts, optimize product features, and improve service delivery. Traditional machine learning techniques struggle to process large datasets and yield accurate results efficiently. To address this, we propose an effective multi-layer perceptron deep network with word embedding features, called MultiSentiNet, for sentiment analysis on Twitter datasets. The proposed model’s performance is evaluated against conventional machine learning classifiers and state-of-the-art deep learning classifiers, indicating superior accuracy on three different datasets. The significance of the proposed model is further tested on three diverse datasets (women’s e-commerce, US airline sentiments, and hate text-speech detection), demonstrating that the proposed framework outperforms other classifiers in terms of accuracy, recall, precision, and F1 score. The performance of the proposed model is also compared with previously published research works. Furthermore, the interpretability and analysis of MultiSentiNet results are explained using the LIME XAI technique, providing deeper insights into the model’s predictions and practical value in strategic business decision-making.
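A common way to turn Word2Vec word embeddings into a fixed-length input for an MLP, as this abstract describes, is to average the vectors of a text's in-vocabulary tokens. A minimal sketch with an invented toy embedding table (the study would use trained Word2Vec vectors of much higher dimension):

```python
# Toy 2-dimensional embedding table standing in for trained Word2Vec
# vectors; words and values are assumptions for illustration only.
embeddings = {
    "great":  [0.9, 0.1],
    "flight": [0.2, 0.5],
    "awful":  [-0.8, 0.3],
    "delay":  [-0.6, 0.7],
}

def doc_vector(text, table, dim=2):
    """Average the vectors of all in-vocabulary tokens; the resulting
    fixed-length vector is what the MLP classifier would consume."""
    vecs = [table[w] for w in text.lower().split() if w in table]
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

print(doc_vector("Great flight", embeddings))
```

Averaging discards word order, which is one reason the abstract contrasts this pipeline with sequence-aware deep classifiers; the trade-off is a compact feature that any dense network can consume directly.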
A validated framework for responsible AI in healthcare autonomous systems
Artificial Intelligence (AI)-powered autonomous systems are increasingly entering healthcare, yet concerns about their reliability, safety, and responsible use present significant barriers to adoption. Building on prior conceptual work, this study introduces a refined and empirically validated framework designed to support the safe and responsible integration of AI in clinical and regulatory contexts. The original framework was developed from semi-structured interviews with 15 experts across clinical, technical, ethical, and regulatory domains, and was subsequently validated through a structured process involving 10 newly recruited participants. Validation combined quantitative ratings and qualitative feedback, yielding consistently high scores for relevance, clarity, and usability, alongside strong endorsement of practical utility. The resulting framework consists of ten dimensions spanning technical, ethical, and operational categories, and is aligned with international standards such as ISO 21448 and the NIST AI Risk Management Framework. By addressing critical issues including data quality, explainability, fairness, and human–AI collaboration, the framework moves beyond abstract principles to provide actionable guidance. It offers clinicians, developers, regulators, and procurement bodies a structured tool to evaluate, monitor, and guide the responsible adoption of autonomous AI systems in healthcare ecosystems.
Novel Hate Speech Detection Using Word Cloud Visualization and Ensemble Learning Coupled with Count Vectorizer
A plethora of negative behavioural activities have recently been found on social media. Incidents such as trolling and hate speech on social media, especially on Twitter, have grown considerably. Therefore, detection of hate speech on Twitter has become an area of interest among many researchers. In this paper, we present a computational framework to (1) examine the computational challenges behind hate speech detection and (2) generate high-performance results. First, we extract features from Twitter data by utilizing a count vectorizer technique. Then, we provide the labeled dataset of constructed features to the adopted ensemble methods, including Bagging, AdaBoost, and Random Forest. After training, we classify new tweet examples into one of two categories, hate speech or non-hate speech. Experimental results show (1) that Random Forest surpassed the other methods, achieving 95% accuracy, and (2) that the word cloud visualization displays the most prominent tweets responsible for hateful sentiments.
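The count-vectorizer feature step this abstract names maps each tweet to a vector of raw token counts over a shared vocabulary. A minimal pure-Python sketch with invented example tweets (the study would feed the resulting matrix, with labels, to Bagging, AdaBoost, or Random Forest classifiers, e.g. via scikit-learn):

```python
from collections import Counter

def count_vectorize(tweets):
    """Build a sorted vocabulary from all tweets, then map each tweet
    to a vector of raw token counts over that vocabulary."""
    vocab = sorted({w for t in tweets for w in t.lower().split()})
    matrix = []
    for t in tweets:
        counts = Counter(t.lower().split())
        matrix.append([counts.get(w, 0) for w in vocab])
    return vocab, matrix

tweets = ["I love this", "I hate hate speech"]
vocab, X = count_vectorize(tweets)
print(vocab)  # ['hate', 'i', 'love', 'speech', 'this']
print(X)      # [[0, 1, 1, 0, 1], [2, 1, 0, 1, 0]]
```

Each column is one vocabulary word and each row one tweet, so the second tweet's count of 2 for "hate" survives into the features the ensemble learns from.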
Decoding trust in large language models for healthcare in Saudi Arabia
This study investigates the factors influencing user trust and decision-making when using Artificial Intelligence (AI) systems, specifically focusing on ChatGPT in the healthcare domain within the Saudi context. As AI-powered conversational agents are increasingly utilized for medical advice, symptom assessment, and healthcare decision support, understanding user trust and adoption behavior is critical. Leveraging constructs from trust in technology, the Technology Acceptance Model (TAM), the Health Belief Model (HBM), and usability frameworks, the study utilizes Partial Least Squares Structural Equation Modeling (PLS-SEM) to analyze relationships among competence, reliability, transparency, security, trustworthiness, persuasiveness, and user satisfaction. The findings highlight the significant role of reliability, security, and transparency in building trust and supporting decision-making with ChatGPT in healthcare applications. Notably, out of the 15 tested hypotheses, 10 were supported, reinforcing the critical importance of trust and satisfaction in AI adoption for health-related interactions. The research contributes to understanding cultural influences on AI adoption in Saudi Arabia’s healthcare sector and offers practical recommendations for enhancing the trustworthiness and effectiveness of large language models (LLMs) like ChatGPT in medical consultations. These insights are vital for developing responsible AI practices and ensuring ethical deployment of AI-powered tools in healthcare settings, ultimately fostering user confidence in AI-assisted medical decision-making.