21 result(s) for "ElMoaqet, Hisham"
Design of a Smart Factory Based on Cyber-Physical Systems and Internet of Things towards Industry 4.0
The rise of Industry 4.0, which employs emerging powerful and intelligent technologies and represents the digital transformation of manufacturing, has a significant impact on society, industry, and other production sectors. The industrial scene is witnessing ever-increasing pressure to improve its agility and versatility to accommodate the highly modularized, customized, and dynamic demands of production. One of the key concepts within Industry 4.0 is the smart factory, which represents a manufacturing/production system with interconnected processes and operations via cyber-physical systems, the Internet of Things, and state-of-the-art digital technologies. This paper outlines the design of a smart cyber-physical system that complies with the innovative smart factory framework for Industry 4.0 and implements the core industrial, computing, information, and communication technologies of the smart factory. It discusses how to combine the key components (pillars) of a smart factory to create an intelligent manufacturing system. As a demonstration of a simplified smart factory model, a smart manufacturing case study with a drilling process is implemented, and the feasibility of the proposed method is demonstrated and verified with experiments.
Deep Recurrent Neural Networks for Automatic Detection of Sleep Apnea from Single Channel Respiration Signals
Sleep apnea is a common sleep disorder that causes repeated breathing interruptions during sleep. The performance of automated apnea detection methods based on respiratory signals depends on the signals considered and the feature extraction methods. Moreover, feature engineering techniques are highly dependent on the experts' experience and their prior knowledge about different physiological signals and the conditions of the subjects. To overcome these problems, a novel deep recurrent neural network (RNN) framework is developed for automated feature extraction and detection of apnea events from single respiratory channel inputs. Long short-term memory (LSTM) and bidirectional long short-term memory (BiLSTM) networks are investigated to develop the proposed deep RNN model. The proposed framework is evaluated over three respiration signals: oronasal thermal airflow (FlowTh), nasal pressure (NPRE), and abdominal respiratory inductance plethysmography (ABD). To demonstrate our results, we use polysomnography (PSG) data of 17 patients with obstructive, central, and mixed apnea events. Our results indicate the effectiveness of the proposed framework in automatic extraction of temporal features and automated detection of apneic events over the different respiratory signals considered in this study. Using a deep BiLSTM-based detection model, the NPRE signal achieved the highest overall detection results with a true positive rate (sensitivity) of 90.3%, a true negative rate (specificity) of 83.7%, and an area under the receiver operating characteristic curve of 92.4%. The present results contribute a new deep learning approach for automated detection of sleep apnea events from single-channel respiration signals that can potentially serve as a helpful alternative tool to the traditional PSG method.
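The core mechanism described in this abstract can be sketched in miniature: a BiLSTM consumes a fixed-length respiration window in both time directions and concatenates the two final hidden states into a feature vector for classification. The NumPy forward pass below is an illustrative toy, not the authors' trained model; the weights are random and the window length is arbitrary.

```python
import numpy as np

def lstm_forward(x, Wx, Wh, b):
    """Run a single-layer LSTM over x of shape (T, d_in); gates stacked as [i, f, g, o]."""
    T, _ = x.shape
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sigm = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(T):
        z = x[t] @ Wx + h @ Wh + b            # pre-activations for all 4 gates, (4H,)
        i, f, g, o = np.split(z, 4)
        i, f, o = sigm(i), sigm(f), sigm(o)
        c = f * c + i * np.tanh(g)            # cell state update
        h = o * np.tanh(c)                    # hidden state
    return h

def bilstm_features(window, params_fwd, params_bwd):
    """Concatenate final hidden states of a forward and a time-reversed pass."""
    h_f = lstm_forward(window, *params_fwd)
    h_b = lstm_forward(window[::-1], *params_bwd)
    return np.concatenate([h_f, h_b])

rng = np.random.default_rng(0)
d_in, H, T = 1, 8, 30                         # single-channel window, 30 samples
make = lambda: (rng.normal(0, 0.1, (d_in, 4 * H)),
                rng.normal(0, 0.1, (H, 4 * H)),
                np.zeros(4 * H))
feats = bilstm_features(rng.normal(size=(T, d_in)), make(), make())
print(feats.shape)                            # (16,): fed to a dense apnea/normal classifier
```

In a real detector, the concatenated features would feed a sigmoid output layer trained per epoch of the respiration signal.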
A Deep Transfer Learning Framework for Sleep Stage Classification with Single-Channel EEG Signals
The polysomnogram (PSG) is the gold standard for evaluating sleep quality and disorders. Attempts to automate this process have been hampered by the complexity of the PSG signals and the heterogeneity among subjects and recording hardware. Most of the existing methods for automatic sleep stage scoring rely on hand-engineered features that require prior knowledge of sleep analysis. This paper presents an end-to-end deep transfer learning framework for automatic feature extraction and sleep stage scoring based on a single-channel EEG. The proposed framework was evaluated over the three primary signals recommended by the American Academy of Sleep Medicine (C4-M1, F4-M1, O2-M1) from two data sets that have different properties and were recorded with different hardware. Different time-frequency (TF) imaging approaches were evaluated to generate TF representations for the 30 s EEG sleep epochs, eliminating the need for complex EEG signal pre-processing or manual feature extraction. Several training and detection scenarios were investigated using transfer learning of convolutional neural networks (CNNs) combined with recurrent neural networks. Generating TF images from the continuous wavelet transform along with a deep transfer architecture composed of a pre-trained GoogLeNet CNN followed by a bidirectional long short-term memory (BiLSTM) network showed the best scoring performance among all tested scenarios. Using 20-fold cross-validation applied on the C4-M1 channel, the proposed framework achieved an average per-class accuracy of 91.2%, sensitivity of 77%, specificity of 94.1%, and precision of 75.9%. Our results demonstrate that, without changing the model architecture or the training algorithm, our model could be applied to different single-channel EEGs from different data sets. Most importantly, the proposed system receives a single EEG epoch as input at a time and produces a single corresponding output label, making it suitable for real-time monitoring outside sleep labs as well as for helping sleep lab specialists arrive at a more accurate diagnosis.
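The first stage of such a pipeline, turning a 30 s EEG epoch into a three-channel time-frequency image that a pretrained RGB CNN can ingest, can be sketched as follows. This is a hedged stand-in: it uses a simple STFT magnitude spectrogram rather than the continuous wavelet transform the paper found best, and the 100 Hz sampling rate is an assumption for illustration.

```python
import numpy as np

fs = 100                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
epoch = rng.normal(size=30 * fs)           # stand-in for one 30 s single-channel EEG epoch

# Simple STFT magnitude spectrogram (the paper's best model uses CWT instead).
win, hop = 128, 32
frames = np.stack([epoch[s:s + win] * np.hanning(win)
                   for s in range(0, len(epoch) - win + 1, hop)])
Sxx = np.abs(np.fft.rfft(frames, axis=1)).T      # (freq bins, time frames)

# Scale to [0, 1] and tile to 3 channels so a pretrained RGB CNN can ingest it.
img = (Sxx - Sxx.min()) / (Sxx.max() - Sxx.min())
img = np.stack([img] * 3, axis=-1)
print(img.shape)                           # (65, 90, 3)
```

The resulting image would then be resized to the CNN's expected input resolution before being passed through the transfer-learned feature extractor.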
Multi-Stage Domain-Adapted 6D Pose Estimation of Warehouse Load Carriers: A Deep Convolutional Neural Network Approach
Intelligent autonomous guided vehicles (AGVs) are of huge importance in facilitating the automation of load handling in the era of Industry 4.0. AGVs heavily rely on environmental perception, such as the 6D poses of objects, in order to execute complex tasks efficiently. Therefore, estimating the 6D poses of objects in warehouses is crucial for proper load handling in modern intra-logistics warehouse environments. This study presents a deep convolutional neural network approach for estimating the pose of warehouse load carriers. Recognizing the paucity of labeled real 6D pose estimation data, the proposed approach uses only synthetic RGB warehouse data to train the network. Domain adaptation was applied using a Contrastive Unpaired Image-to-Image Translation (CUT) network to generate domain-adapted training data that can bridge the domain gap between synthetic and real environments and help the model generalize better over realistic scenes. In order to increase the detection range, a multi-stage refinement detection pipeline is developed using consistent multi-view multi-object 6D pose estimation (CosyPose) networks. The proposed framework was tested with different training scenarios, and its performance was comprehensively analyzed and compared with a state-of-the-art non-adapted single-stage pose estimation approach, showing an improvement of up to 80% on the ADD-S AUC metric. Using a mix of adapted and non-adapted synthetic data along with splitting the state space into multiple refiners, the proposed approach achieved an ADD-S AUC performance greater than 0.81 over a wide detection range, from one to five meters, while still being trained on a relatively small synthetic dataset for a limited number of epochs.
Using Masked Image Modelling Transformer Architecture for Laparoscopic Surgical Tool Classification and Localization
Artificial intelligence (AI) has shown its potential to advance applications in various medical fields. One such area involves developing integrated AI-based systems to assist in laparoscopic surgery. Surgical tool detection and phase recognition are key components in developing such systems, and therefore, they have been extensively studied in recent years. Despite significant advancements in this field, previous image-based methods still face many challenges that limit their performance due to complex surgical scenes and limited annotated data. This study proposes a novel deep learning approach for classifying and localizing surgical tools in laparoscopic surgeries. The proposed approach uses a self-supervised learning algorithm for surgical tool classification followed by a weakly supervised algorithm for surgical tool localization, eliminating the need for explicit localization annotation. In particular, we leverage the Bidirectional Encoder Representation from Image Transformers (BEiT) model for tool classification and then utilize the heat maps generated from the multi-headed attention layers in the BEiT model for localizing these tools. Furthermore, the model incorporates class weights to address the class imbalance issue resulting from the different usage frequencies of surgical tools in surgeries. Evaluated on the Cholec80 benchmark dataset, the proposed approach demonstrated high performance in surgical tool classification, surpassing previous works that utilize both spatial and temporal information. Additionally, the proposed weakly supervised learning approach achieved state-of-the-art results for the localization task.
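The class-weighting step mentioned in this abstract is straightforward to illustrate: classes that appear less often receive proportionally larger loss weights. The sketch below uses "balanced" inverse-frequency weighting; the tool names and frequencies are hypothetical, chosen only to mimic the kind of imbalance seen in surgical video data (graspers typically dominate).

```python
import numpy as np

# Hypothetical per-frame tool labels; the grasper dominates, as is common in Cholec80-style data.
labels = np.array(["grasper"] * 70 + ["hook"] * 20 + ["scissors"] * 10)

classes, counts = np.unique(labels, return_counts=True)
# "Balanced" inverse-frequency weighting: n_samples / (n_classes * class_count).
weights = len(labels) / (len(classes) * counts)
print(dict(zip(classes, np.round(weights, 3))))
# Rarer classes (scissors) get a larger weight than frequent ones (grasper).
```

During training, these weights would multiply each sample's contribution to the classification loss so that minority tools are not drowned out.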
End of Apnea Event Prediction Leveraging EEG Signals and Interpretable Machine Learning
Obstructive sleep apnea is a prevalent sleep disorder with serious health implications. While previous studies focused on detecting apnea events, little is known about the factors that determine whether an apnea episode continues or terminates. Understanding these mechanisms is crucial for optimizing treatment strategies. In this study, we analyzed 30-s brain activity segments during continuous and ending apnea events to identify neurophysiological markers of event termination, with particular emphasis on the most influential EEG features. Frequency-domain and complexity features were extracted, and several ensemble machine learning models were trained and evaluated. Our results show that the Extra Trees model achieved the highest performance, with an accuracy of 0.88, F1-score for ending apnea of 0.87, and an area under the receiver operating characteristic curve of 0.95. Feature importance analyses and SHAP visualizations highlighted frequency-band energy, Teager–Kaiser energy, and signal complexity as key contributors. Temporal analyses revealed how these features evolve during apnea termination. These findings suggest that cortical activation and transient arousal processes play a decisive role in ending apnea events and may facilitate the development of more advanced adaptive or closed-loop sleep apnea therapies.
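One of the EEG features highlighted above, the Teager-Kaiser energy, has a compact closed form that is easy to demonstrate. The sketch below is a minimal implementation of the discrete operator, verified on a pure sinusoid, for which the operator is known to be constant; it is an illustration of the feature, not the authors' full extraction pipeline.

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure sinusoid A*cos(w*n), the TKEO is the constant A^2 * sin(w)^2.
n = np.arange(1000)
x = 2.0 * np.cos(0.3 * n)
psi = teager_kaiser(x)
print(round(psi.mean(), 6))   # ≈ 0.349329, i.e. 4 * sin(0.3)^2
```

Because the operator tracks both amplitude and instantaneous frequency, a rise in TKEO within a 30 s EEG segment is a plausible marker of the cortical activation that accompanies apnea termination.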
In-Vehicle Data for Predicting Road Conditions and Driving Style Using Machine Learning
Many network protocols such as Controller Area Network (CAN) and Ethernet are used in the automotive industry to allow vehicle modules to communicate efficiently. These networks carry rich data from the different vehicle systems, such as the engine, transmission, brakes, etc. This in-vehicle data can be used with machine learning algorithms to predict valuable information about the vehicle and roads. In this work, a low-cost machine learning system that uses in-vehicle data is proposed to solve three categorization problems: road surface conditions, road traffic conditions, and driving style. Random forest, decision tree, and support vector machine algorithms were evaluated to predict road conditions and driving style from labeled CAN data. These algorithms were used to classify the road surface condition as smooth, even, or full of holes. They were also used to classify road traffic conditions as low, normal, or high, and the driving style as normal or aggressive. Detection results were presented and analyzed. The random forest algorithm showed the highest detection accuracy, with an overall accuracy score between 92% and 95%.
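The random-forest classification step can be sketched end to end on synthetic data. Everything below is illustrative: the three features (speed, vertical-acceleration variance, braking rate) and the way they correlate with road roughness are assumptions standing in for real labeled CAN signals.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical CAN-derived features; labels 0/1/2 stand in for
# smooth / even / full-of-holes road surfaces.
n = 600
y = rng.integers(0, 3, n)
X = np.column_stack([
    60 - 15 * y + rng.normal(0, 5, n),       # rougher roads -> lower speed
    0.2 + 0.4 * y + rng.normal(0, 0.1, n),   # rougher roads -> more vibration
    0.1 + 0.2 * y + rng.normal(0, 0.05, n),  # rougher roads -> more braking
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))       # high accuracy on well-separated classes
```

With real CAN data the same pipeline applies; the work's reported 92-95% accuracy reflects noisier, genuinely overlapping classes.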
Research and Education in Robotics: A Comprehensive Review, Trends, Challenges, and Future Directions
Robotics has emerged as a transformative discipline at the intersection of engineering, computer science, and the cognitive sciences. This state-of-the-art review explores the current trends, methodologies, and challenges in both robotics research and education. This paper presents a comprehensive review of the evolution of robotics, tracing its development from early automation to intelligent, autonomous systems. Key enabling technologies, such as Artificial Intelligence (AI), soft robotics, the Internet of Things (IoT), and swarm intelligence, are examined along with real-world applications in healthcare, manufacturing, agriculture, and sustainable smart cities. A central focus is placed on robotics education, where hands-on, interdisciplinary learning is reshaping curricula from K–12 to postgraduate levels. This paper analyzes instructional models including project-based learning, laboratory work, capstone design courses, and robotics competitions, highlighting their effectiveness in developing both technical and creative competencies. Widely adopted platforms such as the Robot Operating System (ROS) are briefly discussed in the context of their educational value and real-world alignment. Through case studies, institutional insights, and a synthesis of academic and industry practices, this review underscores the vital role of robotics education in fostering innovation, systems thinking, and workforce readiness. The paper concludes by identifying key challenges and future directions to guide researchers, educators, industry stakeholders, and policymakers in advancing robotics as both a technological and an educational frontier.
Deep Learning Architectures for Single-Label and Multi-Label Surgical Tool Classification in Minimally Invasive Surgeries
The integration of Context-Aware Systems (CASs) in Future Operating Rooms (FORs) aims to enhance surgical workflows and outcomes through real-time data analysis. CASs require accurate classification of surgical tools, enabling the understanding of surgical actions. This study proposes a novel deep learning approach for surgical tool classification based on combining convolutional neural networks (CNNs), Feature Fusion Modules (FFMs), Squeeze-and-Excitation (SE) networks, and bidirectional long short-term memory (BiLSTM) networks to capture both spatial and temporal features in laparoscopic surgical videos. We explored different modeling scenarios with respect to the location and number of SE blocks for multi-label surgical tool classification in the Cholec80 dataset. Furthermore, we analyzed a single-label surgical tool classification model using a simplified and computationally less expensive architecture compared to the multi-label problem setting. The single-label classification model showed improved overall performance compared to the proposed multi-label classification model due to the increased complexity of identifying multiple tools simultaneously. Nonetheless, our results demonstrated that the proposed CNN-SE-FFM-BiLSTM multi-label model achieved performance competitive with state-of-the-art methods, with excellent performance in detecting tools with complex usage patterns and in minority classes. Future work should focus on optimizing models for real-time applications and broadening dataset evaluations to improve performance in diverse surgical environments. These improvements are crucial for the practical implementation of such models in CASs, ultimately aiming to enhance surgical workflows and patient outcomes in FORs.
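The Squeeze-and-Excitation block named in this abstract has a simple structure worth illustrating: global-average-pool the feature map per channel ("squeeze"), pass the result through a small FC-ReLU-FC-sigmoid bottleneck ("excite"), and rescale each channel by its learned gate. The NumPy sketch below uses random weights and an arbitrary feature-map size purely for illustration.

```python
import numpy as np

def se_block(feat, W1, W2):
    """Squeeze-and-Excitation: recalibrate the channels of feat, shape (H, W, C)."""
    sigm = lambda z: 1.0 / (1.0 + np.exp(-z))
    squeeze = feat.mean(axis=(0, 1))                  # global average pool -> (C,)
    excite = sigm(np.maximum(squeeze @ W1, 0) @ W2)   # FC-ReLU-FC-sigmoid -> gates in (0, 1)
    return feat * excite                              # channel-wise rescaling

rng = np.random.default_rng(0)
H, W, C, r = 7, 7, 32, 4                              # r = channel reduction ratio
feat = rng.normal(size=(H, W, C))
out = se_block(feat,
               rng.normal(0, 0.1, (C, C // r)),
               rng.normal(0, 0.1, (C // r, C)))
print(out.shape)   # (7, 7, 32): same shape, with channels reweighted
```

In the full model, such blocks would sit after convolutional stages so that informative tool-related channels are amplified before the FFM and BiLSTM stages.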
Short-Time Wind Speed Forecast Using Artificial Learning-Based Algorithms
The demand for efficient power sources to operate modern industry has been increasing rapidly in recent years. However, the output of renewable power sources is difficult to predict, because the generated power depends heavily on fluctuating factors (such as wind bearing, pressure, wind speed, and the humidity of the surrounding atmosphere). Thus, accurate forecasting methods are of paramount importance to develop and employ in practice. In this paper, a case study of a wind harvesting farm is investigated in terms of collected wind speed data. For data that are hard to predict, such as wind speed, a well-built and well-tested forecasting algorithm must be provided. To accomplish this goal, four neural network-based algorithms: artificial neural network (ANN), convolutional neural network (CNN), long short-term memory (LSTM), and a hybrid convolutional LSTM (ConvLSTM) model that combines LSTM with CNN, together with one support vector machine (SVM) model, are investigated, evaluated, and compared using different statistical and time indicators to ensure that the final model meets the goal it is built for. Results show that even though the SVM delivered the most accurate predictions, ConvLSTM was chosen due to its lower computational cost along with its high prediction accuracy.
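The common supervision scheme behind all five compared models, sliding-window one-step-ahead forecasting, can be sketched with the SVM variant, since it needs no deep learning framework. The synthetic wind series, the 24-step lag window, and the SVR hyperparameters below are assumptions for illustration, not the study's actual data or configuration.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
t = np.arange(500, dtype=float)
# Hypothetical wind-speed series: a periodic cycle plus noise (m/s).
wind = 8 + 3 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 0.3, t.size)

# Sliding-window supervision: the previous `lag` speeds predict the next one.
lag = 24
X = np.stack([wind[i:i + lag] for i in range(len(wind) - lag)])
y = wind[lag:]

split = 400                                   # chronological train/test split
model = SVR(C=10.0).fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(round(rmse, 2))                         # small RMSE on the held-out tail
```

The neural models in the paper consume the same windowed inputs; ConvLSTM additionally treats each window as a short sequence of spatial patches so convolutions and recurrence share the work.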