Catalogue Search | MBRL
Explore the vast range of titles available.
26 result(s) for "Rahman, Hameedur"
Enhancing Network Intrusion Detection Using an Ensemble Voting Classifier for Internet of Things
2023
In the context of 6G technology, the Internet of Everything aims to create a vast network that connects both humans and devices across multiple dimensions. The integration of smart healthcare, agriculture, transportation, and homes is incredibly appealing, as it allows people to effortlessly control their environment through touch or voice commands. As Internet connectivity increases, however, so does the security risk. The future is centered on a six-fold increase in connectivity, necessitating stronger security measures to handle the rapidly expanding concept of IoT-enabled metaverse connections. Various types of attacks, often orchestrated using botnets, threaten the performance of IoT-enabled networks. Detecting anomalies within these networks is crucial for safeguarding applications from potentially disastrous consequences. The voting classifier is a machine learning (ML) model known for its effectiveness, as it capitalizes on the strengths of individual ML models and has the potential to improve overall predictive performance. In this research, we propose a novel classification technique based on the DRX approach, which combines the advantages of the Decision Tree, Random Forest, and XGBoost algorithms. This ensemble voting classifier significantly enhances the accuracy and precision of network intrusion detection systems. Our experiments were conducted on the NSL-KDD, UNSW-NB15, and CIC-IDS2017 datasets. The findings show that the DRX-based technique outperforms the other methods, achieving accuracies of 99.88% on NSL-KDD, 99.93% on UNSW-NB15, and 99.98% on CIC-IDS2017. It also reduces the false positive rates to 0.003, 0.001, and 0.00012 on the three datasets, respectively.
Journal Article
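The ensemble voting idea the abstract describes can be sketched with scikit-learn's VotingClassifier. This is an illustrative reconstruction, not the authors' code: GradientBoostingClassifier stands in for XGBoost, and the synthetic dataset and all parameters are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for an intrusion-detection dataset (normal vs attack)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

drx = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),  # XGBoost stand-in
    ],
    voting="soft",  # average the base models' predicted probabilities
)
drx.fit(X_tr, y_tr)
accuracy = drx.score(X_te, y_te)
```

Soft voting averages class probabilities, letting confident base models outvote uncertain ones; hard voting (`voting="hard"`) takes a majority of predicted labels instead.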
A Novel Feature-Selection Algorithm in IoT Networks for Intrusion Detection
by Memon, Zulfiqar; Khan, Inam Ullah; Rahman, Hameedur
in Accuracy; Algorithms; Anti-virus software
2023
The Internet of Things (IoT) and network-enabled smart devices are crucial to today's digitally interconnected society. However, the increased reliance on IoT devices makes them more susceptible to malicious activity within network traffic, posing significant challenges to cybersecurity. As a result, both system administrators and end users are negatively affected by these malevolent behaviours. Intrusion-detection systems (IDSs) are commonly deployed as a defence mechanism to mitigate such risks, and they play a crucial role in identifying and preventing cyber hazards within IoT networks. However, developing an efficient and rapid IDS for detecting cyber attacks remains a challenging area of research. Moreover, IDS datasets contain many features, so feature selection (FS) is required to design an effective and timely IDS. The FS procedure seeks to eliminate irrelevant and redundant features from large IDS datasets, thereby improving the intrusion-detection system's overall performance. In this paper, we propose a hybrid wrapper-based feature-selection algorithm built on a Cellular Automata (CA) engine and Tabu Search (TS)-based aspiration criteria. We use a Random Forest (RF) ensemble learning classifier to evaluate the fitness of the selected features. The proposed algorithm, CAT-S, was tested on the TON_IoT dataset. The simulation results demonstrate that CAT-S enhances classification accuracy while simultaneously reducing the number of features and the false positive rate.
Journal Article
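CAT-S itself combines a Cellular Automata engine with Tabu Search, which is beyond a short sketch. As a much simpler illustration of the wrapper-FS idea with an RF fitness function, here is greedy forward selection on synthetic data; everything here (dataset, parameters, stopping rule) is illustrative, not the paper's algorithm.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic IDS-like data: 10 features, only 4 of them informative
X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=1)

def fitness(cols):
    """Wrapper fitness: cross-validated accuracy of an RF on the chosen columns."""
    rf = RandomForestClassifier(n_estimators=25, random_state=1)
    return cross_val_score(rf, X[:, cols], y, cv=3).mean()

selected, remaining, best = [], list(range(X.shape[1])), 0.0
while remaining:
    # Try adding each remaining feature; keep the one that helps most
    gains = {c: fitness(selected + [c]) for c in remaining}
    c, score = max(gains.items(), key=lambda kv: kv[1])
    if score <= best:
        break  # no candidate improves the fitness; stop
    selected.append(c)
    remaining.remove(c)
    best = score
```

A wrapper method like this re-trains the classifier for every candidate subset, which is exactly why the choice of search strategy (greedy here, CA plus Tabu Search in the paper) matters for runtime.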
A Deep Learning Approach for Liver and Tumor Segmentation in CT Images Using ResUNet
by Tu, Shanshan; Bukht, Tanvir Fatima Naik; Alzahrani, Abdulkareeem
in Accuracy; Automation; Bioengineering
2022
According to the most recent estimates from global cancer statistics for 2020, liver cancer is the ninth most common cancer in women. Segmenting the liver is difficult, and segmenting the tumor from the liver adds further difficulty. After a sample of liver tissue is taken, imaging tests such as magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US) are used to segment the liver and liver tumor. Due to overlapping intensities and variability in the position and shape of soft tissues, segmenting the liver and tumor from abdominal CT images based on gray shade or shape alone is unreliable. This study proposes a more efficient method for segmenting the liver and tumors from CT image volumes using a hybrid ResUNet model, combining the ResNet and UNet models to address this gap. The two overlapping models were primarily used to segment the liver and to assess the region of interest (ROI). Liver segmentation is performed to examine the liver within an abdominal CT image volume. The proposed model is based on CT volume slices of patients with liver tumors and is evaluated on the public 3D dataset IRCADB01. Based on the experimental analysis, the accuracy for liver segmentation was found to be approximately 99.55%, 97.85%, and 98.16%. The dice coefficient also improved, indicating that the model is ready for use in the detection of liver tumors.
Journal Article
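The Dice coefficient the abstract reports has a precise definition: twice the overlap between predicted and reference masks, divided by their total size. A minimal sketch on toy binary masks (the arrays here are illustrative, not the paper's data):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: predicted region overlaps 4 of the 6 reference pixels
pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1      # 4 pixels
target = np.zeros((4, 4)); target[1:3, 1:4] = 1  # 6 pixels
score = dice_coefficient(pred, target)            # 2*4 / (4+6) = 0.8
```

The small `eps` keeps the ratio defined when both masks are empty, a common convention in segmentation evaluation.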
IoT powered RNN for improved human activity recognition with enhanced localization and classification
by Al Mudawi, Naif; Alhasson, Haifa F.; Rahman, Hameedur
in 631/114/1305; 631/114/1314; 631/114/1386
2025
Human activity recognition (HAR) and localization are active research areas of the modern era, driven by smart devices. However, the data acquired from the sensors embedded in smart devices contain plenty of noise, which makes it essential to design robust systems for HAR and localization. In this article, a system is presented that is endowed with multiple algorithms, making it robust to signal noise and efficient at recognizing human activities and their respective locations. The system begins by denoising the input signal using a Chebyshev type-I filter and then performs windowing. Then, working in parallel branches, it extracts the respective features for the performed activity and the human's location. The Boruta algorithm is then applied to select the most informative of the extracted features. The data is optimized using a particle swarm optimization (PSO) algorithm, and two recurrent neural networks (RNNs) are trained in parallel, one for HAR and the other for localization. The system is comprehensively evaluated on two publicly available benchmark datasets, the Extrasensory dataset and the Sussex Huawei Locomotion (SHL) dataset. The evaluation results demonstrate the system's exceptional performance: it outperformed state-of-the-art methods, scoring respective accuracies of 89.25% and 90.50% on the former dataset and 95.75% and 91.50% on the latter for HAR and localization.
Journal Article
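The denoise-then-window front end the abstract describes can be sketched with SciPy's Chebyshev type-I design. The sampling rate, filter order, ripple, cutoff, and window length below are all assumed values for illustration, not the paper's settings.

```python
import numpy as np
from scipy.signal import cheby1, filtfilt

fs = 50.0                                   # assumed sensor sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
clean = np.sin(2 * np.pi * 1.0 * t)         # slow "activity" component
noisy = clean + 0.5 * np.sin(2 * np.pi * 20.0 * t)  # high-frequency noise

# 4th-order Chebyshev type-I low-pass: 1 dB passband ripple, 5 Hz cutoff.
# filtfilt applies it forward and backward for zero phase distortion.
b, a = cheby1(N=4, rp=1, Wn=5, btype="low", fs=fs)
denoised = filtfilt(b, a, noisy)

# Fixed-size sliding windows (2 s length, 50% overlap) for feature extraction
win, step = int(2 * fs), int(1 * fs)
windows = [denoised[i:i + win] for i in range(0, len(denoised) - win + 1, step)]
```

Each window would then feed the parallel feature-extraction branches (activity and location) described in the abstract.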
Virtual Reality Driving Simulator: Investigating the Effectiveness of Image–Arrow Aids in Improving the Performance of Trainees
by Ullah, Sehat; Khan, Dawar; Rahman, Hameedur
in Behavior; Cognition & reasoning; cognitive aids
2025
Virtual reality driving simulators are increasingly used for training, but they still lack effective driver-assistance features, and poor user interface (UI) and guidance design degrades trainee performance. In this paper, we investigate image–arrow aids in a virtual reality driving simulator (VRDS) that enable trainees (new drivers) to interpret instructions and take the correct course of action while performing their driving task. Image–arrow aids consist of arrows, text, and images that are rendered separately during driving in the VRDS. A total of 45 participants were divided into three groups: G1 (image–arrow aids), G2 (audio and textual aids), and G3 (arrows and textual aids). The results showed that G1 (image–arrow guidance) achieved the best performance, with a mean error rate of 8.1 (SD = 1.23) and a mean completion time of 3.26 min (SD = 0.56). In comparison, G2 (audio and textual aids) had a mean error rate of 10.8 (SD = 1.31) and a completion time of 4.49 min (SD = 0.67), while G3 (arrows and textual aids) had the highest error rate (18.4, SD = 1.43) and the longest completion time (6.51 min, SD = 0.68). An evaluation revealed that G1 performed significantly better than G2 and G3 on performance measures (errors + time) and on subjective measures such as usability, ease of use, understanding, and assistance.
Journal Article
Target detection and classification via EfficientDet and CNN over unmanned aerial vehicles
by Algarni, Asaad; Hanzla, Muhammad; Al Mudawi, Naif
in deep learning; dynamic environments; multi-objects recognition deep learning
2024
Advanced traffic monitoring systems face significant challenges in vehicle detection and classification. Conventional methods often require substantial computational resources and struggle to adapt to diverse data collection methods.
This research introduces an innovative technique for classifying and recognizing vehicles in aerial image sequences. The proposed model encompasses several phases, starting with image enhancement through noise reduction and Contrast Limited Adaptive Histogram Equalization (CLAHE). Following this, contour-based segmentation and Fuzzy C-means (FCM) segmentation are applied to identify foreground objects. Vehicle detection and identification are performed using EfficientDet. For feature extraction, Accelerated KAZE (AKAZE), Oriented FAST and Rotated BRIEF (ORB), and Scale Invariant Feature Transform (SIFT) are utilized. Object classification is achieved through a Convolutional Neural Network (CNN) and a Residual Network (ResNet).
The proposed method demonstrates improved performance over previous approaches. Experiments on datasets including Vehicle Aerial Imagery from a Drone (VAID) and Unmanned Aerial Vehicle Intruder Dataset (UAVID) reveal that the model achieves an accuracy of 96.6% on UAVID and 97% on VAID.
The results indicate that the proposed model significantly enhances vehicle detection and classification in aerial images, surpassing existing methods and offering notable improvements for traffic monitoring systems.
Journal Article
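The Fuzzy C-means step the abstract mentions assigns each sample a soft membership in every cluster rather than a hard label. A compact numpy sketch of plain FCM on toy 2-D points (standing in for pixel features; cluster count, fuzzifier, and data are illustrative):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Plain FCM: alternate fuzzy-membership and cluster-center updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m                              # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        inv = dist ** (-2.0 / (m - 1.0))         # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated point groups standing in for foreground vs background
pts = np.vstack([np.random.default_rng(1).normal(0.0, 0.3, (30, 2)),
                 np.random.default_rng(2).normal(5.0, 0.3, (30, 2))])
centers, U = fuzzy_c_means(pts)
labels = U.argmax(axis=1)                        # hard labels from soft memberships
```

The fuzzifier `m` controls how soft the boundaries are; `m → 1` approaches hard k-means behavior.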
Curiosity-Driven Exploration in Reinforcement Learning: An Adaptive Self-Supervised Learning Approach for Playing Action Games
by Rahman, Hameedur; Farooq, Sehar Shahzad; Abdul Wahid, Samiya
in action games; Actors; Actresses
2025
Games are considered a suitable, standard benchmark for checking the performance of artificial intelligence-based algorithms in terms of training, evaluating, and comparing AI agents. In this research, an application of the Intrinsic Curiosity Module (ICM) with the Asynchronous Advantage Actor–Critic (A3C) algorithm is explored using action games. Although this combination has proven successful in several gaming environments, its effectiveness in action games is rarely explored. This research aims to assess whether integrating ICM with A3C promotes curiosity-driven exploration and adaptive learning in action games. Using the MAME Toolkit library, we interface with the game environments, preprocess game screens to focus on relevant visual elements, and create diverse game episodes for training. The A3C policy is optimized using the Proximal Policy Optimization (PPO) algorithm with tuned hyperparameters. Comparisons are made with baseline methods, including vanilla A3C, ICM with pixel-based predictions, and state-of-the-art exploration techniques. Additionally, we evaluate the agent's generalization capability in separate environments. The results demonstrate that ICM and A3C effectively promote curiosity-driven exploration in action games, with the agent learning exploration behaviors without relying solely on external rewards. Notably, we also observed improved efficiency and learning speed compared to baseline approaches. This research contributes to curiosity-driven exploration in reinforcement-learning-based virtual environments and provides insights into the exploration of complex action games. Successfully applying ICM and A3C in action games presents exciting opportunities for adaptive learning and efficient exploration in challenging real-world environments.
Journal Article
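The core of an Intrinsic Curiosity Module is a forward model whose prediction error becomes a bonus reward: states the agent cannot yet predict are "interesting". A toy numpy sketch of that signal, with a fixed linear map standing in for the paper's learned forward network (dimensions and scaling are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM = 4, 2   # hypothetical sizes, not the paper's
# Toy forward model: a fixed linear map predicting next-state features
W = rng.normal(size=(STATE_DIM + ACTION_DIM, STATE_DIM)) * 0.1

def intrinsic_reward(state, action, next_state, eta=0.5):
    """ICM-style curiosity signal: scaled forward-model prediction error."""
    pred = np.concatenate([state, action]) @ W
    return eta * 0.5 * np.sum((pred - next_state) ** 2)

state = rng.normal(size=STATE_DIM)
action = rng.normal(size=ACTION_DIM)
next_state = rng.normal(size=STATE_DIM)

extrinsic = 1.0                # e.g., a game-score delta from the environment
total = extrinsic + intrinsic_reward(state, action, next_state)
```

In the full ICM, the forward model is trained online and operates on learned feature embeddings rather than raw states, so the bonus shrinks as the agent's predictions improve.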
Unmanned aerial vehicles for human detection and recognition using neural-network model
2024
Recognizing human actions is crucial for allowing machines to understand and recognize human behavior, with applications spanning video based surveillance systems, human-robot collaboration, sports analysis systems, and entertainment. The immense diversity in human movement and appearance poses a significant challenge in this field, especially when dealing with drone-recorded (RGB) videos. Factors such as dynamic backgrounds, motion blur, occlusions, varying video capture angles, and exposure issues greatly complicate recognition tasks.
In this study, we suggest a method that addresses these challenges in RGB videos captured by drones. Our approach begins by segmenting the video into individual frames, followed by preprocessing steps applied to these RGB frames. The preprocessing aims to reduce computational costs, optimize image quality, and enhance foreground objects while removing the background.
This results in improved visibility of foreground objects while eliminating background noise. Next, we employ the YOLOv9 detection algorithm to identify human bodies within the images. From the grayscale silhouette, we extract the human skeleton and identify 15 key locations: the head, neck, shoulders (left and right), elbows, wrists, hips, knees, ankles, and belly button. Using these points, we extract specific positions, the angular and distance relationships between them, as well as 3D point clouds and fiducial points. Subsequently, we optimize this data using the kernel discriminant analysis (KDA) optimizer, followed by classification using a convolutional neural network (CNN). To validate our system, we conducted experiments on three benchmark datasets: UAV-Human, UCF, and Drone-Action.
On these datasets, our model achieved action recognition accuracies of 0.68, 0.75, and 0.83, respectively.
Journal Article
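The angular and distance features between skeleton keypoints that the abstract mentions reduce to simple vector geometry. A minimal sketch on hypothetical 2-D keypoints (the coordinates are made up for illustration):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by the points a-b-c."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical keypoints for one arm: shoulder, elbow, wrist
shoulder = np.array([0.0, 0.0])
elbow = np.array([1.0, 0.0])
wrist = np.array([1.0, 1.0])

elbow_angle = joint_angle(shoulder, elbow, wrist)  # right angle here: 90 degrees
reach = np.linalg.norm(wrist - shoulder)           # a distance feature
```

Computed over all 15 keypoints and across frames, such angles and distances form a pose descriptor that is largely invariant to where the person appears in the image.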
Vehicle recognition pipeline via DeepSort on aerial image datasets
by Algarni, Asaad; Hanzla, Muhammad; Al Mudawi, Naif
in deep learning; DeepSort; dynamic environments
2024
Unmanned aerial vehicles (UAVs) are widely used in various computer vision applications, especially in intelligent traffic monitoring, as they are agile and simplify operations while boosting efficiency. However, automating these procedures is still a significant challenge due to the difficulty of extracting foreground (vehicle) information from complex traffic scenes.
This paper presents a unique method for autonomous vehicle surveillance that uses Fuzzy C-means (FCM) to segment aerial images. YOLOv8, known for its ability to detect tiny objects, is then used to detect vehicles. Additionally, a system that utilizes ORB features supports vehicle recognition, assignment, and recovery across image frames. Vehicle tracking is accomplished using DeepSORT, which combines Kalman filtering with deep learning to achieve precise results.
Our proposed model demonstrates remarkable performance, achieving vehicle-detection precision of 0.86 and 0.84 on the VEDAI and SRTID datasets, respectively.
For vehicle tracking, the model achieves accuracies of 0.89 and 0.85 on the VEDAI and SRTID datasets, respectively.
Journal Article
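The Kalman filtering inside DeepSORT predicts each track's motion between detections. A self-contained constant-velocity sketch in one dimension (DeepSORT's actual state covers box position, aspect, height, and their velocities; the noise values below are illustrative):

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity state transition
H = np.array([[1.0, 0.0]])             # we measure position only
Q = np.eye(2) * 1e-3                   # process noise (assumed)
R = np.array([[0.25]])                 # measurement noise (assumed)

x = np.array([[0.0], [0.0]])           # state: [position, velocity]
P = np.eye(2)                          # state covariance

def kf_step(x, P, z):
    """One Kalman predict + update cycle for measurement z."""
    # Predict forward one frame
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct with the new detection
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# A detection arriving each frame from an object moving ~1 unit/frame
for z in [1.0, 2.0, 3.0, 4.0]:
    x, P = kf_step(x, P, np.array([[z]]))
```

After a few frames the filter's velocity estimate settles near the true motion, which is what lets DeepSORT match detections to tracks even through short occlusions.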
MAPE-ViT: multimodal scene understanding with novel wavelet-augmented Vision Transformer
by Rahman, Hameedur; Sadiq, Touseef; Ahmed, Muhammad Waqas
in Artificial Intelligence; Computer Vision; Deep learning
2025
This article introduces Multimodal Adaptive Patch Embedding with Vision Transformer (MAPE-ViT), a novel approach for RGB-D scene classification that effectively addresses the fundamental challenges of sensor misalignment, depth noise, and object-boundary preservation. Our framework integrates maximally stable extremal regions (MSER) with wavelet coefficients to create comprehensive patch embeddings that capture both local and global image features. These MSER-guided patches, incorporating original pixels and multi-scale wavelet information, serve as input to a Vision Transformer, which leverages its attention mechanisms to extract high-level semantic features. Feature discrimination is further enhanced through optimization with the Gray Wolf algorithm. The processed features then flow into a dual-stream architecture, where an extreme learning machine handles multi-object classification, while conditional random fields (CRF) manage scene-level categorization. Extensive experimental results demonstrate the effectiveness of our approach, showing significant improvements in classification accuracy compared to existing methods. Our system provides a robust solution for RGB-D scene understanding, particularly in challenging conditions where traditional approaches struggle with sensor artifacts and noise.
Journal Article
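The wavelet coefficients used to augment the patch embeddings can be illustrated with the simplest case, a single-level 2-D Haar decomposition in plain numpy (the paper does not specify its wavelet; Haar is chosen here only for clarity, and the input is a toy array):

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar decomposition into LL, LH, HL, HH sub-bands."""
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    LL = (a + b + c + d) / 4.0  # local average (coarse approximation)
    LH = (a + b - c - d) / 4.0  # horizontal detail
    HL = (a - b + c - d) / 4.0  # vertical detail
    HH = (a - b - c + d) / 4.0  # diagonal detail
    return LL, LH, HL, HH

img = np.arange(16, dtype=float).reshape(4, 4)
LL, LH, HL, HH = haar2d(img)
```

The LL band carries global structure while LH/HL/HH capture local edges, which is the local-plus-global split the abstract attributes to its wavelet-augmented patches.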