Catalogue Search | MBRL
Explore the vast range of titles available.
135 result(s) for "Ullah, Imdad"
Blockchain Based Solutions to Mitigate Distributed Denial of Service (DDoS) Attacks in the Internet of Things (IoT): A Survey
2022
Internet of Things (IoT) devices are widely used in many industries, including smart cities, smart agriculture, smart healthcare, and smart logistics. However, Distributed Denial of Service (DDoS) attacks pose a serious threat to the security of IoT. Because IoT devices are resource-constrained, with limited memory and computing resources, attackers can easily exploit their vulnerabilities and control them as part of botnets to launch DDoS attacks. As an emerging technology, Blockchain has the potential to address security issues in IoT, so it is important to analyse the Blockchain-based solutions proposed to mitigate DDoS attacks in IoT. This paper presents a detailed survey of such solutions. First, we discuss how IoT networks are vulnerable to DDoS attacks, the impact of these attacks on IoT networks and associated services, the use of Blockchain as a potential technology to address DDoS attacks, and the challenges of implementing Blockchain in IoT. We then review existing Blockchain-based solutions for mitigating DDoS attacks in the IoT environment and classify them into four categories: Distributed Architecture-based solutions, Access Management-based solutions, Traffic Control-based solutions, and Ethereum Platform-based solutions. All solutions are critically evaluated in terms of their working principles, their DDoS defense mechanism (i.e., prevention, detection, reaction), and their strengths and weaknesses. Finally, we discuss future research directions for designing and developing better Blockchain-based solutions to mitigate DDoS attacks in IoT.
Journal Article
Deep Learning-Inspired IoT-IDS Mechanism for Edge Computing Environments
by Ahanger, Tariq Ahamed; Aldaej, Abdulaziz; Ullah, Imdad
in Accuracy; Algorithms; Classification
2023
Internet of Things (IoT) technology has seen substantial research into Deep Learning (DL) techniques for detecting cyberattacks. Critical Infrastructures (CIs) must be able to detect cyberattacks quickly, close to edge devices, in order to prevent service interruptions. DL approaches outperform shallow machine learning techniques in attack detection, making them a viable alternative for intrusion detection. However, because of the massive amount of IoT data and the computational requirements of DL models, transmission overheads prevent DL models from being deployed close to the devices. Current Intrusion Detection Systems (IDS) either use conventional techniques or were not trained on pertinent IoT data and are not intended for distributed edge–cloud deployment. To address these issues, a new edge–cloud-based IoT IDS is proposed. It uses distributed processing to separate the dataset into subsets appropriate to different attack classes and performs attribute selection on time-series IoT data. DL is then used to train an attack detection model consisting of a Recurrent Neural Network (RNN) and a Bidirectional Long Short-Term Memory (Bi-LSTM) network. The high-dimensional BoT-IoT dataset, which replicates massive amounts of genuine IoT attack traffic, is used to test the proposed model. Despite an 85 percent reduction in dataset size made possible by attribute selection, the attack detection capability remained intact. The models built using the smaller dataset demonstrated higher recall (98.25%), F1-measure (99.12%), accuracy (99.56%), and precision (99.45%), with no loss in class discrimination performance compared to models trained on the entire attribute set. With the smaller attribute space, neither the RNN nor the Bi-LSTM model experienced underfitting or overfitting. The proposed DL-based IoT intrusion detection solution scales efficiently in the face of large volumes of IoT data, making it an ideal candidate for edge–cloud deployment.
Journal Article
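As a rough illustration of the kind of recurrent attack-detection model described in the abstract above, the following sketch builds a small RNN + Bi-LSTM classifier in Keras. It is not the authors' implementation: the layer sizes, window length, feature count, and the use of tf.keras are assumptions, and the BoT-IoT loading and attribute-selection steps are omitted.

```python
# Minimal sketch of an RNN + Bi-LSTM intrusion detector (hypothetical sizes).
from tensorflow.keras import layers, models

TIMESTEPS, N_FEATURES, N_CLASSES = 10, 16, 5  # assumed after attribute selection

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, N_FEATURES)),
    layers.SimpleRNN(64, return_sequences=True),      # recurrent feature extractor
    layers.Bidirectional(layers.LSTM(64)),            # Bi-LSTM over the sequence
    layers.Dense(N_CLASSES, activation="softmax"),    # attack-class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, validation_split=0.2, epochs=10)  # X: (n, 10, 16)
```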
Ensemble technique of intrusion detection for IoT-edge platform
by Ahanger, Tariq Ahamed; Aldaej, Abdulaziz; Ullah, Imdad
in 639/166; 639/705; Computer engineering
2024
Internet of Things (IoT) technology has revolutionized modern industrial sectors and has been incorporated into several vital application domains. However, security is often overlooked due to the limited resources of IoT devices. Intrusion detection methods are crucial for detecting attacks and responding adequately to every IoT attack. The current study outlines a two-stage procedure for the detection and identification of intrusions. In the first stage, a binary classifier termed Extra Tree (E-Tree) analyzes the flow of IoT data traffic within the network. In the second stage, an Ensemble Technique (ET) comprising E-Tree, a Deep Neural Network (DNN), and a Random Forest (RF) examines the intrusive events that have been identified. The proposed approach is validated for performance analysis. Specifically, the Bot-IoT, CICIDS2018, NSL-KDD, and IoTID20 datasets were used for an in-depth performance assessment. Experimental results showed that the suggested strategy was more effective than existing machine learning methods, registering enhanced statistical measures of accuracy, normalized accuracy, recall, and stability.
Journal Article
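A minimal scikit-learn sketch of the two-stage idea described above: an Extra-Trees binary classifier first flags suspicious flows, and only the flagged flows are passed to an ensemble of Extra Trees, a neural network, and a Random Forest. The hyperparameters, the hard-voting rule, and the feature matrix are assumptions, not the paper's configuration.

```python
# Two-stage intrusion detection sketch (illustrative, not the paper's exact setup).
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier

# Stage 1: binary "benign vs. suspicious" screen on all traffic flows.
stage1 = ExtraTreesClassifier(n_estimators=100, random_state=0)
# stage1.fit(X_flows, y_is_attack)

# Stage 2: classify the attack type of flows flagged by stage 1.
stage2 = VotingClassifier(
    estimators=[
        ("etree", ExtraTreesClassifier(n_estimators=100, random_state=0)),
        ("dnn", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="hard",
)
# stage2.fit(X_flows[y_is_attack == 1], y_attack_class[y_is_attack == 1])

def detect(X):
    """Return per-flow labels: 0 = benign, otherwise the predicted attack class."""
    suspicious = stage1.predict(X).astype(bool)
    labels = np.zeros(len(X), dtype=int)
    if suspicious.any():
        labels[suspicious] = stage2.predict(X[suspicious])
    return labels
```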
Smart Cybersecurity Framework for IoT-Empowered Drones: Machine Learning Perspective
by Ahanger, Tariq Ahamed; Aldaej, Abdulaziz; Ullah, Imdad
in Access control; Artificial intelligence; Blockchain
2022
Drone advancements have ushered in new trends and possibilities in a variety of sectors, particularly for small-sized drones. Drones provide navigational interlocation services, which are made possible by the Internet of Things (IoT). Drone networks, however, are subject to privacy and security risks due to design flaws, so a protected network must be created to achieve the desired performance. The goal of the current study is to examine recent privacy and security concerns affecting the network of drones (NoD). The research emphasizes the importance of a security-empowered drone network to prevent interception and intrusion. A hybrid ML technique combining logistic regression and random forest is used to classify data instances with maximal efficacy. By incorporating sophisticated artificial-intelligence-inspired techniques into the NoD framework, the proposed technique mitigates cybersecurity vulnerabilities while keeping the NoD protected and secure. For validation, the suggested technique is tested against a challenging dataset, registering enhanced performance in terms of temporal efficacy (34.56 s) and statistical measures (precision (97.68%), accuracy (98.58%), recall (98.59%), F-measure (99.01%), reliability (94.69%), and stability (0.73)).
Journal Article
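The hybrid logistic-regression/random-forest classifier mentioned in the abstract can be sketched as a soft-voting ensemble in scikit-learn. This is only one illustrative reading of "hybrid"; the actual combination rule, hyperparameters, and preprocessing used in the paper are not specified here.

```python
# Hedged sketch: combining logistic regression and a random forest by soft voting.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

hybrid = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",   # average predicted probabilities from both models
)
# hybrid.fit(X_train, y_train)          # X: drone-network traffic features (assumed)
# y_pred = hybrid.predict(X_test)       # 0 = benign, 1 = intrusion (assumed labels)
```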
Historical Text Image Enhancement Using Image Scaling and Generative Adversarial Networks
2023
Historical documents such as newspapers, invoices, and contract papers are often difficult to read due to degraded text quality. These documents may be damaged or degraded by a variety of factors such as aging, distortion, stamps, watermarks, and ink stains. Text image enhancement is essential for several document recognition and analysis tasks, and in this era of technology it is important to enhance degraded text documents for proper use. To address these issues, a new bi-cubic interpolation of the Lifting Wavelet Transform (LWT) and the Stationary Wavelet Transform (SWT) is proposed to enhance image resolution, and a generative adversarial network (GAN) is then used to extract the spectral and spatial features of historical text images. The proposed method consists of two parts. In the first part, the transformation method is used to de-noise and de-blur the images and to increase the resolution, whereas in the second part the GAN architecture fuses the original image with the result of the first part in order to improve the spectral and spatial features of the historical text image. Experimental results show that the proposed model outperforms current deep learning methods.
Journal Article
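A rough sketch of the resolution-enhancement half of such a pipeline: bicubic upscaling followed by a one-level Stationary Wavelet Transform whose detail coefficients are soft-thresholded before reconstruction. The choice of wavelet, the threshold, and the use of cv2/pywt are assumptions for illustration; the GAN fusion stage and the paper's exact LWT/SWT combination are not reproduced.

```python
# Illustrative sketch: bicubic upscaling + one-level SWT soft-threshold denoising.
import cv2
import numpy as np
import pywt

def enhance(gray, scale=2, wavelet="haar", thresh=10.0):
    # Bicubic interpolation to increase resolution.
    up = cv2.resize(gray, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    up = up[: up.shape[0] // 2 * 2, : up.shape[1] // 2 * 2].astype(float)  # even dims for SWT
    # One-level stationary wavelet transform; soft-threshold the detail bands.
    (cA, (cH, cV, cD)), = pywt.swt2(up, wavelet, level=1)
    cH, cV, cD = (pywt.threshold(c, thresh, mode="soft") for c in (cH, cV, cD))
    denoised = pywt.iswt2([(cA, (cH, cV, cD))], wavelet)
    return np.clip(denoised, 0, 255).astype(np.uint8)

# img = cv2.imread("historical_page.png", cv2.IMREAD_GRAYSCALE)
# out = enhance(img)
```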
A novel CT image de-noising and fusion based deep learning network to screen for disease (COVID-19)
2023
COVID-19, caused by SARS-CoV-2, has been declared a global pandemic by the WHO. It first appeared in China at the end of 2019 and quickly spread throughout the world, becoming more critical during the third wave. The spread of COVID-19 is extremely difficult to control, and a huge number of suspected cases must be screened as soon as possible. Laboratory testing for COVID-19 takes time and can produce significant numbers of false negatives, so reliable, accurate, and fast methods are urgently needed. The commonly used Reverse Transcription Polymerase Chain Reaction (RT-PCR) test has a low sensitivity of approximately 60% to 70% and sometimes even produces negative results for infected patients. Computed Tomography (CT) has been observed to be a sensitive approach to detecting COVID-19 and may be the best screening method. The quality of the scanned image, which is degraded by motion-induced Poisson or impulse noise, is vital. To improve the quality of the acquired image for subsequent segmentation, a novel impulse and Poisson noise reduction method is proposed that employs boundary division max/min intensity elimination along with an adaptive window size mechanism. In the second phase, a number of CNN techniques are explored for detecting COVID-19 from CT images, and an Assessment Fusion Based model (AFM) is proposed to predict the result. The AFM combines the results of cutting-edge CNN architectures and generates a final prediction based on their decisions. The empirical results demonstrate that the proposed method performs well and is extremely useful in actual diagnostic situations.
Journal Article
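The noise-reduction idea above (discarding window max/min intensities with an adaptive window size) can be illustrated with a simple NumPy filter. This is a generic adaptive-window impulse filter written for illustration, not a reproduction of the paper's boundary-division algorithm; the window limits and the median fallback are assumptions.

```python
# Generic adaptive-window impulse-noise filter (illustrative, not the paper's method).
import numpy as np

def adaptive_impulse_filter(img, max_win=7):
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = 3
            while win <= max_win:
                r = win // 2
                patch = img[max(0, y - r): y + r + 1, max(0, x - r): x + r + 1].astype(float)
                lo, hi = patch.min(), patch.max()
                if lo < img[y, x] < hi:          # pixel is not a window extreme: keep it
                    break
                inner = patch[(patch > lo) & (patch < hi)]  # drop max/min intensities
                if inner.size:                    # replace suspected impulse with inner median
                    out[y, x] = np.median(inner)
                    break
                win += 2                          # window is all extremes: grow and retry
    return out.astype(img.dtype)
```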
An Effective Self-Configurable Ransomware Prevention Technique for IoMT
2022
Remote healthcare systems and applications are enabled by the Internet of Medical Things (IoMT), an automated system that facilitates critical and emergency healthcare services in urban areas and connects isolated rural communities to various healthcare services. Researchers and developers are, to date, addressing most of the technological aspects and critical issues around the IoMT, e.g., security vulnerabilities and other cybercrimes. One major challenge the IoMT faces is widespread ransomware attacks: malicious software that encrypts patients' critical data, restricts access to IoMT devices or disables them entirely, or uses combinations of these to compromise overall system functionality, mainly for ransom. Such ransomware attacks have devastating consequences, including the loss of life-critical data and system functionality, the interruption of emergency and life-saving services, and the wastage of vital resources. This paper presents a ransomware analysis and identification architecture that detects and validates ransomware attacks and evaluates its accuracy using a comprehensive verification process. We first develop a comprehensive experimental environment that simulates a real-time IoMT network for experimenting with various types of ransomware attacks. We then construct a comprehensive set of ransomware attacks and analyze their effects on IoMT network devices. Furthermore, we develop an effective detection filter for detecting various ransomware attacks (e.g., static and dynamic attacks) and evaluate the degree of damage caused to the IoMT network devices. In addition, we develop a defense system that blocks the ransomware attacks and notifies the backend control system. To evaluate the effectiveness of the proposed framework, we tested our architecture with 194 malware samples and 46 variants, running each sample for sixty minutes and thoroughly examining the network traffic for malicious behavior. The evaluation results show more than 95% accuracy in detecting various ransomware attacks.
Journal Article
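As a purely illustrative heuristic (not the paper's detection filter), the sketch below flags network flows whose payload byte entropy is unusually high, since bulk-encrypted data written by ransomware tends to look close to random. The threshold value and the flow representation are assumptions.

```python
# Illustrative heuristic only: high payload entropy as a coarse ransomware indicator.
import math
from collections import Counter

def byte_entropy(payload: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0..8)."""
    if not payload:
        return 0.0
    counts = Counter(payload)
    n = len(payload)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def flag_flow(payload: bytes, threshold: float = 7.5) -> bool:
    # Encrypted/compressed traffic is close to 8 bits/byte; threshold is an assumption.
    return byte_entropy(payload) > threshold
```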
DIEER: Delay-Intolerant Energy-Efficient Routing with Sink Mobility in Underwater Wireless Sensor Networks
by Abbas Malik, Zafar; Latif, Kamran; Ullah, Imdad
in Algorithms; Autonomous underwater vehicles; Communication
2020
Underwater Wireless Sensor Networks (UWSNs) are an enabling technology for many applications in the commercial, military, and scientific domains. In some emergency response applications of UWSNs, data dissemination takes priority, so these applications are handled differently from energy-focused approaches; this is only possible when propagation delay is minimized and packet delivery at the surface sinks is assured. Packet delivery underwater is a serious concern because of the harsh underwater environment and the dense deployment of nodes, which causes collisions and packet loss. As a result, re-transmission causes energy loss and increases the end-to-end delay (D_E2E). In this work, we devise a framework for the joint optimization of sink mobility, hold-and-forward mechanisms, an adaptive depth threshold (d_th), and data aggregation with pattern matching, in order to reduce nodal propagation delay, maximize throughput, improve network lifetime, and minimize energy consumption. To evaluate our technique, we simulate a three-dimensional (3-D) underwater network environment with a mobile sink and dense deployments of sensor nodes with varying communication radii. We carry out a scalability analysis of the proposed framework in terms of network lifetime, throughput, and packet drop, and we compare our framework to existing techniques, i.e., the Mobicast and iAMCTD protocols. We note that adapting d_th to node density across a range of network deployment scenarios results in fewer re-transmissions, good energy conservation, and enhanced throughput. Furthermore, results from extensive simulations show that our proposed framework outperforms existing approaches for real-time delay-intolerant applications.
Journal Article
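A hedged sketch of one way the adaptive depth threshold (d_th) could be driven by local node density, as the abstract suggests: denser neighborhoods get a larger d_th so fewer nodes qualify as forwarders and duplicate transmissions drop. The scaling rule, bounds, and neighbor-count inputs are hypothetical, not the paper's formulation.

```python
# Hypothetical adaptive depth-threshold rule for a depth-based UWSN forwarding scheme.
def adaptive_depth_threshold(neighbor_count, comm_radius,
                             d_min=5.0, d_max=None, ref_density=8):
    """Return d_th in metres: grow it with local density to suppress redundant forwarders."""
    if d_max is None:
        d_max = 0.8 * comm_radius              # never exceed most of the communication radius
    density_factor = min(neighbor_count / ref_density, 1.0)
    return d_min + density_factor * (d_max - d_min)

# Example: a node with 12 neighbours and a 100 m radius uses a stricter threshold
# than a sparse node with 3 neighbours.
print(adaptive_depth_threshold(12, 100.0))   # -> 80.0
print(adaptive_depth_threshold(3, 100.0))    # -> 33.125
```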
Application of tuned liquid column ball damper (TLCBD) for improved vibration control performance of multi-storey structure
by Usman, Muhammad; Tanveer, Muhammad; Farooq, Syed Hassan
in Acceleration; Amino acid sequence; Buildings
2019
The tuned liquid column ball damper (TLCBD) is a passive control device used for controlling building vibrations induced by wind or earthquakes; it is a modified form of the conventional tuned liquid column damper (TLCD). This paper studies the effect of the TLCBD on a four-storey steel frame structure and compares its performance with that of a conventional TLCD. Analytical models of both the TLCD and the TLCBD are presented, and their effectiveness is examined experimentally through a series of shaking table tests under different excitation levels, including harmonic loadings and seismic excitations. In the TLCBD, vibration is reduced significantly compared to the TLCD by using a steel ball as a moving orifice: the difference in diameter between the steel ball and the tube containing the liquid column acts as an orifice that moves with the ball. This moving-orifice mechanism enhances the vibration reduction effect by resisting the water motion in the TLCBD. Root mean square (RMS) and peak values of acceleration were calculated for each loading and each storey of the uncontrolled and controlled structures, and the corresponding time histories are compared for different loadings. Results indicate that the TLCBD is more effective under earthquake scenarios than under harmonic excitations and significantly reduces the vibration of the primary structure.
Journal Article
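The RMS and peak response metrics mentioned in the abstract are standard quantities; the short sketch below computes them for an acceleration time history and the percentage reduction between the uncontrolled and controlled cases. The sample arrays are placeholders.

```python
# RMS and peak acceleration of a storey response, plus the reduction achieved by the damper.
import numpy as np

def rms(signal):
    return float(np.sqrt(np.mean(np.square(signal))))

def peak(signal):
    return float(np.max(np.abs(signal)))

def reduction_percent(uncontrolled, controlled, metric=rms):
    return 100.0 * (metric(uncontrolled) - metric(controlled)) / metric(uncontrolled)

# a_unc, a_con = acceleration records (m/s^2) of the bare frame and the TLCBD-equipped frame
# print(reduction_percent(a_unc, a_con, rms), reduction_percent(a_unc, a_con, peak))
```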
Prompt-based fine-tuning with multilingual transformers for language-independent sentiment analysis
by Alkhodre, Ahmad B.; Khan, Imdad Ullah; Faizullah, Safiullah
in 639/705/1042; 639/705/117; Humanities and Social Sciences
2025
In the era of global digital communication, understanding user sentiment across multiple languages is a critical challenge with wide-ranging applications in opinion mining, customer feedback analysis, and social media monitoring. This study advances the field of language-independent sentiment analysis by leveraging prompt-based fine-tuning with state-of-the-art transformer models. The performance of classical machine learning approaches, hybrid deep learning architectures, and multilingual transformer models is evaluated across eight typologically diverse languages: Arabic, English, French, German, Hindi, Italian, Portuguese, and Spanish. Baseline models are established using traditional machine learning approaches such as Support Vector Machines (SVM) and Logistic Regression, with feature extraction methods like TF-IDF. A hybrid deep learning model is introduced, combining Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNNs) to capture local and sequential text patterns. Building on these, pre-trained multilingual transformer models, specifically BERT-base-multilingual and XLM-RoBERTa, are fine-tuned for language-independent sentiment classification tasks. The key contribution lies in the implementation of prompt-based fine-tuning strategies for language-independent sentiment analysis. Using (1) prefix prompts and (2) cloze-style prompts, a unified framework is established that employs templates designed in one language and evaluates their performance on data from the remaining (n-1) languages. Experimental results demonstrate that transformer models, particularly XLM-RoBERTa with prompt-based fine-tuning, outperform both classical and deep learning methods. With only 32 training examples per class, prefix prompts produce results comparable to standard fine-tuning, which typically uses 70-80% of the data for training. This highlights the potential of prompt-based learning for scalable, multilingual sentiment analysis in diverse language settings.
Journal Article
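A minimal sketch of zero-shot cloze-style prompting with XLM-RoBERTa, in the spirit of the cloze prompts described above: the review is wrapped in a template containing the mask token, and the masked-LM logits of two verbalizer words are compared. The template, the "good"/"bad" verbalizers, and the single-token assumption are illustrative choices, not the paper's prompt design, and no fine-tuning is shown.

```python
# Illustrative cloze-prompt scoring with XLM-RoBERTa (no fine-tuning shown).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
mlm = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")
mlm.eval()

def sentiment(text: str) -> str:
    prompt = f"{text} Overall, it was {tok.mask_token}."   # assumed template
    inputs = tok(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = mlm(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    # Assumes each verbalizer maps to a single sentencepiece token.
    pos_id = tok.encode(" good", add_special_tokens=False)[0]
    neg_id = tok.encode(" bad", add_special_tokens=False)[0]
    scores = logits[0, mask_pos[0]]
    return "positive" if scores[pos_id] > scores[neg_id] else "negative"

# print(sentiment("Der Film war wunderbar."))  # the same template can be applied across languages
```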