Catalogue Search | MBRL
Explore the vast range of titles available.
88 result(s) for "Javidan, Reza"
A bi-level mobility-aware deep reinforcement learning approach for fault-tolerant task offloading in vehicular edge-cloud computing
2026
Vehicular Edge Cloud Computing (VECC) has emerged as a promising paradigm to support delay-sensitive and computation-intensive applications in Intelligent Transportation Systems (ITS). However, dynamic traffic patterns, fluctuating network conditions, and uncertain resource availability often result in high task latency and service failures. To address these challenges, this paper proposes a bi-level Deep Q-Network (DQN)-based mobility-aware framework for fault-tolerant task offloading in VECC environments. Unlike existing approaches that offload tasks solely to the receiving Roadside Unit (RSU), the proposed framework introduces a level-1 DQN agent that performs high-level scheduling by selecting the most suitable RSU for task execution based on its workload, network latency, and failure rate. In parallel, level-2 DQN agents at each RSU handle low-level decisions, including task allocation and failure-recovery strategy selection, choosing among First Result, Recovery Block, or Retry mechanisms. To eliminate centralized dependency, the level-1 DQN is replicated across RSUs at the edge layer, ensuring high accessibility and resilience for distributed scheduling. Extensive simulations conducted using an integrated SimPy/SUMO environment demonstrate that, under heavy and imbalanced traffic, the proposed bi-level DQN improves the total reward by 7.7% to 37.8% and reduces the task failure rate by 29% to 63% relative to bi-level PPO, Greedy, and No-Forwarding baselines, based on averages over the final 40 training episodes.
Journal Article
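The level-1 scheduling idea described above can be sketched with a small tabular Q-learner standing in for the DQN (the state encoding, reward shape, and per-RSU statistics below are illustrative assumptions, not the paper's):

```python
import random

class Level1Scheduler:
    """Tabular stand-in for the paper's level-1 DQN: picks the RSU whose
    estimated value is highest for the current (coarse) state, and learns
    from the observed reward."""

    def __init__(self, rsus, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.rsus = rsus
        self.q = {}  # (state, rsu) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select_rsu(self, state, explore=True):
        # epsilon-greedy: occasionally explore, otherwise pick the best-known RSU
        if explore and random.random() < self.epsilon:
            return random.choice(self.rsus)
        return max(self.rsus, key=lambda r: self.q.get((state, r), 0.0))

    def update(self, state, rsu, reward, next_state):
        # standard one-step Q-learning update
        best_next = max(self.q.get((next_state, r), 0.0) for r in self.rsus)
        old = self.q.get((state, rsu), 0.0)
        self.q[(state, rsu)] = old + self.alpha * (reward + self.gamma * best_next - old)

def reward(load, latency_ms, failure_rate):
    # hypothetical reward: lower workload, latency, and failure rate are better
    return -(load + latency_ms / 10.0 + 100.0 * failure_rate)
```

With repeated feedback the scheduler converges on the RSU whose workload, latency, and failure rate yield the best reward, mirroring the high-level RSU selection the abstract describes.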
An Intelligent IoT Based Traffic Light Management System: Deep Reinforcement Learning
by
Zourbakhsh, Mojtaba
,
Damadam, Shima
,
Javidan, Reza
in
Adaptive control
,
adaptive traffic signal control
,
Artificial intelligence
2022
Traffic congestion is one of the pressing problems of modern societies, leading to undesirable consequences such as wasted time and a greater likelihood of accidents. Adaptive Traffic Signal Control (ATSC), a key component of Intelligent Transportation Systems (ITS), plays a central role in reducing congestion by adapting in real time to dynamic traffic conditions. Integrating these systems with Internet of Things (IoT) devices simplifies the deployment of traffic management systems. Recently, the combination of Artificial Intelligence (AI) and the IoT has attracted considerable research attention, since together they can process the large volumes of data needed to solve complex real-world traffic control problems. In this paper, we study the real-world scenario of Shiraz City, which currently uses no intelligent method and relies on fixed-time traffic signal scheduling. We applied IoT approaches and AI techniques to control traffic lights more efficiently, an essential part of ITS. Specifically, sensors such as surveillance cameras were used to capture real-time traffic information for the intelligent traffic signal control system. The system applies distributed Multi-Agent Reinforcement Learning (MARL), using traffic data from adjacent intersections together with local information. Using MARL, our goal was to improve the overall traffic flow at six signalized junctions in Shiraz, Iran. We conducted numerical simulations for two synthetic intersections with simulated data and for a real-world map of Shiraz City with real traffic data received from the municipal transportation and traffic organization, and compared the results with the traditional system running in Shiraz.
The simulation results show that our proposed approach outperforms the fixed-time traffic signal scheduling implemented in Shiraz in terms of average vehicle queue lengths and waiting times at intersections.
Journal Article
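The per-intersection learning the abstract describes can be illustrated with a single tabular Q-learning agent choosing which approach gets the green (the toy queue dynamics, binning, and reward below are assumptions for illustration, not the paper's SUMO-backed model):

```python
import random

def step(queues, green):
    # toy dynamics (an assumption): the green approach discharges 3
    # vehicles while each red approach gains 1
    return [max(0, q - 3) if i == green else q + 1 for i, q in enumerate(queues)]

class SignalAgent:
    """One intersection's Q-learner, a tabular stand-in for the paper's
    MARL agents. State: queue lengths binned to 0..5; action: index of
    the approach that receives the green phase."""

    def __init__(self, n_approaches, alpha=0.3, gamma=0.5, epsilon=0.3):
        self.n = n_approaches
        self.q = {}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def state(self, queues):
        return tuple(min(q, 5) for q in queues)

    def act(self, queues, explore=True):
        s = self.state(queues)
        if explore and random.random() < self.epsilon:
            return random.randrange(self.n)
        return max(range(self.n), key=lambda a: self.q.get((s, a), 0.0))

    def learn(self, queues, action, new_queues):
        s, s2 = self.state(queues), self.state(new_queues)
        reward = -sum(new_queues)  # shorter total queue = better
        best2 = max(self.q.get((s2, a), 0.0) for a in range(self.n))
        old = self.q.get((s, action), 0.0)
        self.q[(s, action)] = old + self.alpha * (reward + self.gamma * best2 - old)
```

In the full MARL setting each intersection would run one such agent, with neighbours' queue data folded into the state as the abstract describes.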
A predictive SD‐WAN traffic management method for IoT networks in multi‐datacenters using deep RNN
by
Nazemi Absardi, Zeinab
,
Javidan, Reza
in
cloud computing
,
computer network management
,
recurrent neural nets
2024
Deploying the Internet of Things (IoT) in integrated edge-cloud environments exposes IoT traffic to performance issues such as delay and bandwidth limitations. Recently, Software-Defined Wide Area Network (SD-WAN) has emerged as an architecture, originating from the Software-Defined Network (SDN) paradigm, that provides solutions for networking multiple data centers by allowing network administrators to manage and control the network layers. In this article, an SD-WAN-based policy for traffic management in IoT is introduced, in which Quality of Service (QoS) metrics such as end-to-end delay and bandwidth utilization are optimized. The proposed method implements the traffic management policy in the SD-WAN controller. When IoT traffic flows reach the SD-WAN infrastructure network, graph search algorithms find near-optimal paths that reduce the end-to-end delay of the flows. Because of deep learning's ability to process complex data, a deep RNN model predicts network state information, such as link latency and available bandwidth, before the traffic flows reach the infrastructure network. The proposed method consists of four key modules that predict routes for future time intervals: (a) an SD-WAN topology updater that checks link changes and availability, (b) a network state information collector, which gathers network state information to create a dataset, (c) a learning unit, which trains a deep RNN model on the created dataset, and (d) a route predictor, which feeds the trained model's predicted network state into a heuristic algorithm to determine routes. The simulation results showed that the deep RNN model achieves high accuracy and low Mean Absolute Error (MAE), and that the proposed method outperforms shortest-path algorithms in terms of latency, while distributing the available bandwidth almost fairly among all network links.
An SD-WAN-based predictive traffic management policy for IoT using deep learning is proposed that minimizes end-to-end delay and distributes the available bandwidth fairly among network links.
Journal Article
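The route-selection step described above can be sketched as a shortest-path search over links weighted by predicted latency; in the paper the deep RNN supplies those latencies, while here they are simply given (link names and values are hypothetical):

```python
import heapq

def predicted_shortest_path(links, src, dst):
    """Dijkstra over links weighted by *predicted* latency, a stand-in
    for the route predictor unit. `links` maps (u, v) edges to a
    predicted latency; edges are treated as bidirectional."""
    graph = {}
    for (u, v), lat in links.items():
        graph.setdefault(u, []).append((v, lat))
        graph.setdefault(v, []).append((u, lat))
    dist, prev = {src: 0.0}, {}
    heap, seen = [(0.0, src)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # walk predecessors back from the destination to recover the path
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]
```

Re-running the search whenever the RNN refreshes its latency predictions gives the proactive routing behaviour the abstract describes.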
A Robust Anonymous Remote User Authentication Protocol for IoT Services
by
Ghahramani, Meysam
,
Javidan, Reza
in
Authentication
,
Communication
,
Communications Engineering
2021
The Internet of Things (IoT), one of the most active research areas, supports resource-constrained devices that can communicate with each other at any time. Communication must be secure over public networks, and it is sometimes necessary to control remote devices using a secure protocol; designing a lightweight protocol is therefore a challenging open problem. Several lightweight protocols have been proposed so far; this paper analyzes one of the current lightweight authentication protocols for the IoT. Building on that protocol, a secure protocol is suggested that inherits the benefits of the previous one while remaining safe against the attacks presented here. The proposed protocol is shown to achieve mutual authentication using BAN logic, a broadly accepted formal method in security analysis. Moreover, the proposed protocol resists well-known attacks, and its security, communication overhead, and time complexity have been compared with similar protocols, showing its efficiency for IoT applications. In the proposed protocol, users and IoT nodes establish a shared secret key in 3.122 ms with 528 bytes of communication overhead.
Journal Article
Intrusion detection in the internet of things using convolutional neural networks: an explainable AI approach
by
Hosseini, Yasin
,
Ebrahimi, Fatemeh
,
Javidan, Reza
in
1D-CNN
,
Accuracy
,
Artificial neural networks
2025
Intrusion Detection Systems (IDSs) based on Machine Learning (ML) techniques have proven effective at securing Internet of Things (IoT) networks in recent years. As cyber threats continue to evolve, IDSs have become increasingly reliant on advanced ML and Deep Learning (DL) techniques to improve detection accuracy. However, the growing complexity of these models often makes it challenging for security analysts to interpret the reasoning behind specific alerts. While extensive research has been conducted on IDSs using ML and DL methods, the issue of interpretability remains largely unaddressed. One interpretable approach in machine learning is to use model-agnostic interpretation tools that can be applied to any supervised model. To address this issue, a new hybrid model built around a lightweight one-dimensional Convolutional Neural Network (1D-CNN) is proposed that can both explain its results and run on resource-constrained IoT devices. In the first phase, the SHapley Additive exPlanations (SHAP) technique is used for feature selection to detect the most important features. The model is then redesigned around this smaller feature set, reducing its computation and complexity and yielding a lighter deep network. After the model's predictions, model-agnostic methods are employed both globally (SHAP) and locally (SHAP, LIME) to interpret the results, analyze the factors that influenced each prediction, and clarify the reasons behind it. Experimental results on the TON-IoT dataset showed accuracy, precision, recall, and F1-score of 0.995, 0.9949, 0.9947, and 0.9947, respectively.
Therefore, besides accurately predicting attacks in the IoT domain with high precision and a lightweight model, the proposed method increases transparency, helping cybersecurity personnel better understand the IDS's judgments.
Journal Article
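The feature-selection phase described above reduces to ranking features by their mean absolute attribution and keeping the top k. A minimal sketch, assuming the per-sample attribution matrix has already been computed (by SHAP in the paper; here the scores and feature names are made up):

```python
def top_k_features(attributions, feature_names, k):
    """Rank features by mean absolute attribution and keep the top k.
    `attributions` is a samples x features matrix of per-feature
    contribution scores, as SHAP would produce."""
    n = len(feature_names)
    mean_abs = [
        sum(abs(row[j]) for row in attributions) / len(attributions)
        for j in range(n)
    ]
    # highest mean |attribution| first
    ranked = sorted(range(n), key=lambda j: mean_abs[j], reverse=True)
    return [feature_names[j] for j in ranked[:k]]
```

Retraining the 1D-CNN on only the surviving features is what yields the lighter network the abstract mentions.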
A new load balancing clustering method for the RPL protocol
2021
The Internet of Things (IoT) is a network of interconnected objects that are capable of collecting and exchanging data without human interaction. The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) is the common IoT routing protocol. One of its main drawbacks is the lack of load balancing support, which leads to an unfair distribution of traffic load and may decrease network efficiency. In this paper, we propose a new load-balancing method called C-Balance, based on cluster ranking, to increase network lifetime. In this method, two ranks are calculated for each node: the first identifies clusters and cluster heads, and the second selects the parents of each cluster head for forwarding packets toward the destination. These ranks are calculated from several metrics, including Expected Transmission Count (ETX), hop count, residual energy, and number of children. To investigate the performance of the proposed method, it was simulated with the Cooja simulator under both mobile and static node scenarios, using random topologies with 20, 40, and 60 nodes. The results are compared with the OF0 and MRHOF standard objective functions as well as the QU-RPL method. In both scenarios, the proposed method improves energy consumption, network lifetime, and load balancing compared to the other methods, although its end-to-end delay is higher than that of the standard objective functions and QU-RPL. The mean Packet Delivery Ratio (PDR) of the four methods also shows that the proposed method performs acceptably. On average, the proposed method achieves a 30–45% improvement in energy consumption, a 15–23% reduction in the average number of children, and a 22–48% improvement in network lifetime compared to the other methods.
Finally, there is about a 12% improvement in PDR compared to OF0.
Journal Article
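A rank combining the four metrics the abstract names might look like the following sketch; the weighted-sum form and the weights are assumptions for illustration, not C-Balance's actual formula:

```python
def node_rank(etx, hop_count, residual_energy, num_children,
              w=(0.3, 0.2, 0.3, 0.2)):
    """Illustrative rank in the spirit of C-Balance: combine ETX, hop
    count, residual energy (in [0, 1]), and child count. Weights `w`
    are hypothetical; lower rank = better candidate."""
    w_etx, w_hop, w_energy, w_child = w
    return (w_etx * etx + w_hop * hop_count
            + w_energy * (1.0 - residual_energy)  # penalise depleted nodes
            + w_child * num_children)             # penalise overloaded nodes

def pick_cluster_head(nodes):
    """Choose the node with the lowest rank as cluster head.
    `nodes` maps node id -> (etx, hop_count, residual_energy, children)."""
    return min(nodes, key=lambda n: node_rank(*nodes[n]))
```

Penalising child count is what gives the load-balancing effect: a node already serving many children ranks worse as a cluster head or parent.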
Adversarial android malware detection for mobile multimedia applications in IoT environments
by
Pooranian, Zahra
,
Javidan, Reza
,
Taheri, Rahim
in
Algorithms
,
Applications programs
,
Artificial neural networks
2021
In this paper, we propose two defense methods against adversarial attacks on a malware detection system for mobile multimedia applications in IoT environments: Robust-NN and a combination of a convolutional neural network and 1-nearest neighbors (C4N), both of which repair training data that has been poisoned by an adversarial attack. As a result, the trained machine learning model remains accurate, and if a malicious program enters through any IoT device, the model generates the necessary alerts. We describe the attack method used and the algorithms proposed to defend against it. To evaluate the suitability of the proposed defenses, experiments are performed on the Drebin, Contagio, and Genome datasets, which include both benign and malware Android apps. To confirm the effectiveness of the suggested defense algorithms, we compared their performance with two state-of-the-art defenses against adversarial samples, e2SAD and EAT. The experiments are performed on two feature types, API and Permission, from the mentioned datasets. The results confirm that classification accuracy drops to 40% after the attack in some cases (e.g., on the Drebin dataset with API feature sets). Applying the Robust-NN and C4N algorithms raises accuracy back to 94.94% and 96.03%, respectively, making them comparable with existing cutting-edge defenses. The adversarial attack also increased the FPR to 45.81%, which Robust-NN and C4N reduce to 4.84% and 4.15%, respectively. Consequently, the proposed methods are robust against adversarial attacks.
Journal Article
Efficient Detection of Underwater Natural Gas Pipeline Leak Based on Synthetic Aperture Sonar (SAS) Systems
by
Nadimi, Nahid
,
Javidan, Reza
,
Layeghi, Kamran
in
Acoustic properties
,
Acoustic scattering
,
Acoustics
2021
Natural gas is an important source of energy; underwater gas pipeline leaks, however, have a serious impact on the marine environment, so a reliable and preferably automated inspection method is essential. Due to the high impedance difference and strong scattering properties of gas bubbles in the marine environment, sonar systems are recognized as excellent tools for leak detection. In this paper, a new method for gas leak detection is proposed based on acoustic scattering modeling of gas bubbles using Synthetic Aperture Sonar (SAS) technology, in which a coherent combination of gas bubble and pipeline scattering fields at different angles along the synthetic aperture is used for leak detection. The proposed method can distinguish leak signals from background noise using coherent processing in SAS range migration. As an active sonar, SAS can collect accurate information at a high area coverage rate, independent of operating range and frequency, potentially reducing the time and cost of pipeline inspection. Simulation results comparing the proposed coherent synthetic aperture processing with a real aperture system show that the proposed method effectively distinguishes gas bubble signals at different ranges, even in a single pass, and improves pipeline leak detection operations.
Journal Article
A novel multicast traffic engineering technique in SDN using TLBO algorithm
by
Mohammadi, Reza
,
Keshtgari, Manijeh
,
Javidan, Reza
in
Algorithms
,
Communications traffic
,
Machine learning
2018
In recent years, multicast communication has been widely used by network providers to deliver multimedia content. Quality of Service (QoS) provisioning is one of the most important issues when transmitting multimedia content using multicast. Traditional IP multicasting techniques suffer from reliability and scalability problems and are limited in providing appropriate QoS for multimedia applications under a Service Level Agreement (SLA). The advent of Software-Defined Networking (SDN) now enables network providers to manage their networks dynamically and guarantee QoS parameters for customers based on SLAs: SDN can monitor network resources and allows desired multicasting policies to be configured dynamically. In this paper, we propose a novel multicasting technique to guarantee QoS for multimedia applications over SDN. To deliver multimedia content efficiently, our method models multicast routing as a Delay-Constrained Least-Cost (DCLC) problem. As the DCLC problem is NP-complete, we propose an approximation algorithm based on teaching–learning-based optimization (TLBO) to solve it. We evaluated the proposed method under different topologies. Experimental results confirm that it outperforms the IP multicast routing protocol and achieves a gain of about 25% in peak signal-to-noise ratio.
Journal Article
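The TLBO metaheuristic the abstract relies on is simple to sketch in its generic form; the paper applies it to delay-constrained least-cost multicast routing, whereas the toy below just minimises an arbitrary cost over a continuous box (population size, bounds, and iteration count are arbitrary choices):

```python
import random

def tlbo_minimize(cost, dim, pop_size=20, iters=100, lo=-10.0, hi=10.0, seed=1):
    """Generic teaching-learning-based optimisation (TLBO) sketch:
    a teacher phase pulls learners toward the best solution, then a
    learner phase lets pairs of learners teach each other."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    clamp = lambda x: max(lo, min(hi, x))
    for _ in range(iters):
        # teacher phase: move toward the best learner, away from the mean
        best = min(pop, key=cost)
        mean = [sum(p[d] for p in pop) / pop_size for d in range(dim)]
        for i, p in enumerate(pop):
            tf = rng.choice((1, 2))  # teaching factor
            cand = [clamp(p[d] + rng.random() * (best[d] - tf * mean[d]))
                    for d in range(dim)]
            if cost(cand) < cost(p):
                pop[i] = cand
        # learner phase: learn pairwise from a random classmate
        for i, p in enumerate(pop):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            q = pop[j]
            if cost(p) < cost(q):
                cand = [clamp(p[d] + rng.random() * (p[d] - q[d])) for d in range(dim)]
            else:
                cand = [clamp(p[d] + rng.random() * (q[d] - p[d])) for d in range(dim)]
            if cost(cand) < cost(p):
                pop[i] = cand
    return min(pop, key=cost)
```

A notable design property, and a likely reason for its choice here, is that TLBO has no algorithm-specific tuning parameters beyond population size and iteration count.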
A Fast Mobile‐Based Elderly Fall Detection Method Using Neural Networks in the Internet of Things
by
Zare, Samaneh
,
Rezaei, Mohamad Sadegh
,
Goodarzi, Babak
in
data analytics
,
health care
,
machine learning
2025
With the rise of the Internet of Things (IoT), mobile devices have become essential for real-time human activity monitoring, particularly in elderly care. This study presents a lightweight and privacy-preserving fall detection approach using a single smartphone accelerometer, eliminating the need for intrusive cameras, which are impractical in private spaces such as bathrooms and in culturally sensitive regions like Iran and other Muslim countries. We propose an enhanced multilayer perceptron model optimized for mobile deployment, trained on a novel dataset of fewer than 10,000 labelled actions collected via Android smartphones, capturing key daily activities including sitting on a chair, sitting on the floor, lying down, walking, and falling while performing these activities. The model achieved 99.76% accuracy on this dataset and demonstrated strong generalizability, with 98.28% and 98.49% accuracy on the UP-Fall and SisFall datasets, respectively. This approach offers a practical and culturally sensitive solution for real-time fall detection in IoT-based elderly monitoring systems.
Journal Article
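The single-accelerometer pipeline above implies a windowing and feature-extraction step before the MLP; a minimal sketch, where the window size, step, and feature set (mean, standard deviation, and range of the acceleration magnitude) are assumptions rather than the paper's choices:

```python
import math

def window_features(ax, ay, az, win=50, step=25):
    """Sliding-window features from one triaxial accelerometer stream,
    the kind of input vector a mobile MLP fall detector consumes.
    Returns one (mean, std, range) tuple of the acceleration magnitude
    per window."""
    feats = []
    for start in range(0, len(ax) - win + 1, step):
        mags = [math.sqrt(ax[i] ** 2 + ay[i] ** 2 + az[i] ** 2)
                for i in range(start, start + win)]
        mean = sum(mags) / win
        var = sum((m - mean) ** 2 for m in mags) / win
        feats.append((mean, math.sqrt(var), max(mags) - min(mags)))
    return feats
```

A fall typically shows up as a window with a large magnitude range (impact spike) followed by low variance (lying still), which is the pattern such features let a small classifier pick up.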