Search Results

41 results for "Byzantine attack"
Resilient Consensus Control for Multi-Agent Systems: A Comparative Survey
Due to the openness of communication networks and the complexity of system structures, multi-agent systems are vulnerable to malicious network attacks, which can severely destabilize these systems. This article surveys state-of-the-art results on network attacks against multi-agent systems. Recent advances on the three main types of network attacks, namely DoS attacks, spoofing attacks, and Byzantine attacks, are reviewed. Their attack mechanisms are introduced, and the corresponding attack models and resilient consensus control structures are discussed in detail in terms of theoretical innovation, critical limitations, and applicability. Moreover, some existing results along this line are presented in a tutorial-like fashion. Finally, challenges and open issues are indicated to guide future research on the resilient consensus of multi-agent systems under network attacks.
Byzantine detection for federated learning under highly non-IID data and majority corruptions
Federated Learning (FL) is a privacy-preserving paradigm that enables multiple clients to jointly learn a model while keeping their data local. However, the nature of FL leaves it vulnerable to Byzantine attacks, where malicious clients upload poisoned local models to the FL server, further corrupting the learnt global model. Most existing defenses against Byzantine attacks remain limited when the ratio of malicious clients exceeds 50% and the data among clients is not independent and identically distributed (non-IID). To address these issues, we propose a novel FL framework with Byzantine detection that stays robust even when the adversary controls the majority of the clients and the data among clients is highly non-IID. The main idea is that the FL server supervises the clients by injecting a shadow dataset into the local training process. Moreover, we design a Local Model Filter with an adaptive filtering policy that evaluates the local models' performance on the shadow dataset and filters out those local models compromised by the adversary. Finally, we evaluate our work on three real-world datasets, and the results show that it outperforms four existing Byzantine-robust defenses in defending against two state-of-the-art Byzantine attacks.
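
To make the shadow-dataset idea concrete, here is a minimal sketch, not the paper's implementation: each "local model" is reduced to the weight vector of a linear classifier, and the 0.8-of-best-score filter is an assumed stand-in for the Local Model Filter's adaptive policy.

    # Minimal sketch of shadow-dataset filtering before averaging, assuming each
    # "local model" is just the weight vector of a linear classifier; the paper's
    # models, threshold policy, and data are different.
    import numpy as np

    rng = np.random.default_rng(0)

    def accuracy(w, X, y):
        """Accuracy of the linear classifier sign(X @ w) on labels in {-1, +1}."""
        return float(np.mean(np.sign(X @ w) == y))

    # Server-side shadow dataset (the server is assumed to hold it).
    X_shadow = rng.normal(size=(200, 5))
    w_true = np.ones(5)
    y_shadow = np.sign(X_shadow @ w_true)

    # Simulated uploads: 3 honest models near w_true and 4 poisoned ones,
    # i.e. the adversary controls the majority of clients.
    uploads = [w_true + 0.1 * rng.normal(size=5) for _ in range(3)]
    uploads += [-w_true + 0.1 * rng.normal(size=5) for _ in range(4)]

    # Assumed adaptive filter: keep models scoring above a fraction of the best.
    scores = [accuracy(w, X_shadow, y_shadow) for w in uploads]
    threshold = 0.8 * max(scores)
    kept = [w for w, s in zip(uploads, scores) if s >= threshold]

    global_model = np.mean(kept, axis=0)  # aggregate only the surviving models
    print(f"kept {len(kept)}/{len(uploads)} uploads,",
          f"shadow accuracy {accuracy(global_model, X_shadow, y_shadow):.2f}")
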
Robust Learning-Based Detection with Cost Control and Byzantine Mitigation
To address the state estimation and detection problem in the presence of noisy sensor observations, probing costs, and communication noise, in this paper we propose a soft actor-critic (SAC) deep reinforcement learning (DRL) framework for dynamically scheduling sensors and sequentially probing the state of a stochastic system. Moreover, considering Byzantine attacks, we design a generative adversarial network (GAN)-based framework to identify the Byzantine sensors. The GAN-based Byzantine detector and the SAC-DRL-based agent are developed to operate in coordination so that the state of the system is detected reliably and quickly while incurring a small sensing cost. To evaluate the proposed framework, we measure performance in terms of detection accuracy, stopping time, and the total probing cost needed for detection. Via simulation results, we demonstrate that soft actor-critic algorithms are flexible and effective in action selection in imperfectly known environments owing to the maximum entropy strategy, and that they achieve stable performance in challenging test cases (e.g., involving jamming attacks, imperfectly known noise power levels, and high sensing costs). We also compare the proposed soft actor-critic algorithm with conventional actor-critic algorithms as well as fixed scheduling strategies. Finally, we analyze the impact of Byzantine attacks and identify the reliability and accuracy improvements achieved by the GAN-based approach when combined with the SAC-DRL-based decision-making agent.
Secure and efficient cooperative spectrum sensing under byzantine attack and imperfect reporting channel
Cooperative spectrum sensing (CSS) has been considered an essential paradigm of cognitive radio (CR) for providing ever-growing wireless applications with available spectrum bands. However, several problems degrade the security and efficiency of CSS in cognitive radio networks (CRNs): (i) the open nature of the CRN gives a Byzantine attacker the opportunity to falsify sensing information, (ii) imperfect reporting channels may distort the sensing information during the decision-making process, and (iii) transmitting a large amount of sensing information incurs communication overhead. To address these problems, this paper proposes a secure and efficient CSS scheme called generalized voting-sequential and differential reporting (GV-SDR). To this end, we first design a data transmission monitoring process in the periodic spectrum sensing frame structure to defend against Byzantine attacks. Furthermore, we exploit the on/off signaling characteristic and propose a single signaling transmission method to mitigate the negative impact of imperfect reporting channels. Moreover, based on the generalized voting (GV) rule, sensing information fusion is carried out through the sequential and differential reporting (SDR) mechanism to improve spectrum sensing efficiency. Simulation results show that, in the presence of Byzantine attacks and imperfect reporting channels, the proposed GV-SDR not only significantly reduces the sensing information required by the fusion center (FC) but also provides better detection performance.
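
For background on voting-based fusion, the sketch below implements a plain K-out-of-N voting rule at the fusion center; it is not the GV-SDR scheme, and the sensor counts, detection probabilities, and always-flipping Byzantine model are illustrative assumptions.

    # Minimal sketch of a K-out-of-N voting fusion rule at the fusion center (FC).
    # This is a plain voting rule, not GV-SDR; the numbers below are illustrative.
    import random

    random.seed(1)

    def local_decision(channel_busy, p_detect=0.9, p_false_alarm=0.1):
        """One honest sensor's binary report about the primary user's channel."""
        if channel_busy:
            return 1 if random.random() < p_detect else 0
        return 1 if random.random() < p_false_alarm else 0

    def fuse(reports, k):
        """Declare the channel busy if at least k reports say so."""
        return 1 if sum(reports) >= k else 0

    channel_busy = True
    n_honest, n_byzantine = 7, 3

    reports = [local_decision(channel_busy) for _ in range(n_honest)]
    # Assumed attack model: Byzantine sensors always report the opposite state.
    reports += [0 if channel_busy else 1] * n_byzantine

    decision = fuse(reports, k=(n_honest + n_byzantine) // 2 + 1)  # majority rule
    print("fused decision:", "busy" if decision else "idle")
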
Development of secured data transmission using machine learning-based discrete-time partially observed Markov model and energy optimization in cognitive radio networks
Cognitive radio (CR) is a primary and promising technology for assigning spectrum that is not utilized by the licensed (primary) users to unlicensed (secondary) users. The cognitive radio network frames a reactive security policy to enhance energy monitoring while the primary channels of the CR network are in use. The CR network has a battery-based energy capacity and carries out data communication via time-slotted channels. Data communication with moderate energy consumption during transmission is a great challenge in CR network security monitoring, since intruders may attack the network to reduce the energy level of the primary user (PU) or secondary user (SU). The framework secures communication using a discrete-time partially observed Markov decision process. This work proposes a secure data communication scheme using private-key encryption of the sensing results, and the Eclat algorithm is applied for energy detection and Byzantine attack prediction. Data communication in the CR network is secured using the AES algorithm, and simulations show energy-efficient operation together with improved security.
Federated learning algorithm based on matrix mapping for data privacy over edge computing
Purpose: This paper aims to provide security and privacy in the presence of Byzantine clients and different types of attacks.
Design/methodology/approach: The authors use a federated learning algorithm based on matrix mapping for data privacy over edge computing.
Findings: By using the Softmax-layer probability distribution of the model, Byzantine tolerance can be increased from 40% to 45% under the blocking-convergence attack, and the edge backdoor attack can be stopped.
Originality/value: In the reported tests, the aggregation method can defend the model against at least 30% Byzantine clients.
DWAMA: Dynamic weight-adjusted mahalanobis defense algorithm for mitigating poisoning attacks in federated learning
Federated learning is a distributed machine learning approach that enables participants to train models without sharing raw data, thereby protecting data privacy while still facilitating collective information extraction. However, the risk of malicious attacks during client communication in federated learning remains a concern. Model poisoning attacks, in which attackers hijack and modify uploaded models, can severely degrade the accuracy of the global model. To address this issue, we propose DWAMA, a federated learning-based method that incorporates outlier detection and a robust aggregation strategy. We use the robust Mahalanobis distance as the abnormality metric, capturing complex correlations between data features. We also dynamically adjust the aggregation weights of malicious clients to ensure a more stable model updating process, and we adaptively adjust the malicious-detection threshold to handle non-IID scenarios. Through a series of experiments and comparisons, we verify the method's effectiveness and performance advantages, offering a more robust defense against model poisoning attacks in federated learning scenarios.
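
A minimal sketch of robust-Mahalanobis screening is shown below. It uses scikit-learn's Minimum Covariance Determinant estimator for the robust location and scatter, and a fixed 3x-median cut-off in place of DWAMA's dynamic weights and adaptive threshold, so it should be read as an illustration rather than the paper's algorithm.

    # Minimal sketch: screen client updates by robust Mahalanobis distance,
    # then aggregate only the updates that pass. All numbers are illustrative.
    import numpy as np
    from sklearn.covariance import MinCovDet

    rng = np.random.default_rng(0)
    d = 4
    honest = rng.normal(loc=1.0, scale=0.1, size=(8, d))     # well-behaved updates
    poisoned = rng.normal(loc=-3.0, scale=0.1, size=(2, d))  # hijacked updates
    updates = np.vstack([honest, poisoned])

    # Robust location and scatter estimated from the bulk of the updates.
    mcd = MinCovDet(random_state=0).fit(updates)
    dist2 = mcd.mahalanobis(updates)      # squared robust Mahalanobis distances

    # Assumed rule: drop updates far beyond the typical distance.
    keep = dist2 <= 3.0 * np.median(dist2)
    weights = keep / keep.sum()
    aggregated = weights @ updates

    print("squared distances:", np.round(dist2, 1))
    print("aggregated update:", np.round(aggregated, 2))
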
Adaptive Update Distribution Estimation under Probability Byzantine Attack
The secure and normal operation of distributed networks is crucial for accurate parameter estimation. However, distributed networks are frequently susceptible to Byzantine attacks. Considering real-life scenarios, this paper investigates a probability Byzantine (PB) attack, using a Bernoulli distribution to model the attack probability. Historically, additional detection mechanisms have been used to mitigate such attacks, leading to increased energy consumption and burdens on distributed nodes and consequently diminishing operational efficiency. Differing from these approaches, an adaptive-update distributed estimation algorithm is proposed to mitigate the impact of PB attacks. In the proposed algorithm, a penalty strategy is first incorporated during data updates to weaken the influence of the attack. Subsequently, an adaptive fusion weight is employed during data fusion to merge the estimations. Additionally, the reason why this penalty term weakens the attack is analyzed, and the performance of the proposed algorithm is validated through simulation experiments.
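
The penalty-plus-adaptive-weight idea can be illustrated with a toy scalar example; the deviation-based weights and the damped update below are simple assumed choices, not the update and fusion rules derived in the paper.

    # Toy sketch of adaptive-weight fusion for distributed estimation under a
    # probabilistic (Bernoulli) Byzantine attack; one honest node fuses its
    # neighbors' reports and damps its own update.
    import numpy as np

    rng = np.random.default_rng(0)
    theta = 2.0                       # unknown scalar parameter
    n_neighbors, attack_prob = 6, 0.4
    byzantine = {4, 5}                # indices of compromised neighbors

    own_est = theta + 0.1 * rng.normal()
    for step in range(50):
        reports = []
        for j in range(n_neighbors):
            value = theta + 0.1 * rng.normal()
            if j in byzantine and rng.random() < attack_prob:
                value += 5.0          # Bernoulli-triggered falsification
            reports.append(value)
        reports = np.array(reports)

        # Adaptive fusion weights: shrink the influence of reports that deviate
        # strongly from the node's current estimate (a simple penalty choice).
        weights = 1.0 / (1.0 + (reports - own_est) ** 2)
        weights /= weights.sum()
        fused = float(weights @ reports)

        own_est += 0.2 * (fused - own_est)   # damped update toward the fused value

    print(f"true value {theta:.2f}, estimate after 50 steps {own_est:.2f}")
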
Byzantine-resilient decentralized network learning
Decentralized federated learning built on the assumption that all nodes are normal has drawn attention in modern statistical learning. However, due to data corruption, device malfunctions, malicious attacks, and other unexpected behaviors, not all nodes obey the estimation process, and existing decentralized federated learning methods may fail. An unknown number of abnormal nodes, called Byzantine nodes, arbitrarily deviate from their intended behavior, send wrong messages to their neighbors, and affect all honest nodes across the entire network by passing polluted messages. In this paper, we focus on decentralized federated learning in the presence of Byzantine attacks and propose a unified Byzantine-resilient framework based on network gradient descent and several robust aggregation rules. Theoretically, the convergence of the proposed algorithm is guaranteed under some weakly balanced conditions on the network structure. The finite-sample performance is studied through simulations under different network topologies and various Byzantine attacks. An application to the Communities and Crime data is also presented.
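
A minimal sketch of network gradient descent with one common robust aggregation rule (the coordinate-wise median) is given below; the fully connected topology, quadratic loss, and random-noise attack are toy assumptions, and the paper's framework covers more general networks and several aggregation rules.

    # Toy sketch: each node aggregates broadcast parameters with a coordinate-wise
    # median, then takes a gradient step on its local quadratic loss.
    import numpy as np

    rng = np.random.default_rng(0)
    n_nodes, dim, byzantine = 6, 3, {5}
    theta_true = np.array([1.0, -2.0, 0.5])

    # Each node holds noisy local data and a local parameter estimate.
    data = [theta_true + 0.2 * rng.normal(size=(50, dim)) for _ in range(n_nodes)]
    params = [rng.normal(size=dim) for _ in range(n_nodes)]

    for step in range(200):
        broadcast = []
        for i in range(n_nodes):
            if i in byzantine:
                broadcast.append(rng.normal(scale=10.0, size=dim))  # arbitrary junk
            else:
                broadcast.append(params[i])
        new_params = []
        for i in range(n_nodes):
            agg = np.median(np.stack(broadcast), axis=0)  # robust aggregation
            grad = agg - data[i].mean(axis=0)              # grad of 0.5*||x - mean||^2
            new_params.append(agg - 0.1 * grad)
        params = new_params

    honest = [params[i] for i in range(n_nodes) if i not in byzantine]
    print("true:", theta_true, "honest estimate:", np.round(np.mean(honest, axis=0), 2))
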
Privacy-Preserving Byzantine-Tolerant Federated Learning Scheme in Vehicular Networks
With the rapid development of vehicular network technology, data sharing and collaborative training among vehicles have become key to enhancing the efficiency of intelligent transportation systems. However, data heterogeneity and potential Byzantine attacks push the model to update in different directions during the iterative process, so the boundary between benign and malicious gradients shifts continuously. To address these issues, this paper proposes a privacy-preserving Byzantine-tolerant federated learning scheme. Specifically, we design a gradient detection method based on the median absolute deviation (MAD), which computes the MAD in each round to set a gradient anomaly detection threshold, thereby achieving precise identification and dynamic filtering of malicious gradients. Additionally, to protect vehicle privacy, we obfuscate the uploaded parameters to prevent leakage during transmission. Finally, during the aggregation phase, malicious gradients are eliminated and only benign gradients are selected to participate in the global model update, which improves model accuracy. Experimental results on three datasets demonstrate that the proposed scheme effectively mitigates the impact of non-independent and identically distributed (non-IID) heterogeneity and Byzantine behaviors while maintaining low computational cost.
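
The per-round MAD thresholding can be sketched as follows; the anomaly score (distance to the coordinate-wise median gradient) and the 3-MAD cut-off are common choices assumed here for illustration, not necessarily the paper's exact rule, and the privacy obfuscation step is omitted.

    # Minimal sketch of per-round MAD screening of uploaded gradients.
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 10
    honest = rng.normal(loc=0.5, scale=0.1, size=(9, dim))
    malicious = rng.normal(loc=-4.0, scale=0.1, size=(3, dim))
    grads = np.vstack([honest, malicious])

    center = np.median(grads, axis=0)                 # reference gradient
    scores = np.linalg.norm(grads - center, axis=1)   # per-client anomaly score

    med = np.median(scores)
    mad = np.median(np.abs(scores - med))             # median absolute deviation
    benign = scores <= med + 3.0 * mad                # per-round threshold

    global_update = grads[benign].mean(axis=0)        # aggregate benign gradients only
    print(f"kept {benign.sum()}/{len(grads)} clients")
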