Catalogue Search | MBRL
Explore the vast range of titles available.
219 result(s) for "Attack inference"
Threats, attacks and defenses to federated learning: issues, taxonomy and perspectives
by Xu, Xiangrui; Wang, Wei; Liu, Pengrui
in Computer Science; Empirical analysis; Evasion attacks
2022
Empirical attacks on Federated Learning (FL) systems indicate that FL is fraught with numerous attack surfaces throughout its execution. These attacks can not only cause models to fail at specific tasks but can also be used to infer private information. While previous surveys have identified the risks, listed the attack methods available in the literature, or provided a basic taxonomy to classify them, they mainly focused on the risks in the training phase of FL. In this work, we survey the threats, attacks, and defenses to FL throughout its whole process in three phases: the Data and Behavior Auditing Phase, the Training Phase, and the Predicting Phase. We further provide a comprehensive analysis of these threats, attacks, and defenses, and summarize their issues and taxonomy. Our work considers the security and privacy of FL from the viewpoint of its execution process. We highlight that establishing trusted FL requires adequate measures to mitigate security and privacy threats at each phase. Finally, we discuss the limitations of current attack and defense approaches and provide an outlook on promising future research directions in FL.
Journal Article
Re-ID-leak: Membership Inference Attacks Against Person Re-identification
2024
Person re-identification (Re-ID) has rapidly advanced due to its widespread real-world applications. It poses a significant risk of exposing private data from its training dataset. This paper aims to quantify this risk by conducting a membership inference (MI) attack. Most existing MI attack methods focus on classification models, while Re-ID follows a distinct paradigm for training and inference. Re-ID is a fine-grained recognition task that involves complex feature embedding, and the model outputs commonly used by existing MI algorithms, such as logits and losses, are inaccessible during inference. Since Re-ID models the relative relationship between image pairs rather than individual semantics, we conduct a formal and empirical analysis that demonstrates that the distribution shift of the inter-sample similarity between the training and test sets is a crucial factor for membership inference and exists in most Re-ID datasets and models. Thus, we propose a novel MI attack method based on the distribution of inter-sample similarity, which involves sampling a set of anchor images to represent the similarity distribution that is conditioned on a target image. Next, we consider two attack scenarios based on information that the attacker has. In the “one-to-one” scenario, where the attacker has access to the target Re-ID model and dataset, we propose an anchor selector module to select anchors accurately representing the similarity distribution. Conversely, in the “one-to-any” scenario, which resembles real-world applications where the attacker has no access to the target Re-ID model and dataset, leading to the domain-shift problem, we propose two alignment strategies. Moreover, we introduce the patch-attention module as a replacement for the anchor selector. Experimental evaluations demonstrate the effectiveness of our proposed approaches in Re-ID tasks in both attack scenarios.
Journal Article
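The inter-sample-similarity idea above can be illustrated with a minimal sketch. All names here are hypothetical, and the paper's anchor selector, patch-attention module, and alignment strategies are omitted; this only shows how a similarity distribution over anchors could drive a threshold decision:

```python
import math

def cosine(u, v):
    # cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity_feature(target_emb, anchor_embs):
    # represent the target by statistics of its similarity
    # distribution over a set of sampled anchor embeddings
    sims = [cosine(target_emb, a) for a in anchor_embs]
    mean = sum(sims) / len(sims)
    var = sum((s - mean) ** 2 for s in sims) / len(sims)
    return mean, var

def infer_membership(target_emb, anchor_embs, tau=0.5):
    # members tend to sit closer to the training manifold, so their
    # mean anchor similarity is shifted relative to non-members
    mean, _ = similarity_feature(target_emb, anchor_embs)
    return mean >= tau
```

A real attack would learn the threshold (or a classifier over the full distribution) from shadow data rather than fixing `tau`.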
Model architecture level privacy leakage in neural networks
by Zhang, Xiaoxue; Pan, Zijie; Chen, Kongyang
in Architecture; Computer Science; Data encryption
2024
Privacy leakage is one of the most critical issues in machine learning and has attracted growing interest for tasks such as demonstrating potential threats in model attacks and creating model defenses. In recent years, numerous studies have revealed various privacy leakage risks (e.g., data reconstruction attack, membership inference attack, backdoor attack, and adversarial attack) and several targeted defense approaches (e.g., data denoising, differential privacy, and data encryption). However, existing solutions generally focus on the model parameter level to disclose (or repair) privacy threats during model training and/or model inference, and are rarely applied at the model architecture level. Thus, in this paper, we aim to exploit the potential privacy leakage at the model architecture level through a pioneering study on neural architecture search (NAS) paradigms, which serve as a powerful tool to automate neural network design. By investigating the NAS procedure, we discover two attack threats at the model architecture level, called the architectural dataset reconstruction attack and the architectural membership inference attack. Our theoretical analysis and experimental evaluation reveal that an attacker may leverage the output architecture of an ongoing NAS paradigm to reconstruct its original training set, or accurately infer the memberships of its training set simply from the model architecture. In this work, we also propose several defense approaches against these model architecture attacks. We hope our work can highlight the need for greater attention to privacy protection at the model architecture level (e.g., in NAS paradigms).
Journal Article
Assessment of data augmentation, dropout with L2 Regularization and differential privacy against membership inference attacks
by Chaieb, Faten; Ben Hamida, Sana; Mrabet, Hichem
in Accuracy; Computer Communication Networks; Computer Science
2024
Machine learning (ML) has revolutionized various industries, but concerns about privacy and security have emerged as significant challenges. Membership inference attacks (MIAs) pose a serious threat by attempting to determine whether a specific data record was used to train an ML model. In this study, we evaluate three defense strategies against MIAs: data augmentation (DA), dropout with L2 regularization, and differential privacy (DP). Through experiments, we assess the effectiveness of these techniques in mitigating the success of MIAs while maintaining acceptable model accuracy. Our findings demonstrate that DA not only improves model accuracy but also enhances privacy protection. The dropout and L2 regularization approach effectively reduces the impact of MIAs without compromising accuracy. However, adopting DP introduces a trade-off, as it limits MIA influence but affects model accuracy. Our DA defense strategy, for instance, shows promising results, with privacy improvements of 12.97%, 15.82%, and 10.28% for the MNIST, CIFAR-10, and CIFAR-100 datasets, respectively. These insights contribute to the growing field of privacy protection in ML and highlight the significance of safeguarding sensitive data. Further research is needed to advance privacy-preserving techniques and address the evolving landscape of ML security.
Journal Article
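The dropout-with-L2 defense above can be sketched generically. This is not the paper's exact setup; `lam` and `p` are illustrative values, and the point is only that both mechanisms limit the memorization that MIAs exploit:

```python
import random

def l2_penalty(weights, lam=0.01):
    # L2 regularization discourages the large weights that memorize
    # individual training records, narrowing the member/non-member gap
    return lam * sum(w * w for w in weights)

def regularized_loss(base_loss, weights, lam=0.01):
    # training objective = task loss + weight-decay penalty
    return base_loss + l2_penalty(weights, lam)

def dropout(activations, p=0.5, rng=None):
    # inverted dropout: zero each unit with probability p and rescale
    # survivors so the expected activation is unchanged at test time
    rng = rng or random.Random(0)
    return [0.0 if rng.random() < p else a / (1 - p) for a in activations]
```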
HAMIATCM: high-availability membership inference attack against text classification models under little knowledge
2024
Membership inference attacks open up a newly emerging and rapidly growing line of research on stealing user privacy from text classification models, core problems of which are shadow model construction and member-distribution optimization when member data are inadequate. Textual semantics are easily disrupted by simple text augmentation techniques, which weakens the correlation between labels and texts and reduces the precision of member classification. Shadow models trained exclusively with cross-entropy loss show little differentiation in embeddings among classes, which deviates from the distribution of target models, impacting the embeddings of members and reducing the F1 score. We propose a competitive and High-Availability Membership Inference Attack against Text Classification Models (HAMIATCM). At the data level, by selecting highly significant words and applying text augmentation techniques such as replacement or deletion, we expand the attacker's knowledge while preserving vulnerable members to enhance the sensitive member distribution. At the model level, we construct a contrastive loss and an adaptive boundary loss to amplify the distribution differences among classes and dynamically optimize member boundaries, enhancing the text representation capability of the shadow model and the classification performance of the attack classifier. Experimental results demonstrate that HAMIATCM achieves a new state-of-the-art, significantly reduces the false positive rate, and strengthens the capability of fitting the output distribution of the target model with less knowledge of members.
Journal Article
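The contrastive-loss component mentioned above can be sketched in its standard pairwise form (a generic formulation for illustration; the paper's adaptive boundary loss is not reproduced here):

```python
def contrastive_loss(distance, same_class, margin=1.0):
    # pull same-class embedding pairs together; push different-class
    # pairs apart until they are at least `margin` away
    if same_class:
        return 0.5 * distance ** 2
    return 0.5 * max(0.0, margin - distance) ** 2
```

Training a shadow model with such a term spreads class embeddings apart, which is what sharpens the member/non-member signal the attack classifier consumes.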
Is Homomorphic Encryption-Based Deep Learning Secure Enough?
by Choi, Seok-Hwan; Shin, Jinmyeong; Choi, Yoon-Ho
in adversarial examples; Deep learning; homomorphic encryption
2021
As the amount of data collected and analyzed by machine learning technology increases, data that can identify individuals is also being collected in large quantities. In particular, as deep learning technology, which requires large amounts of analysis data, is deployed in various service fields, the possibility of exposing users' sensitive information increases, and the user privacy problem is growing more than ever. As a solution to this data privacy problem, homomorphic encryption, an encryption technology that supports arithmetic operations on encrypted data, has in recent years been applied to various fields including finance and health care. If so, is it possible to use deep learning services while preserving users' data privacy by operating on homomorphically encrypted data? In this paper, we propose, for the first time, three attack methods that infringe users' data privacy by exploiting possible security vulnerabilities in the use of homomorphic encryption-based deep learning services. To specify and verify the feasibility of exploiting these vulnerabilities, we propose three attacks: (1) an adversarial attack exploiting the communication link between the client and a trusted party; (2) a reconstruction attack using paired input and output data; and (3) a membership inference attack by a malicious insider. In addition, we describe real-world exploit scenarios for financial and medical services. The experimental evaluation shows that the adversarial example and reconstruction attacks are a practical threat to homomorphic encryption-based deep learning models: the adversarial attack decreased average classification accuracy from 0.927 to 0.043, and the reconstruction attack achieved an average reclassification accuracy of 0.888.
Journal Article
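The "arithmetic on encrypted data" property the abstract relies on can be shown with a deliberately toy, insecure scheme; real deployments use schemes such as Paillier (additive) or CKKS (approximate arithmetic):

```python
# toy additively homomorphic "cipher": Enc(m) = (m + k) mod N
# (illustration only, NOT secure; the key k plays the role of
# randomness that a real scheme would derive cryptographically)
N = 2 ** 16

def enc(m, k):
    return (m + k) % N

def dec(c, k):
    return (c - k) % N

# adding two ciphertexts adds the plaintexts (the keys add too),
# so a server can compute on data it cannot read
c_sum = (enc(5, 7) + enc(8, 11)) % N
```

The attacks in the paper do not break this property itself; they exploit the surrounding protocol (the client link, paired plaintext/ciphertext observations, and insider access).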
Network attack knowledge inference with graph convolutional networks and convolutional 2D KG embeddings
2025
To address the challenge of analyzing large-scale penetration attacks under complex multi-relational and multi-hop paths, this paper proposes a graph convolutional neural network-based attack knowledge inference method, KGConvE, aimed at intelligent reasoning and effective association mining of implicit network attack knowledge. The core idea of this method is to obtain knowledge embeddings related to CVE, CWE, and CAPEC, which are then used to construct attack context feature data and a relation matrix. Subsequently, we employ a graph convolutional neural network model to classify the attacks, and use the KGConvE model to perform attack inference within the same attack category. Through improvements to the graph convolutional neural network model, we significantly enhance the accuracy and generalization capability of the attack classification task. Furthermore, we are the first to apply the KGConvE model to perform attack inference tasks. Experimental results show that this method can infer implicit relationships between CVE-CVE, CVE-CWE, and CVE-CAPEC, achieving a significant performance improvement in network attack knowledge inference tasks, with a mean reciprocal rank (MRR) of 0.68 and Hits@10 of 0.58, outperforming baseline methods.
Journal Article
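Knowledge-graph inference of this kind scores candidate triples such as (CVE, relates-to, CWE). As a stand-in for illustration (a minimal TransE-style distance score, not the paper's KGConvE model):

```python
import math

def transe_score(head, rel, tail):
    # plausibility of triple (head, rel, tail): embeddings are trained
    # so that head + rel is close to tail, so smaller distance means a
    # more plausible link; we negate so higher score = more plausible
    return -math.sqrt(sum((h + r - t) ** 2
                          for h, r, t in zip(head, rel, tail)))
```

Ranking all candidate tails by this score per query is what metrics like MRR and Hits@10 (reported above as 0.68 and 0.58) evaluate.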
Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks
2023
Machine learning deployment on edge devices has faced challenges such as computational costs and privacy issues. A membership inference attack (MIA) is an attack in which the adversary aims to infer whether a data sample belongs to the training set; in other words, user data privacy might be compromised by an MIA against a well-trained model. Therefore, it is vital to have defense mechanisms in place to protect training data, especially in privacy-sensitive applications such as healthcare. This paper examines the implications of quantization for privacy leakage and proposes a novel quantization method that enhances the resistance of a neural network against MIA. Recent studies have shown that model quantization leads to resistance against membership inference attacks, but existing quantization approaches primarily prioritize performance and energy efficiency. Unlike conventional quantization methods, whose primary objectives are compression or increased speed, our proposed quantization framework's main objective is boosting resistance against membership inference attacks. We evaluate the effectiveness of our methods on various popular benchmark datasets and model architectures. All popular evaluation metrics, including precision, recall, and F1-score, show improvement when compared to the full-bitwidth model. For example, for ResNet on CIFAR-10, our experimental results show that our algorithm can reduce the attack accuracy of MIA by 14%, the true positive rate by 37%, and the F1-score of members by 39% compared to the full-bitwidth network. Here, a reduction in the true positive rate means the attacker will not be able to identify members of the training dataset, which is the attacker's main goal in an MIA.
Journal Article
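A symmetric uniform weight quantizer gives a minimal sketch of the kind of bit-width reduction involved (the paper's privacy-oriented objective and framework are not modeled here):

```python
def quantize(weights, bits=4):
    # map each weight onto a grid of 2**(bits-1) - 1 signed levels;
    # coarser grids discard the fine-grained detail in which a model
    # memorizes individual training samples
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) * scale for w in weights]
```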
Defense against membership inference attack in graph neural networks through graph perturbation
2023
Graph neural networks have demonstrated remarkable performance in learning node or graph representations for various graph-related tasks. However, learning with graph data or its embedded representations may induce privacy issues when the node representations contain sensitive or private user information. Although many machine learning models or techniques have been proposed for privacy preservation of traditional non-graph structured data, there is limited work to address graph privacy concerns. In this paper, we investigate the privacy problem of embedding representations of nodes, in which an adversary can infer the user’s privacy by designing an inference attack algorithm. To address this problem, we develop a defense algorithm against white-box membership inference attacks, based on perturbation injection on the graph. In particular, we employ a graph reconstruction model and inject a certain size of noise into the intermediate output of the model, i.e., the latent representations of the nodes. The experimental results obtained on real-world datasets, along with reasonable usability and privacy metrics, demonstrate that our proposed approach can effectively resist membership inference attacks. Meanwhile, based on our method, the trade-off between usability and privacy brought by defense measures can be observed intuitively, which provides a reference for subsequent research in the field of graph privacy protection.
Journal Article
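The perturbation-injection step can be sketched as adding zero-mean Gaussian noise to node embeddings. The `sigma` value is illustrative, and in the paper the noise is injected into the latent output of a graph reconstruction model rather than raw embeddings:

```python
import random

def perturb_embeddings(embeddings, sigma=0.1, seed=0):
    # add zero-mean Gaussian noise to every latent coordinate of every
    # node, blurring the member-specific detail a white-box MIA exploits
    rng = random.Random(seed)
    return [[x + rng.gauss(0.0, sigma) for x in node] for node in embeddings]
```

Larger `sigma` buys more privacy at the cost of downstream task accuracy, which is exactly the usability/privacy trade-off the abstract describes.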
Label-Only Membership Inference Attack Based on Model Explanation
2024
It is well known that machine learning models (e.g., image recognition) can unintentionally leak information about the training set. Conventional membership inference relies on posterior vectors, and the task becomes extremely difficult when the posterior is masked. Moreover, current label-only membership inference attacks require a large number of queries during the generation of adversarial samples, so incorrect inference generates a large number of invalid queries. Therefore, we introduce a label-only membership inference attack based on model explanations. It transforms a label-only attack into a traditional membership inference attack by observing neighborhood consistency, and performs fine-grained membership inference for vulnerable samples. We use feature attribution to simplify the high-dimensional neighborhood sampling process, quickly identify decision boundaries, and recover posterior vectors. The method also compares the different privacy risks faced by different samples by identifying vulnerable samples. It is validated on the CIFAR-10, CIFAR-100, and MNIST datasets. The results show that membership attributes can be identified even with a simple sampling method. Furthermore, vulnerable samples expose the model to greater privacy risks.
Journal Article
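The neighborhood-consistency idea above can be sketched generically (hypothetical names; the paper's feature-attribution sampling and posterior recovery are omitted):

```python
def neighborhood_consistency(predict, x, offsets):
    # fraction of perturbed neighbours that keep the original label;
    # training members tend to sit further from the decision boundary,
    # so their neighbourhoods are more label-consistent
    base = predict(x)
    same = sum(1 for d in offsets
               if predict([xi + di for xi, di in zip(x, d)]) == base)
    return same / len(offsets)

def label_only_mia(predict, x, offsets, tau=0.8):
    # predict "member" when the local label agreement exceeds a
    # threshold calibrated on shadow data (tau here is illustrative)
    return neighborhood_consistency(predict, x, offsets) >= tau
```

Because `predict` only needs hard labels, this works even when the target model masks its posterior vector.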