Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
14,904 result(s) for "adversarial"
Adversarial Training Methods for Deep Learning: A Systematic Review
by Alwidian, Sanaa; Zhao, Weimin; Mahmoud, Qusay H.
in adversarial attack generation; adversarial attacks; adversarial machine learning
2022
Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign method (FGSM), projected gradient descent (PGD), and other attack algorithms. Adversarial training is one of the methods used to defend against the threat of adversarial attacks. It is a training schema that utilizes an alternative objective function to provide model generalization for both adversarial data and clean data. In this systematic review, we focus particularly on adversarial training as a method of improving the defensive capacities and robustness of machine learning models. Specifically, we focus on adversarial sample accessibility through adversarial sample generation methods. The purpose of this systematic review is to survey state-of-the-art adversarial training and robust optimization methods and to identify the research gaps in this field. The literature search was conducted using Engineering Village (an engineering literature search tool that provides access to 14 engineering literature and patent databases), where we collected 238 related papers. The papers were filtered according to defined inclusion and exclusion criteria, and a total of 78 papers published between 2016 and 2021 were selected. Data were extracted and categorized using a defined strategy, and bar plots and comparison tables were used to show the data distribution. The findings of this review indicate that there are limitations to adversarial training methods and robust optimization. The most common problems are related to data generalization and overfitting.
Journal Article
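The "alternative objective function" this review refers to is often written as a weighted mix of clean and adversarial loss. As a minimal sketch (not the survey's own code), assuming a PyTorch classifier `model`, an `optimizer`, and a batch `(x, y)` with inputs in [0, 1], an FGSM-based adversarial training step might look like:

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Craft an FGSM adversarial example: x_adv = x + eps * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, alpha=0.5):
    """One step of the mixed objective: alpha * clean loss + (1 - alpha) * adversarial loss."""
    x_adv = fgsm_example(model, x, y)
    optimizer.zero_grad()
    loss = (alpha * F.cross_entropy(model(x), y)
            + (1 - alpha) * F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

PGD-based variants replace the single FGSM step with several projected gradient steps; the weighting `alpha` is an illustrative assumption, not a value from the review.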
Adversarial Patch Attacks on Deep-Learning-Based Face Recognition Systems Using Generative Adversarial Networks
by Lin, Hsuan-Yu; Hsieh, Sun-Ying; Lin, Chia-Liang
in Accuracy; adversarial attack; adversarial examples
2023
Deep learning technology has developed rapidly in recent years and has been successfully applied in many fields, including face recognition. Face recognition is used in many scenarios nowadays, including security control systems, access control management, health and safety management, employee attendance monitoring, automatic border control, and face scan payment. However, deep learning models are vulnerable to adversarial attacks conducted by perturbing probe images to generate adversarial examples, or using adversarial patches to generate well-designed perturbations in specific regions of the image. Most previous studies on adversarial attacks assume that the attacker hacks into the system and knows the architecture and parameters behind the deep learning model. In other words, the attacked model is a white box. However, this scenario is unrepresentative of most real-world adversarial attacks. Consequently, the present study assumes the face recognition system to be a black box, over which the attacker has no control. A Generative Adversarial Network method is proposed for generating adversarial patches to carry out dodging and impersonation attacks on the targeted face recognition system. The experimental results show that the proposed method yields a higher attack success rate than previous works.
Journal Article
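In the patch threat model the abstract describes, the perturbation is confined to a fixed image region rather than spread over all pixels. A hedged sketch of only the pasting step, with names (`image`, `patch`, patch location) as illustrative assumptions rather than the authors' setup:

```python
import torch

def apply_patch(image, patch, top, left):
    """Overwrite a (C, h, w) region of a (C, H, W) image with an adversarial patch."""
    attacked = image.clone()
    _, h, w = patch.shape
    attacked[:, top:top + h, left:left + w] = patch
    return attacked
```

In the black-box setting, the GAN generator proposes candidate patches, the attacker queries the target face recognition system with the patched probe image, and latent codes whose patches achieve dodging or impersonation are kept.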
Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network
2021
Some recent articles have revealed that synthetic aperture radar automatic target recognition (SAR-ATR) models based on deep learning are vulnerable to attacks by adversarial examples, which causes security problems. An adversarial attack can make a deep convolutional neural network (CNN)-based SAR-ATR system output the intended wrong label predictions by adding small adversarial perturbations to the SAR images. The existing optimization-based adversarial attack methods generate adversarial examples by minimizing the mean-squared reconstruction error, causing smooth target edges and blurry weak scattering centers in SAR images. In this paper, we build a UNet-generative adversarial network (GAN) to refine the generation of adversarial examples for SAR-ATR models. The UNet learns the separable features of the targets and generates the adversarial examples of SAR images. The GAN makes the generated adversarial examples approximate real SAR images (with sharp target edges and explicit weak scattering centers) and improves the generation efficiency. We carry out extensive experiments using the proposed adversarial attack algorithm to fool SAR-ATR models based on several advanced CNNs, which are trained on measured SAR images of ground vehicle targets. The quantitative and qualitative results demonstrate high-quality adversarial example generation and excellent improvements in attack effectiveness and efficiency.
Journal Article
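A rough sketch of the kind of generator objective the abstract implies: reconstruction keeps the output close to the measured SAR image, a GAN realism term (the discriminator) pushes toward sharp edges and explicit scattering centers, and an attack term pushes the SAR-ATR model off the true label. Network handles and loss weights here are assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def generator_loss(unet, disc, atr_model, x_sar, y_true, lambda_adv=0.01, lambda_atk=1.0):
    x_adv = unet(x_sar)                                  # UNet proposes the adversarial image
    recon = F.mse_loss(x_adv, x_sar)                     # stay close to the measured SAR image
    logits = disc(x_adv)                                 # realism term from the GAN discriminator
    realism = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    attack = -F.cross_entropy(atr_model(x_adv), y_true)  # push the SAR-ATR model off the true label
    return recon + lambda_adv * realism + lambda_atk * attack
```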
Enhancing adversarial robustness of quantum neural networks by adding noise layers
2023
The rapid advancements in machine learning and quantum computing have given rise to a new research frontier: quantum machine learning. Quantum models designed for tackling classification problems possess the potential to deliver speed enhancements and superior predictive accuracy compared to their classical counterparts. However, recent research has revealed that quantum neural networks (QNNs), akin to their classical deep neural network-based classifier counterparts, are vulnerable to adversarial attacks. In these attacks, meticulously designed perturbations added to clean input data can result in QNNs producing incorrect predictions with high confidence. To mitigate this issue, we suggest enhancing the adversarial robustness of quantum machine learning systems by incorporating noise layers into QNNs. This is accomplished by solving a Min-Max optimization problem to control the magnitude of the noise, thereby increasing the QNN’s resilience against adversarial attacks. Extensive numerical experiments illustrate that our proposed method outperforms state-of-the-art defense techniques in terms of both clean and robust accuracy.
Journal Article
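The paper inserts noise layers into quantum circuits; as a purely classical analogue (an assumption for illustration, not the authors' quantum construction), a noise layer with a learnable magnitude could look like the module below, with the scale then constrained while solving the Min-Max problem the abstract mentions (inner maximization over adversarial perturbations, outer minimization over model parameters):

```python
import torch
import torch.nn as nn

class NoiseLayer(nn.Module):
    """Adds zero-mean Gaussian noise with a learnable magnitude during training."""
    def __init__(self, init_sigma=0.1):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.log(torch.tensor(float(init_sigma))))

    def forward(self, x):
        if self.training:
            return x + torch.exp(self.log_sigma) * torch.randn_like(x)
        return x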
Context-Aware Generative Adversarial Privacy
2017
Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals’ private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP’s performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model; and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics.
Journal Article
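The constrained minimax game described above is typically trained by alternating updates. A minimal sketch, assuming privatizer `P`, adversary `A`, their optimizers, and a distortion weight `lam` (all illustrative; the paper formulates the game over the two statistical dataset models it studies):

```python
import torch
import torch.nn.functional as F

def gap_step(P, A, opt_P, opt_A, x, private_y, lam=1.0):
    """One alternating update of the privatizer P and the adversary A."""
    # Adversary step: learn to infer the private variable from sanitized data.
    opt_A.zero_grad()
    F.cross_entropy(A(P(x).detach()), private_y).backward()
    opt_A.step()
    # Privatizer step: defeat the adversary subject to a distortion penalty.
    opt_P.zero_grad()
    x_priv = P(x)
    loss_P = -F.cross_entropy(A(x_priv), private_y) + lam * F.mse_loss(x_priv, x)
    loss_P.backward()
    opt_P.step()
```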
Towards Adversarial Robustness for Multi-Mode Data through Metric Learning
by Chen, Jun-Cheng; Chen, Chu-Song; Khan, Sarwar
in adversarial attacks; adversarial training; classification
2023
Adversarial attacks have become one of the most serious security issues in widely used deep neural networks. Even though real-world datasets usually have large intra-class variations or multiple modes, most adversarial defense methods, such as adversarial training, which is currently one of the most effective defenses, mainly focus on the single-mode setting and thus fail to capture the full data representation needed to defend against adversarial attacks. To confront this challenge, we propose a novel multi-prototype metric learning regularization for adversarial training, which effectively enhances the defense capability of adversarial training by preventing the latent representation of an adversarial example from drifting far from that of its clean counterpart. Extensive experiments on CIFAR10, CIFAR100, MNIST, and Tiny ImageNet show that the proposed method improves the performance of different state-of-the-art adversarial training methods without additional computational cost. Moreover, besides Tiny ImageNet, on the multi-prototype CIFAR10 and CIFAR100 benchmarks, where we reorganize the whole of CIFAR10 and CIFAR100 into two and ten classes, respectively, the proposed method outperforms the state-of-the-art approach by 2.22% and 1.65%, respectively. The proposed multi-prototype method also outperforms its single-prototype version and other commonly used deep metric learning approaches as regularizers for adversarial training, further demonstrating its effectiveness.
Journal Article
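In the spirit of the abstract (the exact loss is the paper's; the shapes and distance choice below are illustrative assumptions), a multi-prototype regularizer pulls the adversarial embedding toward the nearest learned prototype of its true class and is added to the usual adversarial training objective:

```python
import torch

def multi_prototype_reg(z_adv, prototypes, y):
    """z_adv: (B, D) adversarial embeddings; prototypes: (C, K, D); y: (B,) labels."""
    protos = prototypes[y]                           # (B, K, D) prototypes of the true class
    dists = torch.cdist(z_adv.unsqueeze(1), protos)  # (B, 1, K) pairwise distances
    return dists.min(dim=-1).values.mean()           # pull toward the nearest prototype
```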
Deep learning for determining a near-optimal topological design without any iteration
by Yu, Yonggyun; Jang, In Gwun; Hur, Taeil
in Artificial neural networks; Boundary conditions; Coders
2019
In this study, we propose a novel deep learning-based method to predict an optimized structure for a given boundary condition and optimization setting without using any iterative scheme. For this purpose, first, using open-source topology optimization code, datasets of the optimized structures paired with the corresponding information on boundary conditions and optimization settings are generated at low (32 × 32) and high (128 × 128) resolutions. To construct the artificial neural network for the proposed method, a convolutional neural network (CNN)-based encoder and decoder network is trained using the training dataset generated at low resolution. Then, as a two-stage refinement, the conditional generative adversarial network (cGAN) is trained with the optimized structures paired at both low and high resolutions and is connected to the trained CNN-based encoder and decoder network. The performance evaluation results of the integrated network demonstrate that the proposed method can determine a near-optimal structure in terms of pixel values and compliance, with negligible computational time.
Journal Article
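The inference pipeline the abstract describes is a single forward pass through the two trained networks. A minimal sketch, assuming both networks are pre-trained and that `conditions` encodes the boundary conditions and optimization settings (names and call signatures are illustrative):

```python
import torch

@torch.no_grad()
def predict_topology(encoder_decoder, cgan_generator, conditions):
    coarse = encoder_decoder(conditions)       # near-optimal 32 x 32 layout
    fine = cgan_generator(coarse, conditions)  # cGAN refinement to 128 x 128
    return fine
```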
Experiments on Adversarial Examples for Deep Learning Model Using Multimodal Sensors
by Murata, Masayuki; Kurniawan, Ade; Ohsita, Yuichi
in adversarial examples; Artificial Intelligence; Computer crimes
2022
Recently, artificial intelligence (AI) based on IoT sensors has been widely used, which has increased the risk of attacks targeting AI. Adversarial examples are among the most serious types of attacks, in which the attacker designs inputs that cause the machine learning system to generate incorrect outputs. In an architecture that uses multiple sensor devices, hacking even a few sensors can create a significant risk, since an attacker can attack the machine learning model through the hacked sensors. Some studies have demonstrated the possibility of adversarial examples against deep neural network (DNN) models based on IoT sensors, but they assumed that the attacker can access all features. The impact of hacking only a few sensors has not been discussed thus far. Therefore, in this study, we discuss the possibility of attacks on DNN models by hacking only a small number of sensors. In this scenario, the attacker first hacks a few sensors in the system, obtains the values of the hacked sensors, and changes them to manipulate the system, but cannot obtain or change the values of the other sensors. We perform experiments using a human activity recognition model with three sensor devices attached to the chest, wrist, and ankle of a user, and demonstrate that attacks are possible by hacking a small number of sensors.
Journal Article
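The constraint in this threat model is naturally expressed as a mask over the feature channels the attacker controls. A hedged sketch using a one-step gradient attack for brevity (the mask, step size, and attack choice are assumptions, not the paper's exact procedure):

```python
import torch
import torch.nn.functional as F

def masked_attack(model, x, y, hacked_mask, eps=0.1):
    """hacked_mask: same shape as x, 1.0 where the attacker controls a sensor channel."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    # Perturb only the features coming from hacked sensors.
    return (x + eps * x_adv.grad.sign() * hacked_mask).detach()
```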
Adversarial example detection for DNN models: a review and experimental comparison
2022
Deep learning (DL) has shown great success in many human-related tasks, which has led to its adoption in many computer vision based applications, such as security surveillance systems, autonomous vehicles, and healthcare. Such safety-critical applications must overcome safety-critical challenges before they can be deployed successfully. Among these challenges are defending against and/or detecting adversarial examples (AEs). Adversaries can carefully craft small, often imperceptible, noise called perturbations to be added to a clean image to generate an AE. The aim of an AE is to fool the DL model, which makes AEs a potential risk for DL applications. Many test-time evasion attacks and countermeasures, i.e., defense or detection methods, have been proposed in the literature. Moreover, the few published reviews and surveys theoretically present the taxonomy of threats and countermeasure methods, with little focus on AE detection methods. In this paper, we focus on the image classification task and attempt to provide a survey of detection methods for test-time evasion attacks on neural network classifiers. A detailed discussion of such methods is provided, with experimental results for eight state-of-the-art detectors under different scenarios on four datasets. We also discuss potential challenges and future perspectives for this research direction.
Journal Article
Synthetic data augmentation for surface defect detection and classification using deep learning
by Jain, Saksham; Paruthi, Arpit; Soni, Umang
in Advanced manufacturing technologies; Algorithms; Artificial neural networks
2022
Deep learning techniques, especially Convolutional Neural Networks (CNNs), dominate the benchmarks for most computer vision tasks. These state-of-the-art results are typically obtained through supervised learning, which requires large annotated datasets. However, acquiring such datasets for manufacturing applications remains a challenging proposition due to the time and costs involved in their collection. To overcome this disadvantage, a novel framework is proposed for data augmentation by creating synthetic images using Generative Adversarial Networks (GANs). The generator synthesizes new surface defect images from random noise and is trained over time to produce realistic fakes. These synthetic images can then be used to train classification algorithms. Three GAN architectures are trained, and the entire data augmentation pipeline is implemented for the Northeastern University (China) Classification (NEU-CLS) dataset of hot-rolled steel strips from the NEU Surface Defect Database. The classification accuracy of a simple CNN architecture is measured on synthetically augmented data and compared with similar state-of-the-art methods. It is observed that the proposed GAN-based augmentation scheme significantly improves the performance of the CNN for classification of surface defects. The classically augmented CNN yields a sensitivity and specificity of 90.28% and 98.06%, respectively. In contrast, the synthetically augmented CNN yields better results, with a sensitivity and specificity of 95.33% and 99.16%, respectively. The use of GANs is also demonstrated to disentangle the representation space and to add domain knowledge through synthetic augmentation that can be difficult to replicate through classic augmentation. The proposed framework demonstrates high generalization capability and may be applied to other supervised surface inspection tasks, thus facilitating the development of advanced vision-based inspection instruments for manufacturing applications.
Journal Article
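Once a generator is trained, the augmentation step itself is simple batch mixing. A minimal sketch, assuming a pre-trained `generator` (one of the three GAN architectures mentioned), a latent size, and per-class generation; all of these are illustrative assumptions rather than the paper's pipeline:

```python
import torch

def augmented_batch(generator, x_real, y_real, defect_class, n_fake=32, latent_dim=128):
    """Mix GAN-generated defect images into a real training batch."""
    z = torch.randn(n_fake, latent_dim)
    x_fake = generator(z)                                         # synthetic defect images
    y_fake = torch.full((n_fake,), defect_class, dtype=y_real.dtype)
    return torch.cat([x_real, x_fake]), torch.cat([y_real, y_fake])
```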