Catalogue Search | MBRL
Explore the vast range of titles available.
3,496 result(s) for "Black Box"
A review of epileptic seizure detection using machine learning classifiers
by Morales-Menendez, Ruben; Huang, Xiaodi; Siddiqui, Mohammad Khubeb
in Analysis; Applications of machine learning on epilepsy; Artificial Intelligence
2020
Epilepsy is a serious chronic neurological disorder that can be detected by analyzing the brain signals produced by brain neurons. Neurons are connected to each other in a complex way to communicate with human organs and generate signals. These brain signals are commonly monitored using electroencephalography (EEG) and electrocorticography (ECoG). The signals are complex, noisy, non-linear, non-stationary and produce a high volume of data, so detecting seizures and discovering brain-related knowledge is a challenging task. Machine learning classifiers are able to classify EEG data and detect seizures while also revealing relevant, interpretable patterns without compromising performance. As such, various researchers have developed a number of approaches to seizure detection using machine learning classifiers and statistical features. The main challenges are selecting appropriate classifiers and features. The aim of this paper is to present an overview of the wide variety of these techniques over the last few years, organized by a taxonomy of statistical features and machine learning classifiers: 'black-box' and 'non-black-box'. The state-of-the-art methods and ideas presented give a detailed understanding of seizure detection and classification, and of future research directions.
Journal Article
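The survey above frames seizure detection as per-window statistical features fed to a machine learning classifier. The sketch below is only a rough illustration of that pipeline: the synthetic single-channel windows, the particular feature set, and the choice of a random forest as the "black-box" classifier are all assumptions, not the paper's method.

```python
# Minimal sketch of the feature-plus-classifier pipeline surveyed above.
# Synthetic EEG windows stand in for real recordings; the statistical
# features and the classifier choice are illustrative assumptions only.
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestClassifier  # a "black-box" classifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def eeg_window(seizure: bool, n_samples: int = 256) -> np.ndarray:
    """Fake one-channel EEG window; seizure windows get higher amplitude."""
    scale = 3.0 if seizure else 1.0
    return scale * rng.standard_normal(n_samples)

def statistical_features(x: np.ndarray) -> np.ndarray:
    """A handful of per-window statistics of the kind used in this literature."""
    return np.array([x.mean(), x.std(), stats.skew(x), stats.kurtosis(x),
                     np.sqrt(np.mean(x ** 2))])  # last entry: RMS energy

windows = [eeg_window(seizure=(i % 2 == 0)) for i in range(200)]
X = np.vstack([statistical_features(w) for w in windows])
y = np.array([i % 2 == 0 for i in range(200)], dtype=int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

Swapping the random forest for a single decision tree would correspond to the "non-black-box" side of the survey's taxonomy.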
Adversarial frontier stitching for remote neural network watermarking
by Trédan, Gilles; Le Merrer, Erwan; Pérez, Patrick
in Algorithms; Artificial Intelligence; Artificial neural networks
2020
The state-of-the-art performance of deep learning models comes at a high cost for companies and institutions, due to tedious data collection and heavy processing requirements. Recently, Nagai et al. (Int J Multimed Inf Retr 7(1):3–16, 2018) and Uchida et al. (Embedding watermarks into deep neural networks, ICMR, 2017) proposed to watermark convolutional neural networks for image classification by embedding information into their weights. While this is clear progress toward model protection, the technique only allows the watermark to be extracted from a network that one accesses locally and entirely. Instead, we aim at allowing the extraction of the watermark from a neural network (or any other machine learning model) that is operated remotely and available through a service API. To this end, we propose to mark the model's action itself, slightly tweaking its decision frontiers so that a set of specific queries conveys the desired information. In the present paper, we formally introduce the problem and propose a novel zero-bit watermarking algorithm that makes use of adversarial model examples. While limiting the loss of performance of the protected model, this algorithm allows subsequent extraction of the watermark using only a few queries. We experimented with the approach on three neural networks designed for image classification, in the context of the MNIST digit recognition task.
Journal Article
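The abstract above describes verifying a zero-bit watermark purely through remote queries. A minimal sketch of that verification step follows, assuming a hypothetical `query_remote_model` API client and an arbitrary mismatch tolerance; the paper's actual key construction (frontier stitching via adversarial examples) is not reproduced here.

```python
# Rough sketch of a remote, zero-bit watermark check: replay a secret set of
# key inputs against a model served behind an API and count how many come
# back with the expected labels.  The key set, the `query_remote_model`
# client, and the threshold are placeholders, not the paper's procedure.
from typing import Callable, List, Tuple
import numpy as np

def verify_watermark(
    query_remote_model: Callable[[np.ndarray], int],  # hypothetical API client
    key_set: List[Tuple[np.ndarray, int]],            # (input, expected label) pairs
    max_mismatches: int = 2,                           # tolerance (assumed value)
) -> bool:
    """Return True if the remote model answers the key queries as marked."""
    mismatches = sum(
        1 for x, expected in key_set if query_remote_model(x) != expected
    )
    return mismatches <= max_mismatches

# Toy usage: a "remote" linear classifier and a few key points near its frontier.
w = np.array([1.0, -1.0])
remote = lambda x: int(w @ x > 0)
keys = [(np.array([0.1, 0.0]), 1), (np.array([0.0, 0.1]), 0), (np.array([0.2, 0.1]), 1)]
print("watermark present:", verify_watermark(remote, keys))
```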
Metaheuristic research: a comprehensive survey
by Shi, Yuhui; Mohd Najib Mohd Salleh; Hussain, Kashif
in Algorithms; Black boxes; Comparative analysis
2019
Because of its successful implementations and high intensity, metaheuristic research has been extensively reported in the literature, covering algorithms, applications, comparisons, and analysis. However, little insightful analysis of metaheuristic performance issues has been reported, and it remains a "black box" why certain metaheuristics perform better on specific optimization problems and not as well on others. The performance-related analyses performed on algorithms are mostly quantitative, using validation metrics such as mean error, standard deviation, and correlations. Moreover, performance tests are often performed on specific benchmark functions; few studies involve real data from scientific or engineering optimization problems. In order to draw a comprehensive picture of metaheuristic research, this paper surveys the literature, covering 1222 publications from 1983 to 2016 (33 years). Based on the collected evidence, the paper addresses four dimensions of metaheuristic research: introduction of new algorithms, modifications and hybrids, comparisons and analysis, and research gaps and future directions. The objective is to highlight potential open questions and critical issues raised in the literature. The work provides guidance so that future research can be conducted more meaningfully, to the benefit of this area of research.
Journal Article
Considerations when learning additive explanations for black-box models
by Tan, Sarah; Hooker, Giles; Gordo, Albert
in Artificial Intelligence; Black boxes; Computer Science
2023
Many methods to explain black-box models, whether local or global, are additive. In this paper, we study global additive explanations for non-additive models, focusing on four explanation methods: partial dependence, Shapley explanations adapted to a global setting, distilled additive explanations, and gradient-based explanations. We show that different explanation methods characterize non-additive components in a black-box model's prediction function in different ways. We use the concepts of main and total effects to anchor additive explanations, and quantitatively evaluate additive and non-additive explanations. Even though distilled explanations are generally the most accurate additive explanations, non-additive explanations such as tree explanations that explicitly model non-additive components tend to be even more accurate. Despite this, our user study showed that machine learning practitioners were better able to leverage additive explanations for various tasks. These considerations should be taken into account when deciding which explanation to trust and use to explain black-box models.
Journal Article
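Partial dependence, the first of the four explanation methods named above, can be sketched in a few lines. The model, the synthetic data with an explicit feature interaction, and the evaluation grid below are illustrative assumptions, not the paper's experiments.

```python
# Minimal sketch of a one-dimensional partial dependence curve for a
# black-box model.  The model, data, and grid are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 3))
# A deliberately non-additive target: feature 0 interacts with feature 1.
y = X[:, 0] * X[:, 1] + X[:, 2] + 0.1 * rng.standard_normal(500)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

def partial_dependence_1d(model, X, feature, grid):
    """Average prediction as one feature is swept over a grid,
    with the remaining features held at their observed values."""
    curve = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v
        curve.append(model.predict(Xv).mean())
    return np.array(curve)

grid = np.linspace(-2, 2, 9)
print(np.round(partial_dependence_1d(model, X, feature=0, grid=grid), 3))
```

Because the target contains an interaction between features 0 and 1, the averaged curve necessarily hides that non-additive component, which is the kind of limitation the paper evaluates.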
Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence
2021
Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from philosophy of science, this framework is modeled after accounts of explanation in cognitive science. The framework distinguishes between the explanation-seeking questions that are likely to be asked by different stakeholders, and specifies the general ways in which these questions should be answered so as to allow these stakeholders to perform their roles in the Machine Learning ecosystem. By applying the normative framework to recently developed techniques such as input heatmapping, feature-detector visualization, and diagnostic classification, it is possible to determine whether and to what extent techniques from Explainable Artificial Intelligence can be used to render opaque computing systems transparent and, thus, whether they can be used to solve the Black Box Problem.
Journal Article
A constrained optimization method based on BP neural network
by Wang, Fulin; Sun, Ting; Xu, Bing
in Artificial Intelligence; Back propagation networks; Computational Biology/Bioinformatics
2018
A constrained optimization method based on a back-propagation (BP) neural network is proposed in this paper. Taking the maximization of the output as an example and using the unipolar sigmoid function as the transfer function, the method presents a general mathematical expression of BP neural network constrained optimization and derives the partial derivative of the output with respect to the input. On this basis, the fundamental idea, algorithms and related models are given. Once the BP neural network has been fitted, the method adjusts the network's input values to make the output values maximal or minimal. The method therefore expands the application of BP neural networks by combining the network's fitting ability with optimization. At the same time, the article also provides a new way to study the black-box problem. Experiments show that the constrained optimization method is effective.
Journal Article
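The idea above, holding a fitted sigmoid network's weights fixed and using the derivative of the output with respect to the input to push the inputs toward a maximal output, can be sketched as follows. The tiny 2-3-1 network, its fixed weights, the box constraint, and the step size are arbitrary choices for demonstration.

```python
# Illustrative sketch: projected gradient ascent on the inputs of a fixed
# sigmoid network, using d(output)/d(input).  The weights and bounds are
# made up for demonstration, not taken from the paper.
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))   # unipolar sigmoid transfer function

# Fixed "fitted" weights of a 2-3-1 network (input->hidden W1,b1; hidden->output W2,b2).
W1 = np.array([[0.8, -0.5], [0.3, 0.9], [-0.6, 0.4]])
b1 = np.array([0.1, -0.2, 0.05])
W2 = np.array([[1.2, -0.7, 0.9]])
b2 = np.array([0.0])

def forward(x):
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)
    return y, h

def d_output_d_input(x):
    """Chain rule through both sigmoid layers (the kind of derivative the paper derives)."""
    y, h = forward(x)
    dy_dh = (y * (1 - y)) * W2           # shape (1, 3)
    dh_dx = (h * (1 - h))[:, None] * W1  # shape (3, 2)
    return (dy_dh @ dh_dx).ravel()       # shape (2,)

x = np.array([0.0, 0.0])
for _ in range(200):                      # projected gradient ascent on the inputs
    x = np.clip(x + 0.5 * d_output_d_input(x), -1.0, 1.0)  # keep x in a box constraint
print("x* =", np.round(x, 3), "output =", float(forward(x)[0]))
```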
DIMBA: discretely masked black-box attack in single object tracking
by Yin, Xiangyu; Ruan, Wenjie; Fieldsend, Jonathan
in Algorithms; Artificial Intelligence; Black boxes
2024
Adversarial attacks can force a CNN-based model to produce incorrect output by craftily adding human-imperceptible perturbations to the input. Exploring such perturbations helps us gain a deeper understanding of the vulnerability of neural networks, and provides robustness for deep learning against miscellaneous adversaries. Despite extensive studies focusing on the robustness of image, audio, and NLP models, work on adversarial examples for visual object tracking, especially in a black-box manner, is quite lacking. In this paper, we propose a novel adversarial attack method to generate noise for single object tracking under black-box settings, where perturbations are only added to the initialized frames of tracking sequences, which makes them difficult to notice from the perspective of a whole video clip. Specifically, we divide our algorithm into three components and exploit reinforcement learning to localize important frame patches precisely while reducing unnecessary query overhead. Compared to existing techniques, our method requires less time to perturb videos while achieving competitive or even better adversarial performance. We test our algorithm on both long-term and short-term datasets, including OTB100, VOT2018, UAV123, and LaSOT. Extensive experiments demonstrate the effectiveness of our method on three mainstream types of trackers: discrimination-based, Siamese-based, and reinforcement learning-based trackers. We release our attack tool, DIMBA, via GitHub
https://github.com/TrustAI/DIMBA
for use by the community.
Journal Article
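The setting described above, perturbing only the initial frame and querying the tracker as a black box, can be illustrated with a generic query-based sketch. A toy correlation score stands in for a real tracker, and plain random search replaces DIMBA's reinforcement-learning patch localization, so this is the principle only, not the paper's algorithm.

```python
# Generic sketch of a black-box attack on the initial frame of a tracking
# sequence: query a stand-in tracker score and keep perturbations that
# lower its confidence.  Nothing here reproduces DIMBA's actual components.
import numpy as np

rng = np.random.default_rng(0)
frame0 = rng.uniform(0.0, 1.0, size=(32, 32))        # toy initial frame
template = frame0[8:24, 8:24].copy()                   # "target" appearance

def tracker_score(frame: np.ndarray) -> float:
    """Black-box stand-in: correlation between the target patch and the template."""
    patch = frame[8:24, 8:24]
    return float(np.corrcoef(patch.ravel(), template.ravel())[0, 1])

def black_box_attack(frame, queries=300, eps=0.05):
    best = frame.copy()
    best_score = tracker_score(best)
    for _ in range(queries):                           # pure random search over small noise
        candidate = np.clip(best + rng.uniform(-eps, eps, frame.shape), 0.0, 1.0)
        s = tracker_score(candidate)
        if s < best_score:                             # keep perturbations that hurt the tracker
            best, best_score = candidate, s
    return best, best_score

_, score = black_box_attack(frame0)
print("tracker confidence after attack:", round(score, 3))
```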
Detecting and Mitigating Adversarial Perturbations for Robust Face Recognition
by Singh, Richa; Agarwal, Akshay; Vatsa, Mayank
in Algorithms; Artificial neural networks; Black boxes
2019
Deep neural network (DNN) architecture based models have high expressive power and learning capacity. However, they are essentially black-box methods, since it is not easy to mathematically formulate the functions learned within their many layers of representation. Realizing this, many researchers have started to design methods that exploit the drawbacks of deep learning based algorithms, questioning their robustness and exposing their singularities. In this paper, we attempt to unravel three aspects related to the robustness of DNNs for face recognition: (i) assessing the impact of deep architectures for face recognition in terms of vulnerability to attacks, (ii) detecting singularities by characterizing abnormal filter response behavior in the hidden layers of deep networks, and (iii) making corrections to the processing pipeline to alleviate the problem. Our experimental evaluation using multiple open-source DNN-based face recognition networks and three publicly available face databases demonstrates that the performance of deep learning based face recognition algorithms can suffer greatly in the presence of such distortions. We also evaluate the proposed approaches on four existing quasi-imperceptible distortions: DeepFool, universal adversarial perturbations, $l_2$, and Elastic-Net (EAD). The proposed method is able to detect both types of attacks with very high accuracy by suitably designing a classifier using the responses of the hidden layers in the network. Finally, we present effective countermeasures to mitigate the impact of adversarial attacks and improve the overall robustness of DNN-based face recognition.
Journal Article
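The detection idea above, training a simple classifier on hidden-layer responses to separate clean from perturbed inputs, can be sketched as follows. A fixed random-projection "hidden layer" and crude sign-noise perturbations stand in for a real face-recognition DNN and real adversarial attacks, so the numbers are illustrative only.

```python
# Sketch of detecting perturbed inputs from hidden-layer responses.
# The fixed random hidden layer and the synthetic perturbations are
# stand-ins for a real DNN and real adversarial examples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
W_hidden = rng.standard_normal((256, 64))              # stand-in hidden layer weights

def hidden_response(x: np.ndarray) -> np.ndarray:
    """ReLU response of the (fixed) hidden layer for one input vector."""
    return np.maximum(W_hidden @ x, 0.0)

clean = rng.standard_normal((400, 64))
perturbed = clean + 0.3 * np.sign(rng.standard_normal(clean.shape))  # crude FGSM-like noise

X = np.vstack([np.array([hidden_response(x) for x in clean]),
               np.array([hidden_response(x) for x in perturbed])])
y = np.concatenate([np.zeros(len(clean)), np.ones(len(perturbed))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
detector = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("detection accuracy:", round(detector.score(X_te, y_te), 3))
```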
TREGO: a trust-region framework for efficient global optimization
by Perrotolo, Alexandre Scotto Di; Diouane, Youssef; Riche, Rodolphe Le
in Algorithms; Black boxes; Canonical forms
2023
Efficient global optimization (EGO) is the canonical form of Bayesian optimization and has been successfully applied to the global optimization of expensive-to-evaluate black-box problems. However, EGO struggles to scale with dimension and offers limited theoretical guarantees. In this work, a trust-region framework for EGO (TREGO) is proposed and analyzed. TREGO alternates between regular EGO steps and local steps within a trust region. By following a classical scheme for the trust region (based on a sufficient decrease condition), the proposed algorithm enjoys global convergence properties, while departing from EGO only for a subset of optimization steps. Using extensive numerical experiments based on the well-known COCO bound-constrained problems, we first analyze the sensitivity of TREGO to its own parameters, then show that the resulting algorithm consistently outperforms EGO and is competitive with other state-of-the-art black-box optimization methods.
Journal Article
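The alternation described above, regular expected-improvement steps interleaved with steps restricted to a trust region that grows or shrinks under a sufficient decrease test, can be sketched in simplified form. The 1-D objective, the Gaussian process settings, and the update constants are illustrative choices, not the paper's exact scheme.

```python
# Simplified sketch of alternating global EGO steps and local trust-region
# steps with a sufficient decrease test.  All constants are placeholders.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.3 * x ** 2            # expensive black box (stand-in)
lo, hi = -3.0, 3.0

X = rng.uniform(lo, hi, size=(5, 1))                  # initial design
y = f(X).ravel()

def expected_improvement(gp, cand, best):
    mu, sd = gp.predict(cand, return_std=True)
    z = (best - mu) / np.maximum(sd, 1e-9)
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

radius = 1.0                                           # trust-region radius
for it in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    best_idx = int(np.argmin(y))
    x_best, y_best = X[best_idx, 0], y[best_idx]
    if it % 2 == 0:                                    # global EGO step
        cand = rng.uniform(lo, hi, size=(200, 1))
    else:                                              # local step inside the trust region
        cand = np.clip(x_best + rng.uniform(-radius, radius, size=(200, 1)), lo, hi)
    x_new = cand[[np.argmax(expected_improvement(gp, cand, y_best))]]
    y_new = f(x_new).ravel()
    if it % 2 == 1:                                    # sufficient decrease test on local steps
        radius = radius * 2 if y_new[0] < y_best - 1e-3 else radius * 0.5
    X, y = np.vstack([X, x_new]), np.append(y, y_new)

print("best x:", round(float(X[np.argmin(y), 0]), 3), "best f:", round(float(y.min()), 4))
```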
The black box problem revisited. Real and imaginary challenges for automated legal decision making
by Jakubiec, Marek; Furman, Michał; Kucharzyk, Bartłomiej
in Algorithms; Artificial intelligence; Automation
2024
This paper addresses the black-box problem in artificial intelligence (AI) and the related problem of the explainability of AI in the legal context. We argue, first, that the black box problem is in fact a superficial one, as it results from an overlap of four different, albeit interconnected, issues: the opacity problem, the strangeness problem, the unpredictability problem, and the justification problem. We thus propose a framework for discussing both the black box problem and the explainability of AI. We argue further that, contrary to often-defended claims, the opacity issue is not a genuine problem. We also dismiss the justification problem. Finally, we describe the tensions involved in the strangeness and unpredictability problems and suggest some ways to alleviate them.
Journal Article