Catalogue Search | MBRL
86,077 result(s) for "Robustness"
Adaptive Huber Regression
by Zhou, Wen-Xin; Sun, Qiang; Fan, Jianqing
in Adaptive Huber regression; Bias; Bias and robustness tradeoff
2020
Big data can easily be contaminated by outliers or contain variables with heavy-tailed distributions, which makes many conventional methods inadequate. To address this challenge, we propose the adaptive Huber regression for robust estimation and inference. The key observation is that the robustification parameter should adapt to the sample size, dimension and moments for an optimal tradeoff between bias and robustness. Our theoretical framework deals with heavy-tailed distributions with bounded (1 + δ)-th moment for any δ > 0. We establish a sharp phase transition for robust estimation of regression parameters in both low and high dimensions: when δ ≥ 1, the estimator admits a sub-Gaussian-type deviation bound without sub-Gaussian assumptions on the data, while only a slower rate is available in the regime 0 < δ < 1, and the transition is smooth and optimal. In addition, we extend the methodology to allow both heavy-tailed predictors and observation noise. Simulation studies lend further support to the theory. In a genetic study of cancer cell lines that exhibit heavy-tailedness, the proposed methods are shown to be more robust and predictive. Supplementary materials for this article are available online.
Journal Article
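The bias-robustness tradeoff this abstract describes can be illustrated with a short sketch: a Huber-loss regression whose truncation level τ grows with the sample size, fit by gradient descent on heavy-tailed data. The τ schedule and all numbers below are illustrative assumptions, not the authors' exact calibration.

```python
import numpy as np

def adaptive_huber_fit(X, y, n_iter=200, lr=0.1):
    """Linear regression under a Huber loss whose robustification
    parameter tau grows with the sample size, so the truncation bias
    shrinks as n grows. The tau schedule here is illustrative."""
    n, d = X.shape
    # Illustrative adaptive scale: tau ~ sigma_hat * sqrt(n / log(d * n))
    sigma_hat = np.std(y)
    tau = sigma_hat * np.sqrt(n / np.log(d * n))
    beta = np.zeros(d)
    for _ in range(n_iter):
        r = y - X @ beta
        # Huber gradient: residuals are clipped at [-tau, tau],
        # so extreme outliers contribute only a bounded pull
        psi = np.clip(r, -tau, tau)
        beta += lr * X.T @ psi / n
    return beta

rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.normal(size=(n, d))
beta_true = np.array([1.0, -2.0, 0.5])
# Heavy-tailed noise: Student t with 2.5 degrees of freedom
y = X @ beta_true + rng.standard_t(2.5, size=n)
beta_hat = adaptive_huber_fit(X, y)
print(np.round(beta_hat, 1))  # close to beta_true
```

The clipping step is the whole point: with light-tailed noise the fit behaves like least squares, while heavy-tailed draws are capped at τ.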
Locally Robust Semiparametric Estimation
by Ichimura, Hidehiko; Newey, Whitney K.; Escanciano, Juan Carlos
in bias; Discrete choice; double robustness
2022
Many economic and causal parameters depend on nonparametric or high dimensional first steps. We give a general construction of locally robust/orthogonal moment functions for GMM, where first steps have no effect, locally, on average moment functions. Using these orthogonal moments reduces model selection and regularization bias, as is important in many applications, especially for machine learning first steps. Also, associated standard errors are robust to misspecification when there is the same number of moment functions as parameters of interest.
We use these orthogonal moments and cross-fitting to construct debiased machine learning estimators of functions of high dimensional conditional quantiles and of dynamic discrete choice parameters with high dimensional state variables. We show that additional first steps needed for the orthogonal moment functions have no effect, globally, on average orthogonal moment functions. We give a general approach to estimating those additional first steps. We characterize double robustness and give a variety of new doubly robust moment functions. We give general and simple regularity conditions for asymptotic theory.
Journal Article
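A minimal sketch of the orthogonal-moment idea, using the familiar doubly robust (AIPW) score for an average treatment effect with two-fold cross-fitting. The data-generating process, models, and function names are illustrative assumptions, not the authors' general construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-x))                # true treatment propensity
d = rng.binomial(1, p_true)
y = 2.0 * d + x + rng.normal(size=n)         # true average effect = 2

def fit_logit(x, d, iters=25):
    """Logistic regression of d on (1, x) via Newton's method."""
    X = np.column_stack([np.ones_like(x), x])
    w = np.zeros(2)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ w))
        H = X.T @ (X * (p * (1 - p))[:, None])
        w += np.linalg.solve(H, X.T @ (d - p))
    return w

def fit_ols(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def aipw_ate(x, d, y, folds=2):
    """Doubly robust (orthogonal-moment) ATE with cross-fitting:
    nuisances are estimated on the other fold, so their estimation
    error has no first-order effect on the averaged moment."""
    idx = np.arange(len(y)) % folds
    scores = np.empty(len(y))
    for k in range(folds):
        tr, te = idx != k, idx == k
        w = fit_logit(x[tr], d[tr])
        ps = 1 / (1 + np.exp(-(w[0] + w[1] * x[te])))
        b1 = fit_ols(x[tr][d[tr] == 1], y[tr][d[tr] == 1])
        b0 = fit_ols(x[tr][d[tr] == 0], y[tr][d[tr] == 0])
        m1, m0 = b1[0] + b1[1] * x[te], b0[0] + b0[1] * x[te]
        # Regression part plus inverse-propensity correction terms
        scores[te] = (m1 - m0 + d[te] * (y[te] - m1) / ps
                      - (1 - d[te]) * (y[te] - m0) / (1 - ps))
    return scores.mean()

print(round(aipw_ate(x, d, y), 2))  # close to the true effect of 2
```

The correction terms make the score insensitive, locally, to errors in either nuisance model: misspecify one of them and the moment still averages to the true effect.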
Foundations of static and dynamic absolute concentration robustness
by Joshi, Badal; Craciun, Gheorghe
in Biological models (mathematics); Dynamical systems; Initial conditions
2022
Absolute Concentration Robustness (ACR) was introduced by Shinar and Feinberg (Science 327:1389-1391, 2010) as robustness of equilibrium species concentration in a mass action dynamical system. Their aim was to devise a mathematical condition that will ensure robustness in the function of the biological system being modeled. The robustness of function rests on what we refer to as empirical robustness—the concentration of a species remains unvarying, when measured in the long run, across arbitrary initial conditions. Even simple examples show that the ACR notion introduced in Shinar and Feinberg (Science 327:1389-1391, 2010) (here referred to as static ACR) is neither necessary nor sufficient for empirical robustness. To make a stronger connection with empirical robustness, we define dynamic ACR, a property related to long-term, global dynamics, rather than only to equilibrium behavior. We discuss general dynamical systems with dynamic ACR properties as well as parametrized families of dynamical systems related to reaction networks. We find necessary and sufficient conditions for dynamic ACR in complex balanced reaction networks, a class of networks that is central to the theory of reaction networks.
Journal Article
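The notion can be made concrete with the classic two-species mass-action motif used throughout the ACR literature (an illustrative model, with rate constants chosen arbitrarily): A + B → 2B and B → A. The long-run concentration of A equals k2/k1 for every initial condition whose total A + B exceeds that value, which is the empirical-robustness behavior the abstract discusses.

```python
def simulate(a0, b0, k1=1.0, k2=2.0, dt=1e-3, t_end=50.0):
    """Mass-action ODE for the motif A + B -> 2B (rate k1), B -> A (rate k2),
    integrated with forward Euler. A + B is conserved, and A settles at
    k2/k1 regardless of the initial split (when the total exceeds k2/k1)."""
    a, b = a0, b0
    for _ in range(int(t_end / dt)):
        flux = k1 * a * b - k2 * b   # net conversion of A into B
        a -= flux * dt
        b += flux * dt
    return a

# Very different initial conditions, same long-run concentration of A
for total in (5.0, 10.0, 20.0):
    print(round(simulate(a0=total - 1.0, b0=1.0), 3))  # -> 2.0 each time
```

Note the robustness is conditional: start with total A + B below k2/k1 and B dies out instead, which is exactly why equilibrium-based (static) ACR alone need not guarantee empirical robustness.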
Making sense of sensitivity
2020
We extend the omitted variable bias framework with a suite of tools for sensitivity analysis in regression models. The approach requires no assumptions on the functional form of the treatment assignment mechanism or on the distribution of the unobserved confounders; it naturally handles multiple confounders, possibly acting non-linearly; it exploits expert knowledge to bound sensitivity parameters; and it can be computed easily using only standard regression results. In particular, we introduce two novel sensitivity measures suited for routine reporting. The robustness value describes the minimum strength of association that unobserved confounding would need to have, both with the treatment and with the outcome, to change the research conclusions. The partial R² of the treatment with the outcome shows how strongly confounders explaining all the residual outcome variation would have to be associated with the treatment to eliminate the estimated effect. Next, we offer graphical tools for elaborating on problematic confounders, examining the sensitivity of point estimates and t-values, as well as ‘extreme scenarios’. Finally, we describe problems with a common ‘benchmarking’ practice and introduce a novel procedure to bound the strength of confounders formally on the basis of a comparison with observed covariates. We apply these methods to a running example that estimates the effect of exposure to violence on attitudes toward peace.
Journal Article
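The robustness value described in this abstract is computable from standard regression output alone. The sketch below follows the published formula of this sensitivity framework, with f the partial Cohen's f of the treatment obtained from the t-statistic and degrees of freedom; the input numbers are illustrative, not taken from the article.

```python
import math

def robustness_value(t_stat, dof, q=1.0):
    """Minimum partial R^2 that unobserved confounding needs, with both
    treatment and outcome, to reduce the estimate by a fraction q.
    Uses only the reported t-statistic and residual degrees of freedom."""
    f = q * abs(t_stat) / math.sqrt(dof)   # (scaled) partial Cohen's f
    return 0.5 * (math.sqrt(f**4 + 4 * f**2) - f**2)

# Illustrative numbers (not from the article): t = 4.2 with 780 dof
rv = robustness_value(4.2, 780)
print(round(rv, 3))
```

Reading the result: confounders weaker than this partial R² with both treatment and outcome cannot overturn the estimate, which is what makes the measure suitable for routine reporting.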
Incompatibility robustness of quantum measurements: a unified framework
by Kaniewski, Jędrzej; Farkas, Máté; Designolle, Sébastien
in Incompatibility; incompatibility robustness; Information theory
2019
In quantum mechanics, performing a measurement is an invasive process which generally disturbs the system. Due to this phenomenon, there exist incompatible quantum measurements, i.e. measurements that cannot be simultaneously performed on a single copy of the system. It is then natural to ask which quantum measurements are the most incompatible. To answer this question, several measures have been proposed to quantify how incompatible a set of measurements is; however, their properties are not well understood. In this work, we develop a general framework that encompasses all the commonly used measures of incompatibility based on robustness to noise. Moreover, we propose several conditions that a measure of incompatibility should satisfy, and investigate whether the existing measures comply with them. We find that some of the widely used measures do not fulfil these basic requirements. We also show that when looking for the most incompatible pairs of measurements, we obtain different answers depending on the exact measure. For one of the measures, we analytically prove that projective measurements onto two mutually unbiased bases are among the most incompatible pairs in every dimension. However, for some of the remaining measures we find that some peculiar measurements turn out to be even more incompatible.
Journal Article
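As a concrete instance of robustness to noise, consider noisy qubit measurements of σx and σz, which are projective measurements onto mutually unbiased bases. The joint-measurability threshold η ≤ 1/√2 checked below is a standard textbook fact assumed for illustration, not a result quoted from this article; the code simply tests positivity of a candidate parent POVM.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def parent_povm_positive(eta):
    """Noisy sharp measurements with effects (1/2)(I ± eta*sigma) admit
    the parent POVM G_ab = (1/4)(I + eta*(a*sx + b*sz)), whose marginals
    over b (resp. a) reproduce the noisy sigma_x (resp. sigma_z) effects.
    All four G_ab are positive semidefinite iff eta <= 1/sqrt(2)."""
    for a in (+1, -1):
        for b in (+1, -1):
            G = 0.25 * (I2 + eta * (a * sx + b * sz))
            if np.linalg.eigvalsh(G).min() < -1e-12:
                return False
    return True

for eta in (0.5, 1 / np.sqrt(2), 0.9):
    print(round(eta, 4), parent_povm_positive(eta))
```

Below the threshold a single joint measurement reproduces both noisy observables; above it, no parent POVM of this form exists, which is the incompatibility that robustness measures quantify.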
Deep learning models for digital image processing: a review
2024
Within the domain of image processing, a wide array of methodologies is dedicated to tasks including denoising, enhancement, segmentation, feature extraction, and classification. These techniques collectively address the challenges and opportunities posed by different aspects of image analysis and manipulation, enabling applications across various fields. Each of these methodologies contributes to refining our understanding of images, extracting essential information, and making informed decisions based on visual data. Traditional image processing methods and Deep Learning (DL) models represent two distinct approaches to tackling image analysis tasks. Traditional methods often rely on handcrafted algorithms and heuristics, involving a series of predefined steps to process images. DL models learn feature representations directly from data, allowing them to automatically extract intricate features that traditional methods might miss. In denoising, techniques like Self2Self NN, Denoising CNNs, DFT-Net, and MPR-CNN stand out, offering reduced noise while grappling with challenges of data augmentation and parameter tuning. Image enhancement, facilitated by approaches such as R2R and LE-net, showcases potential for refining visual quality, though complexities in real-world scenes and authenticity persist. Segmentation techniques, including PSPNet and Mask-RCNN, exhibit precision in object isolation, while handling complexities like overlapping objects and robustness concerns. For feature extraction, methods like CNN and HLF-DIP showcase the role of automated recognition in uncovering image attributes, with trade-offs in interpretability and complexity. Classification techniques span from Residual Networks to CNN-LSTM, spotlighting their potential in precise categorization despite challenges in computational demands and interpretability. 
This review offers a comprehensive understanding of the strengths and limitations across methodologies, paving the way for informed decisions in practical applications. As the field evolves, addressing challenges like computational resources and robustness remains pivotal in maximizing the potential of image processing techniques.
Journal Article
Percolation of localized attack on complex networks
by Stanley, H. Eugene; Huang, Xuqing; Shao, Shuai
in complex network; Computer information security; Cybersecurity
2015
The robustness of complex networks against node failure and malicious attack has been of interest for decades, while most of the research has focused on random attack or hub-targeted attack. In many real-world scenarios, however, attacks are neither random nor hub-targeted, but localized, where a group of neighboring nodes in a network are attacked and fail. In this paper we develop a percolation framework to analytically and numerically study the robustness of complex networks against such localized attack. In particular, we investigate this robustness in Erdős–Rényi networks, random-regular networks, and scale-free networks. Our results provide insight into how to better protect networks, enhance cybersecurity, and facilitate the design of more robust infrastructures.
Journal Article
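The setup in this abstract can be mimicked numerically (an illustrative simulation, not the paper's analytical percolation framework): remove a connected "ball" of neighbors, versus the same number of random nodes, from an Erdős–Rényi graph and compare the surviving giant component. Graph size and attack fraction below are arbitrary choices.

```python
import random
from collections import deque

def er_graph(n, k, seed=0):
    """Erdős–Rényi graph with mean degree k, as an adjacency dict."""
    random.seed(seed)
    p = k / n
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def bfs_order(adj, start):
    """Nodes in breadth-first order from start (a growing 'ball')."""
    seen, order, q = {start}, [], deque([start])
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return order

def giant_fraction(adj, removed):
    """Fraction of all nodes in the largest surviving component."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        size, q = 0, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, size)
    return best / len(adj)

n, k, f = 2000, 4.0, 0.3
adj = er_graph(n, k)
hub = max(adj, key=lambda u: len(adj[u]))        # start inside the giant component
local = set(bfs_order(adj, hub)[: int(f * n)])   # localized: a ball of neighbors
rand = set(random.sample(range(n), int(f * n)))  # random attack of the same size
print("localized:", round(giant_fraction(adj, local), 2))
print("random:   ", round(giant_fraction(adj, rand), 2))
```

For ER networks the two attacks are expected to behave similarly, consistent with the paper's finding that localized attack on ER networks is equivalent to random attack; on other topologies the gap can be large.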
Things We Lost in the Fire
2020
Contestation of international norms has become the new focus of IR norm research. The optimism of the 1990s that fundamental liberal norms would diffuse globally has remained unfulfilled in recent years—even human rights norms have witnessed strong contestation. Time and again, controversy has erupted regarding international norms such as the ban on torture or the Responsibility to Protect. Meanwhile, we know little about how such controversy affects the robustness of norms—whether it contributes to their weakening or to their strengthening. Existing research offers two competing hypotheses: One branch of norm research often conceptualizes contestation as a sign of norm weakening. By contrast, another branch assigns contestation a normative power of its own, which strengthens norms. It does not specify the limits of such normative power, however. In this article, we argue that contestation per se is a poor predictor of norm robustness. The type of contestation a norm faces matters. Contestation can either (1) address the dimension of application of a norm or (2) examine its validity by questioning the righteousness of the claims a norm makes. The article draws on two illustrative case studies of extensively contested norms, the Responsibility to Protect and the ban on commercial whaling. We argue that widespread contestation of the very validity of a norm is likely to lead to a loss of norm robustness. Applicatory contestation, by contrast, can—under specific circumstances—even strengthen it.
Journal Article
Quantization and its breakdown in a Hubbard–Thouless pump
2023
Geometric properties of wave functions can explain the appearance of topological invariants in many condensed-matter and quantum systems [1]. For example, topological invariants describe the plateaux observed in the quantized Hall effect and the pumped charge in its dynamic analogue, the Thouless pump [2-4]. However, the presence of interparticle interactions can affect the topology of a material, invalidating the idealized formulation in terms of Bloch waves. Despite pioneering experiments in different platforms [5-9], the study of topological matter under variations in interparticle interactions has proven challenging [10]. Here we experimentally realize a topological Thouless pump with fully tuneable Hubbard interactions in an optical lattice and observe regimes with robust pumping, as well as an interaction-induced breakdown. We confirm the pump’s robustness against interactions that are smaller than the protecting gap for both repulsive and attractive interactions. Furthermore, we identify that bound pairs of fermions are responsible for quantized transport at strongly attractive interactions. However, for strong repulsive interactions, topological pumping breaks down, but we show how to reinstate it by modifying the pump trajectory. Our results will prove useful for further investigations of interacting topological matter [10], including edge effects [11] and interaction-induced topological phases [12-15].

Thouless pumping is the quantization of charge transport through the adiabatic variation of a system’s parameters. The robustness and breakdown of pumping under variations in interparticle interactions have now been shown with ultracold atoms in an optical lattice.
Journal Article
Analysis of classifiers’ robustness to adversarial perturbations
by Fawzi, Alhussein; Fawzi, Omar; Frossard, Pascal
in Classification; Classifiers; Economic models
2018
The goal of this paper is to analyze the intriguing instability of classifiers to adversarial perturbations (Szegedy et al., in: International conference on learning representations (ICLR), 2014). We provide a theoretical framework for analyzing the robustness of classifiers to adversarial perturbations, and show fundamental upper bounds on the robustness of classifiers. Specifically, we establish a general upper bound on the robustness of classifiers to adversarial perturbations, and then illustrate the obtained upper bound on two practical classes of classifiers, namely the linear and quadratic classifiers. In both cases, our upper bound depends on a distinguishability measure that captures the notion of difficulty of the classification task. Our results for both classes imply that in tasks involving small distinguishability, no classifier in the considered set will be robust to adversarial perturbations, even if good accuracy is achieved. Our theoretical framework moreover suggests that the phenomenon of adversarial instability is due to the low flexibility of classifiers, compared to the difficulty of the classification task (captured mathematically by the distinguishability measure). We further show the existence of a clear distinction between the robustness of a classifier to random noise and its robustness to adversarial perturbations. Specifically, the former is shown to be larger than the latter by a factor proportional to √d (with d being the signal dimension) for linear classifiers. This result gives a theoretical explanation for the discrepancy between the two robustness properties in high dimensional problems, which was empirically observed by Szegedy et al. in the context of neural networks. We finally show experimental results on controlled and real-world data that confirm the theoretical analysis and extend its spirit to more complex classification schemes.
Journal Article
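The gap between random-noise and adversarial robustness for linear classifiers can be seen directly in simulation. A minimal sketch, with all dimensions and numbers illustrative: the worst-case perturbation moves straight toward the decision hyperplane, while isotropic noise of the same norm has only a ~1/√d component in that direction.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000
w = rng.normal(size=d)
w /= np.linalg.norm(w)          # unit normal of the decision hyperplane
x = rng.normal(size=d)          # a test point
margin = abs(w @ x)             # distance from x to the hyperplane

# Adversarial robustness: the smallest perturbation that flips the sign
# has norm exactly `margin` (move straight at the hyperplane along w).
r_adv = margin

def flip_rate(noise_norm, trials=2000):
    """Fraction of isotropic perturbations of a given norm that flip
    the classifier's decision sign(w @ x)."""
    s = np.sign(w @ x)
    n = rng.normal(size=(trials, d))
    n *= noise_norm / np.linalg.norm(n, axis=1, keepdims=True)
    return np.mean(np.sign((x + n) @ w) != s)

print(flip_rate(r_adv))                 # random noise of adversarial size: no flips
print(flip_rate(np.sqrt(d) * r_adv))    # sqrt(d) times larger: flips a sizeable fraction
```

Random noise at the adversarial norm essentially never crosses the boundary; it takes noise about √d times larger to flip decisions with appreciable probability, matching the factor in the abstract.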