Catalogue Search | MBRL
Explore the vast range of titles available.
59 result(s) for "Sparsity principle"
Parsimonious Tensor Response Regression
2017
Aiming at abundant scientific and engineering data with not only high dimensionality but also complex structure, we study the regression problem with a multidimensional array (tensor) response and a vector predictor. Applications include, among others, comparing tensor images across groups after adjusting for additional covariates, which is of central interest in neuroimaging analysis. We propose parsimonious tensor response regression adopting a generalized sparsity principle. It models all voxels of the tensor response jointly, while accounting for the inherent structural information among the voxels. It effectively reduces the number of free parameters, leading to feasible computation and improved interpretation. We achieve model estimation through a nascent technique called the envelope method, which identifies the immaterial information and focuses the estimation based upon the material information in the tensor response. We demonstrate that the resulting estimator is asymptotically efficient, and it enjoys a competitive finite sample performance. We also illustrate the new method on two real neuroimaging studies. Supplementary materials for this article are available online.
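For contrast with the joint modelling described above, here is a minimal numpy sketch of the naive voxel-wise baseline that a parsimonious tensor response regression improves upon: each voxel of the tensor response is regressed separately on the vector predictor, ignoring the structure among voxels. All dimensions and data are illustrative; this is not the authors' envelope estimator.

```python
import numpy as np

# Illustrative dimensions: n subjects, p predictors, an 8x8x8 tensor response per subject.
rng = np.random.default_rng(0)
n, p, dims = 100, 3, (8, 8, 8)
X = rng.normal(size=(n, p))                 # vector predictors
Y = rng.normal(size=(n, *dims))             # tensor responses

# Naive voxel-wise OLS: unfold the tensor response and fit each voxel separately.
Y_mat = Y.reshape(n, -1)                    # n x (8*8*8)
beta_hat, *_ = np.linalg.lstsq(X, Y_mat, rcond=None)
B_voxelwise = beta_hat.reshape(p, *dims)    # coefficient tensor, one 8x8x8 slice per predictor

print(B_voxelwise.shape)                    # (3, 8, 8, 8)
```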
Journal Article
Tensor Envelope Partial Least-Squares Regression
2017
Partial least squares (PLS) is a prominent solution for dimension reduction and high-dimensional regressions. Recent prevalence of multidimensional tensor data has led to several tensor versions of the PLS algorithms. However, none offers a population model and interpretation, and statistical properties of the associated parameters remain intractable. In this article, we first propose a new tensor partial least-squares algorithm, then establish the corresponding population interpretation. This population investigation allows us to gain new insight on how the PLS achieves effective dimension reduction, to build connection with the notion of sufficient dimension reduction, and to obtain the asymptotic consistency of the PLS estimator. We compare our method, both analytically and numerically, with some alternative solutions. We also illustrate the efficacy of the new method on simulations and two neuroimaging data analyses. Supplementary materials for this article are available online.
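For reference, a minimal scikit-learn sketch of the classical (vector) PLS that tensor PLS variants build on, applied here to an unfolded tensor predictor; the unfolding discards exactly the tensor structure that tensor PLS aims to preserve. Dimensions and data are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Illustrative data: n samples of an 8x8 tensor (matrix) predictor and a scalar response.
rng = np.random.default_rng(1)
n = 80
X_tensor = rng.normal(size=(n, 8, 8))
y = rng.normal(size=n)

# Classical PLS on the vectorised tensor -- ignores the tensor structure entirely.
X = X_tensor.reshape(n, -1)
pls = PLSRegression(n_components=3)
pls.fit(X, y)
print(pls.predict(X).shape)                 # (80, 1)
```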
Journal Article
Full Factorial Experiments at Two Levels
by
Hamada Michael S
,
Wu C. F. Jeff
in
dispersion models
,
effect heredity principle
,
effect hierarchy principle
2021
In many scientific investigations, the interest lies in the study of effects of two or more factors simultaneously. Factorial designs are most commonly used for this type of investigation. This chapter considers the important class of factorial designs for factors at two levels. It also considers the estimation and testing of factorial effects for location and dispersion models for replicated and unreplicated experiments. The chapter discusses optimal blocking schemes for full factorial designs. It describes how the factorial effects can be computed using regression analysis. The chapter also discusses three fundamental principles: effect hierarchy principle, effect sparsity principle, and effect heredity principle. These principles are often used to justify the development of factorial design theory and data analysis strategies. The chapter also describes a graphical method that uses the normal probability plot for assessing the normality assumption.
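As an illustration of computing factorial effects through regression, a short numpy sketch for a 2^3 full factorial in factors A, B, C; the response values are made-up numbers, not taken from the chapter.

```python
import itertools
import numpy as np

# A 2^3 full factorial in factors A, B, C, coded -1/+1 (8 runs).
design = np.array(list(itertools.product([-1, 1], repeat=3)))
A, B, C = design.T

# Hypothetical responses for the 8 runs (illustrative numbers only).
y = np.array([45, 71, 48, 65, 68, 60, 80, 65], dtype=float)

# Model matrix with all main effects and interactions.
X = np.column_stack([np.ones(8), A, B, C, A*B, A*C, B*C, A*B*C])
labels = ["I", "A", "B", "C", "AB", "AC", "BC", "ABC"]

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(labels, beta):
    # In -1/+1 coding, a factorial effect equals twice the regression coefficient.
    print(f"{name:>3}: coef={b:7.3f}  effect={2*b:7.3f}")
```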
Book Chapter
Fundamentals of Experimental Design
by
Freeman, Laura
,
Rigdon, Steven
,
Pan, Rong
in
2 k‐p fractional factorial design
,
analysis of variance
,
breakthrough innovation
2022
Designed experiments are a key technology for innovation. Both breakthrough innovation and incremental innovation activities can benefit from the effective use of designed experiments. Sir Ronald A. Fisher systematically introduced statistical thinking and principles into designing experimental investigations, including the factorial design concept and the analysis of variance, which remains today the primary method for analyzing data from designed experiments. Fractional factorial designs are usually effective in factor screening because of the sparsity of effects principle. The principle states that only a fraction of the potential factors of interest in any system are actually important. Screening experiments are usually performed in the early stages of a project when many of the factors initially considered likely have little or no effect on the response. The chapter shows how to find the alias relationships in a 2^(k−p) fractional factorial design by use of the complete defining relation.
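A small sketch of deriving alias relationships from a complete defining relation, here for a hypothetical 2^(4−1) design with I = ABCD (the specific design is illustrative, not taken from the chapter).

```python
from itertools import combinations

# Hypothetical 2^(4-1) design in factors A, B, C, D with defining relation I = ABCD.
factors = "ABCD"
defining_word = set("ABCD")

def times(word, other):
    """Multiply two effect 'words'; squared letters cancel (symmetric difference)."""
    return "".join(sorted(set(word) ^ other)) or "I"

# Every effect is aliased with its product with the defining word.
effects = ["".join(c) for r in range(1, 5) for c in combinations(factors, r)]
for e in effects:
    print(f"{e:>4} = {times(e, defining_word)}")
# e.g. A = BCD, AB = CD, ... : main effects are aliased with three-factor interactions.
```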
Book Chapter
Sparse Estimation by Exponential Weighting
2012
Consider a regression model with fixed design and Gaussian noise where the regression function can potentially be well approximated by a function that admits a sparse representation in a given dictionary. This paper resorts to exponential weights to exploit this underlying sparsity by implementing the principle of sparsity pattern aggregation. This model-selection take on sparse estimation allows us to derive sparsity oracle inequalities in several popular frameworks, including ordinary sparsity, fused sparsity and group sparsity. One striking aspect of these theoretical results is that they hold under no condition on the dictionary. Moreover, we describe an efficient implementation of the sparsity pattern aggregation principle that compares favorably to state-of-the-art procedures on some basic numerical examples.
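A toy sketch of the sparsity pattern aggregation idea: fit least squares on every support, then aggregate the fits with exponential weights that penalise poor fit and large supports. The temperature and prior used here are illustrative simplifications, not the paper's exact choices.

```python
import itertools
import numpy as np

# Toy regression with a sparse true coefficient vector.
rng = np.random.default_rng(2)
n, p, sigma = 50, 6, 1.0
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0, 0, -1.5, 0, 0])
y = X @ beta_true + sigma * rng.normal(size=n)

weights, fits = [], []
for k in range(p + 1):
    for support in itertools.combinations(range(p), k):
        S = list(support)
        beta = np.zeros(p)
        if S:
            beta[S], *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        prior = np.exp(-len(S))                       # crude prior favouring sparse patterns
        weights.append(prior * np.exp(-rss / (4 * sigma**2)))
        fits.append(beta)

weights = np.array(weights) / np.sum(weights)
beta_agg = np.sum(weights[:, None] * np.array(fits), axis=0)
print(np.round(beta_agg, 2))                          # close to the sparse truth
```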
Journal Article
First return, then explore
2021
Reinforcement learning promises to solve complex sequential-decision problems autonomously by specifying a high-level reward function only. However, reinforcement learning algorithms struggle when, as is often the case, simple and intuitive rewards provide sparse[1] and deceptive[2] feedback. Avoiding these pitfalls requires a thorough exploration of the environment, but creating algorithms that can do so remains one of the central challenges of the field. Here we hypothesize that the main impediment to effective exploration originates from algorithms forgetting how to reach previously visited states (detachment) and failing to first return to a state before exploring from it (derailment). We introduce Go-Explore, a family of algorithms that addresses these two challenges directly through the simple principles of explicitly ‘remembering’ promising states and returning to such states before intentionally exploring. Go-Explore solves all previously unsolved Atari games and surpasses the state of the art on all hard-exploration games[1], with orders-of-magnitude improvements on the grand challenges of Montezuma’s Revenge and Pitfall. We also demonstrate the practical potential of Go-Explore on a sparse-reward pick-and-place robotics task. Additionally, we show that adding a goal-conditioned policy can further improve Go-Explore’s exploration efficiency and enable it to handle stochasticity throughout training. The substantial performance gains from Go-Explore suggest that the simple principles of remembering states, returning to them, and exploring from them are a powerful and general approach to exploration—an insight that may prove critical to the creation of truly intelligent learning agents.
A reinforcement learning algorithm that explicitly remembers promising states and returns to them as a basis for further exploration solves all as-yet-unsolved Atari games and out-performs previous algorithms on Montezuma’s Revenge and Pitfall.
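A minimal sketch of the archive-based loop the abstract describes: states are binned into coarse "cells", the best trajectory reaching each cell is remembered, and every exploration phase first returns to an archived cell before exploring. The `env` interface (reset/step/snapshot/restore/sample_action) and the `cell_of` binning function are hypothetical placeholders; this is not the published implementation.

```python
import random

def go_explore(env, cell_of, iterations=1000, explore_steps=30):
    state = env.reset()
    # Archive: cell -> (action sequence, cumulative score, restorable snapshot).
    archive = {cell_of(state): ([], 0.0, env.snapshot())}

    for _ in range(iterations):
        # 1. Select a promising archived cell and *return* to it by restoring its
        #    snapshot, so exploration never derails before reaching the frontier.
        actions, score, snapshot = random.choice(list(archive.values()))
        env.restore(snapshot)

        # 2. Explore from there (here: purely random actions).
        for _ in range(explore_steps):
            action = env.sample_action()
            state, reward, done = env.step(action)
            actions, score = actions + [action], score + reward
            cell = cell_of(state)
            # 3. Remember new cells, or better routes to known cells (avoids detachment).
            if cell not in archive or score > archive[cell][1]:
                archive[cell] = (actions, score, env.snapshot())
            if done:
                break
    return archive
```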
Journal Article
Infrared Small Target Detection Based on Partial Sum of the Tensor Nuclear Norm
2019
Excellent performance, real time and strong robustness are three vital requirements for infrared small target detection. Unfortunately, many current state-of-the-art methods merely achieve one of the expectations when coping with highly complex scenes. In fact, a common problem is that real-time processing and great detection ability are difficult to coordinate. Therefore, to address this issue, a robust infrared patch-tensor model for detecting an infrared small target is proposed in this paper. On the basis of the infrared patch-tensor (IPT) model, a novel nonconvex low-rank constraint, the partial sum of the tensor nuclear norm (PSTNN) combined with a weighted ℓ1 norm, was employed to efficiently suppress the background and preserve the target. Because the earlier RIPT model tends to over-shrink the target, possibly to the point of it disappearing, an improved local prior map that simultaneously encodes target-related and background-related information was introduced into the model. With the help of a reweighted scheme for enhancing sparsity and a high-efficiency version of the tensor singular value decomposition (t-SVD), the total algorithm complexity and computation time can be reduced dramatically. Then, the decomposition of the target and background is transformed into a tensor robust principal component analysis (TRPCA) problem, which can be efficiently solved by the alternating direction method of multipliers (ADMM). A series of experiments substantiate the superiority of the proposed method over state-of-the-art baselines.
Journal Article
Constructing networks by filtering correlation matrices
2019
Network analysis has been applied to various correlation matrix data. Thresholding on the value of the pairwise correlation is probably the most straightforward and common method to create a network from a correlation matrix. However, there have been criticisms on this thresholding approach such as an inability to filter out spurious correlations, which have led to proposals of alternative methods to overcome some of the problems. We propose a method to create networks from correlation matrices based on optimization with regularization, where we lay an edge between each pair of nodes if and only if the edge is unexpected from a null model. The proposed algorithm is advantageous in that it can be combined with different types of null models. Moreover, the algorithm can select the most plausible null model from a set of candidate null models using a model selection criterion. For three economic datasets, we find that the configuration model for correlation matrices is often preferred to standard null models. For country-level product export data, the present method better predicts main products exported from countries than sample correlation matrices do.
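For context, a minimal sketch of the plain thresholding approach that the paper criticises and improves on: keep an edge wherever the absolute pairwise correlation exceeds a cutoff. Data and threshold are illustrative.

```python
import numpy as np
import networkx as nx

# Illustrative data: time series for 6 nodes; build a network by thresholding correlations.
rng = np.random.default_rng(3)
ts = rng.normal(size=(200, 6))
corr = np.corrcoef(ts, rowvar=False)

threshold = 0.2
adj = (np.abs(corr) >= threshold) & ~np.eye(6, dtype=bool)
G = nx.from_numpy_array(adj.astype(int))
print(G.number_of_edges())
```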
Journal Article
Rank-Sparsity Incoherence for Matrix Decomposition
by
Chandrasekaran, Venkat
,
Sanghavi, Sujay
,
Parrilo, Pablo A.
in
Algebra
,
Computer engineering
,
Convex analysis
2011
Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification and is intractable to solve in general. In this paper we consider a convex optimization formulation for splitting the specified matrix into its components by minimizing a linear combination of the ℓ1 norm and the nuclear norm of the components. We develop a notion of rank-sparsity incoherence, expressed as an uncertainty principle between the sparsity pattern of a matrix and its row and column spaces, and we use it to characterize both fundamental identifiability as well as (deterministic) sufficient conditions for exact recovery. Our analysis is geometric in nature with the tangent spaces to the algebraic varieties of sparse and low-rank matrices playing a prominent role. When the sparse and low-rank matrices are drawn from certain natural random ensembles, we show that the sufficient conditions for exact recovery are satisfied with high probability. We conclude with simulation results on synthetic matrix decomposition problems.
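A minimal numpy sketch of the kind of convex program analysed here, minimising ||L||_* + γ||S||_1 subject to L + S = M, solved with a basic ADMM-style iteration; the penalty, step size and iteration count are illustrative choices, not the authors'.

```python
import numpy as np

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def sparse_lowrank_split(M, gamma=0.2, mu=1.0, iters=200):
    """Minimise ||L||_* + gamma*||S||_1 subject to L + S = M (basic ADMM sketch)."""
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, gamma / mu)
        Y = Y + mu * (M - L - S)
    return L, S

# Toy example: a rank-2 matrix plus a few large sparse corruptions.
rng = np.random.default_rng(4)
low_rank = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))
sparse = np.zeros((30, 30))
sparse[rng.integers(0, 30, 20), rng.integers(0, 30, 20)] = 10.0
L_hat, S_hat = sparse_lowrank_split(low_rank + sparse)
print(np.linalg.matrix_rank(L_hat, tol=1e-3), int(np.sum(np.abs(S_hat) > 1e-3)))
```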
Journal Article
GUP corrected black holes with cloud of string
by
Al-Badawi, Ahmad
,
Shaymatov, Sanjar
,
Jha, Sohan Kumar
in
Analysis
,
Astronomy
,
Astrophysics and Cosmology
2024
We investigate shadows, the deflection angle, quasinormal modes (QNMs), and the sparsity of Hawking radiation for the Schwarzschild string-cloud black hole solution after applying the quantum corrections required by the Generalised Uncertainty Principle (GUP). First, we explore the shadow's behaviour in the presence of a string cloud using three alternative GUP frameworks: linear-quadratic GUP (LQGUP), quadratic GUP (QGUP), and linear GUP. We then use the weak-field limit approach to determine the effect of the string-cloud and GUP parameters on the light deflection angle, with the computation based on the Gauss–Bonnet theorem. Next, to compute the quasinormal modes of Schwarzschild string clouds incorporating the GUP quantum correction, we determine the effective potentials generated by perturbing scalar, electromagnetic and fermionic fields, using the sixth-order WKB approach in conjunction with the appropriate numerical analysis. Our investigation indicates that the string and linear GUP parameters have distinct effects on the QNMs. We find that the greybody factor increases due to the presence of the string cloud, while the linear GUP parameter shows the opposite effect. We then examine the radiation spectrum and sparsity of the GUP-corrected black hole within the cloud-of-strings framework, which provides additional information about the thermal radiation released by black holes. Finally, our inquiries reveal that the influence of the string parameter and the quadratic GUP parameter on various astrophysical observables is comparable, whereas the impact of the linear GUP parameter is opposite.
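For orientation, the quadratic GUP mentioned in the abstract is commonly written as the modified uncertainty relation below; conventions vary, and the linear and linear-quadratic variants add a term linear in Δp, so the exact form adopted by the authors may differ.

```latex
% Quadratic GUP: the Heisenberg relation acquires a momentum-squared correction,
% with \beta a small deformation parameter (often expressed via the Planck length).
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^{2}\right)
```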
Journal Article