Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
82 result(s) for "Linear regression as inverse problem"
Deep learning methods for inverse problems
by Azimifar, Zohreh; Sabzi, Rasool; Kamyab, Shima
in 3D reconstruction as inverse problem; Artificial Intelligence; Categories
2022
In this paper we investigate a variety of deep learning strategies for solving inverse problems. We classify existing deep learning solutions for inverse problems into three categories: Direct Mapping, Data Consistency Optimizer, and Deep Regularizer. We choose a sample of each inverse problem type so as to compare the robustness of the three categories, and report a statistical analysis of their differences. We perform extensive experiments, in the presence of noise and outliers, on the classic problem of linear regression and on three well-known inverse problems in computer vision, namely image denoising, 3D human face inverse rendering, and object tracking, selected as representative prototypes for each class of inverse problems. The overall results and the statistical analyses show that the robustness of each solution category depends on the type of inverse problem domain, and specifically on whether or not the problem includes measurement outliers. Based on our experimental results, we conclude by proposing the most robust solution category for each inverse problem class.
Journal Article
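To make the "linear regression as an inverse problem" framing in the abstract above concrete, here is a minimal sketch (not taken from the paper; the operator, noise level, and penalty are illustrative choices) comparing a plain least-squares inverse with a Tikhonov/ridge-regularized one on an ill-conditioned design.

# Minimal sketch: recover x from noisy y = A x + e when the forward operator A
# is ill-conditioned; compare the direct least-squares inverse with a
# Tikhonov/ridge-regularized inverse.  All settings here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 10
U = np.linalg.qr(rng.normal(size=(n, p)))[0]
V = np.linalg.qr(rng.normal(size=(p, p)))[0]
A = U @ np.diag(np.logspace(0, -4, p)) @ V.T   # ill-conditioned forward operator
x_true = rng.normal(size=p)
y = A @ x_true + 1e-3 * rng.normal(size=n)      # noisy measurements

# Direct inverse: ordinary least squares (amplifies noise on small singular values).
x_ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# Regularized inverse: Tikhonov/ridge solution.
alpha = 1e-4
x_ridge = np.linalg.solve(A.T @ A + alpha * np.eye(p), A.T @ y)

print("OLS error:  ", np.linalg.norm(x_ols - x_true))
print("Ridge error:", np.linalg.norm(x_ridge - x_true))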
A Sequential Solution to the Inverse Linear Regression Problem
1974
In this note we apply the sequential theory developed by Chow and Robbins [1] and Gleser [3], [4] to the inverse linear regression problem. A two-stage sequential procedure has been proposed for the construction of a fixed-width confidence interval for x (an unknown parameter). It is shown that the limiting probabilities of "correct decision" are equal to P* (pre-assigned).
Journal Article
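For readers unfamiliar with the inverse (calibration) regression setting, a small illustrative sketch follows; it shows only the classical point estimate of the unknown x, not the paper's two-stage sequential interval, and the data are synthetic.

# Sketch of the classical point-estimate side of inverse linear regression:
# fit y = a + b*x on calibration data, then invert to estimate the unknown x0
# that produced a new observation y0.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 30)
y = 2.0 + 0.8 * x + rng.normal(scale=0.3, size=x.size)

b, a = np.polyfit(x, y, deg=1)        # slope b, intercept a
y0 = 6.5                              # new response with unknown x0
x0_hat = (y0 - a) / b                 # classical calibration estimate
print("estimated x0:", x0_hat)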
Methodology and Convergence Rates for Functional Linear Regression
2007
In functional linear regression, the slope "parameter" is a function. Therefore, in a nonparametric context, it is determined by an infinite number of unknowns. Its estimation involves solving an ill-posed problem and has points of contact with a range of methodologies, including statistical smoothing and deconvolution. The standard approach to estimating the slope function is based explicitly on functional principal components analysis and, consequently, on spectral decomposition in terms of eigenvalues and eigenfunctions. We discuss this approach in detail and show that in certain circumstances, optimal convergence rates are achieved by the PCA technique. An alternative approach based on quadratic regularisation is suggested and shown to have advantages from some points of view.
Journal Article
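A minimal sketch of the FPCA-based slope estimator discussed above, on simulated curves; the grid, truncation level, and true slope function are illustrative assumptions, not taken from the paper.

# Sketch: expand the covariate curves in their empirical principal components and
# regularize by truncating the slope expansion to the first m components.
import numpy as np

rng = np.random.default_rng(2)
n, T = 200, 100
t = np.linspace(0.0, 1.0, T)
dt = t[1] - t[0]
X = rng.normal(size=(n, 4)) @ np.vstack([np.sin((k + 1) * np.pi * t) for k in range(4)])
b_true = np.sin(np.pi * t) + 0.5 * np.sin(2 * np.pi * t)        # slope function to recover
Y = (X * b_true).sum(axis=1) * dt + 0.1 * rng.normal(size=n)    # Y_i ~ integral of X_i(t) b(t) dt + noise

Xc, Yc = X - X.mean(axis=0), Y - Y.mean()
C = Xc.T @ Xc / n                          # discretized covariance kernel
evals, evecs = np.linalg.eigh(C * dt)      # eigenpairs of the covariance operator
evals, evecs = evals[::-1], evecs[:, ::-1] # descending order
phi = evecs / np.sqrt(dt)                  # eigenfunctions with unit L2 norm

m = 3                                      # truncation level = the regularization
scores = (Xc @ phi[:, :m]) * dt            # principal component scores <X_i, phi_j>
coef = (scores * Yc[:, None]).mean(axis=0) / evals[:m]
b_hat = phi[:, :m] @ coef                  # estimated slope function on the grid
print("RMSE of slope estimate:", np.sqrt(((b_hat - b_true) ** 2).mean()))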
ESTIMATION IN FUNCTIONAL LINEAR QUANTILE REGRESSION
2012
This paper studies estimation in functional linear quantile regression in which the dependent variable is scalar while the covariate is a function, and the conditional quantile for each fixed quantile index is modeled as a linear functional of the covariate. Here we suppose that covariates are discretely observed and that sampling points may differ across subjects, with the number of measurements per subject increasing with the sample size. Also, we allow the quantile index to vary over a given subset of the open unit interval, so the slope function is a function of two variables: (typically) time and quantile index. Likewise, the conditional quantile function is a function of the quantile index and the covariate. We consider an estimator for the slope function based on the principal component basis. An estimator for the conditional quantile function is obtained by a plug-in method. Since the so-constructed plug-in estimator does not necessarily satisfy the monotonicity constraint with respect to the quantile index, we also consider a class of monotonized estimators for the conditional quantile function. We establish rates of convergence for these estimators under suitable norms, showing that these rates are optimal in a minimax sense under some smoothness assumptions on the covariance kernel of the covariate and the slope function. Empirical choice of the cutoff level is studied by using simulations.
Journal Article
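A small illustrative sketch of the plug-in idea: regress Y on a few principal-component scores by minimizing the pinball (check) loss. The scores below are simulated stand-ins for estimated FPCA scores, and the quantile level is an arbitrary choice.

# Sketch: linear quantile regression of Y on (simulated) FPCA scores via the
# pinball loss; not the paper's procedure, just the plug-in idea in miniature.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n, m, tau = 300, 3, 0.75
scores = rng.normal(size=(n, m))                       # stand-in for estimated scores
y = scores @ np.array([1.0, -0.5, 0.2]) + rng.standard_t(df=4, size=n)
Z = np.c_[np.ones(n), scores]                          # add an intercept column

def pinball(b):
    r = y - Z @ b
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

b_hat = minimize(pinball, x0=np.zeros(m + 1), method="Nelder-Mead").x
print("intercept and slope coefficients at tau=0.75:", b_hat)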
Solving Stochastic Inverse Problems for Property–Structure Linkages Using Data-Consistent Inversion and Machine Learning
by Wildey, Tim; Tran, Anh
in Alloys; Aluminum; Augmenting Physics-based Models in ICME with Machine Learning and Uncertainty Quantification
2021
Determining process–structure–property linkages is one of the key objectives in material science, and uncertainty quantification plays a critical role in understanding both process–structure and structure–property linkages. In this work, we seek to learn a distribution of microstructure parameters that are consistent in the sense that the forward propagation of this distribution through a crystal plasticity finite element model matches a target distribution on materials properties. This stochastic inversion formulation infers a distribution of acceptable/consistent microstructures, as opposed to a deterministic solution, which expands the range of feasible designs in a probabilistic manner. To solve this stochastic inverse problem, we employ a recently developed uncertainty quantification framework based on push-forward probability measures, which combines techniques from measure theory and Bayes’ rule to define a unique and numerically stable solution. This approach requires making an initial prediction using an initial guess for the distribution on model inputs and solving a stochastic forward problem. To reduce the computational burden in solving both stochastic forward and stochastic inverse problems, we combine this approach with a machine learning Bayesian regression model based on Gaussian processes and demonstrate the proposed methodology on two representative case studies in structure–property linkages.
Journal Article
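A toy sketch of the data-consistent (push-forward) inversion idea the abstract describes, using a scalar forward map and rejection sampling; the forward model, target density, and sample sizes are illustrative and do not reproduce the paper's crystal-plasticity workflow.

# Sketch: reweight/accept initial samples so that pushing them through the forward
# map Q reproduces a target ("observed") density on the output quantity.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(3)

def Q(lam):                                        # toy forward model: input -> property
    return lam ** 2 + 0.5 * lam

lam_init = rng.uniform(-1.0, 2.0, size=20000)      # samples from the initial density
q_init = Q(lam_init)

pred_pdf = gaussian_kde(q_init)                    # push-forward of the initial density
obs_pdf = norm(loc=1.5, scale=0.2).pdf             # target density on the property

# Accept sample i with probability proportional to obs(Q(lam_i)) / pred(Q(lam_i)).
ratio = obs_pdf(q_init) / pred_pdf(q_init)
accept = rng.uniform(size=lam_init.size) < ratio / ratio.max()
lam_consistent = lam_init[accept]

print("accepted samples:", lam_consistent.size)
print("push-forward mean/std:", Q(lam_consistent).mean(), Q(lam_consistent).std())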
Matrices, Moments and Quadrature with Applications
by Golub, Gene H; Meurant, Gérard
in Algorithm; Basis (linear algebra); Biconjugate gradient method
2009, 2010
This computationally oriented book describes and explains the mathematical relationships among matrices, moments, orthogonal polynomials, quadrature rules, and the Lanczos and conjugate gradient algorithms. The book bridges different mathematical areas to obtain algorithms to estimate bilinear forms involving two vectors and a function of the matrix. The first part of the book provides the necessary mathematical background and explains the theory. The second part describes the applications and gives numerical examples of the algorithms and techniques developed in the first part.
Applications addressed in the book include computing elements of functions of matrices; obtaining estimates of the error norm in iterative methods for solving linear systems and computing parameters in least squares and total least squares; and solving ill-posed problems using Tikhonov regularization.
This book will interest researchers in numerical linear algebra and matrix computations, as well as scientists and engineers working on problems involving computation of bilinear forms.
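A short sketch of the book's central device: estimating a bilinear form u^T f(A) u by running a few Lanczos steps and evaluating f on the resulting small tridiagonal matrix, which amounts to a Gauss quadrature rule for the spectral measure. The test matrix, the choice f = inverse, and the number of steps are illustrative.

# Sketch: estimate u^T A^{-1} u from k Lanczos steps (no reorthogonalization).
import numpy as np

rng = np.random.default_rng(4)
n, k = 200, 8
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)            # symmetric positive definite test matrix
u = rng.normal(size=n)

alphas, betas = [], []
q_prev, q = np.zeros(n), u / np.linalg.norm(u)
beta = 0.0
for _ in range(k):
    w = A @ q - beta * q_prev
    alpha = q @ w
    w -= alpha * q
    beta = np.linalg.norm(w)
    alphas.append(alpha)
    betas.append(beta)
    q_prev, q = q, w / beta

T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
e1 = np.zeros(k)
e1[0] = 1.0
estimate = (u @ u) * (e1 @ np.linalg.solve(T, e1))   # Gauss-quadrature estimate of u^T A^{-1} u
exact = u @ np.linalg.solve(A, u)
print("estimate:", estimate, " exact:", exact)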
ESTIMATION OF NONPARAMETRIC CONDITIONAL MOMENT MODELS WITH POSSIBLY NONSMOOTH GENERALIZED RESIDUALS
2012
This paper studies nonparametric estimation of conditional moment restrictions in which the generalized residual functions can be nonsmooth in the unknown functions of endogenous variables. This is a nonparametric nonlinear instrumental variables (IV) problem. We propose a class of penalized sieve minimum distance (PSMD) estimators, which are minimizers of a penalized empirical minimum distance criterion over a collection of sieve spaces that are dense in the infinite-dimensional function parameter space. Some of the PSMD procedures use slowly growing finite-dimensional sieves with flexible penalties or without any penalty; others use large dimensional sieves with lower semicompact and/or convex penalties. We establish their consistency and the convergence rates in Banach space norms (such as a sup-norm or a root mean squared norm), allowing for possibly noncompact infinite-dimensional parameter spaces. For both mildly and severely ill-posed nonlinear inverse problems, our convergence rates in Hilbert space norms (such as a root mean squared norm) achieve the known minimax optimal rate for the nonparametric mean IV regression. We illustrate the theory with a nonparametric additive quantile IV regression. We present a simulation study and an empirical application of estimating nonparametric quantile IV Engel curves.
Journal Article
Computer model calibration with confidence and consistency
2019
The paper proposes and examines a calibration method for inexact models. The method produces a confidence set on the parameters that includes the best parameter with a desired probability under any sample size. Additionally, this confidence set is shown to be consistent in that it excludes suboptimal parameters in large sample environments. The method works and the results hold with few assumptions; the ideas are maintained even with discrete input spaces or parameter spaces. Computation of the confidence sets and approximate confidence sets is discussed. The performance is illustrated in a simulation example as well as two real data examples.
Journal Article
Structural Nonparametric Cointegrating Regression
2009
Nonparametric estimation of a structural cointegrating regression model is studied. As in the standard linear cointegrating regression model, the regressor and the dependent variable are jointly dependent and contemporaneously correlated. In nonparametric estimation problems, joint dependence is known to be a major complication that affects identification, induces bias in conventional kernel estimates, and frequently leads to ill-posed inverse problems. In functional cointegrating regressions where the regressor is an integrated or near-integrated time series, it is shown here that inverse and ill-posed inverse problems do not arise. Instead, simple nonparametric kernel estimation of a structural nonparametric cointegrating regression is consistent and the limit distribution theory is mixed normal, giving straightforward asymptotics that are useable in practical work. It is further shown that use of augmented regression, as is common in linear cointegration modeling to address endogeneity, does not lead to bias reduction in nonparametric regression, but there is an asymptotic gain in variance reduction. The results provide a convenient basis for inference in structural nonparametric regression with nonstationary time series when there is a single integrated or near-integrated regressor. The methods may be applied to a range of empirical models where functional estimation of cointegrating relations is required.
Journal Article
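A minimal sketch of the simple kernel estimator the abstract refers to: a Nadaraya-Watson regression of y_t on a simulated random-walk (integrated) regressor. The structural function, bandwidth, and sample size are illustrative.

# Sketch: Nadaraya-Watson estimation with an I(1) regressor.
import numpy as np

rng = np.random.default_rng(5)
T_len = 2000
x = np.cumsum(rng.normal(size=T_len))             # integrated (random-walk) regressor
f = lambda v: np.sin(0.5 * v)                     # structural function
y = f(x) + 0.3 * rng.normal(size=T_len)

def nadaraya_watson(x0, x, y, h):
    """Gaussian-kernel local average of y at the point x0 with bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return (w * y).sum() / w.sum()

grid = np.linspace(x.min(), x.max(), 50)
f_hat = np.array([nadaraya_watson(g, x, y, h=1.0) for g in grid])
print("max abs error on grid:", np.abs(f_hat - f(grid)).max())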
Nonparametric Instrumental Regression
2011
The focus of this paper is the nonparametric estimation of an instrumental regression function φ defined by conditional moment restrictions that stem from a structural econometric model E[Y − φ(Z) | W] = 0 and involve endogenous variables Y and Z and instruments W. The function φ is the solution of an ill-posed inverse problem and we propose an estimation procedure based on Tikhonov regularization. The paper analyzes identification and overidentification of this model, and presents asymptotic properties of the estimated nonparametric instrumental regression function.
Journal Article
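A finite-dimensional sketch of the Tikhonov idea in the abstract: approximate the unknown function with a short polynomial series, form sample instrument moments, and solve the resulting ill-conditioned system with a ridge/Tikhonov penalty. The data-generating process, basis sizes, and penalty are illustrative, and this series shortcut stands in for the paper's operator formulation.

# Sketch: phi(z) ~ B(z) @ beta, moments E[(Y - phi(Z)) a(W)] = 0, solved with a
# Tikhonov (ridge) penalty on the ill-conditioned moment system.
import numpy as np

rng = np.random.default_rng(6)
n = 3000
w = rng.normal(size=n)                              # instrument
v = rng.normal(size=n)                              # unobserved confounder
z = 0.8 * w + 0.6 * v + 0.2 * rng.normal(size=n)    # endogenous regressor
phi = lambda s: np.sin(s)                           # structural function
y = phi(z) + v + 0.1 * rng.normal(size=n)           # endogeneity enters through v

def basis(x, d):
    return np.vstack([x ** j for j in range(d)]).T

B, A = basis(z, 6), basis(w, 6)                     # series for phi and for instruments
G = A.T @ B / n                                     # sample moments E[a(W) B(Z)^T]
r = A.T @ y / n                                     # sample moments E[a(W) Y]

alpha = 1e-3                                        # Tikhonov penalty
beta = np.linalg.solve(G.T @ G + alpha * np.eye(B.shape[1]), G.T @ r)

grid = np.linspace(-2, 2, 9)
print(np.c_[phi(grid), basis(grid, 6) @ beta])      # true vs. estimated phi on a grid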