Catalogue Search | MBRL
440 result(s) for "reproducing kernel Hilbert space"
NONPARAMETRIC STOCHASTIC APPROXIMATION WITH LARGE STEP-SIZES
2016
We consider the random-design least-squares regression problem within the reproducing kernel Hilbert space (RKHS) framework. Given a stream of independent and identically distributed input/output data, we aim to learn a regression function within an RKHS ℋ, even if the optimal predictor (i.e., the conditional expectation) is not in ℋ. In a stochastic approximation framework where the estimator is updated after each observation, we show that the averaged unregularized least-mean-square algorithm (a form of stochastic gradient descent), given a sufficiently large step-size, attains optimal rates of convergence for a variety of regimes for the smoothness of the optimal prediction function and of the functions in ℋ. Our results also apply in the usual finite-dimensional setting of parametric least-squares regression, showing the adaptivity of our estimator to the spectral decay of the covariance matrix of the covariates.
Journal Article
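To make the recursion concrete, here is a minimal sketch of the averaged unregularized kernel least-mean-squares update on a synthetic stream; the Gaussian kernel, bandwidth, step-size, and data model are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def k(x, y, bw=0.5):
    """Gaussian kernel (illustrative choice)."""
    return np.exp(-(x - y) ** 2 / (2 * bw ** 2))

# Synthetic i.i.d. stream: y = sin(2*pi*x) + noise
N = 500
X = rng.uniform(0, 1, N)
Y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(N)

gamma = 0.5          # constant ("large") step-size
coef = np.zeros(N)   # f_n = sum_i coef[i] * k(X[i], .)

for n in range(N):
    pred = coef[:n] @ k(X[:n], X[n])    # f_{n-1}(x_n)
    coef[n] = gamma * (Y[n] - pred)     # LMS update adds one kernel atom

# Polyak-Ruppert averaging: the atom created at step i appears in
# iterates i+1..N, so its averaged weight is coef[i] * (N - i) / N.
avg_coef = coef * (np.arange(N, 0, -1) / N)

x_test = np.linspace(0, 1, 5)
f_avg = np.array([avg_coef @ k(X, xt) for xt in x_test])
print(np.c_[x_test, f_avg, np.sin(2 * np.pi * x_test)])
```

Averaging is what permits the large constant step-size here: the individual iterates oscillate, while their running mean converges.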
JUST INTERPOLATE
2020
In the absence of explicit regularization, Kernel “Ridgeless” Regression with nonlinear kernels has the potential to fit the training data perfectly. It has been observed empirically, however, that such interpolated solutions can still generalize well on test data. We isolate a phenomenon of implicit regularization for minimum-norm interpolated solutions which is due to a combination of high dimensionality of the input data, curvature of the kernel function and favorable geometric properties of the data such as an eigenvalue decay of the empirical covariance and kernel matrices. In addition to deriving a data-dependent upper bound on the out-of-sample error, we present experimental evidence suggesting that the phenomenon occurs in the MNIST dataset.
Journal Article
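A small sketch of the minimum-norm interpolation the abstract studies: solving Kα = y with no ridge term fits the training data exactly, yet the interpolant can still predict sensibly out of sample. The Gaussian kernel and the high-dimensional synthetic data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def gram(A, B, bw=1.0):
    """Gaussian kernel Gram matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

n, d = 100, 20                       # high-dimensional inputs
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = X[:, 0] + 0.1 * rng.standard_normal(n)

K = gram(X, X)
alpha = np.linalg.solve(K, y)        # min-norm interpolant: f = sum_i alpha_i k(x_i, .)

print("train error:", np.abs(K @ alpha - y).max())   # ~0: perfect interpolation

X_test = rng.standard_normal((10, d)) / np.sqrt(d)
pred = gram(X_test, X) @ alpha
print("test RMSE:", np.sqrt(np.mean((pred - X_test[:, 0]) ** 2)))
```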
Adaptation of reproducing kernel algorithm for solving fuzzy Fredholm–Volterra integrodifferential equations
2017
In this article, we propose the reproducing kernel Hilbert space method to obtain the exact and numerical solutions of fuzzy Fredholm–Volterra integrodifferential equations. The solution methodology is based on generating an orthogonal basis from the obtained kernel functions, in which the constraint initial condition is satisfied, while the orthonormal basis is constructed in order to formulate and utilize the solutions in series form in terms of their r-cut representation in the Hilbert space W₂²(Ω) ⊕ W₂²(Ω). Several computational experiments are given to show the good performance and potentiality of the proposed procedure. Finally, the results show that the present method and simulated annealing provide a good methodology for solving such fuzzy equations.
Journal Article
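The basis-generation step that this and similar RKHS-method abstracts refer to can be illustrated compactly: orthonormalizing the kernel sections k(x_i, ·) in the RKHS inner product is a Gram–Schmidt process, which reduces to inverting a Cholesky factor of the Gram matrix. A sketch with an assumed Gaussian kernel; the paper's W₂² kernel and fuzzy setting are not reproduced.

```python
import numpy as np

def k(x, y, bw=0.3):
    return np.exp(-(x - y) ** 2 / (2 * bw ** 2))

xs = np.linspace(0, 1, 8)            # interpolation nodes
K = k(xs[:, None], xs[None, :])      # K[i, j] = <k(x_i,.), k(x_j,.)> in the RKHS

# Gram-Schmidt on the kernel sections: psi_i = sum_j C[i, j] k(x_j, .)
# Orthonormality <psi_i, psi_j> = delta_ij  <=>  C K C^T = I,
# so C is the inverse of the Cholesky factor of K.
L = np.linalg.cholesky(K)
C = np.linalg.inv(L)

print(np.allclose(C @ K @ C.T, np.eye(len(xs))))   # True: orthonormal basis
```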
Fuzzy conformable fractional differential equations: novel extended approach and new numerical solutions
2020
The aim of this article is to propose a new definition of fuzzy fractional derivative, the so-called fuzzy conformable derivative. To this end, we briefly discuss the fuzzy conformable fractional integral. Meanwhile, the uniqueness, existence, and other properties of solutions of certain fuzzy conformable fractional differential equations under strongly generalized differentiability are established. Furthermore, all requirements needed for characterizing solutions by equivalent systems of crisp conformable fractional differential equations are discussed. In this direction, a new computational algorithm producing analytic and approximate conformable solutions is proposed. Finally, the reproducing kernel Hilbert space method in the conformable sense is constructed, together with numerical results, tabulated data, and graphical representations.
Journal Article
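For intuition about the underlying crisp notion: the conformable derivative of order α is T_α f(t) = lim_{ε→0} [f(t + ε t^{1−α}) − f(t)]/ε, which equals t^{1−α} f′(t) for differentiable f. A quick numerical check of the closed form T_α(tᵖ) = p t^{p−α}; the fuzzy extension in the paper is not reproduced.

```python
import numpy as np

def conformable(f, t, alpha, eps=1e-6):
    """Numerical conformable derivative: [f(t + eps*t^(1-alpha)) - f(t)] / eps."""
    return (f(t + eps * t ** (1 - alpha)) - f(t)) / eps

t, alpha, p = 2.0, 0.5, 3
print(conformable(lambda x: x ** p, t, alpha))   # numerical value
print(p * t ** (p - alpha))                      # closed form: p * t^(p - alpha)
```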
Kernel-Based Approximation of the Koopman Generator and Schrödinger Operator
by Hamzi, Boumediene; Klus, Stefan; Nüske, Feliks
in Koopman generator; reproducing kernel Hilbert space; Schrödinger operator
2020
Many dimensionality and model reduction techniques rely on estimating dominant eigenfunctions of associated dynamical operators from data. Important examples include the Koopman operator and its generator, but also the Schrödinger operator. We propose a kernel-based method for the approximation of differential operators in reproducing kernel Hilbert spaces and show how eigenfunctions can be estimated by solving auxiliary matrix eigenvalue problems. The resulting algorithms are applied to molecular dynamics and quantum chemistry examples. Furthermore, we exploit that, under certain conditions, the Schrödinger operator can be transformed into a Kolmogorov backward operator corresponding to a drift-diffusion process and vice versa. This allows us to apply methods developed for the analysis of high-dimensional stochastic differential equations to quantum mechanical systems.
Journal Article
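A simplified collocation-style variant of the kernel idea, not the paper's exact construction: writing a Koopman eigenfunction as φ = Σ_j c_j k(·, x_j) and enforcing φ(y_i) = μ φ(x_i) at snapshot pairs (x_i, y_i) yields the matrix eigenvalue problem G_yx c = μ G_xx c. The toy linear dynamics, Gaussian kernel, and pseudo-inverse regularization are assumptions; recovered eigenvalues are accurate only up to sampling and truncation error.

```python
import numpy as np

rng = np.random.default_rng(2)

def gram(A, B, bw=0.5):
    return np.exp(-(A[:, None] - B[None, :]) ** 2 / (2 * bw ** 2))

# Snapshot pairs of the toy map x -> 0.9 x, whose Koopman eigenvalues
# include 1, 0.9, 0.81, ... (eigenfunctions 1, x, x^2, ...)
x = rng.uniform(-1, 1, 100)
y = 0.9 * x

G_xx = gram(x, x)   # k(x_i, x_j)
G_yx = gram(y, x)   # k(y_i, x_j)

# phi = sum_j c_j k(., x_j); enforcing phi(y_i) = mu * phi(x_i) gives
# G_yx c = mu G_xx c, solved here with a truncated pseudo-inverse.
M = np.linalg.pinv(G_xx, rcond=1e-6) @ G_yx
mu = np.linalg.eigvals(M)
print(np.sort(np.abs(mu))[::-1][:4])   # leading moduli near 1, 0.9, 0.81, 0.73
```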
Numerical solutions of fuzzy differential equations using reproducing kernel Hilbert space method
by AL-Smadi, Mohammed; Momani, Shaher; Abu Arqub, Omar
in Applied mathematics; Artificial Intelligence; Calculus
2016
Modeling with uncertain differential equations is an important issue in applied sciences and engineering, and the natural way to model such dynamical systems is to use fuzzy differential equations. In this paper, we present a new method for solving fuzzy differential equations based on reproducing kernel theory under strongly generalized differentiability. The analytic and approximate solutions are given in series form in terms of their parametric form in the space W₂²[a,b] ⊕ W₂²[a,b]. The method used in this paper has several advantages: first, it is global in terms of the solutions obtained, and it can be applied to other mathematical, physical, and engineering problems; second, it is accurate, needs less effort to achieve the results, and is developed especially for nonlinear cases; third, it is possible to pick any point in the interval of integration, and the approximate solutions and their derivatives will be applicable there; fourth, the method does not require discretization of the variables, is not affected by round-off errors, and does not demand large computer memory or time. The results presented in this paper show the potentiality, generality, and superiority of our method compared with other well-known methods.
Journal Article
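As a flavor of how a kernel-based series solution works in the crisp case, here is an unsymmetric kernel collocation sketch for u′ + u = 0, u(0) = 1 on [0, 1]: the solution is represented as u = Σ_j c_j k(·, x_j) and the equation is enforced at the nodes. The Gaussian kernel stands in for the paper's W₂²[a, b] reproducing kernel, and the crisp linear problem stands in for the fuzzy one.

```python
import numpy as np

bw = 0.2
def k(x, y):   return np.exp(-(x - y) ** 2 / (2 * bw ** 2))
def dk(x, y):  return -(x - y) / bw ** 2 * k(x, y)   # d/dx k(x, y)

xs = np.linspace(0.0, 1.0, 15)   # collocation nodes

# Equation rows u'(x_i) + u(x_i) = 0, plus the initial condition u(0) = 1.
A = dk(xs[:, None], xs[None, :]) + k(xs[:, None], xs[None, :])
A = np.vstack([A, k(0.0, xs)])
b = np.append(np.zeros_like(xs), 1.0)

c, *_ = np.linalg.lstsq(A, b, rcond=None)

u = lambda x: k(np.atleast_1d(x)[:, None], xs[None, :]) @ c
x_chk = np.array([0.0, 0.5, 1.0])
print(np.c_[x_chk, u(x_chk), np.exp(-x_chk)])   # approximation vs exact e^(-x)
```

Note that the approximate solution and its derivative are available at any point of the interval, which is the "pick any point" property the abstract highlights.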
Kernel-based covariate functional balancing for observational studies
2018
Covariate balance is often advocated for objective causal inference since it mimics randomization in observational data. Unlike methods that balance specific moments of covariates, our proposal attains uniform approximate balance for covariate functions in a reproducing-kernel Hilbert space. The corresponding infinite-dimensional optimization problem is shown to have a finite-dimensional representation in terms of an eigenvalue optimization problem. Large-sample results are studied, and numerical examples show that the proposed method achieves better balance with smaller sampling variability than existing methods.
Journal Article
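The uniform balance criterion has a convenient closed form worth noting: the worst-case imbalance of a weighted contrast over the unit ball of an RKHS is an explicit quadratic form in the Gram matrix, since sup over ‖f‖_ℋ ≤ 1 of |Σ_i v_i f(X_i)| equals √(vᵀKv). A small sketch with assumed uniform weights and a Gaussian kernel; the paper's eigenvalue optimization for choosing the weights is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def gram(A, B, bw=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

X = rng.standard_normal((60, 3))           # covariates
treated = rng.random(60) < 0.5

# Signed contrast: weighted treated mean minus overall mean, as one vector v.
w = np.where(treated, 1.0 / treated.sum(), 0.0)   # uniform weights (illustrative)
v = w - 1.0 / len(X)

# sup_{||f||_H <= 1} | sum_i v_i f(X_i) | = sqrt(v' K v)
K = gram(X, X)
print("worst-case RKHS imbalance:", np.sqrt(v @ K @ v))
```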
EQUIVALENCE OF DISTANCE-BASED AND RKHS-BASED STATISTICS IN HYPOTHESIS TESTING
2013
We provide a unifying framework linking two classes of statistics used in two-sample and independence testing: on the one hand, the energy distances and distance covariances from the statistics literature; on the other, maximum mean discrepancies (MMD), that is, distances between embeddings of distributions to reproducing kernel Hilbert spaces (RKHS), as established in machine learning. In the case where the energy distance is computed with a semimetric of negative type, a positive definite kernel, termed distance kernel, may be defined such that the MMD corresponds exactly to the energy distance. Conversely, for any positive definite kernel, we can interpret the MMD as energy distance with respect to some negative-type semimetric. This equivalence readily extends to distance covariance using kernels on the product space. We determine the class of probability distributions for which the test statistics are consistent against all alternatives. Finally, we investigate the performance of the family of distance kernels in two-sample and independence tests: we show in particular that the energy distance most commonly employed in statistics is just one member of a parametric family of kernels, and that other choices from this family can yield more powerful tests.
Journal Article
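The equivalence is easy to verify numerically: with the distance-induced kernel k(a, b) = ‖a‖ + ‖b‖ − ‖a − b‖ (base point at the origin), the biased MMD² estimate coincides with the energy-distance V-statistic. The sample sizes and distributions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((40, 2))
Y = rng.standard_normal((50, 2)) + 0.5

def pdist(A, B):
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

# Energy distance (biased / V-statistic form)
energy = 2 * pdist(X, Y).mean() - pdist(X, X).mean() - pdist(Y, Y).mean()

# Distance-induced kernel with base point z0 = 0: k(a, b) = |a| + |b| - |a - b|
def k(A, B):
    na = np.linalg.norm(A, axis=-1)
    nb = np.linalg.norm(B, axis=-1)
    return na[:, None] + nb[None, :] - pdist(A, B)

mmd2 = k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
print(energy, mmd2)   # identical up to floating-point error
```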
EFFICIENT CALIBRATION FOR IMPERFECT COMPUTER MODELS
2015
Many computer models contain unknown parameters which need to be estimated using physical observations. Tuo and Wu (2014) show that the calibration method based on Gaussian process models proposed by Kennedy and O'Hagan [J. R. Stat. Soc. Ser. B. Stat. Methodol. 63 (2001) 425-464] may lead to an unreasonable estimate for imperfect computer models. In this work, we extend their study to calibration problems with stochastic physical data. We propose a novel method, called the L₂ calibration, and show its semiparametric efficiency. The conventional method of the ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
Journal Article
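A schematic of the two-step idea behind L₂ calibration, under assumed toy choices (the true process, the imperfect model y_s(x, θ) = θx(1 − x), and kernel ridge regression for the nonparametric step are all illustrative): first estimate the physical response nonparametrically, then choose θ minimizing the L₂ distance between the computer model and that estimate.

```python
import numpy as np

rng = np.random.default_rng(5)

# Physical data; the model below cannot represent the sin term (imperfect model)
x = rng.uniform(0, 1, 80)
y = 4 * x * (1 - x) + 0.2 * np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(80)

def model(x, theta):
    return theta * x * (1 - x)

# Step 1: nonparametric estimate of the true response by kernel ridge regression
def gram(a, b, bw=0.1):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bw ** 2))

alpha = np.linalg.solve(gram(x, x) + 1e-3 * np.eye(len(x)), y)
grid = np.linspace(0, 1, 200)
zeta_hat = gram(grid, x) @ alpha

# Step 2: L2 calibration -- theta minimizing the L2 distance to the estimate
thetas = np.linspace(0, 10, 500)
l2 = [np.mean((zeta_hat - model(grid, t)) ** 2) for t in thetas]
print("theta_hat =", thetas[np.argmin(l2)])   # close to 4 here
```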
Deep transfer operator learning for partial differential equations under conditional shift
by Karniadakis, George Em; Kontolati, Katiana; Goswami, Somdatta
in 639/705/1041; 639/705/1042; Benchmarks
2022
Transfer learning enables the transfer of knowledge gained while learning to perform one task (source) to a related but different task (target), hence addressing the expense of data acquisition and labelling, potential computational power limitations and dataset distribution mismatches. We propose a new transfer learning framework for task-specific learning (functional regression in partial differential equations) under conditional shift based on the deep operator network (DeepONet). Task-specific operator learning is accomplished by fine-tuning task-specific layers of the target DeepONet using a hybrid loss function that allows for the matching of individual target samples while also preserving the global properties of the conditional distribution of the target data. Inspired by conditional embedding operator theory, we minimize the statistical distance between labelled target data and the surrogate prediction on unlabelled target data by embedding conditional distributions onto a reproducing kernel Hilbert space. We demonstrate the advantages of our approach for various transfer learning scenarios involving nonlinear partial differential equations under diverse conditions due to shifts in the geometric domain and model dynamics. Our transfer learning framework enables fast and efficient learning of heterogeneous tasks despite considerable differences between the source and target domains.
A promising area for deep learning is in modelling complex physical processes described by partial differential equations (PDEs), which is computationally expensive for conventional approaches. An operator learning approach called DeepONet was recently introduced to tackle PDE-related problems, and in new work, this approach is extended with transfer learning, which transfers knowledge obtained from learning to perform one task to a related but different task.
Journal Article
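A rough sketch, loosely inspired by the described hybrid loss rather than the DeepONet implementation itself: a pointwise regression term on labelled target samples plus an RKHS (MMD) distributional term between surrogate predictions on unlabelled target inputs and labelled target outputs. The Gaussian kernel, the weight lam, and the toy shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def gram(A, B, bw=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

def mmd2(A, B):
    """Biased MMD^2 between two samples under a Gaussian kernel."""
    return gram(A, A).mean() + gram(B, B).mean() - 2 * gram(A, B).mean()

def hybrid_loss(pred_lab, y_lab, pred_unlab, y_ref, lam=0.1):
    """Pointwise fit on labelled target samples plus an RKHS distributional
    match between predictions on unlabelled inputs and reference outputs."""
    fit = np.mean((pred_lab - y_lab) ** 2)
    return fit + lam * mmd2(pred_unlab, y_ref)

# Toy shapes: outputs are vectors of length 5
pred_lab, y_lab = rng.standard_normal((8, 5)), rng.standard_normal((8, 5))
pred_unlab = rng.standard_normal((30, 5))
print(hybrid_loss(pred_lab, y_lab, pred_unlab, y_lab))
```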