33,068 result(s) for "Hilbert space"
NONPARAMETRIC STOCHASTIC APPROXIMATION WITH LARGE STEP-SIZES
We consider the random-design least-squares regression problem within the reproducing kernel Hilbert space (RKHS) framework. Given a stream of independent and identically distributed input/output data, we aim to learn a regression function within an RKHS ℋ, even if the optimal predictor (i.e., the conditional expectation) is not in ℋ. In a stochastic approximation framework where the estimator is updated after each observation, we show that the averaged unregularized least-mean-square algorithm (a form of stochastic gradient descent), given a sufficiently large step-size, attains optimal rates of convergence for a variety of regimes for the smoothness of the optimal prediction function and of the functions in ℋ. Our results apply as well in the usual finite-dimensional setting of parametric least-squares regression, showing adaptivity of our estimator to the spectral decay of the covariance matrix of the covariates.
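The averaged least-mean-squares recursion described in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's setup: the Gaussian kernel, its bandwidth, the step-size, and the synthetic sine data are all illustrative choices.

```python
import numpy as np

def kernel(x, y, bw=0.3):
    # Gaussian RBF kernel: an illustrative choice of reproducing kernel
    return np.exp(-(x - y) ** 2 / (2 * bw ** 2))

def averaged_kernel_lms(xs, ys, step=0.5):
    """Single-pass unregularized least-mean-squares in an RKHS, with averaging.

    After each observation (x_t, y_t) the iterate is updated as
        g_t = g_{t-1} - step * (g_{t-1}(x_t) - y_t) * K(x_t, .)
    and the returned predictor is the average of g_1, ..., g_n,
    stored via expansion coefficients on the observed points.
    """
    n = len(xs)
    coef = np.zeros(n)       # expansion coefficients of the current iterate
    avg_coef = np.zeros(n)   # running sum of the iterates' coefficients
    for t in range(n):
        # evaluate the current iterate at x_t using points seen so far
        pred = sum(coef[i] * kernel(xs[i], xs[t]) for i in range(t))
        coef[t] = -step * (pred - ys[t])
        avg_coef[:t + 1] += coef[:t + 1]
    avg_coef /= n
    return lambda x: sum(avg_coef[i] * kernel(xs[i], x) for i in range(n))

rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, 200)
ys = np.sin(3 * xs) + 0.1 * rng.normal(size=200)
f_hat = averaged_kernel_lms(xs, ys)
```

Averaging the iterates, rather than returning the last one, is what allows the large constant step-size without regularization.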
EQUIVALENCE OF DISTANCE-BASED AND RKHS-BASED STATISTICS IN HYPOTHESIS TESTING
We provide a unifying framework linking two classes of statistics used in two-sample and independence testing: on the one hand, the energy distances and distance covariances from the statistics literature; on the other, maximum mean discrepancies (MMD), that is, distances between embeddings of distributions to reproducing kernel Hilbert spaces (RKHS), as established in machine learning. In the case where the energy distance is computed with a semimetric of negative type, a positive definite kernel, termed distance kernel, may be defined such that the MMD corresponds exactly to the energy distance. Conversely, for any positive definite kernel, we can interpret the MMD as energy distance with respect to some negative-type semimetric. This equivalence readily extends to distance covariance using kernels on the product space. We determine the class of probability distributions for which the test statistics are consistent against all alternatives. Finally, we investigate the performance of the family of distance kernels in two-sample and independence tests: we show in particular that the energy distance most commonly employed in statistics is just one member of a parametric family of kernels, and that other choices from this family can yield more powerful tests.
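The exact finite-sample correspondence between the MMD with a distance-induced kernel and the energy distance can be checked numerically. In this sketch the Gaussian samples and the choice of centre point z0 = 0 for the distance kernel are illustrative; with the Euclidean semimetric, the squared MMD equals exactly half the energy distance.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))
Y = rng.normal(0.5, 1.0, size=(120, 2))

def pdist(A, B):
    # Euclidean distance matrix between rows of A and rows of B
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

# Energy distance (V-statistic): 2 E||X-Y|| - E||X-X'|| - E||Y-Y'||
ed = 2 * pdist(X, Y).mean() - pdist(X, X).mean() - pdist(Y, Y).mean()

# Distance-induced kernel centred at z0 = 0:
#   k(x, y) = (||x|| + ||y|| - ||x - y||) / 2
def dist_kernel(A, B):
    return (np.linalg.norm(A, axis=1)[:, None]
            + np.linalg.norm(B, axis=1)[None, :]
            - pdist(A, B)) / 2

# Squared MMD (V-statistic) with the distance kernel
mmd2 = (dist_kernel(X, X).mean() + dist_kernel(Y, Y).mean()
        - 2 * dist_kernel(X, Y).mean())

print(mmd2, ed / 2)  # identical up to floating-point error
```

The z0 terms cancel identically in the empirical means, so the equality holds exactly, not just asymptotically.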
PARTIAL DISTANCE CORRELATION WITH METHODS FOR DISSIMILARITIES
Distance covariance and distance correlation are scalar coefficients that characterize independence of random vectors in arbitrary dimension. Properties, extensions and applications of distance correlation have been discussed in the recent literature, but the problem of defining the partial distance correlation has remained an open question of considerable interest. The problem of partial distance correlation is more complex than partial correlation partly because the squared distance covariance is not an inner product in the usual linear space. For the definition of partial distance correlation, we introduce a new Hilbert space where the squared distance covariance is the inner product. We define the partial distance correlation statistics with the help of this Hilbert space, and develop and implement a test for zero partial distance correlation. Our intermediate results provide an unbiased estimator of squared distance covariance, and a neat solution to the problem of distance correlation for dissimilarities rather than distances.
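The unbiased estimator of squared distance covariance mentioned in this abstract is built from U-centered distance matrices. The sketch below implements that estimator for one-dimensional samples; the sample sizes and distributions are illustrative.

```python
import numpy as np

def u_centered(D):
    """U-centering of a distance matrix: subtract scaled row, column and
    grand totals; the diagonal is set to zero."""
    n = D.shape[0]
    row = D.sum(axis=1, keepdims=True)
    col = D.sum(axis=0, keepdims=True)
    tot = D.sum()
    A = D - row / (n - 2) - col / (n - 2) + tot / ((n - 1) * (n - 2))
    np.fill_diagonal(A, 0.0)
    return A

def dcov2_unbiased(x, y):
    """Unbiased estimator of squared distance covariance (1-d samples)."""
    n = len(x)
    A = u_centered(np.abs(x[:, None] - x[None, :]))
    B = u_centered(np.abs(y[:, None] - y[None, :]))
    return (A * B).sum() / (n * (n - 3))

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 2 * x + rng.normal(size=500)   # dependent pair
z = rng.normal(size=500)           # independent of x
print(dcov2_unbiased(x, y), dcov2_unbiased(x, z))
```

Unlike the V-statistic, this estimator can take small negative values near independence, which is exactly what makes the Hilbert-space (inner-product) interpretation in the paper nontrivial.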
Complex interpolation between Hilbert, Banach and operator spaces
Motivated by a question of Vincent Lafforgue, the author studies the Banach spaces X satisfying the following property: there is a function ε → Δ_X(ε) tending to zero with ε > 0 such that every operator T: L₂ → L₂ with ‖T‖ ≤ ε that is simultaneously contractive (i.e., of norm ≤ 1) on L₁ and on L_∞ must be of norm ≤ Δ_X(ε) on L₂(X). The author shows that Δ_X(ε) ∈ O(ε^α) for some α > 0 if X is isomorphic to a quotient of a subspace of an ultraproduct of θ-Hilbertian spaces for some θ > 0 (see Corollary 6.7), where θ-Hilbertian is meant in a slightly more general sense than in the author's earlier paper (1979).
Fuzzy conformable fractional differential equations: novel extended approach and new numerical solutions
The aim of this article is to propose a new definition of the fuzzy fractional derivative, the so-called fuzzy conformable derivative. To this end, we briefly discuss the fuzzy conformable fractional integral. Uniqueness, existence, and other properties of solutions of certain fuzzy conformable fractional differential equations under strongly generalized differentiability are also established. Furthermore, all requirements needed for characterizing solutions by equivalent systems of crisp conformable fractional differential equations are discussed. Along these lines, a new computational algorithm producing analytic and approximate conformable solutions is proposed. Finally, the reproducing kernel Hilbert space method in the conformable setting is constructed, together with numerical results, tabulated data, and graphical representations.
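The conformable derivative underlying the crisp equations in this abstract has a simple limit definition that can be checked numerically. The test function t^p and the parameter values below are illustrative.

```python
import numpy as np

def conformable_derivative(f, t, alpha, eps=1e-6):
    """Numerical conformable derivative of order alpha at a point t > 0:
        T_alpha f(t) = lim_{e -> 0} (f(t + e * t**(1 - alpha)) - f(t)) / e,
    which equals t**(1 - alpha) * f'(t) for differentiable f."""
    return (f(t + eps * t ** (1 - alpha)) - f(t)) / eps

# Sanity check against the classical identity T_alpha(t**p) = p * t**(p - alpha)
alpha, p, t = 0.5, 3.0, 2.0
num = conformable_derivative(lambda s: s ** p, t, alpha)
exact = p * t ** (p - alpha)
print(num, exact)
```

Because the derivative reduces to t^(1-α) f'(t) for differentiable f, conformable fractional equations can be rewritten as ordinary ones, which is what makes the characterization by crisp systems tractable.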
Adaptation of reproducing kernel algorithm for solving fuzzy Fredholm–Volterra integrodifferential equations
In this article, we propose the reproducing kernel Hilbert space method to obtain the exact and numerical solutions of fuzzy Fredholm–Volterra integrodifferential equations. The solution methodology is based on generating an orthogonal basis from the obtained kernel functions, in which the constraint initial condition is satisfied, while the orthonormal basis is constructed in order to formulate and utilize the solutions in series form in terms of their r-cut representation in the Hilbert space W_2^2(Ω) ⊕ W_2^2(Ω). Several computational experiments are given to show the good performance and potential of the proposed procedure. Finally, the results show that the present method, together with simulated annealing, provides a good methodology for solving such fuzzy equations.
FUNCTIONAL DATA ANALYSIS FOR DENSITY FUNCTIONS BY TRANSFORMATION TO A HILBERT SPACE
Functional data that are nonnegative and have a constrained integral can be considered as samples of one-dimensional density functions. Such data are ubiquitous. Due to the inherent constraints, densities do not live in a vector space and, therefore, commonly used Hilbert space based methods of functional data analysis are not applicable. To address this problem, we introduce a transformation approach, mapping probability densities to a Hilbert space of functions through a continuous and invertible map. Basic methods of functional data analysis, such as the construction of functional modes of variation, functional regression or classification, are then implemented by using representations of the densities in this linear space. Representations of the densities themselves are obtained by applying the inverse map from the linear functional space to the density space. Transformations of interest include log quantile density and log hazard transformations, among others. Rates of convergence are derived for the representations that are obtained for a general class of transformations under certain structural properties. If the subject-specific densities need to be estimated from data, these rates correspond to the optimal rates of convergence for density estimation. The proposed methods are illustrated through simulations and applications in brain imaging.
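The log quantile density transformation named in this abstract, one of the invertible maps from density space into an unconstrained function space, can be sketched as follows. The Beta(2, 2) density and the evaluation grids are illustrative.

```python
import numpy as np

def log_quantile_density(x, f, grid):
    """Log quantile density transform psi(t) = -log f(Q(t)).

    x, f : grid and values of a density supported on [x[0], x[-1]]
    grid : points t in (0, 1) at which to evaluate psi
    """
    # cumulative distribution via trapezoidal integration
    F = np.concatenate([[0.0], np.cumsum(np.diff(x) * 0.5 * (f[1:] + f[:-1]))])
    F /= F[-1]                      # enforce integral 1 numerically
    Q = np.interp(grid, F, x)       # quantile function Q = F^{-1}
    return -np.log(np.interp(Q, x, f))

# Example: transform a Beta(2, 2) density f(x) = 6 x (1 - x)
x = np.linspace(1e-3, 1 - 1e-3, 2001)
f = 6 * x * (1 - x)
t = np.linspace(0.05, 0.95, 19)
psi = log_quantile_density(x, f, t)
```

The transformed function psi is unconstrained, so ordinary Hilbert-space tools (modes of variation, regression) apply to it; the density is recovered by inverting the map, since Q'(t) = exp(psi(t)) determines the quantile function up to normalization of the support.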
Numerical solutions of fuzzy differential equations using reproducing kernel Hilbert space method
Modeling of uncertain differential equations is a very important issue in applied sciences and engineering, and the natural way to model such dynamical systems is to use fuzzy differential equations. In this paper, we present a new method for solving fuzzy differential equations based on the reproducing kernel theory under strongly generalized differentiability. The analytic and approximate solutions are given in series form in terms of their parametric form in the space W_2^2[a, b] ⊕ W_2^2[a, b]. The method used in this paper has several advantages: first, it is of global nature in terms of the solutions obtained, and it can also solve other mathematical, physical, and engineering problems; second, it is accurate, needs less effort to achieve the results, and is developed especially for the nonlinear cases; third, in the proposed method, it is possible to pick any point in the interval of integration, and the approximate solutions and their derivatives will be applicable there; fourth, the method does not require discretization of the variables, is not affected by round-off errors, and does not demand large computer memory or computation time. Results presented in this paper show the potential, generality, and superiority of our method as compared with other well-known methods.
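As a rough illustration of kernel-based collocation in the same spirit as the two reproducing-kernel papers above (though not their specific W_2^2 reproducing kernel or Gram–Schmidt orthogonalization), the crisp test problem y' + y = 0, y(0) = 1 can be solved by expanding y in Gaussian kernel translates; the kernel width and node count are illustrative.

```python
import numpy as np

# Represent y(t) = sum_j c_j K(t, t_j) and enforce the ODE and the
# initial condition at collocation nodes.
h = 0.2
K  = lambda t, s: np.exp(-(t - s) ** 2 / (2 * h ** 2))
Kt = lambda t, s: -(t - s) / h ** 2 * K(t, s)   # dK/dt

pts = np.linspace(0, 1, 15)
T, S = np.meshgrid(pts, pts, indexing="ij")

# ODE residual rows: y'(t_i) + y(t_i) = 0 at every node
A_ode = Kt(T, S) + K(T, S)
b_ode = np.zeros(len(pts))
# Initial-condition row: y(0) = 1
A_ic = K(0.0, pts)[None, :]

A = np.vstack([A_ode, A_ic])
b = np.concatenate([b_ode, [1.0]])
c, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares collocation

y = lambda t: K(t, pts) @ c
print(y(1.0), np.exp(-1.0))  # numerical vs exact solution e^{-t} at t = 1
```

The fuzzy methods in the papers above solve a coupled pair of such crisp problems, one for each endpoint of the r-cut representation, in the direct sum space W_2^2 ⊕ W_2^2.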
A REPRODUCING KERNEL HILBERT SPACE APPROACH TO FUNCTIONAL LINEAR REGRESSION
We study in this paper a smoothness regularization method for functional linear regression and provide a unified treatment for both the prediction and estimation problems. By developing a tool on simultaneous diagonalization of two positive definite kernels, we obtain sharper results on the minimax rates of convergence and show that smoothness regularized estimators achieve the optimal rates of convergence for both prediction and estimation under conditions weaker than those for the functional principal components based methods developed in the literature. Despite the generality of the method of regularization, we show that the procedure is easily implementable. Numerical results are obtained to illustrate the merits of the method and to demonstrate the theoretical developments.
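A minimal discretized sketch of smoothness-regularized functional linear regression follows, using a second-difference roughness penalty as a stand-in for the paper's RKHS norm; the cosine design, noise level, and penalty weight are illustrative, and only the component of the slope function lying in the span of the predictors is identified.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 50                                   # grid points on [0, 1]
t = np.linspace(0, 1, T)
beta_true = np.cos(2 * np.pi * t)        # true slope function

# Functional predictors: random combinations of a cosine basis
n = 200
basis = np.array([np.cos(np.pi * k * t) for k in range(8)])
X = rng.normal(size=(n, 8)) @ basis
# Scalar responses: y_i ~ int X_i(t) beta(t) dt + noise (Riemann sum)
y = X @ beta_true / T + 0.05 * rng.normal(size=n)

# Smoothness regularization via a second-difference roughness penalty:
# ||D beta||^2 approximates a penalty on the curvature of beta
D = np.diff(np.eye(T), n=2, axis=0)
lam = 1e-4
beta_hat = np.linalg.solve(X.T @ X / T ** 2 + lam * D.T @ D, X.T @ y / T)

rmse = np.sqrt(np.mean((X @ beta_hat / T - X @ beta_true / T) ** 2))
print(rmse)   # in-sample error of the fitted linear functional
```

The paper's estimator replaces the generic roughness penalty with an RKHS norm and analyzes prediction and estimation rates jointly; this sketch only shows the penalized-least-squares structure shared by both.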