93 result(s) for "Nickl, Richard"
NONPARAMETRIC STATISTICAL INFERENCE FOR DRIFT VECTOR FIELDS OF MULTI-DIMENSIONAL DIFFUSIONS
The problem of determining a periodic Lipschitz vector field b = (b_1, …, b_d) from an observed trajectory of the solution (X_t : 0 ≤ t ≤ T) of the multi-dimensional stochastic differential equation dX_t = b(X_t) dt + dW_t, t ≥ 0, where W_t is a standard d-dimensional Brownian motion, is considered. Convergence rates of a penalised least squares estimator, which equals the maximum a posteriori (MAP) estimate corresponding to a high-dimensional Gaussian product prior, are derived. These results are deduced from corresponding contraction rates for the associated posterior distributions. The rates obtained are optimal up to log-factors in L²-loss in any dimension, and also for supremum-norm loss when d ≤ 4. Further, when d ≤ 3, nonparametric Bernstein–von Mises theorems are proved for the posterior distributions of b. From this, we deduce functional central limit theorems for the implied estimators of the invariant measure μ_b. The limiting Gaussian process distributions have a covariance structure that is asymptotically optimal from an information-theoretic point of view.
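As a toy illustration of the penalised least squares / MAP idea in this abstract, the following one-dimensional sketch fits a 1-periodic drift in a truncated Fourier basis from Euler increments of a simulated path, with a ridge penalty standing in for the Gaussian prior. The function names, the basis size, and the toy drift b(x) = sin(2πx) are all assumptions for the example, not the paper's construction.

```python
import numpy as np

def drift_map_estimate(X, dt, n_freq=5, lam=0.1):
    """Penalised least squares (ridge = Gaussian-prior MAP) fit of a 1-periodic
    drift b from a discretised path of dX = b(X) dt + dW, using the Euler
    increments (X_{i+1} - X_i) / dt as noisy evaluations of b(X_i)."""
    x = X[:-1] % 1.0
    y = np.diff(X) / dt

    def basis(z):
        # Real Fourier basis on the unit circle: 1, cos(2*pi*k*z), sin(2*pi*k*z)
        z = np.atleast_1d(np.asarray(z, dtype=float)) % 1.0
        cols = [np.ones_like(z)]
        for k in range(1, n_freq + 1):
            cols.append(np.cos(2 * np.pi * k * z))
            cols.append(np.sin(2 * np.pi * k * z))
        return np.stack(cols, axis=1)

    Phi = basis(x)
    A = dt * Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    theta = np.linalg.solve(A, dt * Phi.T @ y)
    return lambda z: basis(z) @ theta

# Simulated data: Euler-Maruyama path of dX = sin(2*pi*X) dt + dW (toy drift).
rng = np.random.default_rng(0)
dt, n = 0.01, 50_000
X = np.empty(n + 1)
X[0] = 0.0
for i in range(n):
    X[i + 1] = X[i] + np.sin(2 * np.pi * X[i]) * dt + np.sqrt(dt) * rng.standard_normal()

b_hat = drift_map_estimate(X, dt)
```

Here the true drift lies in the span of the basis, so b_hat should track sin(2πx) closely for a long enough path; the abstract's results quantify such convergence nonparametrically, without assuming b lies in a finite-dimensional model.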
NONPARAMETRIC BAYESIAN POSTERIOR CONTRACTION RATES FOR DISCRETELY OBSERVED SCALAR DIFFUSIONS
We consider nonparametric Bayesian inference in a reflected diffusion model dX_t = b(X_t)dt + σ(X_t)dW_t, with discretely sampled observations X_0, X_Δ, …, X_{nΔ}. We analyse the nonlinear inverse problem corresponding to the "low frequency sampling" regime where Δ > 0 is fixed and n → ∞. A general theorem is proved that gives conditions for prior distributions Π on the diffusion coefficient σ and the drift function b that ensure minimax optimal contraction rates of the posterior distribution over Hölder–Sobolev smoothness classes. These conditions are verified for natural examples of nonparametric random wavelet series priors. For the proofs, we derive new concentration inequalities for empirical processes arising from discretely observed diffusions that are of independent interest.
NONPARAMETRIC BERNSTEIN–VON MISES THEOREMS IN GAUSSIAN WHITE NOISE
Bernstein–von Mises theorems for nonparametric Bayes priors in the Gaussian white noise model are proved. It is demonstrated how such results justify Bayes methods as efficient frequentist inference procedures in a variety of concrete nonparametric problems. In particular, Bayesian credible sets are constructed that have asymptotically exact 1 − α frequentist coverage level and whose L²-diameter shrinks at the minimax rate of convergence (within logarithmic factors) over Hölder balls. Other applications include general classes of linear and nonlinear functionals and credible bands for auto-convolutions. The assumptions cover nonconjugate product priors defined on general orthonormal bases of L² satisfying weak conditions.
On polynomial-time computation of high-dimensional posterior measures by Langevin-type algorithms
The problem of generating random samples of high-dimensional posterior distributions is considered. The main results consist of non-asymptotic computational guarantees for Langevin-type MCMC algorithms which scale polynomially in key quantities such as the dimension of the model, the desired precision level, and the number of available statistical measurements. As a direct consequence, it is shown that posterior mean vectors as well as optimisation-based maximum a posteriori (MAP) estimates are computable in polynomial time, with high probability under the distribution of the data. These results are complemented by statistical guarantees for recovery of the ground truth parameter generating the data. Our results are derived in a general high-dimensional non-linear regression setting (with Gaussian process priors) where posterior measures are not necessarily log-concave, employing a set of local ‘geometric’ assumptions on the parameter space, and assuming that a good initialiser of the algorithm is available. The theory is applied to a representative non-linear example from PDEs involving a steady-state Schrödinger equation.
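The Langevin-type algorithms analysed above can be illustrated by the basic unadjusted Langevin algorithm (ULA). This is a minimal sketch on a toy log-concave Gaussian target, not the paper's PDE setting; the function name, step size, and target are assumptions for the example.

```python
import numpy as np

def ula_sample(grad_log_post, theta0, step, n_steps, rng=None):
    """Unadjusted Langevin algorithm: iterate the discretised Langevin diffusion
    theta <- theta + step * grad log pi(theta) + sqrt(2 * step) * N(0, I)."""
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta0, dtype=float).copy()
    samples = np.empty((n_steps, theta.size))
    for k in range(n_steps):
        noise = rng.standard_normal(theta.size)
        theta = theta + step * grad_log_post(theta) + np.sqrt(2.0 * step) * noise
        samples[k] = theta
    return samples

# Toy posterior: standard Gaussian in 2 dimensions, grad log pi(x) = -x.
chain = ula_sample(lambda x: -x, theta0=np.zeros(2), step=0.05, n_steps=5000, rng=0)
```

For a fixed step size, ULA targets a biased approximation of the posterior that improves as the step shrinks, which is one reason non-asymptotic guarantees of the kind described above track the desired precision level explicitly.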
Uniform Limit Theorems for Wavelet Density Estimators
Let $p_n(y) = \sum_k \hat{\alpha}_k \phi(y - k) + \sum_{l=0}^{j_n - 1} \sum_k \hat{\beta}_{lk} 2^{l/2} \psi(2^l y - k)$ be the linear wavelet density estimator, where $\phi, \psi$ are a father and a mother wavelet (with compact support), $\hat{\alpha}_k$, $\hat{\beta}_{lk}$ are the empirical wavelet coefficients based on an i.i.d. sample of random variables distributed according to a density $p_0$ on $\mathbb{R}$, and $j_n \in \mathbb{Z}$, $j_n \nearrow \infty$. Several uniform limit theorems are proved: First, the almost sure rate of convergence of $\sup_{y \in \mathbb{R}} |p_n(y) - E p_n(y)|$ is obtained, and a law of the logarithm for a suitably scaled version of this quantity is established. This implies that $\sup_{y \in \mathbb{R}} |p_n(y) - p_0(y)|$ attains the optimal almost sure rate of convergence for estimating $p_0$, if $j_n$ is suitably chosen. Second, a uniform central limit theorem as well as strong invariance principles for the distribution function of $p_n$, that is, for the stochastic processes $\sqrt{n}(F_n^W(s) - F(s)) = \sqrt{n} \int_{-\infty}^{s}(p_n - p_0)$, $s \in \mathbb{R}$, are proved; and more generally, uniform central limit theorems for the processes $\sqrt{n} \int (p_n - p_0) f$, $f \in \mathcal{F}$, for other Donsker classes $\mathcal{F}$ of interest are considered. As a statistical application, it is shown that essentially the same limit theorems can be obtained for the hard thresholding wavelet estimator introduced by Donoho et al. [Ann. Statist. 24 (1996) 508-539].
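For concreteness, here is a minimal sketch of the linear wavelet density estimator with the Haar pair φ = 1_[0,1), ψ = 1_[0,1/2) − 1_[1/2,1) — the simplest compactly supported wavelet, assumed here purely for illustration; the theorems above cover general compactly supported wavelets.

```python
import numpy as np

def haar_phi(x):
    # Father wavelet: indicator of [0, 1)
    return ((0.0 <= x) & (x < 1.0)).astype(float)

def haar_psi(x):
    # Mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1)
    return haar_phi(2.0 * x) - haar_phi(2.0 * x - 1.0)

def wavelet_density(y, sample, jn):
    """Linear Haar wavelet density estimator p_n evaluated at points y:
    empirical father coefficients plus detail levels l = 0, ..., jn - 1."""
    y = np.atleast_1d(np.asarray(y, dtype=float))
    x = np.asarray(sample, dtype=float)
    est = np.zeros_like(y)
    # Resolution-0 (father wavelet) part
    for k in range(int(np.floor(x.min())) - 1, int(np.ceil(x.max())) + 1):
        alpha = haar_phi(x - k).mean()          # empirical alpha_k
        est += alpha * haar_phi(y - k)
    # Detail levels
    for l in range(jn):
        scale = 2.0 ** (l / 2)
        lo = int(np.floor(2**l * x.min())) - 1
        hi = int(np.ceil(2**l * x.max())) + 1
        for k in range(lo, hi):
            beta = (scale * haar_psi(2**l * x - k)).mean()  # empirical beta_lk
            est += beta * scale * haar_psi(2**l * y - k)
    return est

# Toy data: uniform sample on [0, 1), truncation level jn = 3.
sample = np.random.default_rng(1).random(2000)
grid = np.linspace(0.0, 1.0, 800, endpoint=False)
est = wavelet_density(grid, sample, jn=3)
```

With Haar wavelets the estimator at resolution j_n coincides with a histogram on dyadic bins of width 2^{-j_n}, which makes the role of the truncation level j_n easy to see.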
Concentration inequalities and confidence bands for needlet density estimators on compact homogeneous manifolds
Let X_1, …, X_n be a random sample from some unknown probability density f defined on a compact homogeneous manifold M of dimension d ≥ 1. Consider a ‘needlet frame’ describing a localised projection onto the space of eigenfunctions of the Laplace operator on M with corresponding eigenvalues less than 2^{2j}, as constructed in Geller and Pesenson (J Geom Anal, 2011). We prove non-asymptotic concentration inequalities for the uniform deviations of the linear needlet density estimator f_n(j) obtained from an empirical estimate of the needlet projection of f. We apply these results to construct risk-adaptive estimators and non-asymptotic confidence bands for the unknown density f. The confidence bands are adaptive over classes of differentiable and Hölder-continuous functions on M that attain their Hölder exponents.
ON THE BERNSTEIN-VON MISES PHENOMENON FOR NONPARAMETRIC BAYES PROCEDURES
We continue the investigation of Bernstein-von Mises theorems for nonparametric Bayes procedures from [Ann. Statist. 41 (2013) 1999-2028]. We introduce multiscale spaces on which nonparametric priors and posteriors are naturally defined, and prove Bernstein-von Mises theorems for a variety of priors in the setting of Gaussian nonparametric regression and in the i.i.d. sampling model. From these results we deduce several applications where posterior-based inference coincides with efficient frequentist procedures, including Donsker- and Kolmogorov-Smirnov theorems for the random posterior cumulative distribution functions. We also show that multiscale posterior credible bands for the regression or density function are optimal frequentist confidence bands.
ON ADAPTIVE INFERENCE AND CONFIDENCE BANDS
The problem of existence of adaptive confidence bands for an unknown density f that belongs to a nested scale of Hölder classes over ℝ or [0, 1] is considered. Whereas honest adaptive inference in this problem is impossible already for a pair of Hölder balls Σ(r), Σ(s), r ≠ s, of fixed radius, a nonparametric distinguishability condition is introduced under which adaptive confidence bands can be shown to exist. It is further shown that this condition is necessary and sufficient for the existence of honest asymptotic confidence bands, and that it is strictly weaker than similar analytic conditions recently employed in Giné and Nickl [Ann. Statist. 38 (2010) 1122-1170]. The exceptional sets for which honest inference is not possible have vanishingly small probability under natural priors on Hölder balls Σ(s). If no upper bound for the radius of the Hölder balls is known, a price for adaptation has to be paid, and near-optimal adaptation is possible for standard procedures. The implications of these findings for a general theory of adaptive inference are discussed.
Uniform central limit theorems for kernel density estimators
Let p_n(x) = (nh_n^d)^{-1} Σ_{i=1}^n K((x − X_i)/h_n) be the classical kernel density estimator based on a kernel K and n independent random vectors X_i each distributed according to an absolutely continuous law P (with density p_0) on ℝ^d. It is shown that the processes √n ∫(p_n − p_0)f, f ∈ F, converge in law in the Banach space ℓ^∞(F), for many interesting classes F of functions or sets, some P-Donsker, some just P-pregaussian. The conditions allow for the classical bandwidths h_n that simultaneously ensure optimal rates of convergence of the kernel density estimator in mean integrated squared error, thus showing that, subject to some natural conditions, kernel density estimators are ‘plug-in’ estimators in the sense of Bickel and Ritov (Ann Statist 31:1033–1053, 2003). Some new results on the uniform central limit theorem for smoothed empirical processes, needed in the proofs, are also included.
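A minimal sketch of the kernel density estimator and the smoothed empirical distribution it induces; a Gaussian kernel and a MISE-rate bandwidth are assumed here purely for illustration.

```python
import numpy as np

def kde(y, sample, h):
    """Kernel density estimator p_n(y) = (n h)^{-1} sum_i K((y - X_i) / h),
    with a Gaussian kernel K (one-dimensional case)."""
    y = np.atleast_1d(np.asarray(y, dtype=float))
    x = np.asarray(sample, dtype=float)
    u = (y[:, None] - x[None, :]) / h
    return (np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)).mean(axis=1) / h

# Toy data: standard normal sample, bandwidth h ~ n^(-1/5) (MISE-optimal rate).
sample = np.random.default_rng(2).standard_normal(4000)
h = 4000 ** (-0.2)
density_at_zero = kde(0.0, sample, h)[0]
```

For the Gaussian kernel, the smoothed empirical distribution function ∫_{−∞}^s p_n equals the sample average of Φ((s − X_i)/h); the uniform central limit theorems above concern the fluctuations of exactly such smoothed empirical processes.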