Catalogue Search | MBRL
Explore the vast range of titles available.
347 result(s) for "62G07"
On L^q convergence of the Hamiltonian Monte Carlo
2023
We establish L^q convergence for Hamiltonian Monte Carlo (HMC) algorithms. More specifically, under mild conditions for the associated Hamiltonian motion, we show that the outputs of the algorithms converge (strongly or weakly, depending on q) to the desired target distribution. In addition, we establish a general L^q convergence rate given a convergence rate at a specific q, and apply this result to conclude geometric convergence in the Euclidean space for HMC with uniformly strongly log-concave target and auxiliary distributions. We also present the results of experiments to illustrate L^q convergence.
Journal Article
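The leapfrog scheme underlying HMC, as discussed in the abstract above, can be sketched in a few lines. This is a generic textbook implementation for illustration, not the paper's specific construction; the function names are ours.

```python
import numpy as np

def hmc_sample(logp, grad_logp, x0, n_samples, eps=0.1, n_leapfrog=20, seed=0):
    """Basic HMC with identity mass matrix and a leapfrog integrator."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)              # resample auxiliary momentum
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * eps * grad_logp(x_new)         # initial half step for momentum
        for _ in range(n_leapfrog - 1):
            x_new += eps * p_new
            p_new += eps * grad_logp(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_logp(x_new)         # final half step
        # Metropolis correction removes the leapfrog discretization bias
        log_acc = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
        if np.log(rng.uniform()) < log_acc:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Standard bivariate normal target: logp(x) = -x'x/2, grad = -x
draws = hmc_sample(lambda x: -0.5 * x @ x, lambda x: -x,
                   x0=np.zeros(2), n_samples=2000)
```

With a well-tuned step size the Metropolis correction accepts nearly every proposal here, and the draws match the standard normal moments.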
FUNCTIONAL DATA ANALYSIS FOR DENSITY FUNCTIONS BY TRANSFORMATION TO A HILBERT SPACE
2016
Functional data that are nonnegative and have a constrained integral can be considered as samples of one-dimensional density functions. Such data are ubiquitous. Due to the inherent constraints, densities do not live in a vector space and, therefore, commonly used Hilbert space based methods of functional data analysis are not applicable. To address this problem, we introduce a transformation approach, mapping probability densities to a Hilbert space of functions through a continuous and invertible map. Basic methods of functional data analysis, such as the construction of functional modes of variation, functional regression or classification, are then implemented by using representations of the densities in this linear space. Representations of the densities themselves are obtained by applying the inverse map from the linear functional space to the density space. Transformations of interest include log quantile density and log hazard transformations, among others. Rates of convergence are derived for the representations that are obtained for a general class of transformations under certain structural properties. If the subject-specific densities need to be estimated from data, these rates correspond to the optimal rates of convergence for density estimation. The proposed methods are illustrated through simulations and applications in brain imaging.
Journal Article
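The log quantile density transformation mentioned in the abstract can be sketched numerically as follows. This is a minimal version assuming the density is strictly positive on a known grid; the function names are illustrative.

```python
import numpy as np

def log_quantile_density(f, x, n_grid=256):
    """Map a density f (values on grid x) to its log quantile density
    psi(t) = -log f(Q(t)), where Q is the quantile function."""
    # CDF by trapezoidal integration, normalized to end at 1
    F = np.concatenate([[0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))])
    F /= F[-1]
    t = np.linspace(0.0, 1.0, n_grid)
    Q = np.interp(t, F, x)                   # quantile function Q = F^{-1}
    f_at_Q = np.interp(Q, x, f)
    return t, -np.log(np.maximum(f_at_Q, 1e-12))

def inverse_transform(t, psi, x_left):
    """Invert psi back to a density: Q' = exp(psi), f(Q(t)) = exp(-psi(t))."""
    q = np.exp(psi)                          # quantile density q = Q'
    Q = x_left + np.concatenate(
        [[0.0], np.cumsum(0.5 * (q[1:] + q[:-1]) * np.diff(t))])
    return Q, np.exp(-psi)                   # support points and density values
```

Round-tripping the uniform density on [0, 1] gives psi ≡ 0 and recovers the density exactly; for general densities the reconstruction is exact up to interpolation error, which is what makes the map continuous and invertible in practice.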
Optimal cross-validation in density estimation with the L² loss
2014
We analyze the performance of cross-validation (CV) in the density estimation framework with two purposes: (i) risk estimation and (ii) model selection. The main focus is given to the so-called leave-p-out CV procedure (Lpo), where p denotes the cardinality of the test set. Closed-form expressions are derived for the Lpo estimator of the risk of projection estimators. These expressions provide a substantial improvement upon V-fold cross-validation in terms of variability and computational complexity.

From a theoretical point of view, the closed-form expressions also make it possible to study the Lpo performance in terms of risk estimation. Among CV procedures used for risk estimation, leave-one-out (Loo), that is, Lpo with p = 1, is proved to be optimal. Two model selection frameworks are also considered: estimation, as opposed to identification. For estimation with finite sample size n, optimality is achieved for p large enough [with p/n = o(1)] to balance the overfitting resulting from the structure of the model collection. For identification, model selection consistency is established for Lpo as long as p/n is suitably related to the rate of convergence of the best estimator in the collection: (i) p/n → 1 as n → +∞ with a parametric rate, and (ii) p/n = o(1) with some nonparametric estimators. These theoretical results are validated by simulation experiments.
Journal Article
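A classical special case of such closed-form CV expressions is the leave-one-out risk estimate for a histogram density estimator. The sketch below uses the standard least-squares CV identity for histograms, shown for illustration; it is not the paper's general projection-estimator formula.

```python
import numpy as np

def histogram_loo_risk(data, n_bins, lo, hi):
    """Closed-form leave-one-out CV estimate of the L2 risk (up to the
    constant integral of f^2) of a histogram density estimator."""
    n = len(data)
    h = (hi - lo) / n_bins
    counts, _ = np.histogram(data, bins=n_bins, range=(lo, hi))
    p_hat = counts / n                       # empirical bin probabilities
    return 2.0 / ((n - 1) * h) - (n + 1) / ((n - 1) * h) * np.sum(p_hat ** 2)

rng = np.random.default_rng(0)
data = rng.standard_normal(2000)
risks = {m: histogram_loo_risk(data, m, -4.0, 4.0) for m in (2, 8, 16, 64, 256)}
best = min(risks, key=risks.get)             # bin count minimizing estimated risk
```

Because the estimate is a closed form in the bin counts, no refitting is needed: the whole curve over candidate bin counts costs one pass over the data per candidate, which is the computational advantage the abstract refers to.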
ANTI-CONCENTRATION AND HONEST, ADAPTIVE CONFIDENCE BANDS
by Kato, Kengo; Chetverikov, Denis; Chernozhukov, Victor
in 62G07, 62G15, Anti-concentration of separable Gaussian processes
2014
Modern construction of uniform confidence bands for nonparametric densities (and other functions) often relies on the classical Smirnov-Bickel-Rosenblatt (SBR) condition; see, for example, Giné and Nickl [Probab. Theory Related Fields 143 (2009) 569-596]. This condition requires the existence of a limit distribution of an extreme value type for the supremum of a studentized empirical process (equivalently, for the supremum of a Gaussian process with the same covariance function as that of the studentized empirical process). The principal contribution of this paper is to remove the need for this classical condition. We show that a considerably weaker sufficient condition is derived from an anti-concentration property of the supremum of the approximating Gaussian process, and we derive an inequality leading to such a property for separable Gaussian processes. We refer to the new condition as a generalized SBR condition. Our new result shows that the supremum does not concentrate too fast around any value. We then apply this result to derive a Gaussian multiplier bootstrap procedure for constructing honest confidence bands for nonparametric density estimators (this result can be applied in other nonparametric problems as well). An essential advantage of our approach is that it applies generically even in those cases where the limit distribution of the supremum of the studentized empirical process does not exist (or is unknown). This is of particular importance in problems where resolution levels or other tuning parameters have been chosen in a data-driven fashion, which is needed for adaptive constructions of the confidence bands. Finally, of independent interest is our introduction of a new, practical version of Lepski's method, which computes the optimal, nonconservative resolution levels via a Gaussian multiplier bootstrap method.
Journal Article
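The Gaussian multiplier bootstrap idea can be sketched for a kernel density estimator as follows. This is a simplified illustration that ignores bias and data-driven bandwidth choice, so it is not the paper's full honest-band construction; all names are ours.

```python
import numpy as np

def kde_multiplier_band(data, grid, h, n_boot=500, level=0.95, seed=0):
    """Multiplier-bootstrap confidence band for a Gaussian-kernel density
    estimate. Sketch only: bias is ignored, so the band targets E[f_hat]."""
    rng = np.random.default_rng(seed)
    n = len(data)
    K = np.exp(-0.5 * ((grid[:, None] - data[None, :]) / h) ** 2) \
        / (h * np.sqrt(2.0 * np.pi))         # kernel matrix, grid x data
    f_hat = K.mean(axis=1)
    sups = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.standard_normal(n)           # Gaussian multipliers
        sups[b] = np.abs((K - f_hat[:, None]) @ e).max() / n
    width = np.quantile(sups, level)         # sup-norm critical value
    return f_hat, f_hat - width, f_hat + width

rng = np.random.default_rng(1)
data = rng.standard_normal(400)
grid = np.linspace(-3.0, 3.0, 61)
f_hat, lo_band, hi_band = kde_multiplier_band(data, grid, h=0.35)
```

The key point matching the abstract is that the critical value comes from the simulated supremum of a multiplier process with the estimator's own covariance structure, so no extreme-value limit distribution is ever invoked.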
ON ADAPTIVE POSTERIOR CONCENTRATION RATES
2015
We investigate the problem of deriving posterior concentration rates under different loss functions in nonparametric Bayes. We first provide a lower bound on posterior coverages of shrinking neighbourhoods that relates the metric or loss under which the shrinking neighbourhood is considered and an intrinsic pre-metric linked to frequentist separation rates. In the Gaussian white noise model, we construct feasible priors based on a spike-and-slab procedure reminiscent of wavelet thresholding that achieve adaptive rates of contraction under L² or L∞ metrics when the underlying parameter belongs to a collection of Hölder balls, and that moreover achieve our lower bound. We analyse the consequences in terms of asymptotic behaviour of posterior credible balls as well as frequentist minimax adaptive estimation. We complement our results with an upper bound on the contraction rate under an arbitrary loss in a generic regular experiment. The upper bound is attained for certain sieve priors and makes it possible to extend our results to density estimation.
Journal Article
Conditional quantile estimation of nonstationary time series: a locally stationary approach with environmental data application
2025
This paper investigates conditional distribution estimation for locally stationary time series (LSTS). We employ a nonparametric method, specifically proposing a Nadaraya-Watson (NW) estimator, to capture the time-varying dependence structures inherent in nonstationary time series. Our approach accommodates gradual temporal changes by leveraging local stationarity, enabling more accurate estimation of conditional distributions over time. We provide theoretical guarantees for the proposed methodology and demonstrate its practical relevance through a numerical experiment and an application using the rainfall dataset of Butuan City, Philippines. The results highlight the efficiency of NW estimation in modeling complex and time-varying phenomena.
Journal Article
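A minimal Nadaraya-Watson estimator of a conditional distribution function, with a product kernel in rescaled time and covariate, might look like this. It is an illustrative sketch on synthetic data, not the paper's estimator or the Butuan City dataset; the locally stationary aspect enters only through the time kernel.

```python
import numpy as np

def nw_conditional_cdf(t_obs, x_obs, y_obs, t0, x0, y_grid, h_t, h_x):
    """Nadaraya-Watson estimate of F(y | X = x0) near rescaled time t0,
    using a Gaussian product kernel in time and covariate."""
    k = lambda u: np.exp(-0.5 * u ** 2)
    w = k((t_obs - t0) / h_t) * k((x_obs - x0) / h_x)
    w = w / w.sum()                          # normalized local weights
    ind = (y_obs[:, None] <= y_grid[None, :]).astype(float)
    return w @ ind                           # weighted empirical CDF on y_grid

rng = np.random.default_rng(0)
n = 2000
t_obs = np.linspace(0.0, 1.0, n)             # rescaled observation times
x_obs = rng.standard_normal(n)
y_obs = 0.5 * x_obs + 0.5 * rng.standard_normal(n)
y_grid = np.linspace(-2.0, 3.0, 51)
F = nw_conditional_cdf(t_obs, x_obs, y_obs, t0=0.5, x0=1.0,
                       y_grid=y_grid, h_t=0.2, h_x=0.3)
```

In this synthetic example the conditional law given x0 = 1 is N(0.5, 0.25), so the estimated CDF crosses one half near y = 0.5; conditional quantiles follow by inverting F on the grid.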
A SIMPLE BOOTSTRAP METHOD FOR CONSTRUCTING NONPARAMETRIC CONFIDENCE BANDS FOR FUNCTIONS
2013
Standard approaches to constructing nonparametric confidence bands for functions are frustrated by the impact of bias, which generally is not estimated consistently when using the bootstrap and conventionally smoothed function estimators. To overcome this problem it is common practice to either undersmooth, so as to reduce the impact of bias, or oversmooth, and thereby introduce an explicit or implicit bias estimator. However, these approaches, and others based on nonstandard smoothing methods, complicate the process of inference, for example, by requiring the choice of new, unconventional smoothing parameters and, in the case of undersmoothing, producing relatively wide bands. In this paper we suggest a new approach, which exploits to our advantage one of the difficulties that, in the past, has prevented an attractive solution to the problem—the fact that the standard bootstrap bias estimator suffers from relatively high-frequency stochastic error. The high frequency, together with a technique based on quantiles, can be exploited to dampen down the stochastic error term, leading to relatively narrow, simple-to-construct confidence bands.
Journal Article
GENERALIZED DENSITY CLUSTERING
2010
We study generalized density-based clustering in which sharply defined clusters such as clusters on lower-dimensional manifolds are allowed. We show that accurate clustering is possible even in high dimensions. We propose two data-based methods for choosing the bandwidth and we study the stability properties of density clusters. We show that a simple graph-based algorithm successfully approximates the high density clusters.
Journal Article
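A bare-bones version of graph-based density clustering, keeping points above a density level and linking nearby survivors, can be sketched as below. This is illustrative only; the paper's bandwidth-selection methods and stability analysis are not reproduced.

```python
import numpy as np

def density_clusters(X, h, level, r):
    """Keep points whose KDE value is at least `level`, link kept points
    within distance `r`, and label connected components (-1 = below level)."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    f = np.exp(-0.5 * d2 / h ** 2).mean(1) / (h * np.sqrt(2 * np.pi)) ** X.shape[1]
    keep = np.flatnonzero(f >= level)
    parent = {i: i for i in keep}            # union-find over kept points
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a in keep:
        for b in keep:
            if a < b and d2[a, b] <= r ** 2:
                parent[find(b)] = find(a)    # union points joined by an edge
    labels = -np.ones(n, dtype=int)
    roots = {}
    for i in keep:
        labels[i] = roots.setdefault(find(i), len(roots))
    return labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, (60, 2)), rng.normal(5.0, 0.2, (60, 2))])
labels = density_clusters(X, h=0.5, level=0.01, r=1.0)
```

With two well-separated blobs, the surviving points split into exactly two connected components, matching the abstract's point that a simple graph algorithm recovers the high-density clusters.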
A New Bayesian Approach to Global Optimization on Parametrized Surfaces in ℝ³
2024
This work introduces a new Riemannian optimization method for registering open parameterized surfaces with a constrained global optimization approach. The proposed formulation leads to a rigorous theoretic foundation and guarantees the existence and the uniqueness of a global solution. We also propose a new Bayesian clustering approach where local distributions of surfaces are modeled with spherical Gaussian processes. The maximization of the posterior density is performed with Hamiltonian dynamics which provide a natural and computationally efficient spherical Hamiltonian Monte Carlo sampling. Experimental results demonstrate the efficiency of the proposed method.
Journal Article
Deep data density estimation through Donsker-Varadhan representation
by Park, Seonho; Pardalos, Panos M.
in Artificial intelligence, Artificial neural networks, Deep learning
2025
Estimating the data density is one of the challenging problems in the deep learning community. In this paper, we present a simple yet effective methodology for estimating the data density using the Donsker-Varadhan variational lower bound on the KL divergence together with deep neural network modeling. We demonstrate that the optimal critic function associated with the Donsker-Varadhan representation of the KL divergence between the data and the uniform distribution can estimate the data density. We also present the deep neural network-based modeling and its stochastic learning procedure. Experimental results and possible applications demonstrate that the proposed method is competitive with previous methods for data density estimation and is promising for various applications.
Journal Article
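The core idea, that the optimal Donsker-Varadhan critic against a uniform reference recovers the log density ratio, can be illustrated with a simple parametric critic instead of a deep network. This is a one-dimensional sketch under a Gaussian data assumption; the quadratic critic and all names are ours, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(5000)             # samples from the unknown density
a, b = -5.0, 5.0                             # support of the uniform reference
u = rng.uniform(a, b, 20000)                 # reference samples

phi = lambda x: np.stack([np.ones_like(x), x, x ** 2])   # critic features
theta = np.zeros(3)                          # critic T(x) = theta @ phi(x)

# Gradient ascent on the DV bound  E_data[T] - log E_unif[exp(T)],
# which is concave in theta for this linear-in-parameters critic.
for _ in range(2000):
    w = np.exp(theta @ phi(u))
    grad = phi(data).mean(axis=1) - (phi(u) * w).mean(axis=1) / w.mean()
    theta += 0.01 * grad

def density(x):
    """At the optimum, dP/dU = exp(T)/E_U[exp(T)]; dividing by (b - a)
    turns the uniform reference density into an estimate of p(x)."""
    w = np.exp(theta @ phi(u))
    return np.exp(theta @ phi(np.asarray(x, dtype=float))) / (w.mean() * (b - a))
```

Because the true log ratio log p(x) - log(1/(b - a)) is quadratic for Gaussian data, this small critic family contains the optimum, and the fitted density is close to the standard normal curve; a neural critic plays the same role for densities of unknown shape.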