Search Results
14 result(s) for "Comminges, Laëtitia"
MINIMAX ESTIMATION OF LINEAR AND QUADRATIC FUNCTIONALS ON SPARSITY CLASSES
For the Gaussian sequence model, we obtain nonasymptotic minimax rates of estimation of the linear, quadratic and ℓ2-norm functionals on classes of sparse vectors and construct optimal estimators that attain these rates. The main object of interest is the class \(B_0(s)\) of s-sparse vectors \(\theta = (\theta_1, \dots, \theta_d)\), for which we also provide completely adaptive estimators (independent of s and of the noise variance σ) having logarithmically slower rates than the minimax ones. Furthermore, we obtain the minimax rates on the ℓq-balls \(B_q(r) = \{\theta \in \mathbb{R}^d : \|\theta\|_q \le r\}\), where \(0 < q \le 2\) and \(\|\theta\|_q = \left(\sum_{i=1}^{d} |\theta_i|^q\right)^{1/q}\). This analysis shows that there are, in general, three zones in the rates of convergence that we call the sparse zone, the dense zone and the degenerate zone, while a fourth zone appears for estimation of the quadratic functional. We show that, as opposed to estimation of θ, the correct logarithmic terms in the optimal rates for the sparse zone scale as \(\log(d/s^2)\) and not as \(\log(d/s)\). For the class \(B_0(s)\), the rates of estimation of the linear functional and of the ℓ2-norm have a simple elbow at \(s = \sqrt{d}\) (boundary between the sparse and the dense zones) and exhibit similar performances, whereas the estimation of the quadratic functional Q(θ) reveals more complex effects: the minimax risk on \(B_0(s)\) is infinite and the sparseness assumption needs to be combined with a bound on the ℓ2-norm. Finally, we apply our results on estimation of the ℓ2-norm to the problem of testing against sparse alternatives. In particular, we obtain a nonasymptotic analog of the Ingster–Donoho–Jin theory revealing some effects that were not captured by the previous asymptotic analysis.
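A minimal numerical sketch of the sequence model behind this abstract may help fix ideas. The snippet below (Python with numpy, all parameter values chosen arbitrarily for illustration) draws an s-sparse mean observed in Gaussian noise and contrasts a naive plug-in estimate of the squared ℓ2-norm with a thresholded, debiased one; it is not the paper's optimal estimator, only a cartoon of why sparsity must be exploited.

```python
import numpy as np

rng = np.random.default_rng(0)
d, s, sigma = 10_000, 20, 1.0

# s-sparse mean vector theta in B_0(s)
theta = np.zeros(d)
theta[:s] = 6.0

# Gaussian sequence model: y_i = theta_i + sigma * xi_i
y = theta + sigma * rng.standard_normal(d)

# naive plug-in is badly biased: E ||y||^2 = ||theta||^2 + d * sigma^2
naive = np.sum(y**2)

# keep only coordinates above the universal noise level sqrt(2 log d),
# then subtract the noise bias on the survivors (illustrative, not optimal)
t = sigma * np.sqrt(2 * np.log(d))
kept = y[np.abs(y) > t]
thresholded = np.sum(kept**2 - sigma**2)

print(f"true ||theta||^2      = {np.sum(theta**2):.1f}")
print(f"naive plug-in         = {naive:.1f}")
print(f"thresholded, debiased = {thresholded:.1f}")
```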
TIGHT CONDITIONS FOR CONSISTENCY OF VARIABLE SELECTION IN THE CONTEXT OF HIGH DIMENSIONALITY
We address the issue of variable selection in the regression model with very high ambient dimension, that is, when the number of variables is very large. The main focus is on the situation where the number of relevant variables, called intrinsic dimension, is much smaller than the ambient dimension d. Without assuming any parametric form of the underlying regression function, we get tight conditions making it possible to consistently estimate the set of relevant variables. These conditions relate the intrinsic dimension to the ambient dimension and to the sample size. The procedure that is provably consistent under these tight conditions is based on comparing quadratic functionals of the empirical Fourier coefficients with appropriately chosen threshold values. The asymptotic analysis reveals the presence of two quite different regimes. The first regime is when the intrinsic dimension is fixed. In this case the situation in nonparametric regression is the same as in linear regression, that is, consistent variable selection is possible if and only if log d is small compared to the sample size n. The picture is different in the second regime, that is, when the number of relevant variables denoted by s tends to infinity as n → ∞. Then we prove that consistent variable selection in nonparametric set-up is possible only if s + log log d is small compared to log n. We apply these results to derive minimax separation rates for the problem of variable selection.
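The selection idea described in this abstract, comparing quadratic functionals of empirical Fourier coefficients with thresholds, can be caricatured in a few lines. The Python/numpy toy below, with made-up frequencies and an ad hoc threshold, scores each candidate variable by the sum of squared empirical Fourier coefficients of the response against features of that variable; the paper's actual procedure and threshold calibration are far more careful.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2_000, 50
X = rng.uniform(0.0, 1.0, size=(n, d))
# regression function depends only on variables 0 and 3
Y = np.sin(2*np.pi*X[:, 0]) + 0.5*np.cos(2*np.pi*X[:, 3]) + 0.1*rng.standard_normal(n)

def select_variables(X, Y, n_freq=3, level=3.0):
    n, d = X.shape
    relevant = []
    for j in range(d):
        coefs = []
        for k in range(1, n_freq + 1):
            # empirical Fourier coefficients of Y against features of X_j
            coefs.append(np.mean(Y * np.sin(2*np.pi*k*X[:, j])))
            coefs.append(np.mean(Y * np.cos(2*np.pi*k*X[:, j])))
        stat = n * np.sum(np.square(coefs))  # quadratic functional of the coefficients
        if stat > level * n_freq:            # ad hoc threshold
            relevant.append(j)
    return relevant

print("selected variables:", select_variables(X, Y))  # expect [0, 3]
```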
OPTIMAL ADAPTIVE ESTIMATION OF LINEAR FUNCTIONALS UNDER SPARSITY
We consider the problem of estimation of a linear functional in the Gaussian sequence model where the unknown vector θ ∈ ℝ^d belongs to a class of s-sparse vectors with unknown s. We suggest an adaptive estimator achieving a nonasymptotic rate of convergence that differs from the minimax rate at most by a logarithmic factor. We also show that this optimal adaptive rate cannot be improved when s is unknown. Furthermore, we address the issue of simultaneous adaptation to s and to the variance σ² of the noise. We suggest an estimator that achieves the optimal adaptive rate when both s and σ² are unknown.
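As a quick illustration of the role of sparsity for the linear functional \(L(\theta) = \sum_i \theta_i\), the hedged numpy sketch below compares the naive sum of the observations, whose standard deviation grows like \(\sigma\sqrt{d}\), with a sum restricted to coordinates exceeding a fixed universal threshold; the paper's adaptive estimator, which must work without knowing s or σ², is more refined.

```python
import numpy as np

rng = np.random.default_rng(2)
d, s, sigma = 100_000, 10, 1.0
theta = np.zeros(d)
theta[:s] = 8.0
y = theta + sigma * rng.standard_normal(d)

# naive estimator sums d noise terms: std = sigma * sqrt(d), ~316 here
naive = y.sum()

# summing only coordinates above sqrt(2 log d) keeps roughly the s signal terms
t = sigma * np.sqrt(2 * np.log(d))
thresholded = y[np.abs(y) > t].sum()

print(f"true L(theta) = {theta.sum():.1f}")
print(f"naive         = {naive:.1f}")
print(f"thresholded   = {thresholded:.1f}")
```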
ADAPTIVE ROBUST ESTIMATION IN SPARSE VECTOR MODEL
For the sparse vector model, we consider estimation of the target vector, of its ℓ2-norm and of the noise variance. We construct adaptive estimators and establish the optimal rates of adaptive estimation when adaptation is considered with respect to the triplet “noise level—noise distribution—sparsity.” We consider classes of noise distributions with polynomially and exponentially decreasing tails as well as the case of Gaussian noise. The obtained rates turn out to be different from the minimax nonadaptive rates when the triplet is known. A crucial issue is the ignorance of the noise variance. Moreover, knowing or not knowing the noise distribution can also influence the rate. For example, the rates of estimation of the noise variance can differ depending on whether the noise is Gaussian or sub-Gaussian without a precise knowledge of the distribution. Estimation of noise variance in our setting can be viewed as an adaptive variant of robust estimation of scale in the contamination model, where instead of fixing the “nominal” distribution in advance we assume that it belongs to some class of distributions.
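One ingredient mentioned here, estimating the noise level when most coordinates carry no signal, can be illustrated with a median-based scale estimate. The following minimal numpy sketch assumes Gaussian noise and treats the sparse signal coordinates as contamination; the paper's adaptive estimators and rate analysis go well beyond this.

```python
import numpy as np

rng = np.random.default_rng(3)
d, s, sigma = 10_000, 50, 2.0
theta = np.zeros(d)
theta[:s] = 10.0
y = theta + sigma * rng.standard_normal(d)

# median absolute deviation: the s signal coordinates barely move the median
mad = np.median(np.abs(y - np.median(y)))
sigma_hat = mad / 0.6745   # calibration constant Phi^{-1}(0.75) for Gaussian noise

print(f"true sigma = {sigma}, MAD-based estimate = {sigma_hat:.3f}")
```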
Minimax optimal estimators for general additive functional estimation
In this paper, we observe a sparse mean vector through Gaussian noise and we aim at estimating some additive functional of the mean in the minimax sense. More precisely, we generalize the results of (Collier et al., 2017, 2019) to a very large class of functionals. The optimal minimax rate is shown to depend on the polynomial approximation rate of the marginal functional, and optimal estimators achieving this rate are built.
On estimation of nonsmooth functionals of sparse normal means
We study the problem of estimation of the value \(N_\gamma(\theta) = \sum_{i=1}^{d} |\theta_i|^\gamma\) for \(0 < \gamma \le 1\), based on the observations \(y_i = \theta_i + \epsilon\xi_i\), \(i = 1, \dots, d\), where \(\theta = (\theta_1, \dots, \theta_d)\) are unknown parameters, \(\epsilon > 0\) is known, and \(\xi_i\) are i.i.d. standard normal random variables. We establish the non-asymptotic minimax risk on the class \(B_0(s)\) of s-sparse vectors and propose estimators achieving the minimax rate.
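A brief numerical sketch of this functional, assuming numpy and arbitrary parameter values: the naive plug-in \(\sum_i |y_i|^\gamma\) accumulates a positive bias on every pure-noise coordinate, which a crude threshold removes. The thresholded variant is only an illustration, not the minimax-rate estimator constructed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
d, s, eps, gamma = 10_000, 20, 1.0, 0.5
theta = np.zeros(d)
theta[:s] = 6.0
y = theta + eps * rng.standard_normal(d)

true = np.sum(np.abs(theta) ** gamma)
naive = np.sum(np.abs(y) ** gamma)      # ~ d * E|xi|^gamma from noise alone

# keep only coordinates that clear the universal noise level (illustrative)
t = eps * np.sqrt(2 * np.log(d))
thresholded = np.sum(np.abs(y[np.abs(y) > t]) ** gamma)

print(f"true N_gamma = {true:.1f}, naive = {naive:.1f}, thresholded = {thresholded:.1f}")
```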
Minimax testing of a composite null hypothesis defined via a quadratic functional in the model of regression
We consider the problem of testing a particular type of composite null hypothesis under a nonparametric multivariate regression model. For a given quadratic functional \(Q\), the null hypothesis states that the regression function \(f\) satisfies the constraint \(Q[f] = 0\), while the alternative corresponds to the functions for which \(Q[f]\) is bounded away from zero. On the one hand, we provide minimax rates of testing and the exact separation constants, along with a sharp-optimal testing procedure, for diagonal and nonnegative quadratic functionals. We consider smoothness classes of ellipsoidal form and check that our conditions are fulfilled in the particular case of ellipsoids corresponding to anisotropic Sobolev classes. In this case, we present a closed form of the minimax rate and the separation constant. On the other hand, minimax rates for quadratic functionals which are neither positive nor negative reveal two different regimes: "regular" and "irregular". In the "regular" case, the minimax rate is equal to \(n^{-1/4}\), while in the "irregular" case the rate depends on the smoothness class and is slower than in the "regular" case. We apply this to the issue of testing the equality of norms of two functions observed in noisy environments.
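The final application, testing equality of norms of two noisy signals, admits a simple caricature in a sequence-model version: an unbiased estimate of \(Q = \|f\|^2 - \|g\|^2\) is compared to its rough null standard deviation. The numpy sketch below uses ad hoc signal sizes and a crude z-threshold; the paper derives minimax rates and exact separation constants instead.

```python
import numpy as np

rng = np.random.default_rng(5)
d, eps = 20_000, 1.0
f = np.zeros(d)
f[:1000] = 1.0                 # ||f||^2 = 1000
g = np.zeros(d)                # ||g||^2 = 0, so Q = 1000 under this alternative

x = f + eps * rng.standard_normal(d)
y = g + eps * rng.standard_normal(d)

# unbiased estimate of Q = ||f||^2 - ||g||^2: the eps^2 bias cancels
T = np.sum(x**2 - y**2)

# rough null standard deviation: var(T) ~ 4 d eps^4 when f = g = 0
z = T / np.sqrt(4 * d * eps**4)
print(f"T = {T:.1f}, z = {z:.2f}, reject Q[f]=0: {abs(z) > 2.0}")
```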