Asset Details
SLOPE IS ADAPTIVE TO UNKNOWN SPARSITY AND ASYMPTOTICALLY MINIMAX
by Su, Weijie; Candès, Emmanuel
in Asymptotic methods / Mathematical problems / Measurement errors / Normal distribution / Regression analysis / Sparsity / Studies
2016
Journal Article
Overview
We consider high-dimensional sparse regression problems in which we observe y = Xβ + z, where X is an n × p design matrix and z is an n-dimensional vector of independent Gaussian errors, each with variance σ². Our focus is on the recently introduced SLOPE estimator [Ann. Appl. Stat. 9 (2015) 1103–1140], which regularizes the least-squares estimates with the rank-dependent penalty Σ_{1 ≤ i ≤ p} λ_i |β̂|_(i), where |β̂|_(i) is the ith largest magnitude of the fitted coefficients. Under Gaussian designs, where the entries of X are i.i.d. N(0, 1/n), we show that SLOPE, with weights λ_i about equal to σ·Φ⁻¹(1 − iq/(2p)) [Φ⁻¹(α) is the αth quantile of a standard normal and q is a fixed number in (0, 1)], achieves a squared error of estimation obeying

sup_{‖β‖₀ ≤ k} ℙ(‖β̂_SLOPE − β‖² > (1 + ε)·2σ²k log(p/k)) → 0

as the dimension p increases to ∞, where ε > 0 is an arbitrarily small constant. This holds under a weak assumption on the ℓ₀-sparsity level, namely, k/p → 0 and (k log p)/n → 0, and is sharp in the sense that this is the best possible error any estimator can achieve. A remarkable feature is that SLOPE does not require any knowledge of the degree of sparsity and yet automatically adapts to yield optimal total squared errors over a wide range of ℓ₀-sparsity classes. We are not aware of any other estimator with this property.
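The two ingredients named in the abstract can be sketched in a few lines: the Benjamini–Hochberg-style weights λ_i = σ·Φ⁻¹(1 − iq/(2p)), and the proximal operator of the sorted-ℓ₁ penalty, which is the core step of SLOPE solvers. This is a minimal illustrative sketch, not the authors' reference implementation; the function names `bh_weights` and `slope_prox` are our own, and the prox uses the stack-based pool-adjacent-violators pass described in the companion SLOPE paper cited above [Ann. Appl. Stat. 9 (2015) 1103–1140].

```python
import math
from statistics import NormalDist

def bh_weights(p, q, sigma=1.0):
    """Benjamini-Hochberg-style SLOPE weights: lambda_i = sigma * Phi^{-1}(1 - i*q/(2p))."""
    inv_cdf = NormalDist().inv_cdf
    return [sigma * inv_cdf(1.0 - (i + 1) * q / (2.0 * p)) for i in range(p)]

def slope_prox(y, lam):
    """Prox of the sorted-L1 (SLOPE) penalty:
    argmin_b 0.5*||b - y||^2 + sum_i lam_i * |b|_(i),
    with lam nonincreasing, via a stack-based pool-adjacent-violators pass."""
    p = len(y)
    order = sorted(range(p), key=lambda i: -abs(y[i]))   # indices by decreasing |y_i|
    z = [abs(y[order[k]]) - lam[k] for k in range(p)]    # shift sorted magnitudes by the weights
    blocks = []  # (block sum, block size); block averages are kept nonincreasing
    for v in z:
        blocks.append((v, 1))
        # merge while a block's average rises above its predecessor's
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] < blocks[-1][0] * blocks[-2][1]:
            s2, c2 = blocks.pop()
            s1, c1 = blocks.pop()
            blocks.append((s1 + s2, c1 + c2))
    prox_sorted = []
    for s, c in blocks:
        prox_sorted.extend([max(s / c, 0.0)] * c)        # clip negatives at zero
    b = [0.0] * p
    for k, i in enumerate(order):
        b[i] = math.copysign(prox_sorted[k], y[i])       # restore signs and original order
    return b
```

When all the weights are equal, `slope_prox` reduces to ordinary soft-thresholding; with the decreasing `bh_weights` sequence, larger fitted coefficients are penalized more heavily, which is what drives the adaptivity discussed in the abstract.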
Publisher
Institute of Mathematical Statistics