Asset Details
OPTIMAL COMPUTATIONAL AND STATISTICAL RATES OF CONVERGENCE FOR SPARSE NONCONVEX LEARNING PROBLEMS
by Wang, Zhaoran; Zhang, Tong; Liu, Han
in 62F30 / 62J12 / 90C26 / 90C52 / Algorithms / Complexity theory / Computational statistics / Estimators / Generalized linear models / geometric computational rate / Learning disabilities / Least squares / Logistics / Nonconvex regularized M-estimation / Objective functions / optimal statistical rate / Oracles / path-following method / Perceptron convergence procedure / Regularization methods / Statistical analysis / Statistical properties / Statistical theories / Studies
2014
Journal Article
Overview
We provide a theoretical analysis of the statistical and computational properties of penalized M-estimators that can be formulated as the solution to a possibly nonconvex optimization problem. Many important estimators fall into this category, including least squares regression with nonconvex regularization, generalized linear models with nonconvex regularization, and sparse elliptical random design regression. For these problems, it is intractable to calculate the global solution due to the nonconvex formulation. In this paper, we propose an approximate regularization path-following method for solving a variety of learning problems with nonconvex objective functions. Under a unified analytic framework, we simultaneously provide explicit statistical and computational rates of convergence for any local solution attained by the algorithm. Computationally, our algorithm attains a global geometric rate of convergence for calculating the full regularization path, which is optimal among all first-order algorithms. Unlike most existing methods, which attain geometric rates of convergence only for a single regularization parameter, our algorithm calculates the full regularization path with the same iteration complexity. In particular, we provide a refined iteration complexity bound to sharply characterize the performance of each stage along the regularization path. Statistically, we provide sharp sample complexity analysis for all the approximate local solutions along the regularization path. In particular, our analysis improves upon existing results by providing a more refined sample complexity bound as well as an exact support recovery result for the final estimator. These results show that the final estimator attains an oracle statistical property due to the use of the nonconvex penalty.
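The path-following idea summarized in the overview can be sketched in a few lines: decrease the regularization parameter geometrically and, at each stage, warm-start a first-order (proximal gradient) solver from the previous stage's solution. The sketch below is an illustration of that scheme, not the authors' implementation; it uses MCP-penalized least squares as the nonconvex example, and all function names and parameter values (`eta`, `gamma`, `inner_iters`) are illustrative assumptions.

```python
import numpy as np

def mcp_prox(z, lam, gamma, t):
    """Closed-form proximal map of the MCP penalty with step size t
    (valid when gamma > t). Small entries are thresholded to zero;
    large entries (|z| > gamma * lam) are left unshrunk, which is what
    makes the penalty nearly unbiased."""
    shrunk = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0) / (1.0 - t / gamma)
    return np.where(np.abs(z) > gamma * lam, z, shrunk)

def path_following(X, y, lam_min, eta=0.8, gamma=3.0, inner_iters=100):
    """Approximate regularization path following for MCP-penalized
    least squares: geometrically decrease lambda and warm-start each
    stage's proximal gradient iterations from the previous solution."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of the loss gradient
    t = 1.0 / L                              # proximal gradient step size
    lam = np.max(np.abs(X.T @ y)) / n        # largest lambda on the path
    beta = np.zeros(d)
    path = []
    while lam > lam_min:
        lam = max(eta * lam, lam_min)        # geometric decrease along the path
        for _ in range(inner_iters):         # warm-started inner solver
            grad = X.T @ (X @ beta - y) / n
            beta = mcp_prox(beta - t * grad, lam, gamma, t)
        path.append((lam, beta.copy()))
    return path
```

On a small synthetic sparse regression problem, the final stage of this sketch typically recovers the support of the true coefficient vector, loosely mirroring the exact support recovery result stated in the overview.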
Publisher
Institute of Mathematical Statistics