Asset Details
Stopping Rules for Gradient Methods for Non-convex Problems with Additive Noise in Gradient
by Polyak, Boris; Stonyakin, Fedor; Kuruzov, Ilya
in Convexity / Methods / Optimization
2023
Journal Article
Overview
We study the gradient method under the assumption that only an additively inexact gradient is available for, generally speaking, non-convex problems. Both the non-convexity of the objective function and the use of an inexact gradient at each iteration can cause difficulties: for example, the trajectory of the gradient method may move far away from the starting point, and in the presence of noise such unbounded drift can carry the trajectory away from the desired global solution. We analyze the behavior of the trajectory of the gradient method under the assumptions of gradient inexactness and gradient dominance. This condition is well known to hold for many important non-convex problems, and it leads to good complexity guarantees for the gradient method. We propose an early stopping rule for the gradient method that, firstly, guarantees an acceptable quality of the output point in terms of the objective function and, secondly, ensures a fairly moderate distance of this point from the chosen starting point. In addition to the gradient method with a constant step size, we also investigate in detail a variant with an adaptive step size, which makes it possible to apply the developed technique when the Lipschitz constant of the gradient is unknown. Computational experiments demonstrate the effectiveness of the proposed stopping rule for the investigated gradient methods.
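The setting described above can be sketched in a few lines: gradient descent with a constant step 1/L, where each gradient is corrupted by additive noise of norm at most δ, and the method stops as soon as the inexact gradient norm falls below a threshold proportional to δ. This is a minimal illustration of a stopping rule of the type the abstract discusses, not the paper's exact rule; the constant `c`, the noise model, and the test function (a standard non-convex but gradient-dominated example) are illustrative assumptions.

```python
import numpy as np

def noisy_gd_early_stop(grad, x0, L, delta, c=2.0, max_iter=10_000, rng=None):
    """Gradient descent with step 1/L on an additively inexact gradient.

    Stops at the first iterate whose *inexact* gradient norm drops below
    c * delta, i.e. once the gradient signal is comparable to the noise
    floor.  The constant c = 2 is an illustrative choice, not the one
    derived in the paper.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        noise = rng.uniform(-1.0, 1.0, size=x.shape)
        noise *= delta / max(np.linalg.norm(noise), 1e-12)  # ||noise|| <= delta
        g = grad(x) + noise                # additively inexact gradient
        if np.linalg.norm(g) <= c * delta:
            return x, k                    # early stop near the noise floor
        x = x - g / L                      # constant step 1/L
    return x, max_iter

# f(x) = x^2 + 3 sin^2(x): non-convex, but gradient-dominated (PL condition),
# with unique critical point and global minimizer at x = 0.
f_grad = lambda x: 2.0 * x + 3.0 * np.sin(2.0 * x)   # gradient, L = 8
x_out, iters = noisy_gd_early_stop(f_grad, x0=np.array([3.0]),
                                   L=8.0, delta=0.05, rng=0)
```

With a small noise level the method stops after a moderate number of iterations at a point close to the minimizer, without the trajectory drifting far from the starting point, which is the behavior the stopping rule is designed to guarantee.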
Publisher
Springer Nature B.V