Asset Details
A simple and practical adaptive trust-region method
by Hinder, Oliver; Hamad, Fadi
in Arc cutting / Optimization / Regularization
2025
Paper
Overview
We present an adaptive trust-region method for unconstrained optimization that allows inexact solutions to the trust-region subproblems. Our method is a simple variant of the classical trust-region method of \citet{sorensen1982newton}. The method achieves the best possible convergence bound, up to an additive log factor, for finding an \(\epsilon\)-approximate stationary point, i.e., \(O(\Delta_f L^{1/2} \epsilon^{-3/2}) + \tilde{O}(1)\) iterations, where \(L\) is the Lipschitz constant of the Hessian, \(\Delta_f\) is the optimality gap, and \(\epsilon\) is the termination tolerance for the gradient norm. This improves over existing trust-region methods, whose worst-case bounds are at least a factor of \(L\) worse. We compare our performance with the state-of-the-art trust-region (TRU) and cubic regularization (ARC) methods from the GALAHAD library on problems from the CUTEst benchmark set with more than 100 variables. Our method uses fewer function, gradient, and Hessian evaluations than these methods; for instance, our algorithm's median number of gradient evaluations is \(23\), compared with \(36\) for TRU and \(29\) for ARC. Compared to the conference version of this paper \cite{hamad2022consistently}, our revised method includes several practical enhancements. These modifications dramatically improve performance, including an order-of-magnitude reduction in the shifted geometric mean of wall-clock times. We also show that it suffices for the second derivatives to be locally Lipschitz to guarantee that either the minimum gradient norm converges to zero or the objective value tends to negative infinity, even when the iterates diverge.
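The generic framework the abstract builds on can be sketched minimally: approximately minimize a quadratic model of the objective within a trust radius, then grow or shrink the radius based on how well the model's predicted decrease matched the actual decrease. The sketch below is a textbook Cauchy-point variant of this loop, not the authors' adaptive method; all function names and parameter values (`eta`, `delta0`, the shrink/grow factors) are illustrative assumptions.

```python
import numpy as np

def trust_region_minimize(f, grad, hess, x0, delta0=1.0, delta_max=100.0,
                          eta=0.1, tol=1e-6, max_iter=500):
    """Classical trust-region loop with a Cauchy-point subproblem solve.

    A textbook sketch of the generic framework (ratio test on predicted vs.
    actual decrease), NOT the adaptive method described in the abstract.
    """
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:               # epsilon-approximate stationary point
            break
        # Cauchy point: minimize the quadratic model along -g inside the ball
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0 else min(gnorm**3 / (delta * gBg), 1.0)
        p = -tau * (delta / gnorm) * g
        pred = -(g @ p + 0.5 * p @ B @ p)   # model's predicted decrease
        ared = f(x) - f(x + p)              # actual decrease of the objective
        rho = ared / pred if pred > 0 else -1.0
        if rho < 0.25:                      # poor model fit: shrink radius
            delta *= 0.25
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2.0 * delta, delta_max)  # good fit at boundary: grow
        if rho > eta:                       # accept step only if it helps
            x = x + p
    return x
```

On a convex quadratic such as `f(x) = (x[0]-1)**2 + 10*(x[1]+2)**2`, the ratio test always accepts and the loop drives the gradient norm below `tol`; the adaptive method of the paper replaces the fixed shrink/grow constants and exact subproblem requirements with rules that yield the stated \(O(\Delta_f L^{1/2}\epsilon^{-3/2})\)-type bound.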
Publisher
Cornell University Library, arXiv.org
Subject
Arc cutting / Optimization / Regularization