Asset Details
Closed-Form Last Layer Optimization
by Hennig, Philipp; Da Costa, Nathaël; Xu, Liyuan; Galashov, Alexandre; Gretton, Arthur
in Closed form solutions / Exact solutions / Neural networks / Optimization / Parameters
2025
Paper
Overview
Neural networks are typically optimized with variants of stochastic gradient descent. Under a squared loss, however, the optimal solution to the linear last layer weights is known in closed-form. We propose to leverage this during optimization, treating the last layer as a function of the backbone parameters, and optimizing solely for these parameters. We show this is equivalent to alternating between gradient descent steps on the backbone and closed-form updates on the last layer. We adapt the method for the setting of stochastic gradient descent, by trading off the loss on the current batch against the accumulated information from previous batches. Further, we prove that, in the Neural Tangent Kernel regime, convergence of this method to an optimal solution is guaranteed. Finally, we demonstrate the effectiveness of our approach compared with standard SGD on a squared loss in several supervised tasks -- both regression and classification -- including Fourier Neural Operators and Instrumental Variable Regression.
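The following is a minimal, full-batch sketch of the idea described in the overview: the last-layer weights are obtained in closed form as the ridge-regression solution on the backbone features, and only the backbone parameters are updated by gradient descent through that solve. The toy data, backbone architecture, ridge parameter lam, and learning rate are illustrative assumptions; this is not the authors' implementation and it omits the stochastic mini-batch variant that trades off the current batch against accumulated information.

```python
# Sketch (assumptions: toy regression data, small MLP backbone, ridge term lam).
import torch

torch.manual_seed(0)
X = torch.randn(256, 10)                     # toy inputs
Y = torch.sin(X.sum(dim=1, keepdim=True))    # toy regression targets

backbone = torch.nn.Sequential(
    torch.nn.Linear(10, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 32), torch.nn.Tanh(),
)
opt = torch.optim.SGD(backbone.parameters(), lr=1e-2)
lam = 1e-3                                    # regularizer for the closed-form solve

for step in range(500):
    Phi = backbone(X)                         # features, shape (n, d)
    # Closed-form last layer W(theta) = (Phi^T Phi + lam I)^{-1} Phi^T Y,
    # kept inside the autograd graph so the loss depends only on backbone params.
    A = Phi.T @ Phi + lam * torch.eye(Phi.shape[1])
    W = torch.linalg.solve(A, Phi.T @ Y)
    loss = ((Phi @ W - Y) ** 2).mean()        # squared loss with the optimal last layer
    opt.zero_grad()
    loss.backward()                           # gradient step on the backbone only
    opt.step()
```

Differentiating through the solve is one way to realize the equivalence stated above: each backbone step is taken with the last layer held at its closed-form optimum for the current features.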
Publisher
Cornell University Library, arXiv.org
Subject
Closed form solutions / Exact solutions / Neural networks / Optimization / Parameters