Asset Details
A Line Search Based Proximal Stochastic Gradient Algorithm with Dynamical Variance Reduction
by Porta, Federica; Ruggiero, Valeria; Franchini, Giorgia; Trombini, Ilaria
in Algorithms / Artificial intelligence / Big Data / Computation / Computational Mathematics and Numerical Analysis / Convergence / Distance learning / Expected values / Iterative methods / Machine learning / Mathematical and Computational Engineering / Mathematical and Computational Physics / Mathematics / Mathematics and Statistics / Methods / Optimization / Random variables / Risk / Robustness (mathematics) / Special Issue on Machine Learning on Scientific Computing / Theoretical
2023
Journal Article
Overview
Many optimization problems arising from machine learning applications can be cast as the minimization of the sum of two functions: the first typically represents the expected risk, which in practice is replaced by the empirical risk, while the second imposes a priori information on the solution. Since in general the first term is differentiable and the second is convex, proximal gradient methods are well suited to such optimization problems. However, in large-scale machine learning problems the computation of the full gradient of the differentiable term can be prohibitively expensive, making these algorithms unsuitable. For this reason, proximal stochastic gradient methods have been extensively studied in the optimization literature in recent decades. In this paper we develop a proximal stochastic gradient algorithm based on two main ingredients: a technique to dynamically reduce the variance of the stochastic gradients along the iterative process, combined with a descent condition in expectation for the objective function that fixes the value of the steplength parameter at each iteration. For general objective functionals, the a.s. convergence of the limit points of the sequence generated by the proposed scheme to stationary points can be proved. For convex objective functionals, both the a.s. convergence of the whole sequence of iterates to a minimum point and an O(1/k) convergence rate for the objective function values are shown. The practical implementation of the proposed method requires neither the computation of the exact gradient of the empirical risk during the iterations nor the tuning of an optimal value for the steplength. An extensive numerical experimentation highlights that the proposed approach is robust with respect to the setting of the hyperparameters and competitive compared to state-of-the-art methods.
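The sketch below illustrates, in Python, the generic shape of a proximal stochastic gradient iteration of the kind the abstract describes. It is not the paper's method: the growing mini-batch is only a crude stand-in for the dynamical variance reduction, the soft-thresholding proximal operator assumes an l1 regularizer, and the backtracking test on the sampled objective replaces the paper's descent condition in expectation. All names, default parameters, and helper functions are illustrative assumptions.

import numpy as np

def soft_threshold(x, t):
    # proximal operator of t * ||x||_1 (assumes an l1 regularizer)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_sgd_line_search(grad_i, obj_i, x0, n_samples, lam=0.1,
                         alpha0=1.0, shrink=0.5, max_iter=100, rng=None):
    """Minimize (1/n) sum_i f_i(x) + lam * ||x||_1 with proximal stochastic
    gradient steps, a growing mini-batch (crude variance reduction), and a
    backtracking line search on the sampled objective (a simplified stand-in
    for a descent condition in expectation)."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    for k in range(max_iter):
        # mini-batch grows with the iteration count to damp gradient variance
        batch = rng.choice(n_samples, size=min(n_samples, 8 * (k + 1)), replace=False)
        g = np.mean([grad_i(x, i) for i in batch], axis=0)   # mini-batch gradient
        f = np.mean([obj_i(x, i) for i in batch])            # sampled smooth loss
        alpha = alpha0
        while True:                                          # backtracking on the steplength
            x_new = soft_threshold(x - alpha * g, alpha * lam)
            f_new = np.mean([obj_i(x_new, i) for i in batch])
            d = x_new - x
            # sufficient-decrease test against the sampled quadratic model
            if f_new <= f + g @ d + (0.5 / alpha) * d @ d or alpha < 1e-12:
                break
            alpha *= shrink
        x = x_new
    return x

As a usage example under the same assumptions, a toy lasso problem with f_i(x) = 0.5 * (a_i @ x - b_i)^2 can be passed in as per-sample gradient and loss callbacks:

rng = np.random.default_rng(0)
A, b = rng.normal(size=(200, 20)), rng.normal(size=200)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
obj_i  = lambda x, i: 0.5 * (A[i] @ x - b[i]) ** 2
x_hat = prox_sgd_line_search(grad_i, obj_i, np.zeros(20), n_samples=200)

The point of the sketch is the division of labor the abstract emphasizes: the proximal operator handles the convex regularizer exactly, while the steplength is selected adaptively by a line search instead of being tuned by hand.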
Publisher
Springer US, Springer Nature B.V.