43 result(s) for "Stonyakin, Fedor"
Stopping Rules for Gradient Methods for Non-convex Problems with Additive Noise in Gradient
We study the gradient method under the assumption that only an additively inexact gradient is available for a, generally speaking, non-convex problem. Both the non-convexity of the objective function and the use of an inexact gradient at each iteration can cause difficulties: the trajectory of the method may travel far from the starting point, and in the presence of noise such unbounded drift can carry the trajectory away from the desired global solution. We analyze the behavior of the trajectory of the gradient method under the assumptions of gradient inexactness and gradient dominance; the latter condition is known to hold for many important non-convex problems and leads to good complexity guarantees for the gradient method. We propose an early stopping rule which, first, guarantees an acceptable quality of the output point in terms of the objective function and, second, keeps this point reasonably close to the chosen initial position. In addition to the gradient method with a constant step size, we study in detail a variant with an adaptive step size, which makes the technique applicable when the Lipschitz constant of the gradient is unknown. Computational experiments demonstrate the effectiveness of the proposed stopping rule for the investigated gradient methods.
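For intuition, here is a minimal sketch of a gradient method that stops once the observed gradient norm falls to the noise scale; the toy objective and the threshold c * delta are our own illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def noisy_gradient_descent(grad, x0, L, delta, max_iter=10_000, c=2.0):
    """Gradient descent with an additively inexact gradient.

    grad(x) returns the true gradient plus noise of norm roughly delta.
    Once the observed gradient norm falls to the noise scale, the noise
    dominates the true signal, so the method stops early.  The threshold
    c * delta is an illustrative choice, not the paper's exact rule.
    """
    x = x0.copy()
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= c * delta:   # early stopping (sketch)
            return x, k
        x = x - g / L                        # constant step 1/L
    return x, max_iter

# Toy objective f(x) = 0.5 * ||x||^2, which satisfies gradient dominance
# (the Polyak--Lojasiewicz condition): f(x) - f* <= ||grad f(x)||^2 / (2 mu).
rng = np.random.default_rng(0)
delta = 1e-3
noisy_grad = lambda x: x + delta * rng.standard_normal(x.shape) / np.sqrt(x.size)

x_out, iters = noisy_gradient_descent(noisy_grad, np.ones(10), L=1.0, delta=delta)
print(iters, np.linalg.norm(x_out))
```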
Generalized Mirror Prox Algorithm for Monotone Variational Inequalities: Universality and Inexact Oracle
We introduce an inexact oracle model for variational inequalities with monotone operators, propose a numerical method for solving such variational inequalities, and analyze its convergence rate. As a particular case, we consider variational inequalities with a Hölder-continuous operator and show that our algorithm is universal: without knowing the Hölder exponent or the Hölder constant, it achieves the best possible worst-case complexity for this class of variational inequalities. We also consider variational inequalities with a strongly monotone operator and extend both the inexact-oracle algorithm and our universal method to this class of problems. Finally, we show how our method can be applied to convex–concave saddle point problems with Hölder-continuous partial subgradients.
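For reference, a minimal sketch of the Euclidean special case of Mirror Prox, the extragradient method, on a toy bilinear saddle point problem; the fixed step size and the operator below are illustrative assumptions, while the paper's contribution is the universal, inexact-oracle analysis:

```python
import numpy as np

def extragradient(F, z0, step, n_iter=2000):
    """Extragradient method, the Euclidean special case of Mirror Prox,
    for a monotone operator F.  This sketch uses a fixed step size; the
    paper's universal variant adapts the step to an unknown Hölder
    exponent and constant."""
    z = z0.copy()
    avg = np.zeros_like(z0)
    for _ in range(n_iter):
        w = z - step * F(z)        # extrapolation step
        z = z - step * F(w)        # correction step using F at w
        avg += w
    return avg / n_iter            # the ergodic average is what converges

# Monotone operator of the bilinear saddle problem min_x max_y x^T A y:
# F(x, y) = (A y, -A^T x); its unique saddle point is the origin.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
def F(z):
    x, y = z[:2], z[2:]
    return np.concatenate([A @ y, -A.T @ x])

print(extragradient(F, np.ones(4), step=0.2))   # approaches zero
```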
An adaptive Gradient-type Method for Composite Optimization Problems with Gradient Dominance Condition and Generalized Smoothness
We consider an interesting class of composite optimization problems with a gradient dominance condition and introduce a corresponding analogue of the recently proposed concept of an inexact oracle. This concept is applied to some classes of smooth functionals.
Some adaptive proximal method for a special class of variational inequalities and related problems
An adaptive proximal method for a special class of variational inequalities and related problems is proposed; in particular, so-called mixed variational inequalities and composite saddle point problems are considered. Estimates of the number of iterations required to achieve a given solution quality are obtained.
Adaptive Variant of Frank-Wolfe Method for Relative Smooth Convex Optimization Problems
The paper introduces a new adaptive version of the Frank-Wolfe algorithm for relatively smooth convex functions. In the step-size formula, it is proposed to use a Bregman divergence instead of half the squared Euclidean norm. Convergence estimates are proved for minimization problems with relatively smooth convex objectives satisfying the triangle scaling property. Computational experiments are performed, identifying conditions under which the proposed algorithm clearly outperforms its Euclidean-norm analogue. We also found example problems on which the proposed variant of the Frank-Wolfe method works better than known accelerated gradient-type methods for relatively smooth convex functions with the triangle scaling property.
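For orientation, a minimal sketch of the classical Euclidean Frank-Wolfe method on the probability simplex; the toy objective and the textbook 2/(k+2) step are assumptions for illustration, whereas the paper replaces the Euclidean quadratic in the step rule with a Bregman divergence:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iter=500):
    """Classical Frank-Wolfe on the probability simplex.

    The linear minimization oracle over the simplex just picks the vertex
    with the smallest gradient coordinate.  The step size 2/(k+2) is the
    textbook choice; the paper's adaptive variant instead derives the step
    from a formula with a Bregman divergence in place of the Euclidean
    quadratic."""
    x = x0.copy()
    for k in range(n_iter):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # LMO: best vertex of the simplex
        gamma = 2.0 / (k + 2.0)        # textbook step size
        x = (1 - gamma) * x + gamma * s
    return x

# Toy problem: min 0.5 * ||x - c||^2 over the simplex, with c feasible.
c = np.array([0.1, 0.5, 0.4])
print(frank_wolfe_simplex(lambda x: x - c, np.full(3, 1 / 3)))  # -> c
```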
On Some Versions of Subspace Optimization Methods with Inexact Gradient Information
It is well known that accelerated first-order gradient methods possess optimal complexity estimates for the class of smooth convex minimization problems. In many practical situations it makes sense to work with inexact gradients; however, this can lead to an accumulation of the corresponding inexactness in the theoretical convergence rate estimates. We propose modifications of convex optimization methods with inexact gradients based on subspace optimization, such as Nemirovski's Conjugate Gradients and Sequential Subspace Optimization. We analyze the convergence of these methods under different conditions on the inexactness, both in the gradient value and in the accuracy of the subspace optimization subproblems. In addition, we investigate a generalization of these results to the class of quasar-convex (weakly quasi-convex) functions.
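As a simplified illustration of the subspace idea (with an exact gradient, unlike the paper's inexact setting), the sketch below minimizes a quadratic by solving, at each iteration, the exact minimization over the two-dimensional subspace spanned by the current gradient and the previous step:

```python
import numpy as np

def subspace_method(A, b, x0, n_iter=40):
    """Minimize f(x) = 0.5 x^T A x - b^T x over successive small subspaces.

    Each iteration minimizes f exactly over x + span{gradient, previous
    step}, in the spirit of sequential subspace optimization; for a
    quadratic this reduces to a tiny least-squares problem.  This sketch
    uses exact gradients, whereas the paper studies inexact gradients and
    inexactly solved subspace subproblems."""
    x = x0.copy()
    prev = None
    for _ in range(n_iter):
        g = A @ x - b                               # gradient of f
        if np.linalg.norm(g) < 1e-10:               # already converged
            break
        dirs = [g] if prev is None else [g, prev]
        D = np.stack(dirs, axis=1)                  # n x m subspace basis
        # Minimize f(x + D t) over t: solve (D^T A D) t = -D^T g.
        t = np.linalg.lstsq(D.T @ A @ D, -D.T @ g, rcond=None)[0]
        step = D @ t
        x, prev = x + step, step
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
A = M @ M.T + np.eye(20)                            # positive definite
b = rng.standard_normal(20)
x = subspace_method(A, b, np.zeros(20))
print(np.linalg.norm(A @ x - b))                    # near-zero residual
```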
Adaptive Gradient Methods for Some Classes of Non-Smooth Optimization Problems
We propose several adaptive algorithmic methods for non-smooth convex optimization problems. The first is based on a special artificial inexactness: the concept of an inexact (δ, Δ, L)-model of the objective functional is introduced, and gradient-type methods with adaptation of the inexactness parameters are proposed. A similar concept of an inexact model is introduced for variational inequalities as well as for saddle point problems. Analogues of switching subgradient schemes are proposed for convex programming problems under some general assumptions.
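For context, the classical (δ, L)-oracle of Devolder, Glineur, and Nesterov, which the (δ, Δ, L)-model refines, requires for all y the two-sided bound below; the exact form of the additional Δ term is specific to the paper and is not reproduced here:

```latex
% Classical (\delta, L)-oracle: for all y,
0 \le f(y) - \bigl( f_{\delta}(x) + \langle g_{\delta}(x),\, y - x \rangle \bigr)
  \le \frac{L}{2}\,\|y - x\|^{2} + \delta .
```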
Lipschitz-Free Mirror Descent Methods for Relatively Strongly Convex Functions with/without Absolute and Relative Inexactness
In this paper, we analyze the mirror descent algorithm for non-smooth optimization problems in which the objective function is relatively strongly convex, without relying on the standard Lipschitz continuity assumption commonly used in the literature. We provide convergence analyses for both exact and inexact subgradient information. Furthermore, through numerical experiments, we compare the derived bounds on the quality of the approximate solutions with existing estimates in the literature and demonstrate the effectiveness of the proposed results.
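As background, a minimal sketch of mirror descent with the entropy prox function on the probability simplex; the fixed step and the toy linear objective are illustrative assumptions, distinct from the paper's relative strong convexity and inexactness settings:

```python
import numpy as np

def mirror_descent_simplex(subgrad, x0, step, n_iter=1000):
    """Mirror descent on the probability simplex with the entropy prox
    function (exponentiated-gradient updates).  A fixed step is used for
    simplicity; the paper analyzes relatively strongly convex objectives
    with exact or inexact subgradients, which this sketch does not cover."""
    x = x0.copy()
    for _ in range(n_iter):
        g = subgrad(x)              # (possibly inexact) subgradient at x
        x = x * np.exp(-step * g)   # multiplicative mirror step (entropy setup)
        x /= x.sum()                # normalize back onto the simplex
    return x

# Toy linear problem: min <c, x> over the simplex; the minimizer is the
# vertex with the smallest cost coordinate.
c = np.array([0.3, 0.1, 0.6])
print(mirror_descent_simplex(lambda x: c, np.full(3, 1 / 3), step=0.5))
```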
About some works of Boris Polyak on convergence of gradient methods and their development
The paper presents a review of the state of the art in subgradient and accelerated methods of convex optimization, including their behavior in the presence of noise and under various kinds of access to information about the objective function (function value, gradient, stochastic gradient, higher derivatives). For non-convex problems, the Polyak-Łojasiewicz condition is considered and the main results are reviewed. The behavior of numerical methods in the presence of sharp minima is also discussed. The purpose of this survey is to show the influence of the works of B.T. Polyak (1935-2023) on gradient optimization methods and related areas on the modern development of numerical optimization methods.