Catalogue Search | MBRL
Explore the vast range of titles available.
17,710 result(s) for "Calculus of Variations and Optimal Control"
Isoperimetric inequalities in unbounded convex bodies
by Leonardi, Gian Paolo; Ritoré, Manuel; Vernadakis, Efstratios
in Boundary value problems, Calculus of variations and optimal control; optimization -- Manifolds -- Optimization of shapes other than minimal surfaces (MSC), Convex and discrete geometry -- General convexity -- Inequalities and extremum problems (MSC)
2022
We consider the problem of minimizing the relative perimeter under a volume constraint in an unbounded convex body…
Asymptotic Spreading for General Heterogeneous Fisher-KPP Type Equations
by Berestycki, Henri; Nadin, Grégoire
in Asymptotic theory, Differential equations, Parabolic, Reaction-diffusion equations
2022
In this monograph, we review the theory and establish new and general results regarding spreading properties for heterogeneous reaction-diffusion equations. The characterizations of these sets involve two new notions of generalized principal eigenvalues for linear parabolic operators in unbounded domains. In particular, this allows us to show that…
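The homogeneous special case of the equations studied in this monograph is the classical Fisher-KPP equation u_t = u_xx + u(1 - u), whose fronts spread at speed 2. A minimal finite-difference illustration (the discretization parameters and domain below are arbitrary choices, not taken from the monograph):

```python
import numpy as np

# Explicit finite-difference simulation of the homogeneous Fisher-KPP
# equation u_t = u_xx + u(1 - u) on [0, 200], starting from a step
# profile; the invasion front is expected to travel at a speed close
# to the classical KPP value c* = 2.
dx, dt = 0.5, 0.1            # grid spacing and time step (dt < dx^2/2)
x = np.arange(0.0, 200.0, dx)
u = np.where(x < 5.0, 1.0, 0.0)

def front(u):
    # First grid point where the profile drops below 1/2.
    return x[np.argmax(u < 0.5)]

p10 = p30 = None
for step in range(1, 301):   # integrate up to t = 30
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * (lap + u * (1.0 - u))
    u[0], u[-1] = 1.0, 0.0   # pin the invaded / uninvaded states
    if step == 100:
        p10 = front(u)       # front position at t = 10
    elif step == 300:
        p30 = front(u)       # front position at t = 30

speed = (p30 - p10) / 20.0
print(f"front speed between t=10 and t=30: {speed:.2f}")
```

The monograph's point is precisely that, for heterogeneous coefficients, a single spreading speed like this need not exist, which is what the generalized principal eigenvalues are designed to capture.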
Sample size selection in optimization methods for machine learning
2012
This paper presents a methodology for using varying sample sizes in batch-type optimization methods for large-scale machine learning problems. The first part of the paper deals with the delicate issue of dynamic sample selection in the evaluation of the function and gradient. We propose a criterion for increasing the sample size based on variance estimates obtained during the computation of a batch gradient, and we establish a complexity bound on the total cost of a gradient method. The second part of the paper describes a practical Newton method that uses a smaller sample to compute Hessian-vector products than to evaluate the function and the gradient, and that also employs a dynamic sampling technique. The third part of the paper shifts focus to ℓ1-regularized problems designed to produce sparse solutions. We propose a Newton-like method that consists of two phases: a (minimalistic) gradient projection phase that identifies zero variables, and a subspace phase that applies a subsampled Hessian Newton iteration in the free variables. Numerical tests on speech recognition problems illustrate the performance of the algorithms.
Journal Article
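The idea of growing the sample when gradient variance estimates are large relative to the gradient itself can be sketched as follows; the toy least-squares problem, the specific test, and all constants here are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: f_i(w) = 0.5 * (a_i @ w - b_i)^2.
n, d = 5000, 10
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true + 0.1 * rng.normal(size=n)

def sampled_gradients(w, idx):
    # Per-example gradients for the sample; row i is grad f_i(w).
    r = A[idx] @ w - b[idx]
    return r[:, None] * A[idx]

theta = 0.5        # accuracy parameter of the variance test (illustrative)
sample_size = 32
w = np.zeros(d)
lr = 0.01

for it in range(200):
    idx = rng.choice(n, size=sample_size, replace=False)
    G = sampled_gradients(w, idx)
    g = G.mean(axis=0)
    # Variance of the averaged gradient: trace of sample covariance / |S|.
    var = G.var(axis=0, ddof=1).sum() / sample_size
    if var > theta**2 * np.dot(g, g):
        # Variance too large relative to the gradient: enlarge the sample
        # so the test holds (approximately) at the current iterate.
        sample_size = min(n, int(np.ceil(var * sample_size /
                                         (theta**2 * np.dot(g, g)))))
    w = w - lr * g

err_final = float(np.linalg.norm(w - w_true))
print(sample_size, err_final)
```

As the iterates approach the solution the signal (the gradient) shrinks while the per-example noise does not, so the test fires more often and the batch grows, which is the qualitative behaviour the methodology exploits.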
Split Monotone Variational Inclusions
by Moudafi, A.
in Algorithms, Applications of Mathematics, Calculus of variations and optimal control
2011
Based on the very recent work by Censor-Gibali-Reich (http://arxiv.org/abs/1009.3780), we propose an extension of their new variational problem (the Split Variational Inequality Problem) to monotone variational inclusions. Relying on the Krasnosel'skii-Mann theorem for averaged operators, we analyze an algorithm for solving the new split monotone inclusions under weaker conditions. Our weak convergence results improve and develop previously discussed Split Variational Inequality Problems, feasibility problems and related problems and algorithms.
Journal Article
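The Krasnosel'skii-Mann iteration underlying the convergence analysis simply averages the current point with the operator's output, x_{k+1} = (1 - λ)x_k + λT(x_k). A minimal illustration with a toy nonexpansive operator (not the paper's split-inclusion algorithm):

```python
import numpy as np

# Krasnosel'skii-Mann iteration for a nonexpansive operator T.
# Toy choice: T is a 90-degree rotation, nonexpansive with unique fixed
# point 0. Plain iteration of T cycles on the unit circle forever, while
# the averaged (KM) iteration converges to the fixed point.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # 90-degree rotation matrix

def T(x):
    return R @ x

lam = 0.5
x = np.array([1.0, 0.0])     # KM iterate
y = x.copy()                 # plain iterate, for contrast
for _ in range(100):
    x = (1.0 - lam) * x + lam * T(x)
    y = T(y)

km_norm, plain_norm = float(np.linalg.norm(x)), float(np.linalg.norm(y))
print(km_norm, plain_norm)   # KM -> 0, plain stays at distance 1
```

Averaging is what buys convergence here: the KM map 0.5(I + R) has spectral radius below 1 even though R itself is merely nonexpansive.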
Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results
by Toint, Philippe L.; Cartis, Coralia; Gould, Nicholas I. M.
in Adaptive algorithms, Algorithms, Applied sciences
2011
An Adaptive Regularisation algorithm using Cubics (ARC) is proposed for unconstrained optimization, generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by Weiser et al. (Optim Methods Softw 22(3):413–431, 2007). At each iteration of our approach, an approximate global minimizer of a local cubic regularisation of the objective function is determined, and this ensures a significant improvement in the objective so long as the Hessian of the objective is locally Lipschitz continuous. The new method uses an adaptive estimation of the local Lipschitz constant and approximations to the global model-minimizer which remain computationally-viable even for large-scale problems. We show that the excellent global and local convergence properties obtained by Nesterov and Polyak are retained, and sometimes extended to a wider class of problems, by our ARC approach. Preliminary numerical experiments with small-scale test problems from the CUTEr set show encouraging performance of the ARC algorithm when compared to a basic trust-region implementation.
Journal Article
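The cubic-regularisation step at the heart of ARC can be sketched in one dimension; the grid minimization of the model and the specific update factors for σ below are illustrative simplifications, not the paper's algorithm:

```python
import numpy as np

# One-dimensional ARC-style sketch: at each iterate, (approximately)
# globally minimize the cubic-regularized model
#   m(s) = f(x) + g*s + 0.5*H*s^2 + (sigma/3)*|s|^3,
# then adapt sigma according to how well the model predicted the decrease.
f   = lambda x: 0.25 * x**4 - 0.5 * x**2   # nonconvex double well
df  = lambda x: x**3 - x
d2f = lambda x: 3.0 * x**2 - 1.0

x, sigma = 2.5, 1.0
for it in range(50):
    g, H = df(x), d2f(x)
    s_grid = np.linspace(-2.0, 2.0, 4001)
    model = (f(x) + g * s_grid + 0.5 * H * s_grid**2
             + sigma / 3.0 * np.abs(s_grid)**3)
    s = s_grid[np.argmin(model)]
    pred = f(x) - model.min()              # predicted decrease (>= 0)
    actual = f(x) - f(x + s)
    rho = actual / pred if pred > 0 else -1.0
    if rho > 0.1:            # successful step: accept, relax sigma
        x = x + s
        sigma = max(1e-3, 0.5 * sigma)
    else:                    # unsuccessful: reject, regularize harder
        sigma = 2.0 * sigma

print(x, df(x))  # a stationary point of f, here near x = 1
```

The cubic term bounds the step automatically, so no separate trust-region radius is needed; adapting σ plays the role that radius updates play in trust-region methods.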
Smoothing methods for nonsmooth, nonconvex minimization
2012
We consider a class of smoothing methods for minimization problems where the feasible set is convex but the objective function is not convex, not differentiable and perhaps not even locally Lipschitz at the solutions. Such optimization problems arise from wide applications including image restoration, signal reconstruction, variable selection, optimal control, stochastic equilibrium and spherical approximations. In this paper, we focus on smoothing methods for solving such optimization problems, which use the structure of the minimization problems and composition of smoothing functions for the plus function (x)_+. Many existing optimization algorithms and codes can be used in the inner iteration of the smoothing methods. We present properties of the smoothing functions and the gradient consistency of the subdifferential associated with a smoothing function. Moreover, we describe how to update the smoothing parameter in the outer iteration of the smoothing methods to guarantee convergence of the smoothing methods to a stationary point of the original minimization problem.
Journal Article
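One standard smoothing of the plus function (an illustrative member of the family such methods compose, not necessarily the one used in the paper) is s_μ(x) = μ log(1 + exp(x/μ)):

```python
import numpy as np

# The plus function (x)_+ = max(x, 0) is nonsmooth at 0. The smoothing
#     s_mu(x) = mu * log(1 + exp(x / mu))
# is infinitely differentiable, converges to (x)_+ as mu -> 0, and has
# gradient 1/(1 + exp(-x/mu)) in [0, 1], consistent with the
# subdifferential [0, 1] of (x)_+ at 0.
def plus(x):
    return np.maximum(x, 0.0)

def smoothed_plus(x, mu):
    # numerically stable form of mu * log(1 + exp(x/mu))
    return mu * np.logaddexp(0.0, x / mu)

xs = np.linspace(-2.0, 2.0, 401)
errs = []
for mu in (1.0, 0.1, 0.01):
    errs.append(float(np.max(np.abs(smoothed_plus(xs, mu) - plus(xs)))))
    print(mu, errs[-1])   # worst-case gap is mu*log(2), attained at x = 0
```

Driving μ to 0 in an outer loop, as the abstract describes, trades off smoothness of the subproblems against fidelity to the original nonsmooth objective.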
Nonlinear Diffusion Equations and Curvature Conditions in Metric Measure Spaces
2019
The aim of this paper is to provide new characterizations of the curvature dimension condition in the context of metric measure spaces (X, \mathsf{d}, \mathfrak{m}). On the geometric side, the authors' new approach takes into account suitable weighted action functionals which provide the natural modulus of K-convexity when one investigates the convexity properties of N-dimensional entropies. On the side of diffusion semigroups and evolution variational inequalities, the authors' new approach uses the nonlinear diffusion semigroup induced by the N-dimensional entropy, in place of the heat flow. Under suitable assumptions (most notably the quadraticity of Cheeger's energy relative to the metric measure structure) both approaches are shown to be equivalent to the strong \mathrm{CD}^{*}(K,N) condition of Bacher-Sturm.
Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity
by Toint, Philippe L.; Cartis, Coralia; Gould, Nicholas I. M.
in Algorithms, Applied sciences, Approximation
2011
An Adaptive Regularisation framework using Cubics (ARC) was proposed for unconstrained optimization and analysed in Cartis, Gould and Toint (Part I, Math Program, doi:10.1007/s10107-009-0286-5, 2009), generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by Weiser, Deuflhard and Erdmann (Optim Methods Softw 22(3):413–431, 2007). In this companion paper, we further the analysis by providing worst-case global iteration complexity bounds for ARC and a second-order variant to achieve approximate first-order, and for the latter second-order, criticality of the iterates. In particular, the second-order ARC algorithm requires at most O(ε^{-3/2}) iterations, or equivalently, function- and gradient-evaluations, to drive the norm of the gradient of the objective below the desired accuracy ε, and O(ε^{-3}) iterations to reach approximate nonnegative curvature in a subspace. The orders of these bounds match those proved for Algorithm 3.3 of Nesterov and Polyak, which minimizes the cubic model globally on each iteration. Our approach is more general in that it allows the cubic model to be solved only approximately and may employ approximate Hessians.
Journal Article
A relaxed constant positive linear dependence constraint qualification and applications
by Silva, Paulo J. S.; Andreani, Roberto; Haeser, Gabriel
in Algorithms, Applied mathematics, Applied sciences
2012
In this work we introduce a relaxed version of the constant positive linear dependence constraint qualification (CPLD) that we call RCPLD. This development is inspired by a recent generalization of the constant rank constraint qualification by Minchenko and Stakhovski that was called RCRCQ. We show that RCPLD is enough to ensure the convergence of an augmented Lagrangian algorithm and that it asserts the validity of an error bound. We also provide proofs and counter-examples that show the relations of RCRCQ and RCPLD with other known constraint qualifications. In particular, RCPLD is strictly weaker than CPLD and RCRCQ, while still stronger than Abadie’s constraint qualification. We also verify that the second order necessary optimality condition holds under RCRCQ.
Journal Article
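The augmented Lagrangian scheme whose convergence the paper establishes under RCPLD can be illustrated on a toy equality-constrained problem; the problem data and update constants below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Minimal augmented-Lagrangian sketch for
#     minimize ||x||^2   subject to   a @ x - 1 = 0,
# whose solution is x* = (0.5, 0.5) with multiplier lambda* = -1.
a = np.array([1.0, 1.0])

def argmin_auglag(lam, rho):
    # Inner subproblem min_x ||x||^2 + lam*(a@x - 1) + (rho/2)*(a@x - 1)^2
    # is an unconstrained quadratic; solve its optimality system exactly.
    M = 2.0 * np.eye(2) + rho * np.outer(a, a)
    return np.linalg.solve(M, (rho - lam) * a)

lam, rho = 0.0, 1.0
for _ in range(30):
    x = argmin_auglag(lam, rho)
    h = a @ x - 1.0             # constraint violation
    lam = lam + rho * h         # first-order multiplier update
    rho = min(1e6, 2.0 * rho)   # tighten the penalty

sol_err = float(np.linalg.norm(x - 0.5))
print(x, lam)  # approaches (0.5, 0.5) and lambda* = -1
```

The paper's contribution concerns what the algorithm needs from the constraints, not the iteration itself: RCPLD is a weaker assumption than CPLD or RCRCQ under which this multiplier update still converges to a KKT point.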
On the power and limitations of affine policies in two-stage adaptive optimization
2012
We consider a two-stage adaptive linear optimization problem under right-hand-side uncertainty with a min–max objective and give a sharp characterization of the power and limitations of affine policies (where the second-stage solution is an affine function of the right-hand-side uncertainty). In particular, we show that the worst-case cost of an optimal affine policy can be Ω(m^{1/2−δ}) times the worst-case cost of an optimal fully-adaptable solution for any δ > 0, where m is the number of linear constraints. We also show that the worst-case cost of the best affine policy is O(√m) times the optimal cost when the first-stage constraint matrix has non-negative coefficients. Moreover, if there are only k ≤ m uncertain parameters, we generalize the performance bound for affine policies to O(√k), which is particularly useful if only a few parameters are uncertain. We also provide an O(√m)-approximation algorithm for the general case without any restriction on the constraint matrix, but the solution is not an affine function of the uncertain parameters. We also give a tight characterization of the conditions under which an affine policy is optimal for the above model. In particular, we show that if the uncertainty set is a simplex, then an affine policy is optimal. However, an affine policy is suboptimal even if the uncertainty set is a convex combination of only (m + 3) extreme points (only two more extreme points than a simplex) and the worst-case cost of an optimal affine policy can be a factor (2 − δ) worse than the worst-case cost of an optimal fully-adaptable solution for any δ > 0.
Journal Article