70 results for "Toint, Philippe L."
Synthetic Population Generation Without a Sample
The advent of microsimulation in the transportation sector has created the need for extensive disaggregated data concerning the population whose behavior is modeled. Because of the cost of collecting these data and the existing privacy regulations, this need is often met by the creation of a synthetic population on the basis of aggregate data. Although several techniques for generating such a population are known, they suffer from a number of limitations. The first is the need for a sample of the population for which fully disaggregated data must be collected, although such samples may not exist or may not be financially feasible. The second limiting assumption is that the aggregate data used must be consistent, a situation that is most unusual because these data often come from different sources and are collected, possibly at different moments, using different protocols. The paper presents a new synthetic population generator in the class of the Synthetic Reconstruction methods, whose objective is to obviate these limitations. It proceeds in three main successive steps: generation of individuals, generation of the joint distributions of household types, and generation of households by gathering individuals. The main idea in these generation steps is to use data at the most disaggregated level possible to define joint distributions, from which individuals and households are randomly drawn. The method also makes explicit use of both continuous and discrete optimization and uses the χ² metric to estimate distances between estimated and generated distributions. The new generator is applied for constructing a synthetic population of approximately 10,000,000 individuals and 4,350,000 households localized in the 589 municipalities of Belgium. The statistical quality of the generated population is discussed using criteria extracted from the literature, and it is shown that the new population generator produces excellent results.
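
The draw-from-a-joint-distribution step lends itself to a compact illustration. The sketch below is illustrative only (the joint table and sample size are invented, not the paper's Belgian data): it samples individuals from an estimated joint distribution and scores the result with the χ² metric mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint distribution over (age class, gender) estimated from
# aggregate data for one municipality. Rows: age classes; columns: gender.
joint = np.array([[0.10, 0.09],   # 0-17
                  [0.22, 0.21],   # 18-64
                  [0.17, 0.21]])  # 65+
joint /= joint.sum()

n_individuals = 10_000

# Draw individuals by sampling cells of the joint distribution (households
# would then be assembled by gathering the drawn individuals).
cells = rng.choice(joint.size, size=n_individuals, p=joint.ravel())
counts = np.bincount(cells, minlength=joint.size).reshape(joint.shape)

# Chi-squared distance between the generated table and the target one,
# the metric used to compare estimated and generated distributions.
expected = joint * n_individuals
chi2 = ((counts - expected) ** 2 / expected).sum()
print(f"chi^2 distance: {chi2:.2f}")
```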
Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
A class of trust-region methods is presented for solving unconstrained nonlinear and possibly nonconvex discretized optimization problems, like those arising in systems governed by partial differential equations. The algorithms in this class make use of the discretization level as a means of speeding up the computation of the step. This use is recursive, leading to true multilevel/multiscale optimization methods reminiscent of multigrid methods in linear algebra and the solution of partial differential equations. A simple algorithm of the class is then described and its numerical performance is shown to be promising. This observation then motivates a proof of global convergence to first-order stationary points on the fine grid that is valid for all algorithms in the class.
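
For readers unfamiliar with the single-level mechanism the recursion builds on, here is a generic trust-region sketch; it is textbook material, not the paper's multilevel algorithm, and uses a simple Cauchy-point step with illustrative constants. In the recursive methods, the model minimization below would itself be delegated to a coarser discretization level.

```python
import numpy as np

def cauchy_step(g, H, radius):
    """Minimize the quadratic model along -g inside the trust region."""
    gnorm = np.linalg.norm(g)
    t = radius / gnorm
    gHg = g @ H @ g
    if gHg > 0:
        t = min(t, gnorm ** 2 / gHg)
    return -t * g

def trust_region(f, grad, hess, x, radius=1.0, tol=1e-6, max_iter=5000):
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        s = cauchy_step(g, H, radius)
        pred = -(g @ s + 0.5 * s @ H @ s)       # model (predicted) decrease
        rho = (f(x) - f(x + s)) / pred if pred > 0 else -1.0
        if rho > 0.1:                           # model agrees well enough: accept
            x = x + s
        radius = 2 * radius if rho > 0.75 else (0.5 * radius if rho < 0.1 else radius)
    return x

# A mildly ill-conditioned Rosenbrock-type test function.
f = lambda x: (1 - x[0])**2 + 5 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 20 * x[0] * (x[1] - x[0]**2),
                           10 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 20 * (x[1] - 3 * x[0]**2), -20 * x[0]],
                           [-20 * x[0], 10.0]])
print(trust_region(f, grad, hess, np.array([-1.2, 1.0])))  # ~ (1, 1)
```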
A block-coordinate approach of multi-level optimization with an application to physics-informed neural networks
Multi-level methods are widely used for the solution of large-scale problems, because of their computational advantages and exploitation of the complementarity between the involved sub-problems. After a re-interpretation of multi-level methods from a block-coordinate point of view, we propose a multi-level algorithm for the solution of nonlinear optimization problems and analyze its evaluation complexity. We apply it to the solution of partial differential equations using physics-informed neural networks (PINNs) and consider two different types of neural architectures, a generic feedforward network and a frequency-aware network. We show that our approach is particularly effective if coupled with these specialized architectures and that this coupling results in better solutions and significant computational savings.
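
As a loose illustration of the block-coordinate viewpoint, the sketch below alternates gradient updates over two coordinate blocks of a synthetic least-squares problem; in the paper the blocks instead correspond to discretization levels and parts of the network.

```python
import numpy as np

# Synthetic least-squares instance, invented for illustration.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10))
b = rng.normal(size=50)

def grad(x):
    return A.T @ (A @ x - b)

x = np.zeros(10)
blocks = [np.arange(0, 5), np.arange(5, 10)]    # two coordinate blocks
lr = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L step size for least squares

for _ in range(500):
    for blk in blocks:                          # update one block at a time,
        x[blk] -= lr * grad(x)[blk]             # keeping the other block fixed

print("gradient norm:", np.linalg.norm(grad(x)))  # ~ 0 at the LS solution
```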
On the Global Convergence of a Filter-SQP Algorithm
A mechanism for proving global convergence in SQP-filter methods for nonlinear programming (NLP) is described. Such methods are characterized by their use of the dominance concept of multiobjective optimization, instead of a penalty parameter whose adjustment can be problematic. The main point of interest is to demonstrate how convergence for NLP can be induced without forcing sufficient descent in a penalty-type merit function. The proof relates to a prototypical algorithm that allows a range of specific choices for the Hessian matrix representation, the updating of the trust-region radius, and feasibility restoration.
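
The dominance test at the heart of filter methods is easy to state in code. A minimal sketch, with a simplified margin rule; the actual algorithm's envelope and inclusion conditions are more refined.

```python
# Filter entries are pairs (f, h): objective value and constraint violation.

def acceptable(f, h, filter_pairs, gamma=1e-5):
    """A trial pair is acceptable if no stored entry dominates it."""
    return all(f < fk - gamma * hk or h < (1 - gamma) * hk
               for fk, hk in filter_pairs)

def add_to_filter(f, h, filter_pairs):
    """Insert (f, h) and drop any entries it dominates."""
    filter_pairs[:] = [(fk, hk) for fk, hk in filter_pairs
                       if fk < f or hk < h]
    filter_pairs.append((f, h))

pairs = []
add_to_filter(10.0, 1.0, pairs)
add_to_filter(8.0, 2.0, pairs)
print(acceptable(9.0, 0.5, pairs))   # True: smaller violation than all entries
print(acceptable(11.0, 1.5, pairs))  # False: dominated by (10.0, 1.0)
```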
CUTEst: a Constrained and Unconstrained Testing Environment with safe threads for mathematical optimization
We describe the most recent evolution of our constrained and unconstrained testing environment and its accompanying SIF decoder. Code-named SIFDecode and CUTEst, these updated versions feature dynamic memory allocation, a modern thread-safe Fortran modular design, a new Matlab interface and a revised installation procedure integrated with GALAHAD.
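
Assuming the third-party pycutest interface to CUTEst is installed and the CUTEst/SIFDecode environment is configured, a typical interaction looks roughly as follows; treat the exact names and signatures as an assumption based on the pycutest documentation.

```python
import pycutest  # third-party Python interface to CUTEst

# Import a classical unconstrained problem from the SIF collection
# (SIFDecode compiles it behind the scenes).
p = pycutest.import_problem('ROSENBR')

x = p.x0                         # standard starting point from the SIF file
f, g = p.obj(x, gradient=True)   # objective value and gradient at x
print(p.name, p.n, f)
```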
Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results
An Adaptive Regularisation algorithm using Cubics (ARC) is proposed for unconstrained optimization, generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by Weiser et al. (Optim Methods Softw 22(3):413–431, 2007). At each iteration of our approach, an approximate global minimizer of a local cubic regularisation of the objective function is determined, and this ensures a significant improvement in the objective so long as the Hessian of the objective is locally Lipschitz continuous. The new method uses an adaptive estimation of the local Lipschitz constant and approximations to the global model-minimizer which remain computationally viable even for large-scale problems. We show that the excellent global and local convergence properties obtained by Nesterov and Polyak are retained, and sometimes extended to a wider class of problems, by our ARC approach. Preliminary numerical experiments with small-scale test problems from the CUTEr set show encouraging performance of the ARC algorithm when compared to a basic trust-region implementation.
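
The cubic-regularisation loop is straightforward to sketch. The toy rendering below is not the authors' implementation: the model is minimized only approximately with a generic solver, and the σ-update constants are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def arc(f, grad, hess, x, sigma=1.0, tol=1e-6, max_iter=100):
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        # Local cubic model m(s) = g's + (1/2) s'Hs + (sigma/3) ||s||^3.
        model = lambda s: (g @ s + 0.5 * s @ H @ s
                           + sigma / 3.0 * np.linalg.norm(s) ** 3)
        s = minimize(model, np.zeros_like(x)).x   # approximate model minimizer
        pred = -model(s)                          # predicted decrease
        rho = (f(x) - f(x + s)) / pred if pred > 0 else -1.0
        if rho > 0.1:
            x = x + s                             # successful: accept and
            sigma = max(0.5 * sigma, 1e-8)        # relax the regularisation
        else:
            sigma *= 2.0                          # unsuccessful: tighten it
    return x

# A simple nonconvex test function with minimizers at x_i = +/-1.
f = lambda x: np.sum(x**4) / 4 - np.sum(x**2) / 2
grad = lambda x: x**3 - x
hess = lambda x: np.diag(3 * x**2 - 1)
print(arc(f, grad, hess, np.array([0.5, -2.0])))  # ~ (1, -1)
```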
Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity
An Adaptive Regularisation framework using Cubics (ARC) was proposed for unconstrained optimization and analysed in Cartis, Gould and Toint (Part I, Math Program, doi:10.1007/s10107-009-0286-5, 2009), generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by Weiser, Deuflhard and Erdmann (Optim Methods Softw 22(3):413–431, 2007). In this companion paper, we further the analysis by providing worst-case global iteration complexity bounds for ARC and a second-order variant to achieve approximate first-order, and for the latter second-order, criticality of the iterates. In particular, the second-order ARC algorithm requires at most $\mathcal{O}(\epsilon^{-3/2})$ iterations, or equivalently, function- and gradient-evaluations, to drive the norm of the gradient of the objective below the desired accuracy $\epsilon$, and at most $\mathcal{O}(\epsilon^{-3})$ iterations to reach approximate nonnegative curvature in a subspace. The orders of these bounds match those proved for Algorithm 3.3 of Nesterov and Polyak which minimizes the cubic model globally on each iteration. Our approach is more general in that it allows the cubic model to be solved only approximately and may employ approximate Hessians.
Global Convergence of a Trust-Region SQP-Filter Algorithm for General Nonlinear Programming
We consider a trust-region SQP-filter algorithm of the type introduced by Fletcher and Leyffer [Math. Program., 91 (2002), pp. 239-269] that decomposes the step into its normal and tangential components, allows for an approximate solution of the quadratic subproblem, and incorporates the safeguarding tests described in Fletcher, Leyffer, and Toint [On the Global Convergence of an SLP-Filter Algorithm, Technical Report 98/13, Department of Mathematics, University of Namur, Namur, Belgium, 1998; On the Global Convergence of a Filter-SQP Algorithm, Technical Report 00/15, Department of Mathematics, University of Namur, Namur, Belgium, 2000]. It is proved that, under reasonable conditions and for every possible choice of the starting point, the sequence of iterates has at least one first-order critical accumulation point.
On the Evaluation Complexity of Composite Function Minimization with Applications to Nonconvex Nonlinear Programming
We estimate the worst-case complexity of minimizing an unconstrained, nonconvex composite objective with a structured nonsmooth term by means of some first-order methods. We find that it is unaffected by the nonsmoothness of the objective in that a first-order trust-region or quadratic regularization method applied to it takes at most $\mathcal{O}(\epsilon^{-2})$ function evaluations to reduce the size of a first-order criticality measure below $\epsilon$. Specializing this result to the case when the composite objective is an exact penalty function allows us to consider the objective- and constraint-evaluation worst-case complexity of nonconvex equality-constrained optimization when the solution is computed using a first-order exact penalty method. We obtain that in the reasonable case when the penalty parameters are bounded, the complexity of reaching within $\epsilon$ of a KKT point is at most $\mathcal{O}(\epsilon^{-2})$ problem evaluations, which is the same in order as the function-evaluation complexity of steepest-descent methods applied to unconstrained, nonconvex smooth optimization.
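
To make the composite exact-penalty structure concrete, here is a tiny invented instance; the problem, the penalty parameter μ, and the choice of the ℓ1 norm are illustrative assumptions.

```python
import numpy as np

# Invented instance: minimize f(x) = ||x||^2 subject to x1 + x2 = 1.
f = lambda x: x[0]**2 + x[1]**2                  # smooth part
c = lambda x: np.array([x[0] + x[1] - 1.0])      # equality constraint c(x) = 0
mu = 10.0                                        # penalty parameter (assumed bounded)
phi = lambda x: f(x) + mu * np.linalg.norm(c(x), 1)  # nonsmooth exact penalty

# The constrained solution (0.5, 0.5) beats the unconstrained minimizer of f
# once the constraint violation is penalized:
for x in (np.array([0.5, 0.5]), np.array([0.0, 0.0])):
    print(x, "f =", f(x), "||c||_1 =", abs(c(x)[0]), "phi =", phi(x))
```

For μ large enough, minimizers of φ solve the constrained problem, which is why the boundedness of the penalty parameters matters for the evaluation count above.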
On the complexity of finding first-order critical points in constrained nonlinear optimization
The complexity of finding $\epsilon$-approximate first-order critical points for the general smooth constrained optimization problem is shown to be no worse than $\mathcal{O}(\epsilon^{-2})$ in terms of function and constraints evaluations. This result is obtained by analyzing the worst-case behaviour of a first-order short-step homotopy algorithm consisting of a feasibility phase followed by an optimization phase, and requires minimal assumptions on the objective function. Since a bound of the same order is known to be valid for the unconstrained case, this leads to the conclusion that the presence of possibly nonlinear/nonconvex inequality/equality constraints is irrelevant for this bound to apply.