36 result(s) for "Balesdent, Mathieu"
Multi-fidelity modeling with different input domain definitions using deep Gaussian processes
Multi-fidelity approaches combine different models built on a scarce but accurate dataset (high-fidelity dataset) and a large but approximate one (low-fidelity dataset) in order to improve prediction accuracy. Gaussian processes (GPs) are a popular approach for capturing the correlations between these fidelity levels. Deep Gaussian processes (DGPs), which are functional compositions of GPs, have also been adapted to multi-fidelity settings through the multi-fidelity deep Gaussian process (MF-DGP) model. This model increases the expressive power compared to GPs by considering non-linear correlations between fidelities within a Bayesian framework. However, these multi-fidelity methods only consider the case where the inputs of the different fidelity models are defined over the same domain of definition (e.g., same variables, same dimensions). In practice, due to simplifications in the low-fidelity modeling, some variables may be omitted or a different parametrization may be used compared to the high-fidelity model. In this paper, the MF-DGP model is extended to the case where a different parametrization is used for each fidelity. The performance of the proposed multi-fidelity modeling technique is assessed on analytical test cases and on real structural and aerodynamic physical problems.
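The core multi-fidelity idea can be sketched in a few lines. The toy below is not the paper's MF-DGP: it replaces the deep Gaussian process with a simple polynomial surrogate of the low/high-fidelity discrepancy, and the "different input domain" is a low-fidelity model that omits one variable. All function names and values are illustrative.

```python
import numpy as np

# Hedged sketch: correct a cheap low-fidelity model with a surrogate of the
# discrepancy to a few expensive high-fidelity runs. The low-fidelity model
# omits the input z, i.e. the two fidelities have different input domains.

def f_high(x, z):          # illustrative expensive model, depends on x and z
    return np.sin(3 * x) + 0.2 * z

def f_low(x):              # illustrative cheap model, z omitted
    return np.sin(3 * x)

# Few high-fidelity samples (z fixed here), discrepancy data
x_hf = np.linspace(0.0, 1.0, 5)
z_hf = 0.5 * np.ones_like(x_hf)
disc = f_high(x_hf, z_hf) - f_low(x_hf)

# Simple polynomial surrogate of the discrepancy (stand-in for a GP)
coef = np.polyfit(x_hf, disc, 1)

def f_mf(x):               # multi-fidelity prediction
    return f_low(x) + np.polyval(coef, x)

x_test = np.linspace(0.0, 1.0, 50)
err = np.max(np.abs(f_mf(x_test) - f_high(x_test, 0.5)))
```

Here the discrepancy happens to be constant, so even a degree-1 fit recovers the high-fidelity model exactly; in realistic cases the discrepancy surrogate carries its own uncertainty, which the GP framework quantifies.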
Two-Level Approach for Simultaneous Component Assignment and Layout Optimization with Applications to Spacecraft Optimal Layout
Optimal layout problems consist in positioning a given number of components so as to minimize an objective function while satisfying geometrical or functional constraints. Such problems appear in the design process of aerospace systems such as satellites or spacecraft. These problems are NP-hard, highly constrained, and high-dimensional. This paper describes a two-stage algorithm combining a genetic algorithm with a quasi-physical approach based on a virtual-force system in order to solve multi-container optimal layout problems such as satellite modules. In the proposed approach, the genetic algorithm assigns the components to the containers, while the quasi-physical virtual-force algorithm positions the components within their assigned containers. The proposed algorithm is tested and validated on the satellite module layout problem benchmark, and its overall performance is compared with previous algorithms from the literature.
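The quasi-physical stage can be illustrated with a minimal 1D sketch, assuming equal-width components in a single container: overlapping components push each other apart with virtual forces until the layout is overlap-free. This omits the genetic-algorithm assignment stage and the multi-container setting of the paper; all values are made up.

```python
import numpy as np

def layout_1d(centers, width, container=(0.0, 10.0), steps=500, lr=0.1):
    """Push overlapping equal-width components apart with virtual forces."""
    x = np.asarray(centers, dtype=float)
    half = width / 2.0
    for _ in range(steps):
        force = np.zeros_like(x)
        for i in range(x.size):
            for j in range(x.size):
                if i == j:
                    continue
                overlap = width - abs(x[i] - x[j])   # > 0: components overlap
                if overlap > 0:
                    direction = 1.0 if x[i] >= x[j] else -1.0
                    force[i] += direction * overlap  # repulsive virtual force
        # move along the forces, keeping components inside the container
        x = np.clip(x + lr * force, container[0] + half, container[1] - half)
    return x

x = layout_1d([4.9, 5.0, 5.1], width=1.0)
```

After relaxation, every pair of centers ends up at least one component width apart, which is the overlap-free condition the virtual forces enforce.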
Active Learning Strategy for Surrogate-Based Quantile Estimation of Field Function
Uncertainty quantification is widely used in engineering domains to provide confidence measures on complex systems. It often requires accurately estimating extreme statistics of computationally intensive black-box models. When model outputs are spatially or temporally distributed, one valuable metric is the extreme quantile of the output stochastic field. In this paper, a novel active learning surrogate-based method is proposed to determine the quantile of a one-dimensional output stochastic process together with a confidence measure, which makes it possible to control the error on the estimation of an extreme quantile of the process. The proposed approach combines dimension reduction techniques, Gaussian processes, and an adaptive refinement strategy to enrich the surrogate model and control the accuracy of the quantile estimation. The methodology is applied to an analytical test case and to a realistic aerospace problem in which the estimation of a flight envelope is of prime importance for launch safety in the space industry.
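As a rough illustration of quantile estimation with a confidence measure, here is a hedged sketch using plain Monte Carlo plus a bootstrap confidence interval. The paper's actual method instead builds an adaptively refined Gaussian-process surrogate of a field output; the model and quantile level below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(u):                 # illustrative stand-in for the black-box model
    return np.sin(u) + 0.1 * u**2

u = rng.normal(size=20_000)   # samples of the uncertain input
y = model(u)
q_hat = np.quantile(y, 0.99)  # extreme quantile estimate

# Bootstrap resampling gives a simple confidence measure on the estimate
boot = np.array([np.quantile(rng.choice(y, size=y.size), 0.99)
                 for _ in range(200)])
lo, hi = np.quantile(boot, [0.025, 0.975])
```

An active learning scheme would use such a confidence measure to decide where to evaluate the expensive model next, instead of sampling it 20,000 times as done here.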
Sequential calibration of material constitutive model using mixed-effects calibration
Identifying model parameters is nowadays intrinsically linked with quantifying the associated uncertainties. While classical methods can handle some types of uncertainty, such as experimental noise, they are not designed to account for the variability between test specimens, which is significant in particular for composite materials. The impact of this intrinsic variability on the material properties can be estimated using population approaches, in which the variability is modeled by a probability distribution (e.g., a multivariate Gaussian distribution). The objective is then to calibrate this distribution (or, equivalently, its parameters for a parametric distribution). Estimation methods of this kind include mixed-effects models, in which the parameters characterizing each replication are decomposed into the population-averaged behavior (the fixed effects) and the impact of material variability (the random effects). Yet, when the number of model parameters or the computational cost of a single simulation run increases (for multiaxial models, for instance), the simultaneous global identification of all the material parameters becomes difficult, both because of the number of unknown quantities to estimate and because of the required model evaluations. Furthermore, the parameters do not all influence the material constitutive model equally; their influence depends, for instance, on the nature of the load (e.g., tension, compression). The method proposed in this paper calibrates the model on multiple experiments by decomposing the overall calibration problem into a sequence of calibrations, each subproblem calibrating the joint distribution of a subset of the model parameters. The calibration process is eased because the number of unknown parameters in each subproblem is reduced compared to the full problem.
The proposed calibration process is applied to an orthotropic elastic model with nonlinear longitudinal behavior, for a unidirectional composite ply made of carbon fibers and epoxy resin. The ability of the method to sequentially estimate the distribution of the model parameters is investigated, and its capability to ensure consistency throughout the calibration process is discussed. Results show that the methodology can handle the calibration of complex material constitutive models within the mixed-effects framework.
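A minimal numerical illustration of the mixed-effects setting, assuming a made-up linear "material model": each specimen's stiffness is the population mean (fixed effect) plus a specimen-specific random effect, and what is calibrated is the population distribution of the stiffness rather than a single value. This is a toy, not the paper's sequential algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative population: stiffness k_i = k_pop + b_i, with random effect
# b_i ~ N(0, sd_pop^2); each specimen is observed through y = k_i * x + noise.
k_pop, sd_pop, noise = 100.0, 5.0, 0.5
x = np.linspace(0.1, 1.0, 10)          # load levels of one test

k_hat = []
for _ in range(200):                    # 200 test specimens
    k_i = k_pop + sd_pop * rng.normal()                 # random effect
    y = k_i * x + noise * rng.normal(size=x.size)       # noisy measurements
    k_hat.append(np.sum(x * y) / np.sum(x * x))         # least-squares slope

# Calibrated population distribution: mean (fixed effect) and spread
mean_est = np.mean(k_hat)
sd_est = np.std(k_hat, ddof=1)
```

The estimated spread mixes specimen variability with measurement noise; a full mixed-effects estimator would separate the two, which matters when noise is not negligible.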
Estimation of Rare Event Probabilities in Complex Aerospace and Other Systems
Rare event probability estimation (probabilities of 10⁻⁴ and below) has become a large area of research in the reliability engineering and system safety domains. A significant number of methods have been proposed to reduce the computational burden of estimating rare events, from advanced sampling approaches to extreme value theory.
Modified Covariance Matrix Adaptation – Evolution Strategy algorithm for constrained optimization under uncertainty, application to rocket design
The design of complex systems often leads to a constrained optimization problem under uncertainty. An adaptation of the CMA-ES(λ, μ) optimization algorithm is proposed in order to handle the constraints efficiently in the presence of noise. The update mechanisms of the parametrized distribution used to generate the candidate solutions are modified: the constraint handling method reduces the semi-principal axes of the probable search ellipsoid in the directions that violate the constraints. The proposed approach is compared to existing approaches on three analytical optimization problems to highlight the efficiency and robustness of the algorithm, and it is then used to design a two-stage solid-propulsion launch vehicle.
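The "generate candidates from a parametrized distribution, adapt it using feasibility and fitness" loop can be sketched as follows. This is a minimal (1+1) evolution strategy with a simple step-size rule, not the paper's modified CMA-ES(λ, μ); the objective, constraint, and tuning constants are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):                      # illustrative objective: sphere function
    return float(np.sum(x**2))

def feasible(x):
    return x[0] >= 1.0         # illustrative constraint: g(x) = 1 - x[0] <= 0

x = np.array([3.0, 3.0])       # feasible starting point
sigma = 0.5                    # mutation step size (a scalar "search ellipsoid")
for _ in range(2000):
    cand = x + sigma * rng.normal(size=2)
    if feasible(cand) and f(cand) < f(x):
        x, sigma = cand, sigma * 1.1   # success: accept and widen the search
    else:
        sigma *= 0.98                  # failure or infeasible: shrink the search
# the constrained optimum is (1, 0) with f = 1
```

The paper's method goes further: instead of a scalar step size, it shrinks the covariance matrix of the sampling distribution only along the directions that violate constraints, preserving progress in the feasible directions.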
Analysis of multi-objective Kriging-based methods for constrained global optimization
Metamodeling, i.e., building surrogate models of expensive black-box functions, is an attractive way to reduce the computational burden of optimization. Kriging is a popular metamodel based on Gaussian process theory, whose statistical properties have been exploited to build efficient global optimization algorithms. Single- and multi-objective extensions have been proposed to deal with constrained optimization when the constraints are also evaluated numerically. This paper first compares these methods on a representative analytical benchmark. A new multi-objective approach is then proposed that also takes into account the prediction accuracy of the constraints. A numerical evaluation is provided on the same analytical benchmark and on a realistic aerospace case study.
Efficient global optimization of constrained mixed variable problems
Due to the increasing demand for high performance and cost reduction in complex system design, numerical optimization of computationally costly problems is an increasingly popular topic in most engineering fields. In this paper, several variants of the Efficient Global Optimization algorithm are proposed for costly constrained problems that depend simultaneously on continuous decision variables and on quantitative and/or qualitative discrete design parameters. The adaptation is based on a redefinition of the Gaussian process kernel as a product between the standard continuous kernel and a second kernel representing the covariance between the discrete variable values. Several parameterizations of this discrete kernel, with their respective strengths and weaknesses, are discussed. The novel algorithms are tested on a number of analytical test cases and on an aerospace-related design problem, and it is shown that they require fewer function evaluations to converge towards the neighborhoods of the problem optima than more commonly used optimization algorithms.
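The kernel-product construction can be sketched directly. Below is a hedged toy assuming one continuous and one categorical variable: the mixed kernel multiplies a standard RBF kernel by a simple exchangeable discrete kernel (one possible parameterization among the richer ones the paper discusses). Names and hyperparameter values are illustrative.

```python
import numpy as np

def k_cont(x1, x2, length=0.5):
    """Standard RBF kernel on the continuous variable."""
    return np.exp(-((x1 - x2) ** 2) / (2 * length**2))

def k_disc(c1, c2, rho=0.4):
    """Exchangeable discrete kernel: 1 on the same level, rho otherwise."""
    return 1.0 if c1 == c2 else rho

def k_mixed(p1, p2):
    """Mixed kernel = product of the continuous and discrete kernels."""
    (x1, c1), (x2, c2) = p1, p2
    return k_cont(x1, x2) * k_disc(c1, c2)

# Covariance matrix over three mixed points (continuous value, category)
pts = [(0.0, "A"), (0.0, "B"), (1.0, "A")]
K = np.array([[k_mixed(a, b) for b in pts] for a in pts])
```

Since both factors are valid (positive semi-definite) kernels, their product is too, so `K` can serve directly as a Gaussian process covariance in an EGO loop.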
Probability of failure sensitivity with respect to decision variables
This note derives the sensitivities of a probability of failure with respect to decision variables. For instance, the gradient of the probability of failure with respect to deterministic design variables may be needed in reliability-based design optimization (RBDO), and these sensitivities can also be useful for uncertainty-based multidisciplinary design optimization. The difficulty stems from the dependence of the failure domain on variations of the decision variables: this dependence introduces a derivative of the indicator function, in the form of a Dirac distribution, in the expression of the sensitivities. Based on an approximation of the Dirac distribution, an estimator of the sensitivities is derived analytically, first for Crude Monte Carlo and then for Subset Simulation. The choice of the Dirac approximation is discussed.
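The Dirac issue and its smoothing can be illustrated with a crude Monte Carlo sketch, assuming a trivial limit state g(x, θ) = θ − x with x standard normal, so that P_f = P(x > θ) and dP_f/dθ = −φ(θ) are known exactly. A logistic sigmoid replaces the indicator, and its θ-derivative gives a smooth bump approximating the Dirac term; this is a generic smoothing for illustration, not necessarily the note's exact approximation.

```python
import numpy as np

rng = np.random.default_rng(3)

theta = 1.0            # decision variable
eps = 0.05             # Dirac smoothing width
x = rng.normal(size=200_000)

# Failure when g(x, theta) = theta - x < 0, i.e. x > theta.
# Smoothed indicator: 1{g < 0} ~= sigmoid((x - theta) / eps).
z = np.clip((x - theta) / eps, -60.0, 60.0)   # clip to avoid exp overflow
s = 1.0 / (1.0 + np.exp(-z))

pf_hat = s.mean()                         # ~ P_f = P(x > 1), about 0.159
dpf_hat = (-s * (1.0 - s) / eps).mean()   # ~ dP_f/dtheta = -phi(1), about -0.242
```

Shrinking `eps` reduces the smoothing bias but inflates the Monte Carlo variance of the derivative estimator, which is exactly the trade-off behind the note's discussion of the Dirac approximation.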
A survey of multidisciplinary design optimization methods in launch vehicle design
Optimal design of launch vehicles is a complex problem that requires specific techniques known as Multidisciplinary Design Optimization (MDO) methods. MDO methodologies are applied in various domains and are an effective strategy for solving such optimization problems. This paper surveys the different MDO methods and their applications to launch vehicle design. It focuses on the analysis of the launch vehicle design problem and brings out the advantages and drawbacks of the main MDO methods for this specific problem. Characteristics such as robustness, computational cost, flexibility, convergence speed, and implementation difficulty are considered in order to determine which methods are most appropriate in the launch vehicle design framework. From this analysis, several improvements to the MDO methods are proposed to take the specificities of the launch vehicle design problem into account and improve the efficiency of the optimization process.