13,416 result(s) for "Global optimization"
Chaotic grasshopper optimization algorithm for global optimization
Grasshopper optimization algorithm (GOA) is a new meta-heuristic algorithm inspired by the swarming behavior of grasshoppers. The present study introduces chaos theory into the optimization process of GOA to accelerate its global convergence. Chaotic maps are employed to balance exploration and exploitation efficiently and to drive the reduction of the repulsion/attraction forces between grasshoppers during the optimization process. The proposed chaotic GOA algorithms are benchmarked on thirteen test functions. The results show that chaotic maps (especially the circle map) are able to significantly boost the performance of GOA.
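The circle map singled out in the abstract is easy to sketch. The parameter values a = 0.5 and b = 0.2 below are a common choice in the chaotic-map literature, not necessarily the ones used in the paper; each iterate stays in [0, 1) and can replace a uniform random number in the algorithm's update rule:

```python
import math

def circle_map(x, a=0.5, b=0.2):
    # One iterate of the circle map; the output stays in [0, 1)
    return (x + b - (a / (2 * math.pi)) * math.sin(2 * math.pi * x)) % 1.0

# generate a short chaotic sequence from an initial seed value
x, seq = 0.7, []
for _ in range(10):
    x = circle_map(x)
    seq.append(x)
```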
Efficient computation of expected hypervolume improvement using box decomposition algorithms
In the field of multi-objective optimization algorithms, multi-objective Bayesian Global Optimization (MOBGO) is an important branch, in addition to evolutionary multi-objective optimization algorithms. MOBGO utilizes Gaussian Process models learned from previous objective function evaluations to decide the next evaluation site by maximizing or minimizing an infill criterion. A commonly used criterion in MOBGO is the Expected Hypervolume Improvement (EHVI), which shows good performance on a wide range of problems with respect to exploration and exploitation. However, so far it has been a challenge to calculate exact EHVI values efficiently. This paper proposes an efficient algorithm for the exact calculation of the EHVI in a generic case, based on partitioning the integration volume into a set of axis-parallel slices. Theoretically, the upper-bound time complexities can be improved from the previous O(n^2) and O(n^3), for two- and three-objective problems respectively, to Θ(n log n), which is asymptotically optimal. The article generalizes the scheme to higher-dimensional cases by utilizing a new hyperbox decomposition technique proposed by Dächert et al. (Eur J Oper Res 260(3):841–855, 2017), together with a generalization of the multilayered integration scheme that scales linearly in the number of hyperboxes of the decomposition. The speed comparison shows that the proposed algorithm significantly reduces computation time. Finally, the decomposition technique is applied to the calculation of the Probability of Improvement (PoI).
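For intuition, the deterministic quantity whose expectation the EHVI computes, the hypervolume improvement of a candidate over the current Pareto front, can be computed directly in the two-objective minimization case. This is only an illustrative sweep over the front, not the paper's integration algorithm:

```python
def hypervolume_2d(front, ref):
    # Hypervolume dominated by a non-dominated front (minimization)
    # relative to the reference point `ref`, via a sweep over f1
    pts = sorted(front)               # ascending f1 -> descending f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def nondominated(pts):
    # keep only points not weakly dominated by a distinct point
    return [p for p in pts
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in pts)]

def hvi(front, p, ref):
    # Hypervolume improvement contributed by candidate point p
    return hypervolume_2d(nondominated(front + [p]), ref) - hypervolume_2d(front, ref)
```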
An adaptive Bayesian approach to gradient-free global optimization
Many problems in science and technology require finding global minima or maxima of complicated objective functions. The importance of global optimization has inspired the development of numerous heuristic algorithms based on analogies with physical, chemical or biological systems. Here we present a novel algorithm, SmartRunner, which employs a Bayesian probabilistic model of the history of accepted and rejected moves to make an informed decision about the next random trial. Thus, SmartRunner intelligently adapts its search strategy to a given objective function and moveset, with the goal of maximizing fitness gain (or energy loss) per function evaluation. Our approach is equivalent to adding a simple adaptive penalty to the original objective function, with SmartRunner performing hill ascent on the modified landscape. The adaptive penalty can be added to many other global optimization schemes, enhancing their ability to find high-quality solutions. We have explored SmartRunner's performance on a standard set of test functions, the Sherrington–Kirkpatrick spin glass model, and Kauffman's NK fitness model, finding that it compares favorably with several widely used alternative approaches to gradient-free optimization.
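The "hill ascent on a penalized landscape" idea can be sketched on a toy problem. The fixed visit-count penalty below is an assumption for illustration only; SmartRunner's actual penalty is derived from its Bayesian model of accepted and rejected moves:

```python
import random

def penalized_hill_ascent(f, x0, neighbors, steps=3000, lam=0.1, seed=0):
    """Hill ascent on the penalized landscape f(x) - lam * visits(x).
    A loose sketch of the adaptive-penalty idea only: the penalty here
    is a simple visit counter, not SmartRunner's Bayesian penalty."""
    rng = random.Random(seed)
    visits = {}
    x = best = x0
    for _ in range(steps):
        visits[x] = visits.get(x, 0) + 1
        y = rng.choice(neighbors(x))
        # accept moves that do not decrease the penalized objective
        if f(y) - lam * visits.get(y, 0) >= f(x) - lam * visits[x]:
            x = y
        if f(x) > f(best):
            best = x
    return best

# toy landscape: maximize the number of 1-bits in an 8-bit string
onemax = lambda x: sum(x)
flip_one = lambda x: [tuple(x[k] ^ (k == i) for k in range(len(x)))
                      for i in range(len(x))]
best = penalized_hill_ascent(onemax, (0,) * 8, flip_one)
```

The penalty makes revisited states progressively less attractive, so the walk keeps moving even on plateaus while the best solution found is retained.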
Variable-fidelity expected improvement method for efficient global optimization of expensive functions
The efficient global optimization method (EGO) based on the kriging surrogate model and expected improvement (EI) has received much attention for the optimization of high-fidelity, expensive functions. However, when the standard EI method is directly applied to variable-fidelity optimization (VFO), which introduces assistance from cheap, low-fidelity functions via hierarchical kriging (HK) or cokriging, only high-fidelity samples can be chosen to update the variable-fidelity surrogate model; the theory of infilling low-fidelity samples towards the improvement of the high-fidelity function has remained unexplored. This article proposes a variable-fidelity EI (VF-EI) method that can adaptively select new samples of both low and high fidelity. Based on the theory of the HK model, the EI of the high-fidelity function associated with adding low- and high-fidelity sample points is analytically derived, and the resulting VF-EI is a function of both the design variables x and the fidelity level l. By maximizing the VF-EI, both the sample location and the fidelity level of the next numerical evaluation are determined, which in turn drives the optimization towards the global optimum of the high-fidelity function. The proposed VF-EI is verified by six analytical test cases and demonstrated by two engineering problems, including aerodynamic shape optimizations of the RAE 2822 airfoil and the ONERA M6 wing. The results show that it remarkably improves optimization efficiency and compares favorably to existing methods.
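The standard single-fidelity EI that VF-EI generalizes has a well-known closed form; a minimal sketch for minimization, given the kriging prediction mu and standard deviation sigma at a candidate point (the fidelity-dependent terms of VF-EI itself are not reproduced here):

```python
import math

def expected_improvement(mu, sigma, y_best):
    # Closed-form EI for minimization: (y_best - mu) * Phi(z) + sigma * phi(z)
    if sigma <= 0.0:
        return max(y_best - mu, 0.0)
    z = (y_best - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # standard normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))          # standard normal cdf
    return (y_best - mu) * Phi + sigma * phi
```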
Efficient global optimization of constrained mixed variable problems
Due to the increasing demand for high performance and cost reduction within the framework of complex system design, numerical optimization of computationally costly problems is an increasingly popular topic in most engineering fields. In this paper, several variants of the Efficient Global Optimization algorithm for costly constrained problems depending simultaneously on continuous decision variables as well as on quantitative and/or qualitative discrete design parameters are proposed. The adaptation considered is based on a redefinition of the Gaussian Process kernel as a product of the standard continuous kernel and a second kernel representing the covariance between the discrete variable values. Several parameterizations of this discrete kernel, with their respective strengths and weaknesses, are discussed. The novel algorithms are tested on a number of analytical test cases and an aerospace-related design problem, and it is shown that they require fewer function evaluations to converge towards the neighborhoods of the problem optima when compared to more commonly used optimization algorithms.
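The product-kernel construction can be sketched with the simplest possible discrete part. The exchangeable single-parameter discrete kernel below is one illustrative choice; the paper discusses several richer parameterizations:

```python
import math

def mixed_kernel(x1, c1, x2, c2, theta=1.0, tau=0.5):
    """Product of a continuous Gaussian kernel over x and a simple
    exchangeable kernel over a categorical level c (illustrative
    parameterization; 0 < tau < 1 is the cross-category correlation)."""
    cont = math.exp(-theta * sum((a - b) ** 2 for a, b in zip(x1, x2)))
    disc = 1.0 if c1 == c2 else tau
    return cont * disc
```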
Pseudo expected improvement criterion for parallel EGO algorithm
The efficient global optimization (EGO) algorithm is famous for its high efficiency in solving computationally expensive optimization problems. However, the expected improvement (EI) criterion used for selecting candidate points in the EGO process produces only one design point per optimization cycle, which is wasteful when parallel computing is available. In this work, a new criterion called pseudo expected improvement (PEI) is proposed for developing parallel EGO algorithms. In each cycle, the first updating point is selected by the initial EI function. After that, the PEI function is built to approximate the real updated EI function by multiplying the initial EI function by an influence function of the updating point. The influence function is designed to simulate the impact that the updating point will have on the EI function, and depends only on the position of the updating point (not its function value). Therefore, the next updating point can be identified by maximizing the PEI function without evaluating the first updating point. As the sequential process goes on, a desired number of updating points can be selected by the PEI criterion within one optimization cycle. The efficiency of the proposed PEI criterion is validated on six benchmarks with dimensions from 2 to 6. The results show that the proposed PEI algorithm performs significantly better than the standard EGO algorithm, and outperforms a state-of-the-art parallel EGO algorithm on five of the six test problems. Furthermore, additional experiments show that convergence degrades significantly when the global maximum of the PEI function is not found; it is therefore recommended to use as many evaluations as one can afford to find the global maximum of the PEI function.
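The multiplicative construction described above can be sketched directly. The specific influence function here, one minus a Gaussian correlation so that it is 0 at a selected point and approaches 1 far away, is an assumed form, not necessarily the paper's:

```python
import math

def influence(x, x_u, theta=2.0):
    # Influence of an already-selected (not yet evaluated) update point x_u:
    # 0 at x_u, approaching 1 far away (assumed 1-minus-Gaussian form)
    d2 = sum((a - b) ** 2 for a, b in zip(x, x_u))
    return 1.0 - math.exp(-theta * d2)

def pseudo_ei(ei, x, selected, theta=2.0):
    # PEI(x) = EI(x) times the influence of every point already
    # selected in the current optimization cycle
    val = ei(x)
    for x_u in selected:
        val *= influence(x, x_u, theta)
    return val
```

Because PEI vanishes at each selected point, successive maximizations are pushed away from points already chosen in the cycle, without any expensive evaluations.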
A multi-fidelity Bayesian optimization approach based on the expected further improvement
Sampling efficiency is important for simulation-based design optimization. While Bayesian optimization (BO) has been successfully applied to engineering problems, the cost associated with large-scale simulations has not been fully addressed. Extending standard BO approaches to multi-fidelity optimization can utilize the information of low-fidelity models to further reduce the optimization cost. In this work, a multi-fidelity Bayesian optimization approach is proposed, in which hierarchical Kriging is used to construct the multi-fidelity metamodel. The proposed approach quantifies the effect of high-fidelity (HF) and low-fidelity (LF) samples in multi-fidelity optimization based on a new concept of expected further improvement. A novel acquisition function is proposed to determine both the location and the fidelity level of the next sample simultaneously, balancing the value of the information provided by the new sample against the associated sampling cost. The proposed approach is compared with state-of-the-art methods for multi-fidelity global optimization on numerical examples and an engineering case. The results show that the proposed approach can obtain globally optimal solutions with reduced computational cost.
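The value-versus-cost trade-off in the acquisition step can be sketched generically. Both `value` and `cost` below are illustrative stand-ins for the paper's expected-further-improvement acquisition and per-level evaluation cost:

```python
def select_next_sample(candidates, value, cost):
    """Pick the (location, fidelity-level) pair that maximizes value of
    information per unit sampling cost (a stand-in for maximizing the
    paper's cost-aware acquisition function)."""
    return max(candidates, key=lambda c: value(c) / cost(c[1]))

# toy example: an LF sample (level 0) is 10x cheaper than HF (level 1)
candidates = [((0.2,), 0), ((0.2,), 1), ((0.8,), 1)]
pick = select_next_sample(candidates,
                          value=lambda c: {0: 0.3, 1: 1.0}[c[1]],
                          cost=lambda lvl: {0: 1.0, 1: 10.0}[lvl])
```

Here the cheap LF sample wins despite its lower raw value, which is exactly the behavior a cost-aware acquisition is meant to produce.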
MFO-SFR: An Enhanced Moth-Flame Optimization Algorithm Using an Effective Stagnation Finding and Replacing Strategy
Moth-flame optimization (MFO) is a prominent problem solver with a simple structure that is widely used to solve different optimization problems. However, MFO and its variants inherently suffer from poor population diversity, leading to premature convergence to local optima and a loss of solution quality. To overcome these limitations, an enhanced moth-flame optimization algorithm named MFO-SFR was developed to solve global optimization problems. The MFO-SFR algorithm introduces an effective stagnation finding and replacing (SFR) strategy to maintain population diversity throughout the optimization process. The SFR strategy finds stagnant solutions using a distance-based technique and replaces them with a solution selected from an archive constructed from previous solutions. The effectiveness of the proposed MFO-SFR algorithm was extensively assessed in 30 and 50 dimensions using the CEC 2018 benchmark functions, which cover unimodal, multimodal, hybrid, and composition problems. The results were compared with two sets of competitors. The first comparative set comprised the MFO algorithm and its well-known variants, specifically LMFO, WCMFO, CMFO, ODSFMFO, SMFO, and WMFO. Five state-of-the-art metaheuristic algorithms, including PSO, KH, GWO, CSA, and HOA, were considered in the second comparative set. The results were statistically analyzed with the Friedman test. Finally, the capacity of the proposed algorithm to solve mechanical engineering problems was evaluated on two problems from the latest CEC 2020 test suite. The experimental results and statistical analysis confirmed that the proposed MFO-SFR algorithm was superior to the MFO variants and the state-of-the-art metaheuristic algorithms for solving complex global optimization problems, with 91.38% effectiveness.
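A distance-based find-and-replace step of the kind the abstract describes can be sketched as follows. The thresholds and the crowding test are illustrative assumptions, not the paper's exact SFR detector:

```python
import math
import random

def find_stagnant(population, no_improve, d_min=0.05, limit=10):
    """Flag solutions as stagnant when they have not improved for
    `limit` iterations and sit within distance `d_min` of another
    solution (illustrative criteria only)."""
    stagnant = []
    for i, xi in enumerate(population):
        if no_improve[i] < limit:
            continue
        crowded = any(j != i and math.dist(xi, population[j]) < d_min
                      for j in range(len(population)))
        if crowded:
            stagnant.append(i)
    return stagnant

def replace_from_archive(population, stagnant, archive, rng=random.Random(0)):
    # Replace each stagnant solution with one drawn from the archive
    for i in stagnant:
        population[i] = rng.choice(archive)
    return population
```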
Parallelized multiobjective efficient global optimization algorithm and its applications
In engineering practice, most optimization problems have multiple objectives, which usually take the form of expensive black-box functions. Multiobjective efficient global optimization (MOEGO) algorithms have been proposed recently to sequentially sample the design space, aiming to find optima with a minimum number of sampling points. With the advance in computing resources, it is wise to make optimization parallelizable to further shorten the total design cycle. In this study, two different parallelized multiobjective efficient global optimization algorithms were proposed on the basis of the Kriging modeling technique. Using the multiobjective expected improvement, the proposed algorithms are able to balance local exploitation and global exploration. To implement parallel computing, the "Kriging Believer" and "multiple good local optima" strategies were adopted to develop new sample infill criteria for multiobjective optimization problems. The proposed algorithms were first applied to five mathematical benchmark examples, which demonstrated faster convergence and better accuracy, with a more uniform distribution of Pareto points, in comparison with two other conventional algorithms. The best-performing "Kriging Believer" strategy was then applied to two more sophisticated real-life engineering case studies on tailor-rolled blank (TRB) structures for crashworthiness design. After optimization, the TRB hat-shaped tube achieved a 3% increase in energy absorption and a 10.7% reduction in mass, and the TRB B-pillar attained a 10.1% reduction in mass and a 12.8% decrease in intrusion, simultaneously. These benchmark and engineering examples demonstrate that the proposed methods are a promising and effective tool for a range of design problems.
The Orb-Weaving Spider Algorithm for Training of Recurrent Neural Networks
The quality of neural networks in solving applied problems is determined by the success of their training stage. Training a neural network is a complex optimization task, and traditional learning algorithms have a number of disadvantages, such as getting stuck in local minima and a low convergence rate. Modern approaches solve the problem of adjusting neural network weights using metaheuristic algorithms, so selecting an optimal set of algorithm parameter values is important for solving applied problems with symmetry properties. This paper studies the application of a new metaheuristic optimization algorithm for weight adjustment, the orb-weaving spider algorithm developed by the authors of this article. The proposed approach is evaluated by adjusting the weights of recurrent neural networks used to solve the time series forecasting problem on three different datasets. The results are compared with those of neural networks trained by the error backpropagation algorithm, as well as three other metaheuristic algorithms: particle swarm optimization, the bat algorithm, and differential evolution. As performance criteria for comparing the global optimization algorithms, descriptive statistics for the predictive-model quality metrics, as well as the number of objective function evaluations, are used. The MSE and MAE metrics obtained on the studied datasets by adjusting the neural network weights with the orb-weaving spider algorithm were 1.32, 25.48, 8.34 and 0.38, 2.18, 1.36, respectively. Compared to the error backpropagation algorithm, the orb-weaving spider algorithm reduced the error metrics. According to the results of the study, the developed algorithm showed high performance and, in the assessment of performance, was not inferior to existing algorithms.
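The core idea of metaheuristic weight adjustment, treating training as black-box global optimization over a flat weight vector, can be illustrated with a simple (1+1) random search standing in for the orb-weaving spider algorithm, whose update rules are not reproduced here:

```python
import random

def train_black_box(loss, dim, iters=200, step=0.5, seed=1):
    # (1+1) random search over a flat weight vector; any metaheuristic
    # (PSO, differential evolution, the spider algorithm, ...) could
    # replace this inner loop
    rng = random.Random(seed)
    w = [0.0] * dim
    best_loss = loss(w)
    for _ in range(iters):
        cand = [wi + rng.gauss(0.0, step) for wi in w]
        cand_loss = loss(cand)
        if cand_loss < best_loss:
            w, best_loss = cand, cand_loss
    return w, best_loss

# toy quadratic "training loss" with optimum at w = (1, 1, 1);
# a real use would compute MSE of the RNN's forecasts instead
quad = lambda w: sum((wi - 1.0) ** 2 for wi in w)
w, final = train_black_box(quad, 3)
```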