Catalogue Search | MBRL
Explore the vast range of titles available.
6,503 result(s) for "optimization algorithm robustness"
Evolving the Whale Optimization Algorithm: The Development and Analysis of MISWOA
by Li, Chunfang; Jiang, Mingyi; Song, Linsen
in Adaptability; adaptive spiral indentation strategy; Algorithms
2024
This paper introduces an enhanced Whale Optimization Algorithm, named the Multi-Swarm Improved Spiral Whale Optimization Algorithm (MISWOA), designed to address the shortcomings of the traditional Whale Optimization Algorithm (WOA) in global search capability and convergence velocity. MISWOA combines an adaptive nonlinear convergence factor with a variable gain compensation mechanism, adaptive weights, and an advanced spiral convergence strategy, resulting in a significant enhancement of the algorithm’s global search capability, convergence velocity, and precision. Moreover, MISWOA incorporates a multi-population mechanism, further bolstering the algorithm’s efficiency and robustness. Finally, an extensive validation of MISWOA through a combined simulation-and-experimentation approach has been conducted, demonstrating that MISWOA surpasses other algorithms, including the traditional WOA and its variants, in convergence accuracy and algorithmic efficiency. This validates the effectiveness of the improvement method and the strong performance of MISWOA, while also highlighting its substantial potential for application in practical engineering scenarios. This study not only presents an improved optimization algorithm but also constructs a systematic framework for analysis and research, offering novel insights for the comprehension and refinement of swarm intelligence algorithms.
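As background for the abstract above, here is a minimal sketch of the spiral exploitation move of the standard WOA that MISWOA builds on; the multi-swarm, adaptive-weight, and compensation mechanisms described in the paper are not shown, and the function name is illustrative:

```python
import math
import random

def woa_spiral_step(x, best, b=1.0):
    """One spiral-exploitation move of standard WOA:
    X(t+1) = D' * exp(b*l) * cos(2*pi*l) + X*, where D' = |X* - X|
    and l is drawn uniformly from [-1, 1]."""
    l = random.uniform(-1.0, 1.0)
    return [abs(bx - xi) * math.exp(b * l) * math.cos(2 * math.pi * l) + bx
            for xi, bx in zip(x, best)]
```

Note that when a candidate already sits on the best-so-far position, D' is zero and the move leaves it in place; the spiral only perturbs candidates at a distance from the leader.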
Journal Article
Novel Greylag Goose Optimization Algorithm with Evolutionary Game Theory (EGGO)
2025
In this paper, an Enhanced Greylag Goose Optimization Algorithm (EGGO) based on evolutionary game theory is presented to address the limitations of the traditional Greylag Goose Optimization Algorithm (GGO) in global search ability and convergence speed. By incorporating dynamic strategy adjustment from evolutionary game theory, EGGO improves global search efficiency and convergence speed. Furthermore, EGGO employs dynamic grouping, random mutation, and local search enhancement to boost efficiency and robustness. Experimental comparisons on standard test functions and the CEC 2022 benchmark suite show that EGGO outperforms other classic algorithms and variants in convergence precision and speed. Its effectiveness in practical optimization problems is also demonstrated through applications in engineering design, such as the design of tension/compression springs, gear trains, and three-bar trusses. EGGO offers a novel solution for optimization problems and provides a new theoretical foundation and research framework for swarm intelligence algorithms.
Journal Article
A novel swarm intelligence optimization approach: sparrow search algorithm
2020
In this paper, a novel swarm optimization approach, namely the sparrow search algorithm (SSA), is proposed, inspired by the group wisdom, foraging, and anti-predation behaviours of sparrows. Experiments on 19 benchmark functions are conducted to test the performance of the SSA, and its performance is compared with other algorithms such as the grey wolf optimizer (GWO), gravitational search algorithm (GSA), and particle swarm optimization (PSO). Simulation results show that the proposed SSA is superior to GWO, PSO, and GSA in terms of accuracy, convergence speed, stability, and robustness. Finally, the effectiveness of the proposed SSA is demonstrated in two practical engineering examples.
Journal Article
Artificial lemming algorithm: a novel bionic meta-heuristic technique for solving real-world engineering optimization problems
2025
The advent of the intelligent information era has witnessed a proliferation of complex optimization problems across various disciplines. Although existing meta-heuristic algorithms have demonstrated efficacy in many scenarios, they still struggle with certain challenges such as premature convergence, insufficient exploration, and lack of robustness in high-dimensional, nonconvex search spaces. These limitations underscore the need for novel optimization techniques that can better balance exploration and exploitation while maintaining computational efficiency. In response to this need, we propose the Artificial Lemming Algorithm (ALA), a bio-inspired metaheuristic that mathematically models four distinct behaviors of lemmings in nature: long-distance migration, digging holes, foraging, and evading predators. Specifically, the long-distance migration and burrow digging behaviors are dedicated to highly exploring the search domain, whereas the foraging and evading predators behaviors provide exploitation during the optimization process. In addition, ALA incorporates an energy-decreasing mechanism that enables dynamic adjustments to the balance between exploration and exploitation, thereby enhancing its ability to evade local optima and converge to global solutions more robustly. To thoroughly verify the effectiveness of the proposed method, ALA is compared with 17 other state-of-the-art meta-heuristic algorithms on the IEEE CEC2017 benchmark test suite and the IEEE CEC2022 benchmark test suite. The experimental results indicate that ALA has reliable comprehensive optimization performance and can achieve superior solution accuracy, convergence speed, and stability in most test cases. 
For the 29 10-, 30-, 50-, and 100-dimensional CEC2017 functions, ALA obtains the lowest Friedman average ranking values among all competitor methods, which are 1.7241, 2.1034, 2.7241, and 2.9310, respectively, and for the 12 CEC2022 functions, ALA again wins the optimal Friedman average ranking of 2.1667. Finally, to further evaluate its applicability, ALA is implemented to address a series of optimization cases, including constrained engineering design, photovoltaic (PV) model parameter identification, and fractional-order proportional-differential-integral (FOPID) controller gain tuning. Our findings highlight the competitive edge and potential of ALA for real-world engineering applications. The source code of ALA is publicly available at https://github.com/StevenShaw98/Artificial-Lemming-Algorithm.
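The Friedman average rankings reported above can be reproduced from per-function scores; the following is a minimal sketch (the function name is illustrative, and lower scores are assumed better, as with benchmark error values):

```python
def friedman_average_ranks(scores):
    """scores[f][a] = score of algorithm a on function f (lower is better).
    Returns each algorithm's rank averaged over all functions, giving
    tied entries the mean of their tied rank positions."""
    n_fun, n_alg = len(scores), len(scores[0])
    totals = [0.0] * n_alg
    for row in scores:
        order = sorted(range(n_alg), key=lambda a: row[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            j = i
            # extend j over a run of tied scores
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1
            mean_rank = (i + j) / 2 + 1  # ranks are 1-based
            for k in range(i, j + 1):
                ranks[order[k]] = mean_rank
            i = j + 1
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / n_fun for t in totals]
```

An average rank near 1 means the algorithm was best on nearly every function, which is how values such as 1.7241 over 29 functions should be read.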
Journal Article
Gaining-sharing knowledge based algorithm for solving optimization problems: a novel nature-inspired algorithm
by Mohamed, Ali Wagdy; Mohamed, Ali Khater; Hadi, Anas A.
in Artificial Intelligence; Citations; Complex Systems
2020
This paper proposes a novel nature-inspired algorithm called the Gaining Sharing Knowledge based Algorithm (GSK) for solving optimization problems over continuous space. The GSK algorithm mimics the process of gaining and sharing knowledge during the human life span. It is based on two vital stages, the junior gaining and sharing phase and the senior gaining and sharing phase, and the present work mathematically models these two phases to achieve the process of optimization. In order to verify and analyze the performance of GSK, numerical experiments are conducted on a set of 30 test problems from the CEC2017 benchmark for 10, 30, 50, and 100 dimensions. Besides, the GSK algorithm has been applied to solve the set of real-world optimization problems proposed for the IEEE-CEC2011 evolutionary algorithm competition. A comparison with 10 state-of-the-art and recent metaheuristic algorithms is executed. Experimental results indicate that, in terms of robustness, convergence, and quality of the obtained solution, GSK is significantly better than, or at least comparable to, state-of-the-art approaches, with outstanding performance in solving optimization problems, especially in high dimensions.
Journal Article
An improved moth flame optimization algorithm based on modified dynamic opposite learning strategy
2023
The moth flame optimization (MFO) algorithm is a relatively new nature-inspired optimization algorithm based on the moth’s movement towards the moon. Premature convergence and convergence to local optima are its main demerits. To avoid these drawbacks, a modified dynamic opposite learning-based MFO algorithm (m-DMFO) is presented in this paper, incorporating a modified dynamic opposite learning (DOL) strategy. To validate its performance, the proposed m-DMFO algorithm is tested on twenty-three benchmark functions and the IEEE CEC’2014 test functions, and compared with a wide range of optimization algorithms. Moreover, the Friedman rank test, Wilcoxon rank test, convergence analysis, and diversity measurement have been conducted to measure the robustness of the proposed m-DMFO algorithm. The numerical results show that the proposed m-DMFO algorithm achieved superior results in more than 90% of cases and attains the best rank in both the Friedman and Wilcoxon rank tests. In addition, four engineering design problems have been solved by the suggested m-DMFO algorithm, on which it achieves extremely impressive results, illustrating that the algorithm is qualified for solving real-world problems. Analyses of the numerical results, diversity measures, statistical tests, and convergence results confirm the enhanced performance of the proposed m-DMFO algorithm.
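DOL strategies extend classical opposition-based learning; the following is a minimal sketch of the base idea only (the paper’s modified dynamic variant adds random, time-varying weights that are not shown here, and the function name is illustrative):

```python
def opposite(x, lb, ub):
    """Classical opposition-based learning: the opposite of a point x
    within bounds [lb, ub] is lb + ub - x, computed per dimension.
    Evaluating both x and its opposite doubles the chance of landing
    near the optimum early in the search."""
    return [l + u - xi for xi, l, u in zip(x, lb, ub)]
```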
Journal Article
Circle Search Algorithm: A Geometry-Based Metaheuristic Optimization Algorithm
by Jurado, Francisco; Turky, Rania A.; Tostado-Véliz, Marcos
in algorithms; Biology; circle search algorithm
2022
This paper presents a novel metaheuristic optimization algorithm inspired by the geometrical features of circles, called the circle search algorithm (CSA). The circle is the most well-known geometric object, with various features including diameter, center, perimeter, and tangent lines. The ratio between the radius and the tangent line segment is the orthogonal function of the angle opposite to the orthogonal radius. This angle plays an important role in the exploration and exploitation behavior of the CSA. To evaluate the robustness of the CSA in comparison to other algorithms, many independent experiments employing 23 famous functions and 3 real engineering problems were carried out. The statistical results revealed that the CSA succeeded in achieving the minimum fitness values for 21 out of the tested 23 functions, and the p-value was less than 0.05. The results evidence that the CSA converged to the minimum results faster than the comparative algorithms. Furthermore, high-dimensional functions were used to assess the CSA’s robustness, with statistical results revealing that the CSA is robust to high-dimensional problems. As a result, the proposed CSA is a promising algorithm that can be used to easily handle a wide range of optimization problems.
Journal Article
Theory and Applications of Robust Optimization
by Bertsimas, Dimitris; Brown, David B.; Caramanis, Constantine
in Algorithms; Analysis; Approximation
2011
In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
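For a flavor of the modeling power the survey discusses, consider the classical (Soyster-style) box-uncertainty case, sketched here as a hedged illustration rather than a result from the survey itself: a linear constraint whose coefficient vector is only known to lie in a box around a nominal value is replaced by its worst case over that box.

```latex
% Uncertain constraint: a^{\top} x \le b with
%   a \in U = \{\bar{a} + \operatorname{diag}(\delta)\,u : \|u\|_\infty \le 1\}.
% Maximizing the left-hand side over U yields the robust counterpart:
\bar{a}^{\top} x + \sum_{j} \delta_j \, |x_j| \;\le\; b
```

The absolute values can be linearized with auxiliary variables, which is one reason such robust counterparts remain computationally attractive.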
Journal Article
Glider snake optimizer (GSO): a nature-inspired metaheuristic algorithm for global and engineering optimization problems
by Alhussan, Amel Ali; El-kenawy, El-Sayed M.; Ibrahim, Abdelhameed
in Adaptation; Algorithms; Animals
2026
The rapid expansion of complex engineering and real-world optimization problems necessitates the development of efficient, adaptable, and computationally lightweight metaheuristic algorithms. In this study, a novel nature-inspired algorithm called glider snake optimization (GSO) is proposed, which draws behavioral inspiration from the gliding and serpentine locomotion patterns of arboreal snakes to enhance solution exploration and convergence control. The GSO algorithm incorporates a multi-segment movement mechanism, a flexible gliding path generator, and an elite guidance model to ensure an effective balance between exploration and exploitation. Extensive experimental validation is conducted using a comprehensive set of 23 classical benchmark functions, high-dimensional test cases (100D and 500D), the CEC 2019 benchmark suite, and several constrained engineering design problems. The results demonstrate that GSO outperforms or matches 13 state-of-the-art algorithms, including particle swarm optimization (PSO), the grey wolf optimizer (GWO), the whale optimization algorithm (WOA), and differential evolution (DE), in terms of accuracy, convergence speed, computational cost, and robustness. The algorithm also exhibits exceptional stability across parameter variations, as confirmed through sensitivity analysis and statistical significance testing. These findings highlight the potential of GSO as a powerful and efficient tool for solving complex optimization problems in both theoretical and practical domains. Additionally, GSO achieves leading performance on most benchmark functions, with error reductions of up to 90% compared with competing algorithms. GSO also demonstrates faster convergence and lower variance across repeated trials, confirming its robustness. These quantitative outcomes further reinforce the effectiveness of the proposed algorithm. The MATLAB and Python implementations of GSO are available at https://nimakhodadadi.com.
Journal Article
An adaptive snow ablation-inspired particle swarm optimization with its application in geometric optimization
2024
In response to the shortcomings of particle swarm optimization (PSO), such as low execution efficiency and difficulty in escaping local optima, this paper proposes a multi-strategy PSO method incorporating the snow ablation operation (SAO), known as SAO-MPSO. Firstly, Cubic initialization is performed on the particles to obtain a good initial environment. Subsequently, SAO and PSO are combined in parallel, and a balanced search mechanism led by multiple sub-populations is devised, significantly improving the search efficiency of the overall population. Finally, the degree-day method of SAO is introduced, and particles are endowed with memory of environmental changes to prevent premature convergence of PSO, while balancing the exploration and exploitation (ENE) capabilities in later phases. Adaptive parameters are used throughout this method in place of fixed parameters to improve robustness and adaptability. For a comprehensive analysis of SAO-MPSO, its good ENE ability is verified on CEC 2020 and CEC 2022, and the method is compared with existing improved PSO versions on both test sets. The results show that SAO-MPSO has certain advantages over similar improved algorithms. To further validate the strength of SAO-MPSO in dealing with nonlinear optimization problems (OPs) with strong constraints, a combined ball Wang-Ball (CBWB) curve is first constructed from the ball Wang-Ball (BWB) curve, and a construction method for CBWB curves that satisfy G¹ and G² continuity is derived. Then, with the energy minimization and scale parameters of the CBWB curve as the optimization objective and variables, respectively, a shape-optimization model that satisfies G² continuity is established. Finally, three numerical optimization examples based on this model are solved using SAO-MPSO and compared with 10 other methods. The results show that the energy obtained by SAO-MPSO is the smallest, verifying the effectiveness of the method as applied to shape OPs of CBWB curves.
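For context, here is the canonical PSO velocity/position update that SAO-MPSO modifies; the snow-ablation operation, sub-population mechanism, and adaptive parameters from the abstract are not shown, and the function name and default coefficients are illustrative:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update per dimension:
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v,
    with r1, r2 fresh uniform draws from [0, 1)."""
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        r1, r2 = random.random(), random.random()
        vn = w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
        new_v.append(vn)
        new_x.append(xi + vn)
    return new_x, new_v
```

With fixed w, c1, and c2 the swarm can stall on multimodal landscapes, which is the behavior the adaptive parameters in SAO-MPSO are meant to counter.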
Journal Article