Catalogue Search | MBRL
Explore the vast range of titles available.
36 result(s) for "Random opposition-based learning"
An Improved Hybrid Aquila Optimizer and Harris Hawks Algorithm for Solving Industrial Engineering Optimization Problems
2021
Aquila Optimizer (AO) and Harris Hawks Optimizer (HHO) are recently proposed meta-heuristic optimization algorithms. AO possesses strong global exploration capability but insufficient local exploitation ability; conversely, HHO has a strong exploitation phase but unsatisfactory exploration capability. Considering the complementary characteristics of these two algorithms, this paper proposes an improved hybrid of AO and HHO, named IHAOHHO, which combines a nonlinear escaping energy parameter with a random opposition-based learning strategy to improve search performance. First, combining the salient features of AO and HHO retains their valuable exploration and exploitation capabilities. Second, random opposition-based learning (ROBL) is added in the exploitation phase to improve local-optima avoidance. Finally, the nonlinear escaping energy parameter is used to better balance the exploration and exploitation phases of IHAOHHO. These two strategies effectively enhance the exploration and exploitation of the proposed algorithm. To verify its optimization performance, IHAOHHO is comprehensively analyzed on 23 standard benchmark functions, and its practicability is further highlighted by four industrial engineering design problems. Compared with the original AO and HHO and five state-of-the-art algorithms, the results show that IHAOHHO has superior performance and promising prospects.
Journal Article
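The random opposition-based learning step named in several of these abstracts has a simple standard form: for a candidate x in bounds [lb, ub], its random opposite is lb + ub - r*x with r drawn uniformly from (0, 1), and the fitter of the two points is kept. A minimal sketch (function names and the sphere example are ours, not the paper's):

```python
import random

def robl(x, lb, ub, rng=random):
    """Random opposition-based learning: map each dimension xi of a
    candidate to its random opposite lb + ub - r * xi, r ~ U(0, 1)."""
    return [l + u - rng.random() * xi for xi, l, u in zip(x, lb, ub)]

def robl_step(x, lb, ub, fitness, rng=random):
    """Greedy selection: keep whichever of x and its random opposite
    has the lower (better) fitness."""
    opp = robl(x, lb, ub, rng)
    return opp if fitness(opp) < fitness(x) else x

# Example: one ROBL step on the 2-D sphere function over [-10, 10]^2
sphere = lambda v: sum(t * t for t in v)
x0 = [9.0, -8.0]
x1 = robl_step(x0, [-10.0, -10.0], [10.0, 10.0], sphere)
```

Because the step is greedy, the returned point is never worse than the input, which is why papers here use it both to diversify populations and to escape local optima cheaply.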
An enhanced opposition-based African vulture optimizer for solving engineering design problems and global optimization
by
Lala, Himadri
,
Chandran, Vanisree
,
Mohapatra, Prabhujit
in
639/705/1041
,
639/705/1042
,
African vulture optimizer
2025
By combining opposition-based learning techniques with the conventional African Vulture Optimization (AVO) algorithm, this study offers a notable improvement in the handling of optimization problems. AVO has limitations: on extremely rough search spaces, it requires many iterations or function evaluations. To overcome this limitation, an enhanced opposition-based learning (EOBL) strategy is proposed that speeds up convergence while helping the algorithm escape local optima. Combining this new technique with AVO yields the Enhanced Opposition-based African Vulture Optimizer (EOBAVO). The performance of the suggested EOBAVO was evaluated through experiments using the CEC2005 and CEC2022 benchmark functions in addition to seven engineering challenges. Furthermore, statistical analyses, including the t-test and Wilcoxon rank-sum test, demonstrated that the proposed EOBAVO surpasses several of the leading algorithms currently in use. The results indicate that the proposed approach can be regarded as a competent and efficient solution for complex optimization challenges.
Journal Article
A Modified Group Teaching Optimization Algorithm for Solving Constrained Engineering Optimization Problems
by
Wen, Changsheng
,
Rao, Honghua
,
Liu, Qingxin
in
Algorithms
,
engineering problems
,
Evolutionary computation
2022
The group teaching optimization algorithm (GTOA) is a metaheuristic optimization algorithm inspired by the group teaching mechanism. Each student learns the knowledge conveyed in the teacher phase, but each student's autonomy is weak. This paper considers that each student has a different learning motivation: elite students have strong self-learning ability, while ordinary students have only general self-learning motivation. To address this, the paper proposes a modified GTOA (MGTOA) that introduces a learning motivation strategy and adds random opposition-based learning and a restart strategy to enhance the algorithm's global performance. To verify the optimization effect of MGTOA, 23 standard benchmark functions and the 30 test functions of IEEE Evolutionary Computation 2014 (CEC2014) are adopted. In addition, MGTOA is applied to six engineering problems for practical testing and achieves good results.
Journal Article
Improved aquila optimizer with mRMR for feature selection of high-dimensional gene expression data
2024
Accurate classification of gene expression data is crucial for disease diagnosis and drug discovery. However, gene expression data usually has a large number of features, which poses a challenge for accurate classification. In this paper, a novel feature selection method based on minimal redundancy maximal relevance (mRMR) and the aquila optimizer is proposed. It introduces the mRMR method in the population initialization stage to generate excellent initial populations and effectively improve population quality; it then uses a random opposition-based learning strategy to improve the diversity of the aquila population and accelerate convergence; and finally it introduces an inertia weight into the position update formula in the late iterations of the aquila optimizer to avoid falling into local optima and to improve the algorithm's ability to find the optimum. To verify the effectiveness of the proposed method, ten real gene expression datasets are selected and compared against several meta-heuristic algorithms. Experimental results show that the proposed method is significantly superior to the other meta-heuristic algorithms in terms of fitness value, classification accuracy, and the number of selected features. Compared with the original aquila optimizer, the average classification accuracy of the proposed method on KNN and SVM classifiers improves by 3.48–12.41% and 0.53–18.63%, respectively. The proposed method significantly reduces the feature dimension of gene expression data, retains important features, and obtains higher classification accuracy, providing a new method and idea for feature selection on gene expression data.
Journal Article
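mRMR itself is independent of the optimizer: it greedily picks features that are relevant to the target but not redundant with features already chosen. The paper above uses it to seed the population; the sketch below substitutes a Pearson-correlation score for mutual information to stay self-contained, so it illustrates the greedy relevance-minus-redundancy scheme rather than the paper's exact scoring:

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length sequences (0.0 if degenerate)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def mrmr(features, target, k):
    """Greedy mRMR: at each step pick the feature maximizing
    relevance(f, target) - mean redundancy(f, already-selected)."""
    selected = []
    remaining = list(range(len(features)))
    while remaining and len(selected) < k:
        def score(j):
            rel = abs(pearson(features[j], target))
            red = (sum(abs(pearson(features[j], features[s])) for s in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Feature 0 tracks the target perfectly, so it is chosen first;
# feature 2 duplicates it and is penalized as redundant thereafter.
target = [1.0, 2.0, 3.0, 4.0, 5.0]
feats = [[1.0, 2.0, 3.0, 4.0, 5.0],
         [1.0, 1.0, 1.0, 1.0, 1.0],
         [1.0, 2.0, 3.0, 4.0, 5.0]]
sel = mrmr(feats, target, 2)
```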
Convergence analysis of flow direction algorithm and its improvement
2023
Flow direction algorithm (FDA) is a new physics-based optimization algorithm for solving global optimization problems. Although the FDA has shown effectiveness in many areas, it has lacked rigorous theoretical guarantees. This paper first proves that FDA is globally convergent with probability 1 by establishing a Markov process model. Furthermore, to enhance the FDA's exploration and exploitation abilities, we propose an improved FDA algorithm (IFDA) by introducing random opposition-based learning and an adaptive neighbour generation strategy. Finally, extensive experiments and statistical tests against several state-of-the-art algorithms on the classical benchmark functions, the CEC 2019 benchmark functions, and a wireless sensor network coverage optimization problem demonstrate the proposed algorithm's efficiency and effectiveness.
Journal Article
An Improved Aquila Optimizer Based on Search Control Factor and Mutations
2022
The Aquila Optimizer (AO) algorithm is a meta-heuristic algorithm with excellent performance, although it may be insufficient or tend to fall into local optima as the complexity of real-world optimization problems increases. To overcome the shortcomings of AO, we propose an improved Aquila Optimizer algorithm (IAO) which improves the original AO via three strategies. First, to improve the optimization process, we introduce a search control factor (SCF) whose absolute value decreases as the iterations progress, improving the hunting strategies of AO. Second, the random opposition-based learning (ROBL) strategy is added to enhance the algorithm's exploitation ability. Finally, the Gaussian mutation (GM) strategy is applied to improve the exploration phase. To evaluate its optimization performance, the IAO was tested on 23 benchmark functions and the CEC2019 test functions, and then applied to four real-world engineering problems. The experimental results, in comparison with AO and other well-known algorithms, validate the superiority of the proposed IAO.
Journal Article
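Gaussian mutation as used in abstracts like the one above is a standard operator: add zero-mean Gaussian noise to each dimension of a candidate and clip the result back to the search bounds. A generic sketch (the scale `sigma` is our illustrative choice, not the paper's setting):

```python
import random

def gaussian_mutation(x, lb, ub, sigma=0.1, rng=random):
    """Perturb each dimension with zero-mean Gaussian noise whose scale
    is proportional to that dimension's bound range, then clip to bounds."""
    out = []
    for xi, l, u in zip(x, lb, ub):
        xi_new = xi + rng.gauss(0.0, sigma * (u - l))
        out.append(min(max(xi_new, l), u))
    return out

# Mutate a point in [-5, 5]^3
y = gaussian_mutation([0.0, 2.5, -4.9], [-5.0] * 3, [5.0] * 3)
```

Tying the noise scale to the bound range keeps the mutation meaningful regardless of how differently the dimensions are scaled.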
Improved African Vulture Optimization Algorithm Based on Random Opposition-Based Learning Strategy
2024
This paper proposes an improved African vulture optimization algorithm (IROAVOA), which integrates the random opposition-based learning strategy and disturbance factor to solve problems such as the relatively weak global search capability and the poor ability to balance exploration and exploitation stages. IROAVOA is divided into two parts. Firstly, the random opposition-based learning strategy is introduced in the population initialization stage to improve the diversity of the population, enabling the algorithm to more comprehensively explore the potential solution space and improve the convergence speed of the algorithm. Secondly, the disturbance factor is introduced at the exploration stage to increase the randomness of the algorithm, effectively avoiding falling into the local optimal solution and allowing a better balance of the exploration and exploitation stages. To verify the effectiveness of the proposed algorithm, comprehensive testing was conducted using the 23 benchmark test functions, the CEC2019 test suite, and two engineering optimization problems. The algorithm was compared with seven state-of-the-art metaheuristic algorithms in benchmark test experiments and compared with five algorithms in engineering optimization experiments. The experimental results indicate that IROAVOA achieved better mean and optimal values in all test functions and achieved significant improvement in convergence speed. It can also solve engineering optimization problems better than the other five algorithms.
Journal Article
Improved War Strategy Optimization with Extreme Learning Machine for Health Data Classification
2025
Classification of diseases is of great importance for early diagnosis and effective treatment processes. However, the etiological factors of some common diseases complicate the classification process. Therefore, classifying health datasets with artificial neural networks can play an important role in the diagnosis and follow-up of diseases. In this study, disease classification performance was examined using the Extreme Learning Machine (ELM), one of the machine learning methods, and an improved, opposition-based WSO algorithm (IWSO) with a random opposition-based learning strategy is proposed. Common health datasets, Breast, Bupa, Dermatology, Diabetes, Hepatitis, Lymphography, Parkinsons, SAheart, SPECTF, Vertebral, and WDBC, are used in the experimental studies. Performance is evaluated with accuracy, precision, sensitivity, specificity, and F1-score metrics. The proposed IWSO-based ELM model demonstrated better classification success than the ALO, DA, PSO, GWO, WSO, and OWSO metaheuristics and the LightGBM, XGBoost, SVM, Neural Network (MLP), and CNN machine and deep learning methods. In the Wilcoxon test, IWSO differed significantly from the other algorithms (p < 0.05), and in the Friedman test it ranked first among them. The results reveal that the IWSO approach developed with ELM is an effective method for the accurate diagnosis of common diseases.
Journal Article
Short term wind speed prediction based on CEESMDAN and improved seagull optimization kernel extreme learning machine
by
Shi, Hongyu
,
Qin, Xiwen
,
Zhang, Siqi
in
Algorithms
,
Convergence
,
Earth and Environmental Science
2025
Accurate wind speed predictions are crucial for the planning, operation, and energy management of wind farms. In this paper, we propose a novel wind speed prediction model, CEESMDAN-LNR-SOA-KELM. Firstly, we employ the CEESMDAN decomposition method to extract features from the original wind speed data, capturing the underlying characteristics of the data. Secondly, we apply a nonlinear treatment to the convergence factor A of the seagull optimization algorithm (SOA) to better adapt to the complexity and diversity of the problem, thereby enhancing the algorithm's convergence speed. Additionally, we introduce a random opposition-based learning strategy to effectively prevent the SOA algorithm from getting stuck in local optima. We further optimize the parameters of KELM using LNR-SOA. The results of function optimization demonstrate that the proposed improvement strategy significantly enhances the parameter optimization capability of the SOA algorithm. The wind speed data from the Sotavento Galicia wind farm in Spain were used as the subject of the numerical experiments. The experimental results indicate that the model proposed in this paper demonstrates higher accuracy and reliability in wind speed prediction compared to the comparative models. It provides an effective forecasting tool for the wind energy industry and meteorological predictions.
Journal Article
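The "nonlinear treatment" of a convergence factor such as SOA's A, mentioned above, typically means replacing a linear decay from an initial value to zero with a curved schedule, shifting how long the algorithm explores before it exploits. A hedged sketch (the power-curve form and the values `fc` and `p` are illustrative, not the paper's exact schedule):

```python
def linear_factor(t, t_max, fc=2.0):
    """Baseline SOA-style control parameter: decays linearly from fc to 0."""
    return fc - fc * t / t_max

def nonlinear_factor(t, t_max, fc=2.0, p=2.0):
    """Nonlinear variant: follows a power curve, staying larger for more
    of the run (longer exploration) before dropping to 0 at t = t_max."""
    return fc * (1.0 - (t / t_max) ** p)
```

With p > 1 the nonlinear factor stays above the linear one at every mid-run iteration while matching it at the endpoints, which is the usual rationale for such schedules.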
Improved Honey Badger Algorithm and Its Application to K-Means Clustering
2025
As big data continues to evolve, cluster analysis remains important. Among clustering methods, the K-means algorithm is the most widely used, but the random selection of initial cluster centers can cause unstable clustering results. In this paper, an improved honey badger optimization algorithm is proposed: (1) the population is initialized using sin chaos so that it is uniformly distributed; (2) the density factor is improved to enhance the optimization accuracy of the population; (3) a nonlinear inertia weight factor is introduced to prevent honey badger individuals from relying on the behavior of past individuals during position updating; and (4) to improve the diversity of solutions, random opposition-based learning is performed on the optimal individuals. Experiments on 23 benchmark test functions show that the improved algorithm outperforms the comparison algorithms. Finally, the improved algorithm is applied to K-means clustering, with experiments on three data sets from the UCI repository. The results show that the improved honey-badger-optimized K-means algorithm improves the clustering effect over the traditional K-means algorithm.
Journal Article
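When a metaheuristic like the improved honey badger algorithm above is coupled to K-means, each candidate solution typically encodes a full set of cluster centers, and its fitness is the within-cluster sum of squared errors that K-means itself minimizes. A minimal sketch of that fitness (the data and names are illustrative):

```python
def sse(points, centroids):
    """Within-cluster sum of squared errors: each point is charged the
    squared distance to its nearest centroid."""
    return sum(
        min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids)
        for p in points
    )

# Two well-separated clusters: centroids placed near them score far
# better than centroids dropped between them.
pts = [(0.0, 0.0), (0.1, 0.2), (10.0, 10.0), (9.9, 10.1)]
good = [(0.05, 0.1), (9.95, 10.05)]
bad = [(5.0, 5.0), (5.1, 5.1)]
```

Minimizing this fitness over centroid positions is what lets an optimizer supply stable initial centers in place of K-means' random initialization.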