Catalogue Search | MBRL
Explore the vast range of titles available.
19 result(s) for "Large-scale multiobjective optimization"
Weak Population–Empowered Large‐Scale Multiobjective Immune Algorithm
2025
Multiobjective immune optimization algorithms (MOIAs) utilize the principle of clonal selection, iteratively evolving by replicating a small number of superior solutions to optimize decision vectors. However, this method often leads to a lack of diversity and is particularly ineffective when facing large‐scale optimization problems. Moreover, an overemphasis on elite solutions may result in a large number of redundant offspring, reducing evolutionary efficiency. By delving into the causes of these issues, we find that a key factor is that existing algorithms overlook the role of weak solutions during the evolutionary process. With this in mind, we propose a weak population–empowered large‐scale multiobjective immune algorithm (WP–MOIA). The core of this algorithm is to construct, in addition to the traditional elite population, a cooperative evolutionary population based on a portion of the remaining solutions, referred to as the weak population. During the evolution, both populations work together: the elite population maximizes its advantageous status for local searches, focusing on exploitation, while the weak population seeks greater variation to escape its disadvantaged position, engaging in broader exploration. At the same time, the sizes of both populations are dynamically adjusted to collaboratively maintain the balance of evolution. Through comparisons with nine state‐of‐the‐art multiobjective evolutionary algorithms (MOEAs) and four powerful MOIAs on 30 benchmark problems, the proposed algorithm demonstrates superior performance in both small‐scale and large‐scale multiobjective optimization problems (MOPs), and exhibits better convergence efficiency. On large‐scale MOPs especially, the new algorithm surpasses nearly all 13 of the advanced algorithms in the comparison.
Journal Article
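The elite/weak cooperation the abstract describes can be illustrated with a minimal single-objective sketch (the function names, mutation scales, and scalar fitness are assumptions for illustration; the paper's actual operators are multiobjective and immune-inspired):

```python
import random

def evolve_step(pop, fitness, elite_frac=0.3, rng=random.Random(0)):
    """One cooperative generation: elite solutions receive small mutations
    (exploitation) while weak solutions receive large ones (exploration)."""
    order = sorted(range(len(pop)), key=lambda i: fitness(pop[i]))
    n_elite = max(1, int(elite_frac * len(pop)))
    offspring = []
    for rank, i in enumerate(order):
        scale = 0.05 if rank < n_elite else 0.5  # weak solutions vary more
        offspring.append([x + rng.gauss(0.0, scale) for x in pop[i]])
    merged = pop + offspring                     # environmental selection
    merged.sort(key=fitness)
    return merged[:len(pop)]

# Toy usage: minimise the sphere function.
sphere = lambda x: sum(v * v for v in x)
init_rng = random.Random(1)
pop = [[init_rng.uniform(-5, 5) for _ in range(10)] for _ in range(20)]
init_best = min(map(sphere, pop))
for _ in range(50):
    pop = evolve_step(pop, sphere)
final_best = min(map(sphere, pop))
```

In WP–MOIA itself the two population sizes are adjusted dynamically; here the split is a fixed fraction to keep the sketch short.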
Improving Search Accuracy in Large-Scale Biased Multiobjective Optimization Through Local Search
2025
Biased multiobjective optimization problems pose a challenge for evolutionary algorithms in obtaining high-accuracy solutions, and as the number of decision variables increases, this challenge becomes increasingly difficult to overcome. To address this issue, we propose a three-particle-based local search method (TPS) for multiobjective evolutionary algorithms (MOEAs). The main concept is to use three particles to maintain three equidistant values of a decision variable and gradually approach the local optimal value by adaptively adjusting their differences. Specifically, the TPS maintains a population with three particles and uses five proposed population state-transition operations to gradually move these three particles to a better state. A local optimal value can be obtained when these three particles become indistinguishable. The TPS is then embedded into an MOEA to form a new algorithm, called MOEA/TPS. To enable the TPS to search along the convergence and diversity directions, the two aggregation functions of the target problem are alternately used. Compared with twelve competitive MOEAs on various biased test problems with 30 to 2000 decision variables, our proposed algorithm demonstrates significant advantages in obtaining high-accuracy solutions.
Journal Article
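The three-equidistant-particle idea in MOEA/TPS can be sketched on a single decision variable (this ternary-search-style loop is an illustrative reading, not the paper's five state-transition operations):

```python
def three_particle_search(f, lo, hi, tol=1e-6, max_iter=200):
    """Keep three equally spaced values of one decision variable, recentre
    on the best one, and halve the spacing whenever the centre particle
    wins; the particles become indistinguishable near a local optimum."""
    centre = 0.5 * (lo + hi)
    step = 0.25 * (hi - lo)
    for _ in range(max_iter):
        particles = [centre - step, centre, centre + step]
        best = min(particles, key=f)
        if best == centre:
            step *= 0.5                      # particles agree: tighten spacing
        else:
            centre = min(max(best, lo), hi)  # shift toward the better particle
        if step < tol:
            break
    return centre

# Usage: locate the minimum of a 1-D slice of an aggregation function.
x = three_particle_search(lambda t: (t - 1.3) ** 2, -5.0, 5.0)
```

In MOEA/TPS this local search alternates between two aggregation functions of the target problem; a single quadratic stands in for them here.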
Evolutionary Neural Architecture Search and Its Applications in Healthcare
by Cao, Bin; Lyu, Zhihan; Liu, Xin
in Artificial neural networks, distributed parallelism, Evolutionary algorithms
2024
Most neural network architectures are based on human experience, which requires a long and tedious trial-and-error process. Neural architecture search (NAS) attempts to detect effective architectures without human intervention. Evolutionary algorithms (EAs) for NAS can find better solutions than human-designed architectures by exploring a large search space for possible architectures. Using multiobjective EAs for NAS, optimal neural architectures that meet various performance criteria can be explored and discovered efficiently. Furthermore, hardware-accelerated NAS methods can improve the efficiency of NAS. While existing reviews have mainly focused on different strategies to complete NAS, few studies have explored the use of EAs for NAS. In this paper, we summarize and explore the use of EAs for NAS, as well as large-scale multiobjective optimization strategies and hardware-accelerated NAS methods. NAS performs well in healthcare applications, such as medical image analysis, classification of disease diagnosis, and health monitoring. EAs for NAS can automate the search process and optimize multiple objectives simultaneously in a given healthcare task. Deep neural networks have been used successfully in healthcare, but they lack interpretability. Medical data is highly sensitive, and privacy leaks are frequently reported in the healthcare industry. To solve these problems, we propose an interpretable neuroevolution framework for healthcare, based on federated learning, that addresses search efficiency and privacy protection. We also point out future research directions for evolutionary NAS. Overall, for researchers who want to use EAs to optimize NNs in healthcare, we analyze the advantages and disadvantages of doing so to provide detailed guidance, and propose an interpretable privacy-preserving framework for healthcare applications.
Journal Article
A Stable Large-Scale Multiobjective Optimization Algorithm with Two Alternative Optimization Methods
by Liu, Tianyu; Zhu, Junjie; Cao, Lei
in Algorithms, Analysis, Bayesian-based parameter adjusting
2023
For large-scale multiobjective evolutionary algorithms based on the grouping of decision variables, the challenge is to design a stable grouping strategy to balance convergence and population diversity. This paper proposes a large-scale multiobjective optimization algorithm with two alternative optimization methods (LSMOEA-TM). In LSMOEA-TM, two alternative optimization methods, which adopt two grouping strategies to divide decision variables, are introduced to efficiently solve large-scale multiobjective optimization problems. Furthermore, this paper introduces a Bayesian-based parameter-adjusting strategy to reduce computational costs by optimizing the parameters in the proposed two alternative optimization methods. The proposed LSMOEA-TM and four efficient large-scale multiobjective evolutionary algorithms have been tested on a set of benchmark large-scale multiobjective problems, and the statistical results demonstrate the effectiveness of the proposed algorithm.
Journal Article
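Two generic grouping strategies of the kind LSMOEA-TM alternates between might look like this (a sketch only; the paper's actual strategies and its Bayesian parameter-adjusting step are not reproduced):

```python
import random

def ordered_groups(n_vars, n_groups):
    """Split decision-variable indices into consecutive blocks."""
    size = -(-n_vars // n_groups)          # ceiling division
    return [list(range(i, min(i + size, n_vars)))
            for i in range(0, n_vars, size)]

def random_groups(n_vars, n_groups, seed=0):
    """Split decision-variable indices into shuffled, interleaved groups."""
    idx = list(range(n_vars))
    random.Random(seed).shuffle(idx)
    return [idx[g::n_groups] for g in range(n_groups)]
```

Each group then becomes a small subproblem optimized while the remaining variables are held fixed; alternating strategies hedges against a single grouping that splits interacting variables.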
A large-scale multiobjective evolutionary algorithm with overlapping decomposition and adaptive reference point selection
2023
In many practical applications, the large decision spaces of large-scale multiobjective optimization problems hinder the convergence of evolutionary algorithms. Using a divide-and-conquer strategy to decompose a large-scale multiobjective problem into subproblems and optimize them collaboratively is an effective approach. However, direct interactions between decision variables may induce many indirect interactions, which make complex high-dimensional problems impossible to decompose successfully using existing decomposition techniques. This paper proposes a multiobjective evolutionary algorithm with overlapping decomposition and adaptive reference point selection (MOEA-ODAR) for solving large-scale multiobjective problems. First, a decision variable overlap decomposition approach is suggested to group decision variables into several subcomponents. An adaptive resource allocation ensemble optimization method is then proposed to allocate corresponding resources to subcomponents with different structures. Finally, an adaptive reference point selection method based on Pareto shape estimation is designed to optimize the specific subcomponents. A theoretical analysis of the correctness of the overlapping decomposition of decision variables and the collaborative optimization is presented. The algorithm is compared with recently proposed state-of-the-art large-scale multiobjective optimization algorithms on many test problems. The experimental results show that the proposed algorithm performs better in terms of convergence, population distribution, and computational efficiency. In addition, the superiority of the algorithm on large-scale many-objective problems is verified.
Journal Article
A parallel large-scale multiobjective evolutionary algorithm based on two-space decomposition
2025
Decomposition is an effective and popular strategy used by evolutionary algorithms to solve multiobjective optimization problems (MOPs). It can reduce the difficulty of directly solving MOPs, increase the diversity of the obtained solutions, and facilitate parallel computing. However, with the increase of the number of decision variables, the performance of multiobjective evolutionary algorithms (MOEAs) often deteriorates sharply. The advantages of the decomposition strategy are not fully exploited when solving such large-scale MOPs (LSMOPs). To this end, this paper proposes a parallel MOEA based on two-space decomposition (TSD) to solve LSMOPs. The main idea of the algorithm is to decompose the objective space and decision space into multiple subspaces, each of which is expected to contain some complete Pareto-optimal solutions, and then use multiple populations to conduct parallel searches in these subspaces. Specifically, the objective space decomposition approach adopts the traditional reference vector-based method, whereas the decision space decomposition approach adopts the proposed method based on a diversity design subspace (DDS). The algorithm uses a message passing interface (MPI) to implement its parallel environment. The experimental results demonstrate the effectiveness of the proposed DDS-based method. Compared with the state-of-the-art MOEAs in solving various benchmark and real-world problems, the proposed algorithm exhibits advantages in terms of general performance and computational efficiency.
Journal Article
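The objective-space half of the decomposition is the traditional reference-vector method; a minimal sketch of routing solutions to subspaces by angle (the decision-space DDS and the MPI layer are omitted here):

```python
from math import sqrt

def assign_to_subspaces(objectives, ref_vectors):
    """Assign each objective vector to the reference vector with the
    smallest angle (largest cosine); each reference vector then defines
    one objective subspace searched by its own population."""
    def cos(u, v):
        num = sum(a * b for a, b in zip(u, v))
        return num / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))
    buckets = {i: [] for i in range(len(ref_vectors))}
    for f in objectives:
        i = max(range(len(ref_vectors)), key=lambda k: cos(f, ref_vectors[k]))
        buckets[i].append(f)
    return buckets
```

Objective vectors are assumed non-zero (translated above the ideal point, as is usual for reference-vector methods).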
A dual decomposition strategy for large-scale multiobjective evolutionary optimization
by Yang, Cuicui; Wang, Peike; Ji, Junzhong
in Artificial Intelligence, Computational Biology/Bioinformatics, Computational Science and Engineering
2023
Multiobjective evolutionary algorithms (MOEAs) have received much attention in multiobjective optimization in recent years due to their practicality. With limited computational resources, most existing MOEAs cannot efficiently solve large-scale multiobjective optimization problems (LSMOPs) that widely exist in the real world. This paper innovatively proposes a dual decomposition strategy (DDS) that can be embedded into many existing MOEAs to improve their performance in solving LSMOPs. Firstly, the outer decomposition uses a sliding window to divide large-scale decision variables into overlapped subsets of small-scale ones. A small-scale multiobjective optimization problem (MOP) is generated every time the sliding window slides. Then, once a small-scale MOP is generated, the inner decomposition immediately creates a set of global direction vectors to transform it into a set of single-objective optimization problems (SOPs). Finally, all SOPs are optimized by adopting a block coordinate descent strategy, ensuring the solution’s integrity and improving the algorithm’s performance to some extent. Comparative experiments on benchmark test problems with seven state-of-the-art evolutionary algorithms and a deep learning-based algorithm framework have shown the remarkable efficiency and solution quality of the proposed DDS. Meanwhile, experiments on two real-world problems with up to 3072 decision variables show that DDS achieves the best performance, by a margin of at least one order of magnitude.
Journal Article
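The sliding-window outer decomposition can be sketched directly (the window and stride values are illustrative; a window larger than the stride is what produces the overlapped subsets the abstract describes):

```python
def sliding_window_subsets(n_vars, window, stride):
    """Slide a fixed-size window over the decision-variable indices,
    yielding overlapped small-scale subsets; each subset defines one
    small-scale MOP for the inner decomposition to handle."""
    subsets = []
    start = 0
    while start < n_vars:
        subsets.append(list(range(start, min(start + window, n_vars))))
        if start + window >= n_vars:
            break
        start += stride
    return subsets
```

For example, 10 variables with a window of 4 and stride of 2 give four subsets whose neighbours share two variables each.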
An adjoint feature-selection-based evolutionary algorithm for sparse large-scale multiobjective optimization
2025
Sparse large-scale multiobjective optimization problems (sparse LSMOPs) are characterized by an enormous number of decision variables, and their Pareto optimal solutions consist of a majority of decision variables with zero values. This property of sparse LSMOPs presents a great challenge in terms of how to rapidly and precisely search for Pareto optimal solutions. To deal with this issue, this paper proposes an adjoint feature-selection-based evolutionary algorithm tailored for tackling sparse LSMOPs. The proposed optimization strategy combines two distinct feature selection approaches. Specifically, the paper introduces the sequential forward selection approach to investigate independent sparse distribution, denoting it as the best sequence of decision variables for generating a high-quality initial population. Furthermore, it introduces the Relief approach to determine the relative sparse distribution, identifying crucial decisive variables with dynamic updates to guide the population in a promising evolutionary direction. Experiments are conducted on eight benchmark problems and two real-world problems, and experimental results verify that the proposed algorithm outperforms the existing state-of-the-art evolutionary algorithms for solving sparse LSMOPs.
Journal Article
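A score-guided sparse initialization of the kind the abstract describes could look like this (the scores would come from the feature-selection pass; here they are simply given, and the whole construction is an illustrative assumption, not the paper's procedure):

```python
import random

def sparse_init(pop_size, scores, rng=random.Random(0)):
    """Switch each decision variable on with probability proportional to
    its feature-selection score, so most variables stay exactly zero and
    high-scoring variables dominate the initial population."""
    top = max(scores)
    return [[rng.uniform(0.1, 1.0) if rng.random() < s / top else 0.0
             for s in scores]
            for _ in range(pop_size)]
```

Dynamic updates to the scores during evolution, as in the Relief-guided phase, would simply re-bias the same sampling.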
Evolutionary Multiobjective Optimization with Endmember Priori Strategy for Large-Scale Hyperspectral Sparse Unmixing
by Xie, Fei; Wang, Zhao; Li, Peng
in Evolutionary algorithms, Genetic algorithms, Hyperspectral imaging
2021
Mixed pixels inevitably appear in the hyperspectral image due to the low resolution of the sensor and the mixing of ground objects. Sparse unmixing, as an emerging method to solve the problem of mixed pixels, has received extensive attention in recent years due to its robustness and high efficiency. In theory, sparse unmixing is essentially a multiobjective optimization problem. The sparse endmember term and the reconstruction error term can be regarded as two objectives to optimize simultaneously, and a series of nondominated solutions can be obtained as the final solution. However, the large-scale spectral library poses a challenge because of its high-dimensional collection of spectra: it is difficult to accurately extract a few active endmembers and estimate their corresponding abundances from hundreds of spectral features. In order to solve this problem, we propose an evolutionary multiobjective hyperspectral sparse unmixing algorithm with an endmember priori strategy (EMSU-EP) to solve the large-scale sparse unmixing problem. Each single endmember in the spectral library is used in turn to reconstruct the hyperspectral image, and a corresponding score for each endmember is obtained. Then the endmember scores are used as prior knowledge to guide the generation of the initial population and the new offspring. Finally, a series of nondominated solutions are obtained by nondominated sorting and crowding distance calculation. Experiments on two benchmark large-scale simulated datasets demonstrate the effectiveness of the proposed algorithm.
Journal Article
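The single-endmember scoring prior can be sketched with a closed-form scalar-abundance fit (a deliberate simplification of whatever reconstruction the paper actually uses; plain lists stand in for the image cube):

```python
def endmember_scores(library, pixels):
    """Score each library endmember by how well it alone reconstructs the
    image: fit the best non-negative scalar abundance per pixel in closed
    form and accumulate the residual error (higher score = better fit)."""
    scores = []
    for e in library:
        ee = sum(v * v for v in e)
        err = 0.0
        for y in pixels:
            a = max(0.0, sum(u * v for u, v in zip(e, y)) / ee)
            err += sum((a * u - v) ** 2 for u, v in zip(e, y))
        scores.append(-err)
    return scores
```

The resulting scores can then bias both the initial population and offspring generation toward the endmembers most likely to be active.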
Large-scale multiobjective competitive swarm optimizer algorithm based on regional multidirectional search
2025
Competitive swarm optimizer (CSO) based on multidirectional search plays a crucial role in addressing large-scale multiobjective optimization problems (LSMOPs). However, relying solely on uniform or cluster partitioning of the objective space for sampling, along with two search directions constructed with upper and lower boundaries of global variables, sometimes lacks consideration of regional information. This results in an inefficient search and hinders the global convergence of the algorithm. To solve these problems, this study proposes a large-scale multiobjective competitive swarm optimizer algorithm based on regional multidirectional search (AMSLMOEA). Firstly, an adaptive objective space partitioning method based on the evolutionary state of the population is designed to enhance the adaptability of partitioning. Secondly, an individual multidirectional search strategy is introduced. Considering the algorithm’s computational complexity, the strategy selects the optimal individual within each subregion and constructs four-directional search vectors based on the lower limit of the global decision variables and the upper limit of the individual decision variables within the subregion. To validate the effectiveness of AMSLMOEA, the performance is tested on four benchmark function sets. The results demonstrate that AMSLMOEA outperforms the vast majority of the compared algorithms in terms of the IGD and HV metrics.
Journal Article
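One plausible reading of the four-directional search vectors built from the global lower limit and a subregion individual's upper limit (this construction is an assumption for illustration, not the paper's exact operator):

```python
def four_direction_vectors(best, global_lo, region_hi):
    """Build four search directions for the best individual of a subregion:
    toward and away from the global lower bound, and toward and away from
    the subregion's upper bound, per decision variable."""
    toward_lo = [l - x for x, l in zip(best, global_lo)]
    toward_hi = [h - x for x, h in zip(best, region_hi)]
    return [toward_lo, [-d for d in toward_lo],
            toward_hi, [-d for d in toward_hi]]
```

Particles losing a pairwise competition would then learn along one of these four directions instead of the usual two boundary-based directions.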