A new competitive multiverse optimization technique for solving single‐objective and multiobjective problems

The development of effective algorithms for solving global optimization problems has recently been drawing the research community's attention. A number of optimization algorithms have been suggested that mimic a particular biological process or imitate natural evolution. In this work, a novel population-based optimization technique called the competitive multiverse optimizer (CMVO) is proposed for solving global optimization problems. This method is fundamentally inspired by the multiverse optimizer (MVO) algorithm but uses a different framework. Our basic idea is to inject a pairwise competition mechanism between universes and to adopt a novel update strategy that makes universes learn from the winner. In CMVO, the mechanism of updating universes is not confined to the optimal universe; instead, a bicompetition scheme is applied at each generation in which the universe that loses a competition learns from the winner, unlike MVO, in which all universes learn from the optimal universe. The main idea of this work is to raise the exploration rate of the search space through pairwise competition and to improve the exploitation ability through learning from the winner. This work also presents a multiobjective version of CMVO, called MOCMVO, with a simple structure that converges rapidly toward the Pareto front. The performance of the proposed method in single-objective optimization is demonstrated on a large number of standard mathematical benchmark problems. The results obtained confirm the superior overall performance of CMVO in terms of solution quality, computational efficiency, and convergence speed relative to many state-of-the-art metaheuristic algorithms. In multiobjective optimization, the proposed MOCMVO is evaluated and compared with other multiobjective algorithms on 10 multiobjective benchmarks involving five unconstrained, three constrained, and two engineering design problems.
The experimental results of the proposed method in multiobjective optimization demonstrate its competitiveness in terms of quantitative and qualitative measures.


K E Y W O R D S
global numerical optimization, metaheuristics, multiobjective optimization, multiverse optimization, Pareto optimal solutions, single-objective optimization

INTRODUCTION

The never-ending search for the best solutions makes optimization a central concern for engineers, especially for problems that involve a huge number of parameters. Optimization plays an important role in many research areas, such as computer science, mathematics, and economics. The need to solve such engineering problems makes optimization a desirable element in applied modeling. Optimization refers to the process of analyzing, modeling, and solving analytical or numerical problems to determine the solutions that achieve a quantitative goal while considering potential constraints. 1 Many real-life problems can be formulated as an optimization program that aims to minimize or maximize a cost. 2 For a given problem, a mathematical model is a system that describes the structure of, and relationships among, its significant variables using mathematical concepts.
Optimization algorithms in wide use today fall into two main types: stochastic and deterministic. 3 Stochastic algorithms mimic phenomena in nature through translation rules. Their popularity across many applications is due to properties that are unavailable in deterministic algorithms. Deterministic algorithms use fixed rules and a given initial starting point to move from one solution to another. These algorithms behave predictably: they always produce the same output for a particular input. Stochastic algorithms are widely used and have been successful in solving many complex practical problems that are difficult to solve with deterministic algorithms. 4 Owing to the randomness in their simple computational concepts, stochastic methods may produce different outputs on different runs over the same input. Therefore, the results of these techniques are unpredictable and can be reached not in a single way but in multiple ways.
Stochastic algorithms are further classified into two types: heuristic algorithms and metaheuristic algorithms. A heuristic, as the name suggests, finds solutions by trial and error. 5 Metaheuristic algorithms refer to the family of algorithms that use nature-inspired mechanisms and have many interesting features. Randomization is a fundamental feature of modern search: these algorithms use random walks and random numbers when generating and evaluating potential solutions. Thus, a stochastic optimization algorithm is generally unpredictable due to randomness. 5 No optimization algorithm achieves absolute success on most of the problems currently being posed, but stochastic algorithms can reach the optimal solution much more easily than deterministic algorithms.
Metaheuristic algorithms can be grouped by the number of individuals used at the same time, giving two main classes: individual based and population based. 6 In the individual-based class, a single candidate is produced for a specific problem and is improved during the iterative process (eg, trajectory methods). 7 By contrast, the population-based class searches for a solution by evolving a set of candidate solutions, possibly with parallel computing. This class can be generalized to include methods that describe, reproduce, and compare candidate solutions that are subjected to selective pressure and random changes in order to evolve toward an improved solution. 8 The randomness strategies of population-based algorithms not only improve the chance of reaching the global optimum but also help avoid stagnation in a local optimum. However, they require specific mechanisms and several tuning parameters to maintain the balance between exploitation and exploration.
The most popular techniques have elicited considerable attention in the search for solutions to optimization problems. The most popular individual-based algorithms are hill climbing and simulated annealing (SA), 9 whereas the most popular population-based algorithms are particle swarm optimization (PSO), 10 ant colony optimization (ACO), 11 and the genetic algorithm (GA). 12 Population-based metaheuristics have some advantages over individual-based algorithms: they have a high capability to explore, can reduce the probability of getting stuck in local optima, and can cover all promising regions of the search space through information exchange within the population. 13 The main motivation for designing metaheuristic algorithms is to remedy many challenging complex (NP-hard) optimization problems on which traditional optimization methods fail. In other words, an approach may be suitable for solving particular problems but unsuitable for others. This is especially true as global optimization problems have become more and more complex, with nonlinear and nondifferentiable search spaces that involve unimodal functions, rotated and shifted multimodal functions, and hybrid composition functions. Hence, as complex optimization problems become increasingly difficult, newer and better optimization algorithms are sorely required. Metaheuristic algorithms owe their popularity to the following properties:
• Metaheuristic algorithms are characterized by simple mathematical models that simulate simple natural phenomena such as social grouping, evolutionary concepts, and animal behavior. 14 Simplicity is usually associated with the ease with which programmers can emulate various concepts found in nature, and this feature is evident in the vast number of nature-derived algorithms currently present in the literature. Metaheuristic algorithms are also simple to extend: two or more metaheuristics can be hybridized, or a metaheuristic can be combined with traditional optimization techniques.
• Metaheuristic algorithms are more versatile than other optimization methods due to their black-box character, which requires only a few assumptions about the underlying objective function. The output for a particular input is difficult to predict; only the input and output states matter. The versatility of metaheuristic algorithms lies in their applicability to different parameter types (eg, numeric, alphabetic, and symbolic) without any remarkable modification of the global framework.
• Randomization is a basic principle of stochastic optimization algorithms; in particular, derivative information need not be used to obtain the optimal solution. 7 Contrary to gradient-based optimization approaches, which are deterministic and problem dependent, metaheuristics optimize problems iteratively and stochastically, largely independently of the problem. Metaheuristics can randomly search the entire search space and can thus leap over local solutions to obtain global solutions.
Despite the advantages of metaheuristic algorithms, applying them to real problems raises difficulties, mainly with multiple objectives, multicriteria objectives, and dynamic objective functions. 15 Several objectives should be optimized simultaneously but are usually conflicting. The scientific community has used metaheuristics as primary optimization engines to solve multiobjective optimization problems, and the field of multiobjective optimization has likewise handled multiple objectives using metaheuristic techniques.
In recent years, multiobjective optimization has become a powerful framework in the evolutionary computation community, and many optimization algorithms for multiple objectives have been proposed, 3 such as the GA, 16 the immune clone algorithm, 17 the differential evolution (DE) algorithm, 18 the knee-point-driven algorithm, 19 and the hybrid infection optimization algorithm. 20 In the past decade, many multiobjective algorithms have been developed. This study focuses on popular metaheuristic population-based algorithms.
The overall mechanism of all population-based multiobjective algorithms is nearly identical. The optimization process is a computational method that guides multiple candidates through iterative attempts evaluated by a given operator (Pareto dominance). During optimization, new nondominated solutions are created, and the algorithm tries to improve them in the next iteration. All generated nondominated solutions are collected in a set that is stored in a repository, and the Pareto front is thereby formed. Population-based algorithms differ from one another in the computational techniques they use to improve the nondominated solutions.
This study concentrates on solving single- and multiobjective problems with a newly proposed algorithm. In particular, we develop a new population-based algorithm called the competitive multiverse optimizer (CMVO) that injects a competitive mechanism between universes. This mechanism was incorporated into the proposed algorithm to enhance the exploration rate of the search space by performing pairwise competition and to provide a very good exploitation rate through learning from the winner. Injecting a competitive mechanism into MVO strengthens the search capability and improves the exploration and exploitation processes. Consequently, the combination reinforces the computational strengths of the two schemes and can thus reduce the cost. The proposed algorithm is applied to find solutions for multiple optimization problems. These benefits motivate us to attack multiobjective problems directly, evolving a set of solutions in one run of the optimization process instead of solving multiple separate problems. A multiobjective version of CMVO (MOCMVO) is evaluated here by solving a set of multiobjective and engineering design problems.
This article first presents a mathematical model of the competitive mechanism. An optimization algorithm is then proposed using this model to solve single-objective and multiobjective optimization problems. The main contributions of this study can be summarized as follows:
• A novel algorithm, called CMVO, is proposed based on a competitive mechanism with a specific learning strategy in which universes are updated to learn from the competition winners. It is used to optimize single-objective optimization problems.
• A comparison with other metaheuristic algorithms in the literature is conducted on a large number of standard mathematical benchmark problems to confirm the applicability of our model. The experimental results demonstrate that the proposed CMVO shows better performance.
• We designed MOCMVO, a multiobjective version of CMVO, which has a simple structure, few parameters, and no complicated operators.
• Methodical experiments have been conducted on 10 test functions, involving five unconstrained, three constrained, and two engineering design problems, to demonstrate the effectiveness of MOCMVO. The efficiency of MOCMVO is compared with that of other multiobjective algorithms, and the results suggest that, overall, MOCMVO achieves better performance.
The rest of the article is structured as follows. Section 2 reviews the related works. Section 3 briefly introduces the concepts of MVO. Section 4 outlines the mathematical overview of the CMVO. Section 5 discusses the use of MVO in multi-objective optimization. Section 6 presents the experimental setup and results. Section 7 summarizes the conclusions and directions for future work.

LITERATURE REVIEW
This section starts with a brief review of several studies on single- and multiobjective optimization techniques that are used to solve complex real-world problems and standard mathematical benchmarks. A general single-objective optimization problem can be mathematically presented as follows: 4

Minimize f(X), subject to l_j ≤ x_j ≤ u_j, j = 1, ..., D,

where f(X) is the objective function, X = (x1, ..., xD) is a vector of variables, D is the dimension of the problem, and L = (l1, l2, ..., lD) and U = (u1, u2, ..., uD) represent the lower and upper limits of the corresponding variables, respectively.
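As an illustration of this formulation, the following minimal sketch sets up f(X), the bounds L and U, and a random feasible candidate. The sphere function is used here purely as an illustrative stand-in for a real objective, not one prescribed by this section.

```python
import random

def f(X):
    # Objective function f(X); the sphere function is an illustrative choice.
    return sum(x * x for x in X)

D = 5                                    # dimension of the problem
L = [-5.0] * D                           # lower limits (l1, ..., lD)
U = [5.0] * D                            # upper limits (u1, ..., uD)

# A random candidate X = (x1, ..., xD) satisfying the box constraints.
X = [random.uniform(L[j], U[j]) for j in range(D)]
assert all(L[j] <= X[j] <= U[j] for j in range(D))
```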
Most metaheuristic optimization algorithms are nature-inspired and involve stochastic operators such as mutation, crossover, natural selection, and survival of the fittest. These algorithms are based on an iterative search process guided by the evolutionary behaviors of natural systems. Metaheuristics involve three major classes, namely, biological systems, physical or chemical systems, 4 and swarming behavior. 21 Algorithms inspired mainly by biological systems, such as DE, evolution strategies, and genetic programming, usually use concepts from evolutionary biological phenomena or natural organisms. GA is a famous algorithm based on Darwin's principle of survival of the fittest, and it has been used to solve many problems in the literature, such as global optimization. 22 Algorithms inspired by swarming behavior mimic the intelligent collective behaviors of animals, 14 such as birds, fishes, ants, bees, fireflies, cats, and monkeys. Swarm-based algorithms are a branch of nature-inspired models and have gradually been developed to simulate the social intelligence of swarms or herds of creatures in nature. Examples include PSO, ACO, gray wolf optimization (GWO), artificial bee colony (ABC), the owl optimization algorithm (OOA), the falcon optimization algorithm (FOA), the cuckoo search algorithm (CSA), and the firefly algorithm (FA). Many researchers around the world have benefited from the diversity of swarm-based algorithms, which are applied to solve complex optimization problems in various fields such as test scheduling problems, 23,24 engineering optimization problems, 10,11,25-28 heat exchanger problems, 29-31 neural network parameter optimization, 32,33 health care, 34,35 real-time object tracking, 36,37 protein detection, 38,39 task scheduling in cloud computing, 40,41 and clustering for wireless sensor networks. 42,43

The third category of algorithms, those based on physical or chemical systems, typically simulates physical phenomena occurring in nature, such as Newton's law of gravitation, quantum mechanics, and theories of the universe. The mechanism is almost identical to that of the first category, but the search candidates communicate and navigate through the search space according to physical theories. A considerable number of metaheuristic algorithms in the literature have taken inspiration from physical phenomena for handling optimization problems, such as electromagnetic field optimization (EFO), 44 SA, 45 multiverse optimization (MVO), 13 and the flow regime algorithm (FRA). 46 Moreover, MVO has been used widely in many real-world optimization applications: in geological engineering for slope stability assessment, 47 in the economic dispatch problem for solving a linear programming model for proper management of electricity production sources, 48 in handwritten document analysis as an automatic clustering algorithm, 49 in color image segmentation, 50 and in the optimal power flow problem for fuel cost reduction, voltage deviation minimization, and voltage stability improvement. 51

Recently, multiobjective optimization has become a promising direction for handling problems that involve conflicting objectives. Multiobjective optimization algorithms are applied to find an approximation of the entire Pareto optimal solution set. PSO and GA are popular methods for handling multiobjective problems. 52 NSGA-II is an extended version of GA for optimizing multiple objective functions. It is a very famous multiobjective algorithm whose main feature, an elite strategy, improves the search ability at low computational cost compared with its ancestor NSGA by Srinivas and Deb. 53 In NSGA-II, the population is arranged into fronts using a nondominated classification method. The algorithm implements a sorting mechanism that selects solutions depending on nondomination rank and the crowding distance within each front. In the selection process, candidate solutions on each Pareto front are evaluated, and the results are used to promote diverse fronts of nondominated solutions. NSGA-II has proven increasingly adaptable to the most complex problems across scientific disciplines.
Coello et al 54 presented a modified scheme of PSO called MOPSO to address multiobjective problems; this method incorporates an archiving mechanism based on the concept of Pareto optimality. Moreover, a leader selection mechanism helps individuals cover the entire Pareto optimal front. MOPSO has a high convergence speed because it can converge toward an optimum in very early iterations; however, this behavior may cause premature convergence.
Several studies are currently developing and proposing algorithms based on stochastic population for handling multiobjective problems. For example, Luo et al 55 proposed a multiobjective ABC that allows the ABC algorithm to deal with multiobjective optimization problems. This method is based on the nondominated sorting strategy and uses the concept of Pareto dominance to determine the Pareto optimal front. The authors also developed an external archive based on preference indicators to maintain nondominated solutions.
Gaurav and Vijay 56 proposed an elitist framework called the multiobjective spotted hyena optimizer (MOSHO). This framework uses a fixed-size archive for storing the nondominated Pareto optimal solutions. The authors also used an elite selection mechanism, the roulette wheel, to increase the parallel search for conflicting solutions from the archive while mimicking the social and hunting behaviors of spotted hyenas.
Abbasian et al 57 presented a multiobjective GSA (MOGSA) by introducing a Pareto sorting mechanism and a Pareto principle approach. The authors also presented a new strategy to maintain the archive, whose size is reduced using a clustering method. The proposed strategy improves the diversity of solutions and adds an elitism policy to MOGSA.
Wanga and Tang 58 proposed a novel intelligent multipopulation differential evolution to solve multiobjective optimization problems. This approach incorporates multiple populations for multiobjective problems, clustering and statistical methods, and crossover operators and data analysis for diversity. The technique improves current nondominated solutions.
Other methods for multiobjective problems have also been proposed, such as the multiobjective ant lion optimizer (MOALO) 59 and multiobjective gray wolf optimization (MOGWO), 60 which implement an enhanced external archive to improve solution quality, and the multiobjective grasshopper optimization algorithm (GOA). 61 Recent studies show the capability of metaheuristic optimization algorithms in handling multiobjective problems.
Thus, a novel algorithm called CMVO is developed in this study by applying a competitive scheme to the population-based algorithm MVO. This algorithm is an alternative to the current optimization algorithms in the literature and aims to solve both single-objective and multiobjective optimization problems.

MULTIVERSE OPTIMIZER
The MVO algorithm is built on concepts from theoretical astronomy, including white holes, which are considered the main element in the creation of universes and have never been observed; black holes, which draw surrounding objects toward them by gravity; and wormholes, which are tunnels that connect distant parts of space. 13 In this theory, the three components are mathematically modeled to develop an optimizer that mimics teleportation and object exchange between universes, from white holes through wormholes to black holes. The algorithm relies on a population of evolving individuals. Each individual is a candidate solution that encodes the three concepts of black, white, and worm holes in its objects. The candidate solutions help each other and share information to move toward promising areas. The best solution has a high chance of reproducing, guiding the exploration of the space toward promising solutions; thus, the probability of falling into local optima is low.
To combine the solutions, white and black holes are randomly created in the universes, thus causing movement of objects. Notably, each universe is evaluated with an objective function, and its objective value is considered its inflation rate. MVO uses black and white holes to explore the search space, whereas it uses wormholes to exploit it. 62 In the object-exchange mechanism, high-inflation universes tend to dispose of objects and send them to receiving universes with low inflation, while low-inflation universes take objects from high-inflation universes to reach a stable state with an amended inflation rate. At the end of this mechanism, the inflation rates of all universes are balanced, and all universes are in stable states. 62 During this process, the universes are initialized as usual and then sorted according to their inflation rates. At each step, objects are exchanged among the local universes and moved toward the best universe through wormhole tunnels. The formula for modeling this mechanism is as follows:

x_i^j = X_j + TDR × ((u_j − l_j) × rd2 + l_j),  if rd1 < 0.5 and r1 < WEP
x_i^j = X_j − TDR × ((u_j − l_j) × rd2 + l_j),  if rd1 ≥ 0.5 and r1 < WEP
x_i^j = x_i^j,                                  if r1 ≥ WEP

where X_j indicates the jth variable of the best universe formed so far; x_i^j represents the jth variable in the ith universe; u_j and l_j are the upper and lower bounds of the jth variable; the traveling distance rate (TDR) and wormhole existence probability (WEP) are the two main coefficients; and rd1, rd2, and r1 are random variables in the interval [0, 1]. The formulas for the two coefficients are

TDR = 1 − l^(1/p) / L^(1/p),

where p (= 6) denotes the accuracy of exploitation over the iterations, and

WEP = WEPmin + l × (WEPmax − WEPmin) / L,

where WEPmin and WEPmax refer to the lower and upper bounds of WEP, l indicates the current iteration, and L is the maximum number of iterations.
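The wormhole update and the two coefficient schedules can be sketched as follows. This is a simplified illustration only; the values WEPmin = 0.2, WEPmax = 1.0, and p = 6 are assumed typical MVO settings, and the white-hole/roulette exchange step is omitted.

```python
import random

def wep(l, L, wep_min=0.2, wep_max=1.0):
    # Wormhole existence probability, growing linearly over the iterations.
    return wep_min + l * (wep_max - wep_min) / L

def tdr(l, L, p=6):
    # Traveling distance rate, shrinking over iterations to sharpen exploitation.
    return 1.0 - (l ** (1.0 / p)) / (L ** (1.0 / p))

def wormhole_update(x_i, best, lb, ub, l, L):
    """Move universe x_i toward the best universe through wormholes.

    With probability WEP, each variable teleports around the corresponding
    variable of the best universe at a distance scaled by TDR; otherwise it
    is left unchanged.
    """
    out = list(x_i)
    for j in range(len(x_i)):
        if random.random() < wep(l, L):                    # r1 < WEP
            step = tdr(l, L) * ((ub[j] - lb[j]) * random.random() + lb[j])
            out[j] = best[j] + step if random.random() < 0.5 else best[j] - step
    return out
```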

COMPETITIVE MULTIVERSE OPTIMIZER
In this article, our proposal aims to obtain solutions of good quality and to avoid the premature convergence of the MVO algorithm. The balance between the exploration and exploitation mechanisms in MVO is critically reviewed so that the necessary measures can be taken. The proposed CMVO algorithm is motivated by the previous work on CSO by Han et al. 63 In the modified algorithm, the dynamic mechanism of object exchange between universes differs from that in the standard version of MVO in that universes engage in a competition mechanism.
The proposed CMVO is a population-based algorithm and can be considered part of the family of evolutionary algorithms. A competitive mechanism is implemented through which the universes are adjusted, rather than depending on the global and personal best universes. The competitive mechanism avoids premature convergence by preserving population diversity. 63 In CMVO, the population is randomly grouped into bicompetitions, producing two sets: winners and losers. In each competition, the position of the loser is adjusted by learning from the winner rather than from the global or personal best positions. After each competition, the winner passes directly into the next generation.
The fundamental idea of MVO is that every universe learns from the optimal universe. In the proposed method, by contrast, the half of the randomly paired universes that lose their competitions is updated by learning from the other half, the winners. Consequently, the universes may converge together toward the optimal solutions. The competition concept not only preserves a good balance between exploration and exploitation but also promotes convergence toward optimal solutions while maintaining the diversity of the whole population. In the proposed variant, the position update equations are adjusted using a learning strategy of the following CSO-style form,

X_l,k(t+1) = X_l,k(t) + r2 × (X_w,k(t) − X_l,k(t)) + r3 × (X̄_k(t) − X_l,k(t)),

combined with the wormhole mechanism governed by TDR and WEP, where X_w,k indicates the winner universe in the kth round of competition; X_l,k indicates the loser universe in the kth round; X_l,k^i represents the ith variable of the loser universe in the kth round; TDR and WEP are the two main coefficients; X̄_k is the mean position of the relevant universes; and rd1, rd2, r2, r3, and r4 are random variables in the interval [0, 1]. Algorithm 1 shows the competition mechanism of the modified version of MVO and the updates of winner and loser.

Algorithm 1. Competition mechanism
The following steps in Algorithm 2 present a general outline of the CMVO algorithm.
Step 1 (Initialization step): random universes are initialized based on the population size and the dimension of the search space. After the initialization step, all universes in the population are randomly divided into two groups.
Step 2 (Bicompetition step): at each iteration, two universes are selected at a time from the two population groups to participate in a bicompetition. Ultimately, the one with the better fitness is determined as the winner, and the other one is the loser.

[Figure 1: Pareto-optimal surface for a two-objective problem]
Step 3 (Upgradation): the winner directly moves to the next iteration, whereas the loser will be updated. The mechanism of updating the position of universes is based on Equation 5.
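The three steps can be sketched as a minimization loop as follows. This is a schematic illustration only: the loser update is a simplified learning-from-the-winner rule and does not reproduce the paper's exact Equation 5, which also involves TDR and WEP.

```python
import random

def cmvo_sketch(f, lb, ub, pop_size=20, iters=100):
    """Schematic CMVO loop: initialize, pairwise compete, update the losers.

    `f` is the objective (lower is better); `lb`/`ub` are the bounds.
    """
    D = len(lb)
    # Step 1: random initialization of the population of universes.
    pop = [[random.uniform(lb[j], ub[j]) for j in range(D)]
           for _ in range(pop_size)]
    for _ in range(iters):
        random.shuffle(pop)                       # random bicompetition pairing
        mean = [sum(u[j] for u in pop) / pop_size for j in range(D)]
        for k in range(0, pop_size - 1, 2):
            a, b = pop[k], pop[k + 1]
            # Step 2: the universe with the better (lower) fitness wins.
            w, l = (a, b) if f(a) <= f(b) else (b, a)
            # Step 3: the winner survives unchanged; the loser learns from
            # the winner and the population mean (simplified rule).
            for j in range(D):
                l[j] += random.random() * (w[j] - l[j]) \
                      + random.random() * (mean[j] - l[j])
                l[j] = min(max(l[j], lb[j]), ub[j])
    return min(pop, key=f)

best = cmvo_sketch(lambda x: sum(v * v for v in x), [-5.0] * 3, [5.0] * 3)
```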
The computational complexity of the proposed CMVO is similar to that of MVO. Specifically, each iteration of MVO has complexity O(m × n + Cof × m), where n is the dimension of the problem, m is the population size, and Cof is the cost of the objective function. The cost of the objective function is problem dependent, and the major computational cost of the competitive mechanism lies in updating the losers. Hence, the computational complexity of CMVO is O(mn) per iteration, which is exactly the same as the computational complexity of MVO reported in Reference 13.

MULTIOBJECTIVE COMPETITIVE MULTIVERSE OPTIMIZER
MOCMVO is applied to mathematical optimization problems in which more than one conflicting objective function must be solved together at the same time. In this type of problem, the aim is to produce a tradeoff among the objectives, that is, the Pareto optimal solution set. As the name implies, multiobjective optimization can be formulated as follows: 16

Minimize f_i(x1, x2, ..., xp) for i = 1, 2, ..., n, (6)

subject to x = (x1, x2, ..., xp) ∈ X, (7)

There are n objective functions and p variables, so f(x) is an n-dimensional vector, and x is a p-dimensional vector corresponding to p decisions or variables.
A multiobjective problem is often solved in terms of a set of solutions that form the Pareto optimal front, instead of the single optimum of a single-objective problem. The concept of nondominance is expressed through vector comparison. Suppose two vectors x = (x1, x2, ..., xp) and y = (y1, y2, ..., yp) with n objective values each. In the context of maximization problems, vector x dominates vector y iff f_i(x) ≥ f_i(y) for all i ∈ {1, ..., n} and f_i(x) > f_i(y) for at least one i ∈ {1, ..., n}; similarly, in a minimization problem, x dominates y iff f_i(x) ≤ f_i(y) for all i ∈ {1, ..., n} and f_i(x) < f_i(y) for at least one i ∈ {1, ..., n}. X is defined as the set of feasible solutions or feasible decision alternatives. Thus, in a maximization problem, x is nondominated in X if there exists no other x′ in X such that f(x′) ≥ f(x) and f(x′) ≠ f(x). The set of all nondominated solutions in X is designated Pareto optimal (PO) and referred to as the Pareto-optimal set. The plot of the objective values of all solutions in the Pareto-optimal set is called the Pareto front (PF). Figure 1 shows the concept of Pareto optimality in the two-objective case. In the figure, points A and B are two nondominated solutions on the Pareto front; neither is preferred to the other. Point A has a smaller value of f2 but a larger value of f1 than point B; correspondingly, point B has a smaller value of f1 but a larger value of f2 than point A. Neither solution A nor solution B is dominated by any other solution on the Pareto front or Pareto optimal surface, and no solution has a better value of both f1 and f2.
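The dominance test translates directly into code. The following sketch checks Pareto dominance between two objective vectors for a minimization problem; the sample points A and B mirror the two-objective tradeoff described above.

```python
def dominates(x_f, y_f):
    """True if objective vector x_f Pareto-dominates y_f (minimization):
    x is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(x_f, y_f)) and \
           any(a < b for a, b in zip(x_f, y_f))

# A and B trade f1 against f2, so neither dominates the other.
A, B = (1.0, 4.0), (3.0, 2.0)
print(dominates(A, B), dominates(B, A))   # False False
```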
Our proposed multiobjective optimization algorithm has two main goals. First, superior convergence to the Pareto optimal fronts should be achieved for constrained, unconstrained, and engineering problems. Second, the obtained solutions should provide high coverage of the Pareto optimal set and be uniformly distributed.
To determine the multiobjective mechanism of CMVO, we combine two new modules. The first module is an external archive used to store and update the nondominated solutions produced during the search process. MOCMVO uses an external archive similar to the one employed by MOPSO. 54 First, the archive is initialized as empty, that is, A = ∅. During the evolution process, good solutions are added to the archive, which is updated in every generation. The nondominated solutions found in each round are stored in the current archive, which contains the set of Pareto front solutions found so far. The content of the archive is updated regularly on the basis of three possible cases, and each new candidate is compared with every archive resident. Specifically, if the new solution dominates at least one archive member, then the new solution is added to the archive and the dominated members are omitted; if at least one archive member dominates the new solution, then the new solution is refused entry into the archive; and if neither the new solution nor any archive member dominates the other, then the new solution is archived. When the archive is full, the most crowded nondominated solution is discarded so that the new nondominated solution can be added.
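The three archive-update cases can be sketched as follows, for a minimization problem; `f` maps a solution to its objective vector, and the archive-capacity pruning step is omitted for brevity.

```python
def update_archive(archive, new, f):
    """Insert candidate `new` into the nondominated archive (minimization).

    Cases mirror the text: drop archive members dominated by the new
    solution; reject the new solution if any member dominates it;
    otherwise the new solution is nondominated and is archived.
    """
    def dom(a, b):
        fa, fb = f(a), f(b)
        return all(x <= y for x, y in zip(fa, fb)) and \
               any(x < y for x, y in zip(fa, fb))

    if any(dom(m, new) for m in archive):        # dominated: reject
        return archive
    kept = [m for m in archive if not dom(new, m)]
    kept.append(new)                             # nondominated: archive it
    return kept
```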
Considering the existence of multiple best solutions, the white holes and wormholes should be chosen from the archive. We employ a guidance mechanism that steers other solutions toward promising regions of the search space in order to find solutions close to the global optimum. Accordingly, solutions can be selected from the archive to establish tunnels between solutions.
In the guidance mechanism, we measure the crowding degree of the solutions, which serves as a criterion to select the less crowded of two nondominated solutions as the new solution for the next generation. It is also used to delete crowded archive members when the external archive has reached its maximum size.
The probability of choosing the ith archive member as the target is defined as Pi = 1/Ni, where Ni represents the number of solutions in the neighborhood of the ith solution; members in sparsely populated regions are therefore more likely to be selected.
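A minimal sketch of this roulette-wheel selection, assuming Ni is a neighbor count within a fixed radius in objective space (the radius parameter and helper names are illustrative, not from the paper):

```python
import random

def select_leader(archive, radius, rng=random):
    """Roulette-wheel leader selection: member i is picked with probability
    proportional to P_i = 1/N_i, where N_i counts the archive members within
    `radius` of member i in objective space (the member itself included)."""
    def n_neighbors(a):
        return sum(1 for b in archive
                   if sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5 <= radius)
    weights = [1.0 / n_neighbors(a) for a in archive]
    # Spin the roulette wheel over the (unnormalized) weights.
    r = rng.random() * sum(weights)
    acc = 0.0
    for a, w in zip(archive, weights):
        acc += w
        if r <= acc:
            return a
    return archive[-1]
```

An isolated archive member (Ni = 1) thus receives a larger selection weight than members inside a dense cluster, which biases the guidance toward uncovered parts of the front.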
In general, MOCMVO can find the Pareto optimal solutions, save them in the archive, and improve their distribution. Algorithm 3 summarizes the steps of the proposed MOCMVO algorithm: at each generation and for each pair of universes, objects are exchanged from winners to losers (objects teleport from the winning universes to the losing universes), the archive is updated with the resulting nondominated solutions, and the best universe (the one with the lowest fitness value) is tracked.
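The pairwise competition at the core of the method can be sketched as one generation of updates; the learning operator shown is an illustrative placeholder (a random move toward the winner), not the paper's exact object-teleportation rule:

```python
import random

def competition_step(population, fitness, rng=random):
    """One generation of pairwise competition (minimization): universes are
    randomly paired; in each pair the loser learns from the winner, while the
    winner passes to the next generation unchanged."""
    idx = list(range(len(population)))
    rng.shuffle(idx)
    for i, j in zip(idx[::2], idx[1::2]):
        winner, loser = (i, j) if fitness(population[i]) <= fitness(population[j]) else (j, i)
        # Placeholder learning rule: the loser moves a random fraction
        # toward the winner in every dimension.
        population[loser] = [l + rng.random() * (w - l)
                             for l, w in zip(population[loser], population[winner])]
    return population
```

Because the best universe always wins its pairing and is never modified, the best fitness found so far is non-increasing across generations, while the losers are pulled toward better regions.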

COMPUTATIONAL RESULTS
In this section, we discuss the results achieved from a set of experiments evaluating the proposed CMVO and MOCMVO algorithms. Two main experiments were conducted. The first evaluates the CMVO algorithm on single-objective global optimization problems; the second evaluates the MOCMVO algorithm on multiobjective benchmark problems.

Results of CMVO algorithm
In this section, 23 test problems are selected to evaluate the performance of the proposed CMVO algorithm. 9 A comprehensive experimental study is conducted on a large set of benchmark functions, illustrated in Table 2, including high-dimensional unimodal, high-dimensional multimodal, and composite multimodal problems. 21 The unimodal benchmark functions (F1-F7) have only a single global optimum and are easier to solve than the other categories; they are helpful for inspecting the exploitation process of algorithms. 25 By contrast, the multimodal functions (F8-F13) have a massive number of local optima, are the most difficult of the three categories, and are adequate for evaluating the exploration process. 13 The composite functions (F14-F23) have lower dimensions and fewer local minima than the previous groups; they are combinations of different rotated, shifted, and biased multimodal test functions. 6 To verify the capability and competitiveness of CMVO in solving different global test functions, it is compared with seven popular algorithms: MVO, the original version of CMVO; PSO, among the best-performing swarm-based techniques; GA, among the best-performing evolutionary algorithms; and GOA, LSO, SSA, and WOA, recent nature-inspired metaheuristics from the literature.
To collect quantitative results, all statistics (mean and standard deviation) are averaged over 30 independent runs on each benchmark function. Numerical metrics (ie, minimum objective value, maximum objective value, mean, and standard deviation) are reported to show which algorithm behaves more stably than the others when solving the test functions. The "mean" metric reflects an algorithm's average optimizing performance, while the "standard deviation" metric indicates the stability of that performance.
All results in the following experiments are produced by executing each algorithm 30 times per test function. The performance of each algorithm is then recorded as the average (mean) and standard deviation (stdDev) of the obtained solutions. Thereafter, the compared algorithms are ranked on each test function by their mean values, with standard deviation used as a tiebreaker when mean values are equal. Finally, the average rank of each algorithm is calculated for further comparison. The same parameters are used for all algorithms on every test function: 50 candidate solutions (population size) over 1000 iterations. The results are presented in Tables 3, 5, and 7.
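The ranking procedure described above can be sketched as follows; the data layout and function name are assumptions for illustration:

```python
from collections import defaultdict

def average_ranks(results):
    """results maps each test function to {algorithm: (mean, std)} for a
    minimization study. Per function, algorithms are ranked by mean, with
    the standard deviation breaking ties; ranks are then averaged across
    functions to give each algorithm's overall average rank."""
    rank_sum = defaultdict(float)
    for per_alg in results.values():
        # Tuple comparison sorts by mean first, then by std on ties.
        ordered = sorted(per_alg, key=lambda alg: per_alg[alg])
        for rank, alg in enumerate(ordered, start=1):
            rank_sum[alg] += rank
    return {alg: s / len(results) for alg, s in rank_sum.items()}
```
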
Owing to the stochastic nature of the algorithms, no single run is guaranteed to produce an optimal solution, and classical parametric techniques rest on distributional assumptions (such as normality of the results) while comparing only overall performance. By contrast, nonparametric statistical tests can establish that observed differences are statistically significant and permit valid comparisons without such assumptions. 9 Accordingly, they provide stronger justification for the correctness of the comparison.
The Wilcoxon rank-sum test at a 0.05 significance level is conducted in this work to test the statistical significance of the experimental results. The Wilcoxon rank-sum test is a nonparametric test performed on two independent samples.
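A rough, self-contained sketch of such a rank-sum comparison, using the large-sample normal approximation rather than exact tables (in practice a statistics library routine would be used instead):

```python
from math import erfc, sqrt

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation
    (tied values receive average ranks); reasonable for samples of ~20+."""
    pooled = list(a) + list(b)
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for a tied block (1-based)
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    n, m = len(a), len(b)
    w = sum(ranks[:n])                      # rank sum of the first sample
    mu = n * (n + m + 1) / 2                # mean of W under the null hypothesis
    sigma = sqrt(n * m * (n + m + 1) / 12)  # std of W under the null (no tie correction)
    z = (w - mu) / sigma
    return erfc(abs(z) / sqrt(2))           # two-sided p-value
```

Feeding it the 30 per-run objective values of two algorithms on one test function yields the kind of p-value reported in Tables 4, 6, and 8.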
In general, the p-value is defined under two hypotheses. The null hypothesis is that no significant difference exists between the mean values of two algorithms, and the alternative hypothesis is that the two algorithms differ significantly. The null hypothesis is rejected, and the alternative accepted, if the p-value is lower than a prefixed threshold. On the basis of the test results, three signs (ie, +, −, and @) are assigned to the pairwise comparisons, namely, CMVO versus MVO, CMVO versus GA, CMVO versus PSO, CMVO versus GOA, CMVO versus LSO, CMVO versus SSA, and CMVO versus WOA. "@" indicates that the null hypothesis holds (ie, no significant difference is found between the two algorithms), "−" indicates that CMVO is significantly better than the other algorithm, and "+" indicates that CMVO is significantly worse than the other algorithm. Tables 4, 6, and 8 list the p-values produced by the Wilcoxon rank-sum test, together with the final outcome of each test (rejection or not of the null hypothesis), for the comparison of the proposed CMVO with the other algorithms.

Table 3 shows the results for the category (1) unimodal benchmark functions. Notably, CMVO obtains the best results on all test functions of this category except f1 and f6, on which GOA and PSO, respectively, find the best results among all algorithms. CMVO outperforms the other algorithms overall, obtaining the best average rank of 1.4, followed by SSA (3.5), PSO (4), WOA (4.8), GOA (5), and LSO (5.1). As shown in Table 4, the Wilcoxon rank-sum results indicate that CMVO is statistically significantly better because the p-values are less than 0.05, rejecting the null hypothesis in all cases; however, PSO and SSA are significantly better than CMVO on function f6.
This means that CMVO's performance is statistically better than that of the other algorithms under the employed experimental design. As mentioned earlier, the unimodal benchmark functions examine exploitation capability; therefore, CMVO exploits the search region accurately, exhibits high exploitation capability, and thus converges rapidly toward the global optimum. Table 5 shows the results obtained for the high-dimensional multimodal benchmark functions in category (2). As the table shows, CMVO provides the best result on all functions of this category. Globally, CMVO achieves the best average rank of 1 in this category, followed by GA (2.5), WOA (3.5), LSO (4.3), SSA (5.3), and PSO (5.8). Table 6 shows that CMVO rejects the null hypothesis against every other algorithm because its p-values are less than 0.05. As previously discussed, the multimodal benchmark problems inspect the exploration process, and the CMVO algorithm shows a high rate of exploration: it searches the space extensively and explores the promising areas of the multimodal benchmark problems. Consequently, its ability to avoid local optima is satisfactory, as reflected in the results described above; the algorithm escapes entrapment in local optima and locates the global optima on all the multimodal benchmark functions. Table 7 presents the statistical results for the composite multimodal benchmark problems in category (3). These results show that CMVO clearly outperforms the other algorithms on five of the ten functions in this category, namely, f14, f15, f21, f22, and f23. Overall, CMVO offers a very promising performance, achieving the best average rank (3.7), followed by SSA (3.8), GA (4), MVO (4.1), PSO (4.5), and WOA (4.5). Table 8 reports the corresponding Wilcoxon rank-sum results at the 0.05 level.
The p-values reported in Table 8 show that this superiority is statistically significant on five functions (f14, f15, and f21 to f23), on which the CMVO algorithm outperforms the other algorithms; however, no significant differences were found between CMVO and the other algorithms on average. The p-values in this category do not show the clear superiority observed for the unimodal and multimodal benchmark functions because the composite multimodal benchmark problems are difficult and challenging for all of the algorithms employed. As previously mentioned, the composite multimodal benchmark problems test the exploration and exploitation capabilities together; therefore, the statistical results on these problems indicate that CMVO maintains a balance of exploration and exploitation in a challenging search space. Since composite search spaces closely resemble real search spaces, these results suggest that the CMVO algorithm is potentially able to solve challenging real-world optimization problems.

Results of MOCMVO algorithm
To evaluate the performance of the proposed MOCMVO algorithm, 10 popular benchmark tests are applied; they can be divided into three main categories: unconstrained (ie, ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6), constrained (ie, TNK, BNH, and OSY), and engineering design (ie, welded beam and cantilever) multiobjective problems. In our simulations, a range of different algorithms, namely, MOPSO, MOMVO, MOALO, and MOGWO, is used for comparison with MOCMVO. The optimization task is run five times on each benchmark test. The obtained results are reported as mean and standard deviation, and the statistical results are collected in the tables below. The experiments are carried out with 50 search agents, 300 maximum iterations, and an archive size of 200 samples.
To measure the qualitative and quantitative results of the algorithms, we employ different assessment methods taken from Reference 67. With regard to qualitative results, the figures below depict the Pareto optimal fronts produced by each algorithm. In terms of quantitative results, the generational distance (GD) and inverted generational distance (IGD) metrics 67,68 quantify the accuracy and convergence of each algorithm, and the spread and spacing metrics 67 measure the coverage. Tables 9 and 10 show the statistical results for the unconstrained ZDT benchmark problems; the details of these functions are summarized in the Appendix (Table A1). The mean and standard deviation of the different metric values, averaged over five independent runs, are recorded for each compared algorithm, and the best mean among the five algorithms is emphasized in bold. The GD and IGD results in Table 9 show that MOCMVO outperforms the other algorithms on most of the test benchmarks. In the GD column, MOCMVO achieves the second-best result on the ZDT3 benchmark function and the best results on the remaining functions. In the IGD column, MOCMVO obtains the second-best results on the ZDT1, ZDT4, and ZDT6 benchmark functions and the best results on the remaining functions. Figure 2 shows the significance of these results: MOCMVO attains a lower rank in GD and IGD than the other algorithms. From the characteristics of GD and IGD mentioned earlier, MOCMVO is found to show superior convergence and improved performance on the unconstrained benchmark problems.
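The GD and IGD metrics can be sketched as follows; these are common textbook variants (mean nearest-point distance) and may differ in detail from the exact formulas of Reference 67:

```python
def _nearest(p, points):
    # Euclidean distance from point p to its closest point in `points`.
    return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 for q in points)

def gd(front, true_front):
    """Generational distance: the mean distance from each obtained front
    point to the nearest true Pareto-front point (lower is better;
    measures convergence)."""
    return sum(_nearest(p, true_front) for p in front) / len(front)

def igd(front, true_front):
    """Inverted generational distance: the mean distance from each true
    front point to the nearest obtained point; a low IGD requires both
    convergence and coverage of the whole front."""
    return sum(_nearest(t, front) for t in true_front) / len(true_front)
```

A front that converges to only one region of the true front can still score a GD of zero, which is why IGD is reported alongside it.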

Unconstrained test functions
The results in terms of the spread and spacing metrics in Table 10 show that MOPSO outperforms the others in the spread metric, whereas MOCMVO achieves better results in the spacing metric than MOMVO, MOPSO, MOALO, and MOGWO. Figure 2 shows the significance of these results: MOCMVO obtains a low rank in the spacing metric, whereas MOPSO obtains a low rank in the spread metric. These results show that MOCMVO and MOPSO are more stable than the other algorithms in covering the entire Pareto optimal front on the unconstrained test problems. Figure 3 shows the best Pareto optimal fronts obtained by the algorithms on ZDT1, ZDT2, and ZDT3. As shown in the figure, MOCMVO and MOPSO successfully cover the true Pareto optimal fronts with the best distribution, whereas MOALO, MOGWO, and MOMVO show the poorest convergence, with separated regions. This finding agrees with the numerical results.
Compared with MOMVO, MOPSO, MOALO, and MOGWO, the proposed MOCMVO algorithm is superior, clearly generating higher coverage of and better convergence to the true Pareto optimal front.

Constrained test functions
This experiment is designed to measure the performance and efficiency of MOCMVO on three constrained test functions (ie, BNH, OSY, and TNK). MOCMVO is again compared with MOMVO, MOPSO, MOALO, and MOGWO for benchmarking. The details of these functions are summarized in the Appendix (Table A2). Figure 4 shows the Pareto optimal solutions obtained by all of the algorithms on the constrained functions, and Tables 11 and 12 summarize the results of this experiment. The GD and IGD results in Table 11 show that MOCMVO outperforms the other algorithms in all cases; MOMVO achieves the second-best results, whereas MOPSO, MOALO, and MOGWO obtain the worst results. Figure 5 confirms the significance of these results, with MOCMVO obtaining the lowest ranks in GD and IGD. From the characteristics of GD and IGD mentioned earlier, MOCMVO is found to provide superior convergence and improved performance on the constrained test problems.
To observe the coverage of the algorithms, an experiment based on the spread and spacing metrics is conducted. Table 12 shows that MOCMVO obtains the best results in the spacing metric on all functions except OSY, on which MOMVO obtains the best result. Furthermore, MOPSO obtains the best results in the spread metric on all functions except TNK, on which MOCMVO produces the best result. These results agree with those in Figure 5: MOCMVO obtains the best rank in the spacing metric, whereas MOMVO attains the best rank in the spread metric. From the characteristics of the spread and spacing metrics mentioned earlier, the MOCMVO and MOPSO algorithms are found to be more stable than MOMVO and MOALO in covering the entire Pareto optimal front. Figure 4 shows the Pareto optimal fronts obtained by the algorithms after solving the constrained test functions. These Pareto fronts take very different forms compared with those of the unconstrained test functions, such as the linear front of OSY, the wave-shaped front of TNK, and a concave front. The most interesting pattern is that the Pareto optimal solutions obtained by MOCMVO show higher coverage than those of MOPSO on BNH and TNK, whereas the coverage of MOPSO on OSY is better than that of MOCMVO. The Pareto optimal solutions of MOMVO, MOALO, and MOGWO are also farther from the true Pareto optimal front than those of MOCMVO.
Consequently, both the convergence of the MOCMVO algorithm to the Pareto front and its coverage of the front can be clearly seen in the Pareto optimal sets obtained for this group.

Constrained engineering design
From the benchmarking of the previous groups of multiobjective problems, our algorithm is found to solve many complex problems efficiently. The last set of benchmark tests comprises two real engineering design problems, which are well suited for benchmarking the performance of the MOCMVO algorithm. The details of these functions are summarized in the Appendix (Table A3). A similar set of performance metrics is employed to compare the results quantitatively. Tables 13 and 14 show the results and indicate that MOCMVO is the best algorithm. As shown in Figure 6, MOCMVO successfully finds a very accurate approximation of the true Pareto optimal solutions for these real problems as well, and it covers the whole Pareto optimal front. This comprehensive study shows that the proposed CMVO algorithm is highly competitive with the algorithms in the literature. First, the proposed method exhibits competitive convergence toward the global optimum. This superior convergence is due to the competitive mechanism, in which only the winners of the pairwise competitions update the positions of the other solutions; the convergence capability positively impacts the handling of multiple objectives, and the high exploitation yields fast convergence to the Pareto fronts. Moreover, the competitive mechanism produces a large number of nondominated solutions that are continually used to update the positions of the others. Second, CMVO benefits from high exploration, which stems from its major operator: losers learning from winners. The high exploration allows the archive mechanism to discard solutions from the most populated regions, which in turn ensures that the coverage of the obtained Pareto optimal front improves during the optimization process.
Overall, the proposed CMVO algorithm with pairwise competitions can generate a sufficient number of diverse solutions and can be considered a practical alternative for handling various optimization problems.

CONCLUSION
This work aims to enrich the literature by proposing a novel metaheuristic optimization algorithm. This article presents a competitive mechanism between universes integrated into the MVO algorithm, namely, the competitive MVO (CMVO). Contrary to MVO, CMVO adjusts half of the population in each iteration by performing the competitive mechanism. To investigate the efficiency of the proposed algorithm in solving optimization problems, a comprehensive comparative study was conducted on 23 test functions, on which the exploration, exploitation, and convergence of CMVO were benchmarked and discussed. The results show that the proposed algorithm outperforms current popular and powerful algorithms in the literature, such as the original MVO, PSO, GA, GOA, LSO, SSA, and WOA. To explore the capability of the proposed algorithm in solving multiobjective optimization problems, it was tested on five unconstrained test functions, three constrained test functions, and two constrained engineering design problems. For result verification, popular algorithms in the same field, namely, MOMVO, MOPSO, MOALO, and MOGWO, were used for comparison. Quantitative results were evaluated on the basis of four widely used metrics: GD, IGD, and the spread and spacing metrics; for the qualitative results, several best Pareto optimal fronts were also reported. The results show that MOCMVO outperforms MOALO, MOMVO, and MOGWO on nearly all of the test functions and presents very competitive results compared with the MOPSO algorithm. MOCMVO benefits from high convergence and can find solutions close to the global optimum for Pareto fronts of any shape. Finally, the results on the constrained engineering design problems confirm that MOCMVO can solve challenging problems with many constraints and unknown search spaces.
Our future work will focus on considering other engineering design problems with multiple objectives and applying other hybridization models to develop a more effective technique.