
The grey wolf optimization (GWO) algorithm is a recently emerged algorithm based on the social hierarchy of grey wolves as well as their hunting and cooperation strategies. Introduced in 2014, this algorithm has been used by a large number of researchers and designers, and the number of citations to its original paper has exceeded those of many other algorithms. In a recent study, Niu et al. identified one of the main drawbacks of this algorithm for optimizing real-world problems: in summary, they showed that GWO's performance degrades as the optimal solution of the problem diverges from 0. In this paper, by introducing a straightforward modification to the original GWO algorithm, namely neglecting its social hierarchy, the authors were able to largely eliminate this defect and open a new perspective for future use of this algorithm. The efficiency of the proposed method is validated by applying it to benchmark and real-world engineering problems.

In the original GWO, the position of the ith grey wolf is updated using the three best wolves α, β, and δ:

X_1 = X_α − A_1 · |C_1 · X_α − X_i(t)|
X_2 = X_β − A_2 · |C_2 · X_β − X_i(t)|
X_3 = X_δ − A_3 · |C_3 · X_δ − X_i(t)|
X_i(t + 1) = (X_1 + X_2 + X_3)/3

In the above equations, t represents the current iteration of the algorithm; A and C denote the coefficient vectors; X_α, X_β, and X_δ denote the position vectors of α, β, and δ; and X_i denotes the position vector of the ith grey wolf. The A and C vectors are calculated as follows:

A = 2a · R_1 − a
C = 2 · R_2

where a decreases linearly from 2 to 0 during the iterations, and R_1 and R_2 are vectors of uniformly distributed random numbers in the range [0, 1].
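The update equations above can be sketched in NumPy as follows; function and variable names are illustrative, not from the original paper:

```python
import numpy as np

def gwo_step(positions, alpha, beta, delta, a, rng):
    """One position update of the original GWO (illustrative sketch).

    positions : (n_wolves, dim) current positions of the pack
    alpha, beta, delta : (dim,) positions of the three best wolves
    a : scalar that decreases linearly from 2 to 0 over the iterations
    rng : numpy.random.Generator
    """
    n, dim = positions.shape
    new_positions = np.empty_like(positions)
    for i in range(n):
        guided = []
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random(dim) - a        # A = 2a * R1 - a
            C = 2 * rng.random(dim)                # C = 2 * R2
            D = np.abs(C * leader - positions[i])  # distance to the leader
            guided.append(leader - A * D)          # X_1, X_2, X_3
        # New position is the average of the three leader-guided moves.
        new_positions[i] = np.mean(guided, axis=0)
    return new_positions
```

Note that when a reaches 0 at the final iteration, A vanishes and every wolf is pulled to the centroid of α, β, and δ, which illustrates the strong attraction toward the three best solutions discussed below.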
Deficiency and defect of GWO: Niu et al. [9] have demonstrated that the GWO algorithm performs well compared to most other algorithms on basic functions, whose optimal solution is zero, but poorly on shifted functions, whose optimal solution is far from zero. However, most real-world problems have various non-zero optimal points, so the practical use of this algorithm is severely restricted. Consequently, a fundamental modification of the GWO algorithm is needed for it to be efficiently applicable to a wide range of real-world problems.
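The distinction between basic and shifted functions can be made concrete with a minimal sketch; the shift vector `o` below is a hypothetical example, not one of the CEC2005 shifts:

```python
import numpy as np

def sphere(x):
    """Basic sphere function: optimum at the origin, f(0) = 0.
    This is the case where GWO performs well."""
    return np.sum(x ** 2)

def shifted_sphere(x, o):
    """Shifted sphere: the optimum is moved to the vector o,
    mimicking the shifted benchmarks where GWO degrades."""
    return np.sum((x - o) ** 2)
```

The two functions have identical landscapes; only the location of the optimum differs, yet this relocation alone is what degrades GWO's performance according to [9].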

Greedy non-hierarchical GWO (G-NHGWO):
In the original GWO algorithm, the three best solutions are always stored as the α, β, and δ wolves, and these three members always guide the rest of the population in the update equations, which is analogous to the social hierarchy of grey wolf packs in nature. As a result, the algorithm converges quickly to the optimal solution of basic functions whose optimum is zero. However, this update mechanism has two major drawbacks when optimizing real-world functions: first, because only the best global solutions found so far guide the search, the algorithm converges prematurely to a local optimum and loses much of its optimization power; second, it reduces the diversity of the population in each iteration of the algorithm.

To fix these two shortcomings and strengthen the GWO algorithm, we define and save the best solution found so far by each grey wolf as its personal best position, as in the PSO algorithm [10]; for instance, the personal best position of the ith wolf is denoted X_i^best. Then, instead of selecting the α, β, and δ wolves to guide the population update, i.e. by neglecting the social hierarchy of the grey wolf pack, three members r_1, r_2, and r_3 are randomly selected and their positions, i.e. X_r1, X_r2, and X_r3, are used to guide the population update mechanism. The updating equations of the ith member thus become:

X_1 = X_r1 − A_1 · |C_1 · X_r1 − X_i(t)|
X_2 = X_r2 − A_2 · |C_2 · X_r2 − X_i(t)|
X_3 = X_r3 − A_3 · |C_3 · X_r3 − X_i(t)|
X_i(t + 1) = (X_1 + X_2 + X_3)/3

It should be noted that in [11] a random walk is proposed for updating the positions of α, β, and δ. However, this differs from our case, in which three randomly selected members of the population are used as the new leaders. Furthermore, the proposed algorithm is a greedy-based method, and hence the grey wolves move to a new position only if it is better than their current position. In other words, in the proposed G-NHGWO method all grey wolves are always located at the best positions they have found so far.
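The two modifications (random leaders plus greedy selection against personal bests) can be sketched as one update step; the function name and interfaces are illustrative assumptions, not the authors' code:

```python
import numpy as np

def g_nhgwo_step(pbest, pbest_fit, a, func, rng):
    """One greedy, non-hierarchical update (sketch of the proposed G-NHGWO).

    pbest : (n, dim) personal best position of each wolf (X_i^best)
    pbest_fit : (n,) objective value at each personal best
    a : scalar that decreases linearly from 2 to 0 over the iterations
    func : objective function to minimize
    rng : numpy.random.Generator
    """
    n, dim = pbest.shape
    for i in range(n):
        # Three random leaders replace the hierarchical alpha, beta, delta.
        r1, r2, r3 = rng.choice(n, size=3, replace=False)
        candidates = []
        for leader in (pbest[r1], pbest[r2], pbest[r3]):
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            D = np.abs(C * leader - pbest[i])
            candidates.append(leader - A * D)
        trial = np.mean(candidates, axis=0)
        f = func(trial)
        # Greedy selection: accept the move only if it improves on the
        # wolf's personal best, so pbest never gets worse.
        if f < pbest_fit[i]:
            pbest[i], pbest_fit[i] = trial, f
    return pbest, pbest_fit
```

The greedy acceptance guarantees that each wolf's stored fitness is monotonically non-increasing over iterations, which is exactly the "always located at the best positions found so far" property stated above.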
Simulation results: Fourteen shifted benchmark functions were adopted from CEC2005 [12] to demonstrate the power and effectiveness of the proposed modified algorithm compared to the original algorithm. Functions 1 to 14, which represent real-world problems with shifted optima, have been successfully used in many articles [13]. Functions 1 to 6 are unimodal, functions 7 to 12 are multimodal, and functions 13 and 14 are expanded multimodal functions. The population size for both algorithms was set to 30, based on the original reference [1], and the number of iterations was set to 10,000. Hence, the number of function evaluations for both algorithms equals 300,000, which is exactly the value proposed by CEC2005 [12]. A total of 25 independent runs were carried out for optimizing each test function by each algorithm, and the results over all runs, including the mean and standard deviation, are presented in Table 1. In this table, %Imp shows the percentage improvement of the mean index achieved by G-NHGWO with respect to the original GWO, and Winner indicates whether G-NHGWO outperforms GWO (+) or reaches worse solutions than GWO (−). According to the results given in this table, the proposed G-NHGWO algorithm was able to fix the defect of the original GWO algorithm and succeeded in optimizing a wide range of real-world shifted functions. G-NHGWO outperformed the original GWO in 13 out of the 14 test functions and even reached much better solutions for some test functions, such as F1 and F6. The proposed algorithm performs worse than the original algorithm only for F11; however, the objective function value obtained by the proposed algorithm for this test function is not noticeably different from that of the original algorithm. Additionally, based on the %Imp parameter, in 10 out of 14 functions G-NHGWO leads to a mean index that is more than 50% lower than that of GWO.
Furthermore, the convergence characteristics of the algorithms for F1 and F6 are depicted in Figure 1, which clearly shows that the G-NHGWO algorithm has a better performance than the original GWO algorithm in escaping from the local optima and achieving better global solutions.
In the second part of the simulation studies, in order to compare the performance of the proposed G-NHGWO with that of GWO in solving a real-world engineering problem, both algorithms were applied to the economic load dispatch (ELD) problem for 6-, 20-, and 40-generator test power systems [14][15][16]. Power transmission losses are considered in the 6- and 20-generator test systems, and valve-point effects are considered in the 20- and 40-generator test power systems [14][15][16]. The maximum number of objective function evaluations and the penalty factor for violating the power balance equation due to generation deficit were set to 50,000 and 1 × 10^8, respectively, for all the studied test systems. Each algorithm was run 25 times for solving the ELD problem of each system, and the statistical results are reported in Table 2. It is observed from this table that G-NHGWO also outperforms GWO in solving this real-world optimization problem. G-NHGWO yields a lower mean index and a much lower standard deviation among its final costs, meaning that its output deviates significantly less from the optimal solution. Furthermore, the best generation cost among all runs for each test system was obtained by G-NHGWO. Table 3 presents the best solutions found by the GWO and G-NHGWO algorithms for all the test systems.
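As a rough illustration of how a penalized ELD objective can be posed for such algorithms, the sketch below assumes a standard quadratic fuel-cost model with hypothetical coefficients; the cost data, the quadratic penalty form, and all names are assumptions for illustration only and are not taken from [14]-[16]:

```python
import numpy as np

def eld_cost(P, a, b, c, demand, penalty=1e8):
    """Penalized economic load dispatch objective (illustrative sketch).

    P : (n_gen,) power output of each generator
    a, b, c : (n_gen,) assumed quadratic fuel-cost coefficients
    demand : total power demand to be met
    penalty : factor applied to a generation deficit (1e8 in the text)
    """
    fuel = np.sum(a * P ** 2 + b * P + c)  # total quadratic fuel cost
    deficit = demand - np.sum(P)           # positive when generation falls short
    # Penalize only a generation deficit, as described in the text.
    return fuel + penalty * max(deficit, 0.0) ** 2
```

With such a formulation, any unconstrained minimizer such as GWO or G-NHGWO can be applied directly: a feasible dispatch incurs only the fuel cost, while a deficit is dominated by the large penalty term.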
Conclusion: A greedy non-hierarchical grey wolf optimization (G-NHGWO) algorithm is proposed to enhance the optimization power of the original GWO algorithm on real-world shifted functions with non-zero optimal solutions. In the proposed algorithm, the social hierarchy of the grey wolves is neglected, and three randomly selected wolves replace the three best wolves of the original GWO in guiding the wolf pack during the hunting process. Furthermore, the update equations of the original GWO were modified to use the personal best positions of the grey wolves instead of their current positions. Exploiting this strategy, the G-NHGWO algorithm fixes the two main disadvantages of the original GWO algorithm: getting stuck in local optima and lack of population diversity. The results obtained on 14 real-world shifted test functions prove the effectiveness and efficiency of the proposed G-NHGWO algorithm in optimizing real-world functions. There are numerous improved and hybrid versions of the original GWO algorithm; since the proposed G-NHGWO is a basic version, most of those improvement and hybridization techniques can be converted to their non-hierarchical counterparts to improve the performance further, which is the subject of future studies.