1. Introduction
Over the past twenty years, heuristic optimization algorithms have been widely appreciated for their simplicity, flexibility, and robustness. Common examples include the genetic algorithm (GA) [1], simulated annealing [2], the crow search algorithm [3], ant colony optimization [4], differential evolution (DE) [5], particle swarm optimization (PSO) [6], the bat algorithm (BA) [7], the cuckoo search algorithm (CSA) [8], the whale optimization algorithm (WOA) [9], the firefly algorithm (FA) [10], the grey wolf optimizer (GWO) [11], teaching-learning-based optimization [12], the artificial bee colony (ABC) algorithm [13], and the chimp optimization algorithm (ChOA) [14]. As technology continues to evolve, heuristic optimization algorithms are now widely applied to real-world problems such as the welded beam problem [15], feature selection [16,17,18], the welding shop scheduling problem [19], the economic dispatch problem [20], training neural networks [21], path planning [15,22], churn prediction [23], image segmentation [24], 3D reconstruction of porous media [25], bankruptcy prediction [26], tuning of fuzzy control systems [27,28,29], interconnected multi-machine power system stabilizers [30], power systems [31,32], the large-scale unit commitment problem [33], the combined economic and emission dispatch problem [34], multi-robot exploration [35], training multi-layer perceptrons [36], parameter estimation of photovoltaic cells [37], and resource allocation in wireless networks [38].
Although there are many metaheuristic algorithms, each has its shortcomings. GWO does not balance local and global search well. In [39,40], the authors studied the possibility of enhancing the exploration process in GWO by changing the original control parameters. GWO also lacks population diversity; in [41], the authors replaced the typical real-valued encoding with a complex-valued one, which increases the diversity of the population. ABC suffers from slow convergence and a lack of population diversity. In [42], the authors borrowed the differential mutation procedure from DE and generated uniformly distributed food sources in the employed-bee phase to avoid local optima. In [43], to speed up the convergence of ABC, the authors proposed a new chaos-based operator and a new neighbour selection strategy that improve the standard ABC. WOA converges prematurely and easily falls into local optima. In [37], the authors modified WOA using chaotic maps to prevent the population from falling into local optima. In [44], WOA was hybridized with DE, whose good exploration ability on function optimization problems provides promising candidate solutions. CSA tends to get stuck in local minima; in [45], the authors addressed this problem by introducing a time-varying flight length in CSA. BA faces several challenges, such as a lack of population diversity, insufficient local search ability, and poor performance on high-dimensional optimization problems. In [46], Boltzmann selection and a monitor mechanism were employed to keep a suitable balance between exploration and exploitation. FA has drawbacks in computational complexity and convergence speed; to overcome these obstacles, in [47], the chaotic forms of two algorithms, the sine–cosine algorithm and the firefly algorithm, are integrated to improve convergence speed and efficiency, thus mitigating several complexity issues. DE is excellent at dealing with nonlinear and complex problems, but its convergence rate is slow; in [48], the authors proposed an oppositional-based DE that employs OBL in population initialization and generational jumping.
There are many mechanisms that can improve the performance of a metaheuristic algorithm, for example, opposition-based learning (OBL) [49], chaotic maps [50], and Lévy flight [51].
Tizhoosh introduced OBL [49] in 2005. The main idea of OBL is to evaluate, for each candidate solution, its opposite, and to keep whichever is closer to the global optimum. Hui et al. [52] added generalized OBL to PSO to speed up convergence. Ahmed et al. [53] proposed an improved grasshopper optimization algorithm based on OBL: they first use the opposite population for a better distribution in the initialization phase, and then apply OBL during the iterations to help the population jump out of local optima.
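The OBL idea is simple to sketch: for a candidate x in [a, b], the opposite point is a + b − x, and the fitter of each pair is kept. A minimal illustration (the function and parameter names here are ours, not taken from any cited paper):

```python
import random

def opposition_init(pop_size, dim, lower, upper, fitness):
    """Opposition-based initialization: generate a random population,
    compute each candidate's opposite point (lower + upper - x), and
    keep the fitter half of the combined set (minimization assumed)."""
    pop = [[random.uniform(lower, upper) for _ in range(dim)]
           for _ in range(pop_size)]
    opposites = [[lower + upper - x for x in ind] for ind in pop]
    combined = pop + opposites
    combined.sort(key=fitness)          # ascending fitness: best first
    return combined[:pop_size]

# Example: minimize the sphere function f(x) = sum(x_i^2)
sphere = lambda ind: sum(x * x for x in ind)
best = opposition_init(20, 5, -10.0, 10.0, sphere)[0]
```

The same opposite-point transform can also be applied during the iterations, as in [53], to help individuals escape local optima.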
Paul Lévy introduced the Lévy flight [51]. In a Lévy flight, small jumps are interspersed with occasional longer jumps or “flights”, which causes the variance of the distribution to diverge; as a consequence, Lévy flights have no characteristic length scale. Lévy flight is widely used in metaheuristic algorithms: the small jumps help with local exploitation, and the long jumps help with the global search. Ling et al. [51] proposed an improved whale optimization algorithm based on Lévy flight, which increases population diversity against premature convergence and enhances the capability of jumping out of local optima. Liu et al. [54] proposed a novel ant colony optimization algorithm with a Lévy flight mechanism that maintains search speed and extends the search space, improving the performance of ant colony optimization.
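A Lévy-distributed step is commonly generated with Mantegna's algorithm; a minimal sketch follows (β = 1.5 and the 0.01 scale factor are typical choices from the literature, not values taken from the cited papers):

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-distributed step via Mantegna's algorithm: mostly small
    moves (local exploitation) with occasional long jumps (global search)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = random.gauss(0, sigma)   # numerator ~ N(0, sigma^2)
    v = random.gauss(0, 1)       # denominator ~ N(0, 1)
    return u / abs(v) ** (1 / beta)

# Perturb a position with Lévy steps scaled by a small factor
position = [0.5, -1.2, 3.0]
new_position = [x + 0.01 * levy_step() for x in position]
```

Because the step distribution is heavy-tailed, an individual updated this way occasionally leaps far from its current region, which is exactly the escape behaviour exploited in [51,54].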
The chaotic sequence [50] is a commonly used method for initializing the population in metaheuristic algorithms; it broadens the search space of the population and speeds up the convergence of the algorithm. Kuang et al. [55] added the Tent map to the artificial bee colony algorithm to make the population more diverse and obtain a better initial population. Suresh et al. [56] proposed several improvements to Cuckoo Search, one of which is using the logistic chaotic function [50] to initialize the population. Afrabandpey et al. [57] used chaotic sequences instead of random initialization in the Bat Algorithm.
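Tent-map initialization can be sketched as follows (a minimal illustration; the specific map variant and seed are our assumptions, not necessarily those used in [55] or in this paper):

```python
def tent_population(pop_size, dim, lower, upper, x0=0.37):
    """Tent-map chaotic initialization (the common x/0.7 variant, which
    avoids the floating-point collapse of the classic 2x form): iterate
    the map inside (0, 1) and scale each value into [lower, upper].
    The chaotic sequence covers the range more evenly than plain
    random sampling."""
    x = x0
    pop = []
    for _ in range(pop_size):
        ind = []
        for _ in range(dim):
            x = x / 0.7 if x < 0.7 else (1.0 - x) / 0.3
            x = min(max(x, 1e-12), 1.0 - 1e-12)  # keep off the fixed points
            ind.append(lower + x * (upper - lower))
        pop.append(ind)
    return pop

pop = tent_population(30, 10, -100.0, 100.0)
```

One deterministic chaotic trajectory generates the whole population, so successive individuals are spread across the search space rather than clustered by chance.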
This paper focuses on the Chimp Optimization Algorithm (ChOA), proposed in 2020 by Khishe et al. [14] as a heuristic optimization algorithm based on the social behaviour of chimp populations. ChOA has fewer parameters, is easier to implement, and is more stable than many other heuristic optimization algorithms. Although different heuristic optimization algorithms adopt different search strategies, the common goal is to balance population diversity and search capacity, guaranteeing convergence accuracy and speed while avoiding premature convergence. Since ChOA was proposed, researchers have used various strategies to improve its performance and apply it to practical problems. Khishe et al. [58] proposed a weighted chimp optimization algorithm (WChOA), which uses a position-weighted equation in the individual position update to improve convergence speed and help the algorithm escape local optima. Kaur et al. [59] proposed a novel algorithm that fuses ChOA with sine–cosine functions to address its poor exploration–exploitation balance and applied it to engineering problems such as pressure vessel, clutch brake, and digital filter design. Jia et al. [60] initialized the population through highly disruptive polynomial mutation and then applied the beetle antennae search algorithm to weak individuals to give them visual ability and improve the algorithm's capacity to escape local optima. Houssein et al. [61] used an opposition-based learning strategy and a Lévy flight strategy in ChOA to improve the diversity and search ability of the population and applied the proposed algorithm to image segmentation. Wang et al. [62] proposed a novel binary ChOA. Hu et al. [63] used ChOA to optimize the initial weights and thresholds of extreme learning machines and applied the resulting model to COVID-19 detection to improve prediction accuracy. Wu et al. [64] combined the improved ChOA [60] with support vector machines (SVM) and proposed a novel SVM model that outperforms other methods in classification accuracy.
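The core ChOA search dynamic resembles that of GWO: the population is driven by its four best individuals (attacker, barrier, chaser, and driver). The sketch below is a simplified illustration under stated assumptions, namely a single shared decay coefficient f and a uniform random number standing in for ChOA's chaotic factor m; the original algorithm uses group-specific dynamic coefficient curves.

```python
import random

def choa_move(x, leaders, f):
    """One simplified ChOA-style position update for an individual x.
    `leaders` holds the four best chimps (attacker, barrier, chaser,
    driver). For each leader, a driving distance d is computed and the
    new position averages the four leader-guided moves. The coefficient
    a shrinks as f decays over the iterations, shifting the search from
    exploration to exploitation."""
    dim = len(x)
    new_x = [0.0] * dim
    for leader in leaders:
        for j in range(dim):
            a = 2 * f * random.random() - f    # in [-f, f]
            c = 2 * random.random()
            m = random.random()                # stand-in for the chaotic m
            d = abs(c * leader[j] - m * x[j])  # driving distance
            new_x[j] += (leader[j] - a * d) / len(leaders)
    return new_x
```

In a full run, f would be decayed each iteration (e.g., from roughly 2.5 toward 0) and the four leaders re-selected from the population by fitness.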
In summary, there are numerous heuristic optimization algorithms and improvement mechanisms, each with its own advantages; however, the No-Free-Lunch (NFL) theorem [65] proves that no single optimization algorithm can solve all kinds of optimization problems: an algorithm suited to some problems will show shortcomings on others. Therefore, to improve the performance and applicability of ChOA and to explore a more suitable method for solving practical optimization problems, an improved ChOA (called RL-ChOA) is proposed in this paper.
The main contributions of this paper are summarized as follows:
A new ChOA framework is proposed that does not affect the configuration of the traditional ChOA.
A Tent chaotic map is used to initialize the population and improve its diversity.
A refraction learning strategy, based on the refraction of light, is proposed to prevent the population from falling into local optima.
Comparisons of RL-ChOA with five other state-of-the-art heuristic algorithms show that RL-ChOA is more efficient and accurate than the other algorithms in most cases.
Experimental comparisons on two engineering design optimization problems show that RL-ChOA can be applied effectively to practical engineering problems.
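As a rough illustration of the refraction learning idea, the sketch below uses the refracted opposition formula commonly seen in the literature; the exact form of this paper's Equation (18) is not reproduced here, and the symbol n for the second parameter is our assumption.

```python
def refracted_opposite(x, lower, upper, k=1.0, n=1.0):
    """Refraction-learning candidate for one coordinate: a generalized
    opposite point derived from the law of light refraction. With
    k * n = 1 this reduces to plain opposition-based learning
    (lower + upper - x); larger k * n pulls the candidate toward the
    midpoint of the search range."""
    mid = (lower + upper) / 2.0
    return mid + (lower + upper) / (2.0 * k * n) - x / (k * n)

def refract_individual(ind, lower, upper, k=1.0, n=1.0):
    """Apply the refracted-opposite transform to every coordinate."""
    return [refracted_opposite(x, lower, upper, k, n) for x in ind]

# Sanity check: with k = n = 1 this is exactly the OBL opposite point
assert refracted_opposite(3.0, -10.0, 10.0, 1.0, 1.0) == -3.0
```

In RL-ChOA the transform would be applied to the best individual, and the refracted candidate kept only if it improves fitness, which is what lets the leader escape a local optimum.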
The rest of this paper is organized as follows: Section 2 introduces the preliminary knowledge, including the original ChOA and the principle of light refraction. Section 3 illustrates the proposed RL-ChOA in detail. Section 4 and Section 5 detail the experimental simulations and experimental results, respectively. Section 6 discusses two engineering case studies of RL-ChOA. Finally, Section 7 concludes this paper.
5. Experimental Results
In this section, to verify the performance of the proposed algorithm, it is compared with five state-of-the-art algorithms: the Grey Wolf Optimizer based on Lévy flight and a random walk strategy (MGWO) [69], the improved Grey Wolf Optimization algorithm based on iterative mapping and the simplex method (SMIGWO) [70], the Teaching-Learning-Based Optimization algorithm with social psychology theory (SPTLBO) [71], the original ChOA, and WChOA.
Table 2, Table 3 and Table 4 present the best, average, and worst values and the standard deviations obtained by the six algorithms on the three types of test problems. To draw statistically sound conclusions, Table 5, Table 6 and Table 7 use Friedman's test [72] to rank the algorithms on the three types of benchmark functions. In addition, Wilcoxon's rank-sum test [73], a nonparametric statistical test that can handle more complex data distributions, is applied: Table 8 shows its results for independent samples at the p = 0.05 significance level [68]. The symbols “+”, “=”, and “−” indicate that RL-ChOA is better than, similar to, and worse than the corresponding comparison algorithm, respectively. In Section 5.2, we analyze the convergence of RL-ChOA on 23 widely used test functions. In Section 5.3, the sensitivity of the parameter k and the second parameter of the light refraction learning strategy is analyzed.
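For readers who want to reproduce the significance analysis, a minimal rank-sum test can be sketched as follows (a normal approximation without tie correction; the paper itself presumably used a statistics package):

```python
import math

def rank_sum_test(a, b):
    """Wilcoxon rank-sum test (normal approximation, no tie correction):
    returns the two-sided p-value for the null hypothesis that samples
    a and b come from the same distribution."""
    pooled = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(pooled)
    for rank, (_, idx) in enumerate(pooled, start=1):
        ranks[idx] = rank
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                   # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0         # mean of w under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    # two-sided p-value from the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

Applied to the 30 independent run results of two algorithms on one benchmark function, a p-value below 0.05 corresponds to H = 1 in Table 8.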
5.1. Experiments on 23 Widely Used Test Functions
As shown in Table 2, on the unimodal benchmark functions (F1–F7), the overall performance of the proposed RL-ChOA is better than that of the other algorithms. RL-ChOA performs best on all benchmark functions except three (F5–F7) and finds the optimal value on four of them (F1–F4). SMIGWO achieves the best performance on two benchmark functions (F5 and F6), and WChOA obtains the best solution on one (F7). In addition, the KEEL software [74] was used to compute the Friedman ranks; the results in Table 6 show that RL-ChOA ranks first.
As can be observed from Table 3, on the multi-modal benchmark functions (F8–F13), RL-ChOA obtains the best solution on four benchmark functions (F8–F11). SPTLBO obtains the best solution on one benchmark function (F12), WChOA on three (F9, F11, and F12), and SPTLBO on one more (F13). RL-ChOA outperforms the other algorithms overall and, as shown in Table 7, ranks first in the Friedman ranking.
As can be observed from Table 4, on the fixed-dimension multi-modal benchmark functions (F14–F23), SPTLBO obtains the best solution on six benchmark functions (F14–F19), MGWO on four (F16–F19 and F23), and SMIGWO on five (F16, F17, F19, F21, and F22). RL-ChOA does not perform well on these test functions, but it still outperforms both the original ChOA and WChOA in overall performance. As shown in Table 5, RL-ChOA ranks third in the Friedman ranking.
Wilcoxon's rank-sum test was used to verify the significance of the differences between RL-ChOA and the other five algorithms; the statistical results are shown in Table 8. At the 0.05 significance level, RL-ChOA outperforms SMIGWO, MGWO, SPTLBO, ChOA, and WChOA on 12, 11, 7, 15, and 19 benchmark functions, respectively.
Table 7.
Average rankings of the algorithms (Friedman) on multi-modal benchmark functions.
Algorithm | Ranking |
---|---|
SMIGWO | 3.75 |
MGWO | 3.52 |
SPTLBO | 2.75 |
ChOA | 4.96 |
WChOA | 3.46 |
RL-ChOA | 2.56 |
Table 8.
Test statistical results of Wilcoxon’s rank-sum test.
Benchmark | vs. SMIGWO: H | p-Value | Winner | vs. MGWO: H | p-Value | Winner | vs. SPTLBO: H | p-Value | Winner | vs. ChOA: H | p-Value | Winner | vs. WChOA: H | p-Value | Winner |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
F1 | 1 | 1.21 | + | 1 | 1.21 | + | 1 | 1.21 | + | 1 | 1.21 | + | 1 | 1.21 | + |
F2 | 1 | 1.21 | + | 1 | 1.21 | + | 1 | 1.21 | + | 1 | 1.21 | + | 1 | 1.21 | + |
F3 | 1 | 1.21 | + | 1 | 1.21 | + | 1 | 1.21 | + | 1 | 1.21 | + | 1 | 1.21 | + |
F4 | 1 | 1.21 | + | 1 | 1.21 | + | 1 | 1.21 | + | 1 | 1.21 | + | 1 | 1.21 | + |
F5 | 1 | 5.49 | - | 1 | 1.17 | - | 1 | 1.78 | - | 1 | 1.78 | - | 1 | 2.99 | + |
F6 | 1 | 3.02 | - | 1 | 1.33 | - | 1 | 4.50 | - | 1 | 1.09 | + | 1 | 2.01 | - |
F7 | 1 | 1.87 | + | 0 | 5.79 | - | 1 | 1.04 | - | 1 | 3.51 | + | 1 | 2.37 | - |
F8 | 1 | 3.69 | + | 0 | 2.58 | + | 1 | 2.23 | + | 0 | 3.55 | + | 1 | 3.02 | + |
F9 | 1 | 1.20 | + | 0 | 1.61 | + | 0 | NaN | = | 1 | 1.21 | + | 0 | NaN | = |
F10 | 1 | 1.21 | + | 1 | 3.17 | + | 1 | 8.99 | + | 1 | 1.21 | + | 1 | 1.17 | + |
F11 | 1 | 5.58 | + | 0 | 3.34 | + | 0 | NaN | = | 1 | 1.21 | + | 0 | 3.34 | + |
F12 | 1 | 3.69 | - | 1 | 1.69 | - | 1 | 3.02 | - | 1 | 2.87 | + | 1 | 6.28 | + |
F13 | 1 | 3.02 | - | 1 | 3.02 | - | 1 | 3.02 | - | 1 | 5.57 | - | 1 | 2.79 | + |
F14 | 1 | 1.02 | + | 1 | 7.29 | + | 1 | 4.31 | + | 0 | 8.19 | = | 1 | 3.20 | + |
F15 | 1 | 3.02 | - | 1 | 1.11 | + | 1 | 3.02 | - | 0 | 7.17 | - | 1 | 8.89 | + |
F16 | 1 | 3.02 | = | 1 | 4.08 | = | 1 | 1.99 | = | 0 | 6.31 | = | 1 | 3.02 | + |
F17 | 1 | 5.49 | = | 1 | 2.87 | = | 1 | 1.21 | = | 0 | 2.58 | = | 1 | 3.02 | + |
F18 | 1 | 1.17 | + | 1 | 4.44 | = | 1 | 2.78 | = | 0 | 5.40 | = | 1 | 3.85 | = |
F19 | 1 | 2.37 | = | 1 | 2.03 | = | 1 | 1.22 | = | 0 | 9.94 | = | 1 | 3.02 | + |
F20 | 1 | 7.96 | + | 0 | 6.63 | + | 1 | 9.79 | - | 1 | 1.17 | + | 1 | 3.02 | + |
F21 | 1 | 7.12 | - | 1 | 8.35 | - | 1 | 3.02 | - | 0 | 7.96 | + | 1 | 7.39 | + |
F22 | 1 | 3.02 | - | 1 | 3.02 | - | 1 | 3.02 | - | 1 | 1.78 | + | 1 | 3.02 | + |
F23 | 1 | 3.02 | - | 1 | 3.02 | - | 1 | 3.02 | - | 0 | 3.11 | + | 1 | 3.02 | + |
+/−/= | 12/8/3 | 11/8/4 | 7/10/6 | 15/3/5 | 19/2/2 |
5.2. Convergence Analysis
To analyze the convergence of the proposed RL-ChOA, Figure 6 shows the convergence curves of SMIGWO, MGWO, SPTLBO, ChOA, WChOA, and RL-ChOA as the number of iterations increases. In Figure 6, the x-axis represents the number of iterations and the y-axis the optimal function value. The convergence speed of RL-ChOA is significantly better than that of the other algorithms because the learning strategy based on light refraction enables the best individual to find a better position in the solution space and leads the other individuals to approach this position quickly. RL-ChOA has a clear advantage on the unimodal and multi-modal benchmark functions. Despite its weaker performance on the fixed-dimension multi-modal benchmark functions, RL-ChOA still outperforms the original ChOA and the ChOA-based WChOA, and it remains competitive with the other state-of-the-art heuristic algorithms in overall performance.
5.3. Parameter Sensitivity Analysis
The refraction learning of light is described in Section 3.3; the parameter k and the second parameter in Equation (18) are the keys to improving the performance of RL-ChOA. In this subsection, a series of experiments is conducted to investigate the sensitivity of these two parameters, with each tested at the values 1, 10, 100, and 1000. The benchmark functions are the same as those selected in Section 4.1, and the parameter settings are the same as in Section 4.3. Table 9 summarizes the mean and standard deviation of the test function values for RL-ChOA under the different parameter combinations. As shown in Table 9, when both parameters are set to 100, RL-ChOA outperforms the other parameter settings on most test functions.
5.4. Remarks
According to the above results: (1) RL-ChOA performs excellently on the unimodal and multi-modal benchmark functions because the best individual uses the light refraction-based learning strategy, which improves the algorithm's global exploration ability; in addition, the Tent chaotic sequence increases population diversity and improves search accuracy and convergence speed. However, its performance on the fixed-dimension multi-modal benchmark functions still needs improvement. (2) RL-ChOA outperforms the original ChOA and the ChOA-based WChOA on the unimodal, multi-modal, and fixed-dimension multi-modal benchmark functions alike.
Table 9.
Experimental results of RL-ChOA using different combinations of k and the second parameter in Equation (18).
Function | Both = 1: Mean | St.dev | Both = 10: Mean | St.dev | Both = 100: Mean | St.dev | Both = 1000: Mean | St.dev |
---|---|---|---|---|---|---|---|---|
F1 | 2.9363 | 9.7403 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
F2 | 3.6343 | 4.0394 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
F3 | 1.0063 | 5.4903 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
F4 | 3.1650 | 7.3138 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
F5 | 2.8968 | 6.6944 | 2.8975 | 5.5369 | 2.8732 | 3.0079 | 2.8974 | 5.6955 |
F6 | 4.9292 | 3.0194 | 4.2253 | 6.1582 | 4.1277 | 5.4895 | 4.2277 | 7.4070 |
F7 | 2.6141 | 2.0053 | 4.1955 | 1.2170 | 2.5403 | 2.5588 | 3.5305 | 3.8636 |
F8 | −6.1749 | 3.5394 | −6.0819 | 3.4058 | −6.2563 | 4.2758 | −6.1964 | 4.3861 |
F9 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
F10 | 1.2375 | 4.7283 | 8.8818 | 0.0000 | 8.8818 | 0.0000 | 8.8818 | 0.0000 |
F11 | 9.1245 | 2.8466 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
F12 | 7.1215 | 2.2662 | 6.1241 | 1.5222 | 5.9170 | 1.5623 | 6.1394 | 1.8421 |
F13 | 2.7715 | 9.6490 | 2.9987 | 4.0780 | 2.9988 | 3.1025 | 2.9989 | 4.7321 |
F14 | 1.2980 | 6.0103 | 1.0710 | 2.5954 | 1.0123 | 5.8488 | 1.2268 | 5.5093 |
F15 | 1.3967 | 8.3618 | 1.3964 | 9.0421 | 1.3763 | 7.3362 | 1.4112 | 7.8338 |
F16 | −1.0316 | 4.0525 | −1.0316 | 2.1053 | −1.0316 | 3.7042 | −1.0316 | 2.9538 |
F17 | 3.9821 | 3.8502 | 3.9820 | 4.5274 | 3.9822 | 2.9042 | 3.9816 | 2.7264 |
F18 | 3.0002 | 2.7370 | 3.0003 | 3.9395 | 3.0002 | 3.5075 | 3.0003 | 4.1256 |
F19 | −3.8555 | 2.2133 | −3.8551 | 2.0785 | −3.8551 | 1.9819 | −3.8553 | 2.0139 |
F20 | −3.2830 | 1.9182 | −3.2941 | 1.4701 | −3.2940 | 1.5499 | −3.2893 | 1.7582 |
F21 | −2.9032 | 2.0892 | −3.6353 | 1.9805 | −4.2423 | 1.5862 | −3.3565 | 2.0556 |
F22 | −2.1130 | 1.9578 | −4.3551 | 1.5668 | −4.4875 | 1.4273 | −4.0791 | 1.7789 |
F23 | −5.0784 | 2.5185 | −4.9409 | 7.5430 | −5.0880 | 1.9410 | −4.9393 | 7.5488 |