Article

A Normal Distributed Dwarf Mongoose Optimization Algorithm for Global Optimization and Data Clustering Applications

by Fahd Aldosari 1,*, Laith Abualigah 2,* and Khaled H. Almotairi 3
1 Computer and Information Systems College, Umm Al-Qura University, Makkah 21955, Saudi Arabia
2 Faculty of Computer Sciences and Informatics, Amman Arab University, Amman 11953, Jordan
3 Computer Engineering Department, Umm Al-Qura University, Makkah 21955, Saudi Arabia
* Authors to whom correspondence should be addressed.
Symmetry 2022, 14(5), 1021; https://doi.org/10.3390/sym14051021
Submission received: 3 April 2022 / Revised: 28 April 2022 / Accepted: 5 May 2022 / Published: 17 May 2022

Abstract:
As data volumes have grown and increasingly vast and complicated problems have emerged, the need for innovative and intelligent solutions to handle these difficulties has become essential. Data clustering is a data mining approach that partitions a huge amount of data into a number of clusters; in other words, it finds symmetric and asymmetric objects. In this study, we developed a novel strategy that uses intelligent optimization algorithms to tackle a group of issues requiring sophisticated methods to solve. Three primary components are employed in the suggested technique, named GNDDMOA: the Dwarf Mongoose Optimization Algorithm (DMOA), Generalized Normal Distribution Optimization (GND), and an Opposition-based Learning Strategy (OBL). These parts are used to organize the executions of the proposed method during the optimization process based on a unique transition mechanism that addresses the critical limitations of the original methods. Twenty-three test functions and eight data clustering tasks were utilized to evaluate the performance of the suggested method, and its findings were compared to those of other well-known approaches. The suggested GNDDMOA approach produced the best results on all of the benchmark functions examined, and it showed promising performance in the data clustering applications.

1. Introduction

Meta-heuristic optimization is a sophisticated problem-based algorithmic design that creates optimization methods by combining multiple operators and search techniques [1,2]. A heuristic is a strategy that tries to find the best (optimal) solution [3]. In the cost estimating and artificial intelligence disciplines, meta-heuristics are used to solve difficult real-world issues, such as data clustering challenges and other classic optimization problems [4,5]. Every optimization issue is unique, and thus necessitates a variety of meta-heuristic approaches to deal with the circumstances, constraints, and variables of the problem at hand [6,7]. To find the best approach, such challenges necessitate the development of a sophisticated meta-heuristic optimizer that can handle each problem and use case separately [8,9,10]. Meta-heuristic optimization is now in demand for various uses, including designing a microgrid with an energy system [11], data mining [12,13], wind power forecasting [14], structural engineering [15], biological sequences [16], parameter extraction for photovoltaic cells [17], transportation, and finance [18,19,20,21]. In many of these uses there is also a need to reduce the number of decision variables, especially in heavily parameterized structures.
Some examples of these algorithms are the Dwarf Mongoose Optimization Algorithm (DMOA) [22], Generalized Normal Distribution Optimization (GND) [23], Arithmetic Optimization Algorithm (AOA) [24], Aquila Optimizer (AO) [25], Group Search Optimizer (GSO) [26], Reptile Search Algorithm (RSA) [27], Gradient-based Optimizer (GBO) [28], Ebola Optimization Search Algorithm (EOSA) [29], Bird Mating Optimizer (BMO) [30], Flower Pollination Algorithm (FPA) [31], Lion Pride Optimizer (LPO) [32], Darts Game Optimizer (DGO) [33], Multi-level Cross Entropy Optimizer (MCEO) [34], Crystal Structure Algorithm (CSA) [35], Stochastic Paint Optimizer (SPO) [36], Golden Eagle Optimizer (GEO) [37], Avian Navigation Optimizer (ANO) [38], Crow Search Algorithm (CSA) [39], Grey Wolf Optimizer (GWO) [40], Fitness Dependent Optimizer (FDO) [41], Artificial Hummingbird Algorithm (AHA) [42], Dice Game Optimizer (DGO) [43], Political Optimizer (PO) [44], Flying Squirrel Optimizer (FSO) [45], Cat and Mouse Based Optimizer (CMBO) [46], Starling Murmuration Optimizer (SMO) [47], Orca Predation Algorithm (OPA) [48], and others [49,50].
Creating a collection of clusters from supplied data items is known as data clustering, one of the most typical data analysis and statistical approaches [51,52]; in other words, it is a matter of finding symmetric and asymmetric objects [53]. Classifiers, diagnostic imaging, time series, computer vision, data processing, market intelligence, pattern classification, image classification, and data mining are just a few of the clustering applications [54,55]. The clustering procedure aims to divide the provided objects into a predetermined number of clusters such that related members belong to the same group (maximization of intra-cluster similarity) [56,57,58], while dissimilar individuals belong to separate groups (minimization of inter-cluster similarity). Partitional clustering, the method employed in this research, aims to divide a large number of data items into a collection of non-overlapping clusters without using nested structures. The center of each cluster is its centroid, and each data object is initially assigned to the centroid that is closest to it [59,60]. Centroids are then adjusted based on the existing assignments and by tweaking a few parameters. Some examples of data clustering applications that use optimization methods [61,62] are described below.
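To make this partitional objective concrete, the following minimal Python sketch (our illustration, not code from the paper; the helper name sse_objective is ours) assigns each object to its nearest centroid and computes the sum-of-squared-errors value that a metaheuristic clustering method would minimize:

```python
import numpy as np

def sse_objective(centroids, data):
    """Sum of squared distances from each object to its nearest centroid.

    centroids: (K, D) array of cluster centers encoded in one candidate solution.
    data:      (N, D) array of data objects.
    """
    # Pairwise squared Euclidean distances between objects and centroids.
    d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)  # each object joins its closest centroid
    return d2[np.arange(len(data)), labels].sum(), labels

# Example: 6 two-dimensional objects, 2 candidate centroids.
data = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
                 [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
fitness, labels = sse_objective(centroids, data)
print(fitness, labels)
```

An optimizer such as the one proposed here would encode the K × D centroid coordinates as one candidate solution and use this value as its fitness function.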
A thorough overview of meta-heuristic techniques for clustering purposes is presented in the literature, as in [63], highlighting their methods in particular. Due to their adequate capacity to address machine learning challenges, particularly text clustering difficulties, artificial intelligence (AI) techniques are acknowledged as excellent swarm-based technologies. For example, a unique heuristic technique based on Moth-Flame Optimization (MFO) is proposed in [64] to handle data clustering difficulties. Various tests were undertaken on UC Irvine Machine Learning Repository benchmark datasets to verify the effectiveness of the suggested method. Over twelve datasets, the suggested method was compared to five state-of-the-art techniques; it outperformed the competition on ten datasets and was equivalent on the other two. An examination of experimental outcomes confirmed the efficacy of the recommended strategy. Moreover, a unique technique based on envelopes of data clustering optimization (EDCO) was presented in [65]. Regardless of whether or not the camera model was in the database, the new EDCO technique was able to recognize it. The results showed that the EDCO method effectively differentiated unidentified source photos from known image data. A query image classified as known was linked to its origin sensor. The proposed technique was able to efficiently discriminate between photos from past and present camera models, even in severe instances.
A novel data clustering technique based on the Whale Optimization Algorithm (WOA) was proposed in [66]. The effectiveness of the proposed approach was verified using 14 sample datasets from the UCI machine learning repository. Experimental data and numerous statistical tests validated the efficacy of the recommended technique. A simplex-method-based bacterial colony optimization, called SMBCO, which increases the exploring capacity of bacterial colony optimization (BCO), was described in [67]. The suggested SMBCO method was utilized to tackle the data clustering challenge, and machine learning datasets were used to examine the superiority of the proposed approach. The outcomes of the clustering technique were evaluated using the objective value and computing time. According to the findings of the trials, the SMBCO model achieved excellent accuracy and a better convergence rate compared to traditional methods. In [68], a beneficial approach called SIoMT was presented for regularly identifying, aggregating, evaluating, and maintaining essential data on possible patients. The SIoMT approach, in particular, is commonly utilized with dispersed nodes for data group analysis and management. The capacity and effectiveness of the suggested SIoMT technique were well-established compared to equivalent techniques after assessing different aspects through the solution of various IoMT scenarios.
According to the literature [69], the existing procedures can provide good outcomes in certain circumstances but not in others. As a result, there is a pressing need for a new strategy capable of dealing with a wide range of complicated issues. The "no free lunch" theorem inspired us to look for and develop a new approach to dealing with such complex challenges. This work provides a novel optimization approach for solving optimization issues. The suggested approach, known as GNDDMOA, is based on the use of the fundamental methods of Generalized Normal Distribution Optimization (GND) and the Dwarf Mongoose Optimization Algorithm (DMOA), followed by the Opposition-based Learning Mechanism (OBL). The proposed method follows a transition technique by defining a condition that determines which technique will be used. This design is recommended to prevent the problem of premature convergence while maintaining the diversity of potential solutions. The Opposition-based Learning (OBL) Mechanism is then activated in response to the transition-technique condition. This phase is used to look for a new search area in order to prevent becoming stuck in a local search region. To validate the efficiency of the suggested strategy, two sets of experiments are used: twenty-three benchmark functions and eight data clustering challenges. The suggested method's outcomes on the studied issues are compared to those of other well-known optimization approaches, including the Aquila Optimizer (AO), Ebola Optimization Search Algorithm (EOSA), Whale Optimization Algorithm (WOA), Sine Cosine Algorithm (SCA), Dragonfly Algorithm (DA), Grey Wolf Optimizer (GWO), Particle Swarm Optimizer (PSO), Reptile Search Algorithm (RSA), Arithmetic Optimization Algorithm (AOA), Generalized Normal Distribution Optimization (GND), and Dwarf Mongoose Optimization Algorithm (DMOA). The results showed that the suggested technique can identify new optimal solutions for both sets of tested issues, producing good results in terms of global search capability and convergence speed in all of the situations studied. The main contributions of this paper are given as follows.
  • A novel hybrid method is proposed to tackle the weaknesses of the original search methods, and is applied to solve various complicated optimization problems.
  • The proposed method is called GNDDMOA, which is based on using the original Generalized Normal Distribution Optimization (GND) and Dwarf Mongoose Optimization Algorithm (DMOA), followed by the Opposition-based Learning Mechanism (OBL).
  • The proposed GNDDMOA method was tested to solve twenty-three benchmark mathematical problems. Moreover, a set of eight data clustering problems was used to validate the performance of the GNDDMOA.
The remainder of this paper is organized as follows: The background and techniques of the algorithm are provided in Section 2. The suggested Generalized Normal Distribution Dwarf Mongoose Optimization Algorithm is demonstrated in Section 3. Section 4 contains the experimental details and analysis. The conclusion and future work direction are described in Section 5.

2. Background and Algorithms

2.1. Generalized Normal Distribution Optimization (GND)

The following is the architecture of the classic Generalized Normal Distribution Optimization (GND) [23].

2.1.1. Inspiration

GNDO is motivated by the normal distribution, a crucial rule for representing natural phenomena. A variable x that follows a normal distribution with location parameter (mean μ) and scale parameter (standard deviation δ) has the probability density function given in Equation (1):
$f(x) = \frac{1}{\sqrt{2\pi}\,\delta} \exp\left(-\frac{(x-\mu)^2}{2\delta^2}\right)$ (1)
Figure 1 shows the potential values for the utilized parameters (i.e., μ and δ) in Equation (1).

2.1.2. Local Search (Exploitation)

Based on the present placements of all solutions, local search contributes promising solutions in the nearby search space. Equation (2) gives the generalized normal distribution model used for exploitation.
$\nu_i^t = \mu_i + \delta_i \times \eta, \quad i = 1, 2, 3, \ldots, N$ (2)
where $\nu_i^t$ is the trial solution of the ith individual at the tth iteration, $\mu_i$ is the generalized mean position of the ith solution, $\delta_i$ is the generalized standard deviation, and $\eta$ is the penalty factor. The values of $\mu_i$, $\delta_i$, and $\eta$ can be determined as follows.
$\mu_i = \frac{1}{3}\left(x_i^t + x_{best}^t + M\right)$ (3)

$\delta_i = \sqrt{\frac{1}{3}\left[(x_i^t - \mu)^2 + (x_{best}^t - \mu)^2 + (M - \mu)^2\right]}$ (4)

$\eta = \begin{cases} \sqrt{-\log(\lambda_1)} \times \cos(2\pi\lambda_2), & \text{if } a \le b \\ \sqrt{-\log(\lambda_1)} \times \cos(2\pi\lambda_2 + \pi), & \text{otherwise} \end{cases}$ (5)
where a, b, $\lambda_1$, and $\lambda_2$ are random numbers, $x_{best}^t$ is the best solution obtained so far, and M is the average of the candidate solutions, determined using Equation (6).
$M = \frac{\sum_{i=1}^{N} x_i^t}{N}$ (6)
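Read together, Equations (2)–(6) amount to sampling each trial solution from a solution-specific normal model. The following minimal Python sketch illustrates this exploitation step under our reading of the equations (the square roots in Equations (4) and (5) follow the original GNDO formulation [23]; the function name and vectorization are our own):

```python
import numpy as np

def gnd_exploitation(X, best, rng):
    """One GND exploitation move per Equations (2)-(6); a sketch, not the authors' code.

    X: (N, D) current population; best: (D,) best solution found so far.
    """
    N, D = X.shape
    M = X.mean(axis=0)                                  # Eq. (6): population mean
    mu = (X + best + M) / 3.0                           # Eq. (3): generalized mean position
    delta = np.sqrt(((X - mu) ** 2 + (best - mu) ** 2 + (M - mu) ** 2) / 3.0)  # Eq. (4)
    lam1 = rng.random((N, D)) + 1e-12                   # keep log() finite
    lam2 = rng.random((N, D))
    a, b = rng.random((N, D)), rng.random((N, D))
    eta = np.where(a <= b,
                   np.sqrt(-np.log(lam1)) * np.cos(2 * np.pi * lam2),           # Eq. (5)
                   np.sqrt(-np.log(lam1)) * np.cos(2 * np.pi * lam2 + np.pi))
    return mu + delta * eta                             # Eq. (2): trial solutions

rng = np.random.default_rng(42)
X = rng.uniform(-10, 10, size=(6, 3))
print(gnd_exploitation(X, X[0], rng))
```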

2.1.3. Global Search (Exploration)

Global search is a technique for exploring the search space globally in order to find promising regions, as seen below.
$\nu_i^t = x_i^t + \beta \times (|\lambda_3| \times \nu_1) + (1 - \beta) \times (|\lambda_4| \times \nu_2)$ (7)
where $\lambda_3$ and $\lambda_4$ are random numbers drawn from the normal distribution, $\beta$ is a random value, and $\nu_1$ and $\nu_2$ are two trail vectors determined by Equations (8) and (9).
$\nu_1 = \begin{cases} x_i^t - x_{p1}^t, & \text{if } f(x_i^t) < f(x_{p1}^t) \\ x_{p1}^t - x_i^t, & \text{otherwise} \end{cases}$ (8)

$\nu_2 = \begin{cases} x_{p2}^t - x_{p3}^t, & \text{if } f(x_{p2}^t) < f(x_{p3}^t) \\ x_{p3}^t - x_{p2}^t, & \text{otherwise} \end{cases}$ (9)
where $p_1$, $p_2$, and $p_3$ are three random integers in [1, N] satisfying $p_1 \neq p_2 \neq p_3 \neq i$.

2.1.4. The Updating Mechanism of GND

The following equation depicts the GND's update process.
$x_i^{t+1} = \begin{cases} \nu_i^t, & \text{if } f(\nu_i^t) < f(x_i^t) \\ x_i^t, & \text{otherwise} \end{cases}$ (10)
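A corresponding Python sketch of the exploration step and the greedy selection of Equation (10), again our illustration rather than the authors' implementation:

```python
import numpy as np

def gnd_exploration(X, fitness, rng):
    """Exploration trial vectors per Equations (7)-(9); a sketch under our reading.

    X: (N, D) population; fitness: (N,) objective values (minimization).
    """
    N, D = X.shape
    V = np.empty_like(X)
    for i in range(N):
        # p1, p2, p3: mutually distinct random indices, all different from i.
        p1, p2, p3 = rng.choice([j for j in range(N) if j != i], size=3, replace=False)
        v1 = X[i] - X[p1] if fitness[i] < fitness[p1] else X[p1] - X[i]      # Eq. (8)
        v2 = X[p2] - X[p3] if fitness[p2] < fitness[p3] else X[p3] - X[p2]   # Eq. (9)
        lam3, lam4 = rng.standard_normal(D), rng.standard_normal(D)          # normal-distributed
        beta = rng.random()
        V[i] = X[i] + beta * np.abs(lam3) * v1 + (1.0 - beta) * np.abs(lam4) * v2  # Eq. (7)
    return V

def greedy_select(X, V, f):
    """Eq. (10): keep a trial vector only where it improves the objective."""
    better = f(V) < f(X)
    return np.where(better[:, None], V, X)
```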
The GND technique is described in Algorithm 1.
Algorithm 1: Pseudo-code of the GND.
Input: Set the conditions and solutions for the algorithm.
while (t < T) do
   Determine the best solutions by calculating the Fitness Function (FF).
   for (i = 1 to Solutions) do
      if α > 0.5 then
         Compute the average position by selecting the current best.
         Update the solutions' positions using Equations (2) and (10).
      else
         Update the solutions' positions using Equations (7)–(10).
      end if
   end for
   t = t + 1
end while
Output: Return the most effective solution (x).

2.2. Dwarf Mongoose Optimization Algorithm (DMOA)

The design of the original Dwarf Mongoose Optimization Algorithm (DMOA) is presented in [22]. The DMOA replicates the dwarf mongoose's compensatory behavioural adaptations, which are modeled as follows.

2.2.1. Alpha Group

The efficiency of each solution is calculated after the population has been initialized. Equation (11) calculates the likelihood value, and the alpha female is chosen based on this likelihood.
$\alpha = \frac{fit_i}{\sum_{i=1}^{n} fit_i}$ (11)
Here, n − bs is the number of mongooses in the alpha group, where bs is the number of babysitters and peep is the vocalization of the dominant female that keeps the family on track [22]. The solution-updating mechanism is given as follows.
$X_{i+1} = X_i + phi \times peep$ (12)
where phi is a uniformly distributed random number in [−1, 1]. After every iteration, the sleeping mound is evaluated as provided in Equation (13).
$sm_i = \frac{fit_{i+1} - fit_i}{\max\left\{|fit_{i+1}|, |fit_i|\right\}}$ (13)
Equation (14) gives the average value of the sleeping mounds discovered.
$\varphi = \frac{\sum_{i=1}^{n} sm_i}{n}$ (14)
Once the babysitting exchange criterion is fulfilled, the algorithm advances to the scouting stage, when the next food supply or resting mound is considered.
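As an illustration of this alpha-group phase, the following Python sketch implements one reading of Equations (11)–(14); the fitness values are assumed non-negative here so that Equation (11) defines a valid selection probability, and the function names are ours:

```python
import numpy as np

def dmoa_alpha_group(X, f, peep, rng):
    """Alpha-group update: a sketch of Equations (11)-(14), not the authors' code.

    X: (n, D) mongoose positions; f maps an (n, D) array to (n,) fitness values,
    assumed non-negative so Eq. (11) yields a probability vector.
    """
    n, D = X.shape
    fit = f(X)
    alpha_prob = fit / fit.sum()                 # Eq. (11): alpha-selection probability
    alpha = rng.choice(n, p=alpha_prob)          # alpha female picked by this probability
    phi = rng.uniform(-1.0, 1.0, size=(n, D))    # uniformly distributed in [-1, 1]
    X_new = X + phi * peep                       # Eq. (12): candidate food positions
    fit_new = f(X_new)
    # Eq. (13): sleeping-mound value (our reading of the max term in the denominator).
    sm = (fit_new - fit) / np.maximum(np.abs(fit_new), np.abs(fit))
    varphi = sm.mean()                           # Eq. (14): average sleeping-mound value
    return X_new, sm, varphi, alpha
```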

2.2.2. Scout Group

In the scout group phase, if the family forages far enough, it will come across a good sleeping mound. The scout mongoose is simulated by Equation (15).
$X_{i+1} = \begin{cases} X_i - CF \times phi \times rand \times [X_i - \vec{M}], & \text{if } \varphi_{i+1} > \varphi_i \\ X_i + CF \times phi \times rand \times [X_i - \vec{M}], & \text{otherwise} \end{cases}$ (15)
where rand is a random value in the range [0, 1], the CF value is calculated by Equation (16), and the M value is calculated by Equation (17).
$CF = \left(1 - \frac{iter}{Max_{iter}}\right)^{\left(2\frac{iter}{Max_{iter}}\right)}$ (16)
$\vec{M} = \sum_{i=1}^{n} \frac{X_i \times sm_i}{X_i}$ (17)
Babysitters are generally inferior group members who stay with the young and are rotated on a routine basis, enabling the alpha female (mother) to lead the rest of the group on daily hunting expeditions.
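A minimal Python sketch of the scout move in Equations (15)–(17) follows; since the printed form of Equation (17) is ambiguous, the movement vector M is simplified here to a sleeping-mound-weighted mean, which is our assumption rather than the paper's definition:

```python
import numpy as np

def dmoa_scout(X, sm, varphi_new, varphi_old, it, max_it, rng):
    """Scout-group move: a sketch of Equations (15)-(17), not the authors' code."""
    n, D = X.shape
    CF = (1.0 - it / max_it) ** (2.0 * it / max_it)   # Eq. (16): decreasing control factor
    # Movement vector, simplified here to a sleeping-mound-weighted mean (our reading).
    M = (X * sm[:, None]).sum(axis=0) / n
    phi = rng.uniform(-1.0, 1.0, size=(n, D))
    r = rng.random((n, D))
    step = CF * phi * r * (X - M)
    # Eq. (15): move away from M if the average sleeping mound improved, toward it otherwise.
    return X - step if varphi_new > varphi_old else X + step
```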
The DMOA technique is described in Algorithm 2.
Algorithm 2: Pseudo-code of the DMOA.
Input: Set the conditions and solutions for the algorithm.
Initialize the algorithm parameters and solutions.
while (iter < Max_iter) do
   for (i = 1 to Solutions) do
      Calculate the Fitness Function (FF) of the mongoose.
      Fix the time counter (C).
      Find the alpha value by using Equation (11).
      Produce a solution by using Equation (12).
      Assess the sleeping mound by using Equation (13).
      Determine the average of the sleeping mound by using Equation (14).
      Determine the movement vector by using Equation (17).
      Simulate the scout mongoose for the next solution by using Equation (15).
   end for
   iter = iter + 1
end while
Output: Return the best solution (x).

2.3. Opposition-Based Learning (OBL) Mechanism

This section introduces the Opposition-based Learning Algorithm (OBL). It is utilized to create a new opposing solution based on the previous one [70].
In OBL, the opposite of a real number X ∈ [LB, UB] is the opposite solution $X^{O}$, determined by Equation (18).
$X^{O} = UB + LB - X$ (18)
For a candidate solution X = ($X_1$, $X_2$, …, $X_D$) within the given range, with $X_j \in [LB_j, UB_j]$ for j ∈ 1, 2, …, D, the opposite point is computed element-wise by Equation (19).
$X_j^{O} = UB_j + LB_j - X_j, \quad j = 1, 2, \ldots, D$ (19)
The fitness function evaluates the two solutions ( X O and X) during the optimization process. The best solution is identified, and the other solution is disregarded.
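A minimal Python sketch of this select-the-fitter OBL step (function names are ours, and the sphere objective is only a stand-in):

```python
import numpy as np

def obl_refine(X, fit_fn, lb, ub):
    """Opposition-based learning per Equations (18)-(19): keep, per candidate,
    the better of the solution and its opposite."""
    X_opp = ub + lb - X                       # Eq. (19), applied element-wise per dimension
    keep_orig = fit_fn(X) <= fit_fn(X_opp)    # evaluate both; retain the fitter one
    return np.where(keep_orig[:, None], X, X_opp)

# Usage with a sphere objective on an (N, D) population in [lb, ub]:
rng = np.random.default_rng(0)
lb, ub = -10.0, 10.0
X = rng.uniform(lb, ub, size=(5, 3))
sphere = lambda P: (P ** 2).sum(axis=1)
X = obl_refine(X, sphere, lb, ub)
```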

3. The Proposed Method (GNDDMOA)

This section introduces the suggested GNDDMOA (Generalized Normal Distribution Dwarf Mongoose Optimization Algorithm). Three basic search processes are employed to update the candidate solutions in the suggested technique. As a result of this strategy, the method is more effective at locating new search areas and avoiding local-optimum issues, such as premature, rapid, and sluggish convergence. Generalized Normal Distribution Optimization (GND), the Dwarf Mongoose Optimization Algorithm (DMOA), and the Opposition-based Learning (OBL) strategy are the key procedures employed. The standard GND and DMOA are used to discover the best solution and improve overall performance.
The Generalized Normal Distribution Optimization search methods are used in the first initialization step, followed by the Dwarf Mongoose Optimization Algorithm (DMOA) in the second initialization step, and the Opposition-based Learning (OBL) technique in the third iteration process. The DMOA is applied in the second optimization period to support the GND by regulating the diversity of solutions and the balance of the search methods (exploration and exploitation). The Opposition-based Learning mechanism aids the GND in the third iteration process, avoids the local-optimum conundrum, and strengthens the suggested method's ability to uncover new search regions.
The primary techniques used in the proposed GNDDMOA method, which employs integrated search techniques, are depicted in Figure 2. The main proposed conditions are used to help manage the search process and avoid the main weaknesses of the original methods, such as becoming trapped in local optima and losing the balance between the optimization processes. The number of fitness evaluations matches the criteria of the original methods; therefore, one fitness evaluation is conducted per iteration. As a result, the suggested GNDDMOA performs one search per iteration using one of the employed techniques: GND, DMOA, or OBL. The suggested GNDDMOA is thus intended to address the core approaches' major flaws and inadequacies in order to identify plausible solutions to the presented optimization and data clustering challenges.
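The exact switching condition is defined in Figure 2 rather than in the text; the Python sketch below shows one plausible way to organize such a per-iteration transition, with the uniform three-way split and the step-function signatures being our assumptions:

```python
def gnddmoa_step(X, fit, t, max_t, rng, gnd_step, dmoa_step, obl_step):
    """One GNDDMOA iteration: exactly one of GND, DMOA, or OBL runs per iteration,
    so the fitness-evaluation budget matches the original methods.

    A sketch under our assumptions; the paper's transition condition is given in
    Figure 2, and the three operators are passed in here as callables.
    """
    r = rng.random()
    if r < 1.0 / 3.0:
        X = gnd_step(X, fit, rng)              # GND search operators
    elif r < 2.0 / 3.0:
        X = dmoa_step(X, fit, t, max_t, rng)   # DMOA search operators
    else:
        X = obl_step(X)                        # OBL jump to an opposite region
    return X
```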

Complexity of the Proposed GNDDMOA

The complexity of the proposed GNDDMOA depends on the complexity of the traditional GND, DMOA, and OBL methods. The complexities of the component methods are given as:

$O(GND) = O((N \times D + N \times D + N) \times Max_{iter})$ (20)

$O(DMOA) = O((N \times D + N \times D + N) \times Max_{iter})$ (21)

Therefore, the complexity of GNDDMOA is given as:

$O(GNDDMOA) = O(Max_{iter} \times (N \times D + N \times D + N \times D + ((NK \times D + NK \times D + NK) + (N - NK) \times D)))$ (22)

The best case of the proposed GNDDMOA is as follows:

$O(GNDDMOA) = O(Max_{iter} \times (N \log N + N \times D + NK \times D + NK))$ (23)

The worst case of the proposed GNDDMOA is as follows:

$O(GNDDMOA) = O(Max_{iter} \times (N^2 + N \times D + NK \times D + NK))$ (24)

where N is the number of solutions, D is the dimension of the problem, K is the number of clusters (NK denotes the corresponding clustering term), and $Max_{iter}$ is the maximum number of iterations.

4. Experiments and Results

This section presents the experiments that were conducted to test the performance of the proposed method and to compare it with other methods. The experiments are divided into two main parts: benchmark functions and data clustering problems.

4.1. Experiments 1: Benchmark Functions Problems

The findings of the functions that were tested, as well as their descriptions, are presented in this section. The obtained GNDDMOA findings were compared to those of well-known optimization methods, such as the Aquila Optimizer (AO) [25], Sine Cosine Algorithm (SCA) [71], Particle Swarm Optimizer (PSO) [72], Generalized Normal Distribution Optimization (GND) [23], Ebola Optimization Search Algorithm (EOSA) [29], Dragonfly Algorithm (DA) [73], Reptile Search Algorithm (RSA) [27], Whale Optimization Algorithm (WOA) [74], Grey Wolf Optimizer (GWO) [75], Arithmetic Optimization Algorithm (AOA) [24], and Dwarf Mongoose Optimization Algorithm (DMOA) [22]. The suggested method's performance was validated using the Friedman ranking test and the Wilcoxon ranking test. Using the Matlab program, Windows 10, and 16 GB RAM, all tests were performed 20 times [76] with the same number of iterations (1000).
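For readers reproducing the statistical comparison, both tests are available in SciPy; the following sketch uses toy numbers (not the paper's results) to show the calls:

```python
from scipy import stats

# Final fitness values of two algorithms over the same 20 independent runs (toy data).
gnddmoa = [0.011, 0.009, 0.013, 0.010, 0.012, 0.008, 0.011, 0.010,
           0.009, 0.012, 0.010, 0.011, 0.013, 0.009, 0.010, 0.012,
           0.011, 0.010, 0.009, 0.011]
baseline = [0.021, 0.018, 0.025, 0.019, 0.022, 0.020, 0.024, 0.019,
            0.023, 0.021, 0.020, 0.022, 0.025, 0.018, 0.021, 0.023,
            0.020, 0.022, 0.019, 0.021]

# Wilcoxon signed-rank test on paired runs of two algorithms.
stat, p = stats.wilcoxon(gnddmoa, baseline)
print("Wilcoxon p-value:", p)

# The Friedman test compares three or more algorithms over the same problems.
third = [x * 1.5 for x in baseline]
print("Friedman:", stats.friedmanchisquare(gnddmoa, baseline, third))
```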

4.1.1. Details of the Tested Benchmark Function Problems

Table 1 shows the parameter settings of the algorithms that were tested. Table 2 describes the benchmark functions used in the tests.

4.1.2. Test Function Problems

Figure 3 shows the qualitative results for the tested function problems (F1–F13). Each row has four key sub-figures: function topology, first-dimension trajectory, average fitness values, and convergence curves. In virtually all of the examined scenarios, it is evident that the recommended strategy provided the optimum result. The substantial changes in the position values along the trajectory of the selected dimension show that the optimization technique is quite efficient.
The population size is examined in Table 3 to determine the appropriate number of solutions to employ in the suggested technique. The best size was 50, since it received the highest ranking. As indicated in Table 4, the first 13 benchmark functions (F1–F13) were assessed using ten dimensions. In this table, the suggested GNDDMOA method yielded better results than the existing comparable methodologies; AO, EO, AOA, GWO, PSO, WOA, GND, SCA, SSA, ALO, and DA followed in that order. Almost all of the examined functions yielded promising results using the suggested technique. We examined the first 13 benchmark functions and compared them to previous approaches; the suggested GNDDMOA method yielded more accurate results. The suggested GNDDMOA approach produced excellent or outstanding results in virtually all of the high-dimensional functions examined. Table 5 shows the results of the second ten benchmark functions (F14–F23). The suggested GNDDMOA approach also outperformed the other comparable methods in this table; PSO, SSA, GWO, ALO, EO, AO, GND, DA, WOA, SCA, and AOA followed in that order. In practically every function examined, the recommended technique yielded the best results. Furthermore, the suggested strategy outperformed the SSA, DA, and EO methods in the Wilcoxon ranking test. The Wilcoxon ranking test revealed that the suggested technique outperformed SSA, DA, EO, and GND in the first benchmark case (F1). The final ranking is presented in Figure 4.
The convergence behavior of the comparison approaches is shown in Figure 5 to depict the performance curves clearly. Specifically, the suggested GNDDMOA approach smoothly accelerated toward the best solutions. It clearly found the best solution in all of the challenges it was tested on (F1–F23). Furthermore, most test scenarios indicated that the proposed GNDDMOA avoided the primary flaws previously identified, such as premature convergence. In addition, as in the previous four test instances, the convergence stability was clearly visible. Based on the acquired data, we determined that the suggested approach functioned very well and produced outcomes highly comparable to those of traditional techniques and other well-established approaches.

4.2. Experiments 2: Data Clustering Problems

A second phase of experiments, described in this section, was carried out to tackle eight data clustering problems. Table 6 contains descriptions of the data clustering challenges that were evaluated. The suggested GNDDMOA's findings were compared to those of well-known optimization methods, such as the Aquila Optimizer (AO) [25], Particle Swarm Optimizer (PSO) [72], Artificial Gorilla Troops Optimizer (AGTO) [77], Ebola Optimization Search Algorithm (EOSA) [29], Reptile Search Algorithm (RSA) [27], Generalized Normal Distribution Optimization (GND) [23], and Dwarf Mongoose Optimization Algorithm (DMOA) [22]. The suggested method's performance was validated using the Friedman ranking test and the Wilcoxon ranking test. Using Matlab software, Windows 10, and 16 GB RAM, all tests were performed 20 times with the same number of iterations (1000).

Results and Discussion

The results of the proposed GNDDMOA on data clustering issues are reported in this section. The results of the methods compared employing eight data clustering tasks are shown in Table 7. In solving real-world data clustering challenges, the suggested technique showed promising results, yielding the best outcomes in all of the scenarios that were examined. The suggested GNDDMOA was ranked first in the Friedman ranking test, followed by PSO, GWO, AO, AOA, AGTO, GND, WOA, and AVOA. Furthermore, the Wilcoxon ranking test revealed that the suggested technique outperformed AO, PSO, GWO, AVOA, WOA, and GND in the first dataset (Cancer). Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14 and Table 15 demonstrate the best centroid values achieved using the suggested approach.
The convergence behavior of the comparison algorithms employing the investigated data clustering issues is depicted in Figure 6. Specifically, the suggested GNDDMOA approach smoothly accelerated toward the best solutions and clearly achieved the best solution in all of the tested problems. In addition, the majority of the test scenarios showed that the proposed GNDDMOA avoided prior fundamental flaws, such as premature convergence. Convergence stability was also observed, just as it was in the initial test scenarios. As a consequence of the obtained findings, we determined that the proposed approach performed admirably and generated outcomes comparable to the original techniques and other well-established methods. The clustering plots produced by the proposed GNDDMOA are shown in Figure 7, where each dataset was examined using a different number of clusters (i.e., K = 2, 4, and 8).
We chose this original method in this study as it has demonstrated its search ability in solving many challenging optimization problems, and it is one of the most recently proposed methods that has not yet been investigated in this domain. The main motivation behind using a new operator in the proposed method was to avoid the observed weaknesses of the original method and to make it more efficient during the optimization process.
The suggested GNDDMOA approach has a strong capacity to discover an appropriate solution to different optimization issues and data clustering, as evidenced by the previous findings and discussion. When the performance of GNDDMOA was compared to that of the classic DMOA approach, it was clear that GND and OBL had a significant impact on the capacity to balance exploration and exploitation, as seen in the excellent quality of the final solution. However, because the OBL step increases the processing time, the developed approach still needs considerable refinement, particularly in computation time.

5. Conclusions and Potential Future Work

Recent growth in data volumes and in the complexity of vast and complicated problems has necessitated advanced and intelligent technologies to address these issues. These approaches are usually modified procedures that enable them to cope with complex issues. Data clustering is one of the most frequent applications in the data mining industry. It is used to split a large number of data items into numerous clusters, each with several instances. The clustering method's fundamental goal is to discover coherent clusters, with each group containing related items.
This research offers a fresh and inventive way of solving a collection of issues that require sophisticated methods to solve, based on a set of operators from several intelligent optimization algorithms. Three primary components are employed in the suggested technique (GNDDMOA), based on a unique transition mechanism that organizes the executions of the used methods throughout the optimization process and addresses the significant flaws of the original methods. These three strategies are the Dwarf Mongoose Optimization Algorithm (DMOA), Generalized Normal Distribution Optimization (GND), and the Opposition-based Learning Strategy (OBL). The suggested transition method is utilized to coordinate the primary components that have been used. The suggested strategy is intended to solve the issues of premature convergence and unbalanced search strategies. The suggested method's performance was validated using twenty-three benchmark functions and eight data clustering challenges. The proposed method's results were compared to several other well-known methods. The suggested GNDDMOA approach produced the best results in benchmark functions and data clustering challenges in all of the evaluated scenarios, and it produced good results in comparison to the previous comparative methodologies.
The proposed method can solve other complex optimization problems in the future, such as condition monitoring, classification tasks, parameter selection, extraction of features, design issues, text grouping problems, packet headers, repairs and rehabilitation planning, and extensive medical data scheduling. In addition, a thorough examination of the suggested approach may be carried out to determine the primary reasons for the failure to identify the best solution in all circumstances.

Author Contributions

F.A.: Conceptualization, supervision, methodology, formal analysis, resources, data curation, writing–original draft preparation; L.A.: conceptualization, supervision, writing–review and editing, project administration; K.H.A.: conceptualization, supervision, methodology, formal analysis, resources, data curation, writing–original draft preparation. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4210128DSR01).

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

Data is available from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fakhouri, H.N.; Hudaib, A.; Sleit, A. Multivector particle swarm optimization algorithm. Soft Comput. 2020, 24, 11695–11713.
  2. Jouhari, H.; Lei, D.; Al-qaness, M.A.; Elaziz, M.A.; Damaševičius, R.; Korytkowski, M.; Ewees, A.A. Modified Harris Hawks optimizer for solving machine scheduling problems. Symmetry 2020, 12, 1460.
  3. Abualigah, L.M.Q. Feature Selection and Enhanced Krill Herd Algorithm for Text Document Clustering; Springer: Berlin/Heidelberg, Germany, 2019.
  4. Abualigah, L.; Diabat, A.; Elaziz, M.A. Improved slime mould algorithm by opposition-based learning and Levy flight distribution for global optimization and advances in real-world engineering problems. J. Ambient. Intell. Humaniz. Comput. 2021, 1–40.
  5. Abu Khurma, R.; Aljarah, I.; Sharieh, A.; Abd Elaziz, M.; Damaševičius, R.; Krilavičius, T. A review of the modification strategies of the nature inspired algorithms for feature selection problem. Mathematics 2022, 10, 464.
  6. Hassan, M.H.; Kamel, S.; Abualigah, L.; Eid, A. Development and application of slime mould algorithm for optimal economic emission dispatch. Expert Syst. Appl. 2021, 182, 115205.
  7. Wang, S.; Liu, Q.; Liu, Y.; Jia, H.; Abualigah, L.; Zheng, R.; Wu, D. A Hybrid SSA and SMA with Mutation Opposition-Based Learning for Constrained Engineering Problems. Comput. Intell. Neurosci. 2021, 2021, 6379469.
  8. Attiya, I.; Abd Elaziz, M.; Abualigah, L.; Nguyen, T.N.; Abd El-Latif, A.A. An Improved Hybrid Swarm Intelligence for Scheduling IoT Application Tasks in the Cloud. IEEE Trans. Ind. Inform. 2022.
  9. Wu, D.; Jia, H.; Abualigah, L.; Xing, Z.; Zheng, R.; Wang, H.; Altalhi, M. Enhance Teaching-Learning-Based Optimization for Tsallis-Entropy-Based Feature Selection Classification Approach. Processes 2022, 10, 360.
  10. Damaševičius, R.; Maskeliūnas, R. Agent State Flipping Based Hybridization of Heuristic Optimization Algorithms: A Case of Bat Algorithm and Krill Herd Hybrid Algorithm. Algorithms 2021, 14, 358.
  11. Kharrich, M.; Abualigah, L.; Kamel, S.; AbdEl-Sattar, H.; Tostado-Véliz, M. An Improved Arithmetic Optimization Algorithm for design of a microgrid with energy storage system: Case study of El Kharga Oasis, Egypt. J. Energy Storage 2022, 51, 104343.
  12. Abualigah, L.; Almotairi, K.H.; Abd Elaziz, M.; Shehab, M.; Altalhi, M. Enhanced Flow Direction Arithmetic Optimization Algorithm for mathematical optimization problems with applications of data clustering. Eng. Anal. Bound. Elem. 2022, 138, 13–29.
  13. Ezugwu, A.E.; Ikotun, A.M.; Oyelade, O.O.; Abualigah, L.; Agushaka, J.O.; Eke, C.I.; Akinyelu, A.A. A comprehensive survey of clustering algorithms: State-of-the-art machine learning applications, taxonomy, challenges, and future research prospects. Eng. Appl. Artif. Intell. 2022, 110, 104743.
  14. Al-qaness, M.A.; Ewees, A.A.; Fan, H.; Abualigah, L.; Abd Elaziz, M. Boosted ANFIS model using augmented marine predator algorithm with mutation operators for wind power forecasting. Appl. Energy 2022, 314, 118851.
  15. Mahajan, S.; Abualigah, L.; Pandit, A.K.; Altalhi, M. Hybrid Aquila optimizer with arithmetic optimization algorithm for global optimization tasks. Soft Comput. 2022, 26, 4863–4881.
  16. Hussein, A.M.; Rashid, N.A.; Abdulah, R. Parallelisation of maximal patterns finding algorithm in biological sequences. In Proceedings of the 2016 3rd International Conference on Computer and Information Sciences (ICCOINS), Kuala Lumpur, Malaysia, 15–17 August 2016; pp. 227–232.
  17. Abbassi, A.; Ben Mehrez, R.; Bensalem, Y.; Abbassi, R.; Kchaou, M.; Jemli, M.; Abualigah, L.; Altalhi, M. Improved Arithmetic Optimization Algorithm for Parameters Extraction of Photovoltaic Solar Cell Single-Diode Model. Arab. J. Sci. Eng. 2022, 1–17.
  18. De la Torre, R.; Corlu, C.G.; Faulin, J.; Onggo, B.S.; Juan, A.A. Simulation, optimization, and machine learning in sustainable transportation systems: Models and applications. Sustainability 2021, 13, 1551.
  19. Hussein, A.M.; Abdullah, R.; AbdulRashid, N.; Ali, A.N.B. Protein multiple sequence alignment by basic flower pollination algorithm. In Proceedings of the 2017 8th International Conference on Information Technology (ICIT), Amman, Jordan, 17–18 May 2017; pp. 833–838.
  20. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Ewees, A.A.; Abualigah, L.; Abd Elaziz, M. MTV-MFO: Multi-Trial Vector-Based Moth-Flame Optimization Algorithm. Symmetry 2021, 13, 2388.
  21. Fan, C.L. Evaluation of Classification for Project Features with Machine Learning Algorithms. Symmetry 2022, 14, 372.
  22. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Dwarf mongoose optimization algorithm. Comput. Methods Appl. Mech. Eng. 2022, 391, 114570.
  23. Zhang, Y.; Jin, Z.; Mirjalili, S. Generalized normal distribution optimization and its applications in parameter extraction of photovoltaic models. Energy Convers. Manag. 2020, 224, 113301.
  24. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
  25. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization Algorithm. Comput. Ind. Eng. 2021, 157, 107250.
  26. He, S.; Wu, Q.H.; Saunders, J. Group search optimizer: An optimization algorithm inspired by animal searching behavior. IEEE Trans. Evol. Comput. 2009, 13, 973–990.
  27. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158.
  28. Ahmadianfar, I.; Bozorg-Haddad, O.; Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inf. Sci. 2020, 540, 131–159.
  29. Oyelade, O.N.; Ezugwu, A.E.S.; Mohamed, T.I.; Abualigah, L. Ebola Optimization Search Algorithm: A New Nature-Inspired Metaheuristic Optimization Algorithm. IEEE Access 2022, 10, 16150–16177.
  30. Askarzadeh, A. Bird mating optimizer: An optimization algorithm inspired by bird mating strategies. Commun. Nonlinear Sci. Numer. Simul. 2014, 19, 1213–1228.
  31. Hussein, A.M.; Abdullah, R.; AbdulRashid, N. Flower Pollination Algorithm With Profile Technique For Multiple Sequence Alignment. In Proceedings of the 2019 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT), Amman, Jordan, 9–11 April 2019; pp. 571–576.
  32. Wang, B.; Jin, X.; Cheng, B. Lion pride optimizer: An optimization algorithm inspired by lion pride behavior. Sci. China Inf. Sci. 2012, 55, 2369–2389.
  33. Dehghani, M.; Montazeri, Z.; Givi, H.; Guerrero, J.M.; Dhiman, G. Darts game optimizer: A new optimization technique based on darts game. Int. J. Intell. Eng. Syst. 2020, 13, 286–294.
  34. MiarNaeimi, F.; Azizyan, G.; Rashki, M. Multi-level cross entropy optimizer (MCEO): An evolutionary optimization algorithm for engineering problems. Eng. Comput. 2018, 34, 719–739.
  35. Khodadadi, N.; Azizi, M.; Talatahari, S.; Sareh, P. Multi-Objective Crystal Structure Algorithm (MOCryStAl): Introduction and Performance Evaluation. IEEE Access 2021, 9, 117795–117812.
  36. Kaveh, A.; Talatahari, S.; Khodadadi, N. Stochastic paint optimizer: Theory and application in civil engineering. Eng. Comput. 2020, 1–32.
  37. Pan, J.S.; Lv, J.X.; Yan, L.J.; Weng, S.W.; Chu, S.C.; Xue, J.K. Golden eagle optimizer with double learning strategies for 3D path planning of UAV in power inspection. Math. Comput. Simul. 2022, 193, 509–532.
  38. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. QANA: Quantum-based avian navigation optimizer algorithm. Eng. Appl. Artif. Intell. 2021, 104, 104314.
  39. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. CCSA: Conscious neighborhood-based crow search algorithm for solving global optimization problems. Appl. Soft Comput. 2019, 85, 105583.
  40. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917.
  41. Abdullah, J.M.; Ahmed, T. Fitness dependent optimizer: Inspired by the bee swarming reproductive process. IEEE Access 2019, 7, 43473–43486.
  42. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194.
  43. Dehghani, M.; Montazeri, Z.; Malik, O.P. DGO: Dice game optimizer. Gazi Univ. J. Sci. 2019, 32, 871–882.
  44. Askari, Q.; Younas, I.; Saeed, M. Political Optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl.-Based Syst. 2020, 195, 105709.
  45. Azizyan, G.; Miarnaeimi, F.; Rashki, M.; Shabakhty, N. Flying Squirrel Optimizer (FSO): A novel SI-based optimization algorithm for engineering problems. Iran. J. Optim. 2019, 11, 177–205.
  46. Dehghani, M.; Hubálovskỳ, Š.; Trojovskỳ, P. Cat and Mouse Based Optimizer: A New Nature-Inspired Optimization Algorithm. Sensors 2021, 21, 5214.
  47. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng. 2022, 392, 114616.
  48. Jiang, Y.; Wu, Q.; Zhu, S.; Zhang, L. Orca predation algorithm: A novel bio-inspired algorithm for global optimization problems. Expert Syst. Appl. 2021, 188, 116026.
  49. Sharma, B.; Hashmi, A.; Gupta, C.; Khalaf, O.I.; Abdulsahib, G.M.; Itani, M.M. Hybrid Sparrow Clustered (HSC) Algorithm for Top-N Recommendation System. Symmetry 2022, 14, 793.
  50. Alotaibi, Y. A New Meta-Heuristics Data Clustering Algorithm Based on Tabu Search and Adaptive Search Memory. Symmetry 2022, 14, 623.
  51. Ahmadi, R.; Ekbatanifard, G.; Bayat, P. A Modified Grey Wolf Optimizer Based Data Clustering Algorithm. Appl. Artif. Intell. 2021, 35, 63–79.
  52. Esmin, A.A.; Coelho, R.A.; Matwin, S. A review on particle swarm optimization algorithm and its variants to clustering high-dimensional data. Artif. Intell. Rev. 2015, 44, 23–45.
  53. Vats, S.; Sagar, B.B.; Singh, K.; Ahmadian, A.; Pansera, B.A. Performance evaluation of an independent time optimized infrastructure for big data analytics that maintains symmetry. Symmetry 2020, 12, 1274.
  54. Abualigah, L. Group search optimizer: A nature-inspired meta-heuristic optimization algorithm with its results, variants, and applications. Neural Comput. Appl. 2021, 33, 2949–2972.
  55. Abualigah, L.; Gandomi, A.H.; Elaziz, M.A.; Hussien, A.G.; Khasawneh, A.M.; Alshinwan, M.; Houssein, E.H. Nature-inspired optimization algorithms for text document clustering—A comprehensive analysis. Algorithms 2020, 13, 345.
  56. Jung, Y.; Park, H.; Du, D.Z.; Drake, B.L. A decision criterion for the optimal number of clusters in hierarchical clustering. J. Glob. Optim. 2003, 25, 91–111.
  57. Huang, D.; Wang, C.D.; Lai, J.H.; Kwoh, C.K. Toward multidiversified ensemble clustering of high-dimensional data: From subspaces to metrics and beyond. IEEE Trans. Cybern. 2021, 1–14.
  58. Huang, D.; Wang, C.D.; Peng, H.; Lai, J.; Kwoh, C.K. Enhanced ensemble clustering via fast propagation of cluster-wise similarities. IEEE Trans. Syst. Man Cybern. Syst. 2018, 51, 508–520.
  59. Steinbach, M.; Ertöz, L.; Kumar, V. The challenges of clustering high dimensional data. In New Directions in Statistical Physics; Springer: Berlin/Heidelberg, Germany, 2004; pp. 273–309.
  60. Steinley, D. K-means clustering: A half-century synthesis. Br. J. Math. Stat. Psychol. 2006, 59, 1–34.
  61. Guo, W.; Xu, P.; Dai, F.; Hou, Z. Harris hawks optimization algorithm based on elite fractional mutation for data clustering. Appl. Intell. 2022, 1–27.
  62. Almotairi, K.H.; Abualigah, L. Hybrid Reptile Search Algorithm and Remora Optimization Algorithm for Optimization Tasks and Data Clustering. Symmetry 2022, 14, 458.
  63. Abualigah, L.; Gandomi, A.H.; Elaziz, M.A.; Hamad, H.A.; Omari, M.; Alshinwan, M.; Khasawneh, A.M. Advances in meta-heuristic optimization algorithms in big data text clustering. Electronics 2021, 10, 101.
  64. Singh, T.; Saxena, N.; Khurana, M.; Singh, D.; Abdalla, M.; Alshazly, H. Data clustering using moth-flame optimization algorithm. Sensors 2021, 21, 4086.
  65. Wang, B.; Wang, Y.; Hou, J.; Li, Y.; Guo, Y. Open-Set source camera identification based on envelope of data clustering optimization (EDCO). Comput. Secur. 2022, 113, 102571.
  66. Singh, T. A novel data clustering approach based on whale optimization algorithm. Expert Syst. 2021, 38, e12657.
  67. Babu, S.S.; Jayasudha, K. A Simplex Method-Based Bacterial Colony Optimization for Data Clustering. In Innovative Data Communication Technologies and Application; Springer: Berlin/Heidelberg, Germany, 2022; pp. 987–995.
  68. Huang, S.; Kang, Z.; Xu, Z.; Liu, Q. Robust deep k-means: An effective and simple method for data clustering. Pattern Recognit. 2021, 117, 107996.
  69. Deeb, H.; Sarangi, A.; Mishra, D.; Sarangi, S.K. Improved Black Hole optimization algorithm for data clustering. J. King Saud Univ. Comput. Inf. Sci. 2020; in press.
  70. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06), Vienna, Austria, 28–30 November 2005; Volume 1, pp. 695–701.
  71. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
  72. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation Proceedings, IEEE World Congress on Computational Intelligence (Cat. No. 98TH8360), Anchorage, AK, USA, 4–9 May 1998; pp. 69–73.
  73. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073.
  74. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  75. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  76. Jäntschi, L. A test detecting the outliers for continuous distributions based on the cumulative distribution function of the data being tested. Symmetry 2019, 11, 835.
  77. Abdollahzadeh, B.; Soleimanian Gharehchopogh, F.; Mirjalili, S. Artificial gorilla troops optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. Int. J. Intell. Syst. 2021, 36, 5887–5958.
Figure 1. Distribution values of μ and δ.
Figure 2. The proposed GNDDMOA method.
Figure 3. Qualitative results for the tested 13 problems (F1–F13).
Figure 4. The ranking results of the tested methods overall for the tested functions.
Figure 5. Convergence behaviour of the comparative optimization algorithms on the test functions (F1–F23).
Figure 6. Convergence behaviour of the tested methods for the data clustering applications.
Figure 7. Clustering plot images; each color represents a cluster (a group of data objects), and each circle is a cluster centroid.
Table 1. Parameter values of the tested algorithms.

No. | Algorithm | Parameter | Value
1 | AO | α | 0.1
  |  | δ | 0.1
2 | SCA | α | 0.05
3 | PSO | Topology | Fully connected
  |  | Cognitive and social constants (C1, C2) | 2, 2
  |  | Inertia weight | Linear reduction from 0.9 to 0.1
  |  | Velocity limit | 10% of dimension range
4 | GND | β | 5
5 | EOSA | neighborhood | ≥0.5
6 | DA | w | 0.2–0.9
  |  | s, a, and c | 0.1
  |  | f and e | 1
7 | RSA | α | 0.1
  |  | β | 0.1
8 | DMOA | CF | $(1 - iter/Max_{iter})^{(2\,iter/Max_{iter})}$
9 | WOA | α | Decreased from 2 to 0
  |  | b | 2
10 | GWO | Convergence parameter (a) | Linear reduction from 2 to 0
11 | AOA | α | 5
  |  | μ | 0.5
Table 2. Details of the tested benchmark functions.

Function | Description | Dimensions | Range | $f_{min}$
F1 | $f(x)=\sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0
F2 | $f(x)=\sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30 | [−10, 10] | 0
F3 | $f(x)=\sum_{i=1}^{d} \left(\sum_{j=1}^{i} x_j\right)^2$ | 30 | [−100, 100] | 0
F4 | $f(x)=\max_i \{|x_i|, 1 \le i \le n\}$ | 30 | [−100, 100] | 0
F5 | $f(x)=\sum_{i=1}^{n-1} \left[100(x_{i+1}-x_i^2)^2 + (1-x_i)^2\right]$ | 30 | [−30, 30] | 0
F6 | $f(x)=\sum_{i=1}^{n} ([x_i+0.5])^2$ | 30 | [−100, 100] | 0
F7 | $f(x)=\sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0
F8 | $f(x)=\sum_{i=1}^{n} -x_i \sin(\sqrt{|x_i|})$ | 30 | [−500, 500] | −418.9829 × n
F9 | $f(x)=\sum_{i=1}^{n} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$ | 30 | [−5.12, 5.12] | 0
F10 | $f(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e$ | 10, 50, 100, 500 | [−32, 32] | 0
F11 | $f(x)=1+\frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)$ | 10, 50, 100, 500 | [−600, 600] | 0
F12 | $f(x)=\frac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right] + (y_n-1)^2\right\} + \sum_{i=1}^{n} u(x_i,10,100,4)$, where $y_i = 1+\frac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases} k(x_i-a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k(-x_i-a)^m, & x_i < -a \end{cases}$ | 10, 50, 100, 500 | [−50, 50] | 0
F13 | $f(x)=0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n}(x_i-1)^2\left[1+\sin^2(3\pi x_i+1)\right] + (x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} u(x_i,5,100,4)$ | 10, 50, 100, 500 | [−50, 50] | 0
F14 | $f(x)=\left(\frac{1}{500} + \sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65, 65] | 1
F15 | $f(x)=\sum_{i=1}^{11}\left[a_i - \frac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.00030
F16 | $f(x)=4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ | 2 | [−5, 5] | −1.0316
F17 | $f(x)=\left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1-\frac{1}{8\pi}\right)\cos x_1 + 10$ | 2 | [−5, 5] | 0.398
F18 | $f(x)=\left[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\right] \times \left[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\right]$ | 2 | [−2, 2] | 3
F19 | $f(x)=-\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{3} a_{ij}(x_j-p_{ij})^2\right)$ | 3 | [−1, 2] | −3.86
F20 | $f(x)=-\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{6} a_{ij}(x_j-p_{ij})^2\right)$ | 6 | [0, 1] | −3.32
F21 | $f(x)=-\sum_{i=1}^{5} \left[(X-a_i)(X-a_i)^T + c_i\right]^{-1}$ | 4 | [0, 10] | −10.1532
F22 | $f(x)=-\sum_{i=1}^{7} \left[(X-a_i)(X-a_i)^T + c_i\right]^{-1}$ | 4 | [0, 10] | −10.4028
F23 | $f(x)=-\sum_{i=1}^{10} \left[(X-a_i)(X-a_i)^T + c_i\right]^{-1}$ | 4 | [0, 10] | −10.5363
Table 3. The effect of the number of solutions (N) on the performance of the proposed method.

Fun | Measure | N = 5 | N = 10 | N = 15 | N = 20 | N = 25 | N = 30 | N = 35 | N = 40 | N = 45 | N = 50
F1 | Best | 1.96642E-139 | 6.80122E-179 | 1.58485E-187 | 1.90720E-193 | 5.14130E-206 | 1.55611E-245 | 1.26695E-234 | 8.45631E-249 | 1.76900E-251 | 1.42708E-260
 | Average | 7.49146E-166 | 5.34097E-199 | 4.43195E-218 | 8.93255E-219 | 1.18897E-240 | 6.62152E-253 | 2.43931E-269 | 1.15685E-278 | 4.27095E-269 | 1.25948E-269
 | Worst | 3.93284E-139 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00
 | STD | 3.55918E-01 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00
 | p-value | 0.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | NaN
 | h | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | NaN
 | Rank | 10 | 9 | 8 | 7 | 6 | 5 | 3 | 1 | 4 | 2
F2 | Best | 9.93668E-67 | 1.58269E-82 | 9.01601E-101 | 1.01774E-95 | 4.35346E-115 | 3.54669E-126 | 2.19322E-116 | 1.16292E-119 | 1.05639E-131 | 7.56831E-138
 | Average | 1.56582E-84 | 1.70797E-104 | 8.69897E-109 | 2.03074E-126 | 6.10419E-126 | 8.29742E-135 | 5.51146E-130 | 3.51406E-140 | 7.39072E-140 | 1.42257E-151
 | Worst | 1.98734E-66 | 3.16538E-82 | 1.80320E-100 | 2.03547E-95 | 8.70557E-115 | 6.02213E-126 | 4.38642E-116 | 2.32272E-119 | 1.86924E-131 | 8.70103E-138
 | STD | 3.55918E-01 | 3.55918E-01 | 3.55918E-01 | 3.55918E-01 | 3.55849E-01 | 2.83440E-01 | 3.55915E-01 | 3.55316E-01 | 3.01505E-01 | 1.00000E+00
 | p-value | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | NaN
 | h | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | NaN
 | Rank | 10 | 9 | 8 | 6 | 7 | 4 | 5 | 2 | 3 | 1
F3 | Best | 4.58629E-143 | 4.55285E-175 | 4.94937E-200 | 2.94241E-196 | 1.79116E-221 | 1.05763E-215 | 3.41754E-208 | 1.27704E-227 | 7.93135E-226 | 6.09709E-249
 | Average | 1.14659E-143 | 1.13822E-175 | 1.37181E-200 | 7.36393E-197 | 4.47790E-222 | 2.64407E-216 | 8.54386E-209 | 3.19261E-228 | 1.98284E-226 | 1.52427E-249
 | Worst | 3.86318E-164 | 2.04703E-191 | 7.23484E-203 | 5.80950E-228 | 1.48708E-249 | 1.76410E-256 | 9.27567E-268 | 1.38798E-284 | 4.57557E-284 | 1.40038E-276
 | STD | 2.29314E-143 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00
 | p-value | 3.55910E-01 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | NaN
 | h | 0.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | 1.00000E+00 | NaN
 | Rank | 10 | 9 | 7 | 8 | 4 | 5 | 6 | 2 | 3 | 1
F4 | Best | 9.20923E-75 | 3.81391E-88 | 5.41710E-104 | 7.53771E-100 | 2.64420E-113 | 4.07453E-111 | 6.07146E-109 | 3.55423E-118 | 2.13585E-115 | 2.02201E-129
 | Average | 9.23796E-82 | 2.19343E-96 | 1.12306E-104 | 1.69127E-114 | 1.45026E-126 | 2.33274E-131 | 5.79408E-137 | 3.91305E-142 | 9.24209E-147 | 1.97404E-142
 | Worst | 1.06539E-74 | 7.62782E-88 | 5.76720E-104 | 1.50754E-99 | 5.28838E-113 | 8.14905E-111 | 1.21429E-108 | 7.10846E-118 | 4.27169E-115 | 4.04402E-129
 | STD | 1.34578E-01 | 3.55917E-01 | 1.09369E-01 | 3.55917E-01 | 3.55917E-01 | 3.55918E-01 | 3.55918E-01 | 3.55918E-01 | 3.55918E-01 | 1.00000E+00
 | p-value | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | NaN
 | h | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | NaN
 | Rank | 10 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 1 | 2
F5 | Best | 4.02979E-01 | 1.16681E+00 | 2.29192E-01 | 2.40147E-02 | 1.27552E-01 | 1.61407E-02 | 1.27744E-03 | 3.46947E-03 | 1.11423E-02 | 6.53477E-03
 | Average | 2.21845E-02 | 8.11654E-04 | 3.70412E-03 | 1.05013E-03 | 1.27427E-04 | 1.09878E-04 | 3.37243E-05 | 9.16706E-04 | 1.57951E-03 | 8.41616E-04
 | Worst | 3.50681E-01 | 9.83219E-01 | 3.12304E-01 | 4.00235E-02 | 1.76549E-01 | 2.09571E-02 | 2.39817E-03 | 2.07350E-03 | 1.33852E-02 | 5.10838E-03
 | STD | 6.44811E-02 | 5.62746E-02 | 2.03838E-01 | 4.19535E-01 | 2.19636E-01 | 4.07410E-01 | 1.11725E-01 | 3.08701E-01 | 5.43875E-01 | 1.00000E+00
 | p-value | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | NaN
 | h | 0.00000E+00 | 1.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | NaN
 | Rank | 10 | 4 | 9 | 7 | 3 | 2 | 1 | 6 | 8 | 5
F6 | Best | 9.34392E-03 | 3.04139E-03 | 8.42925E-05 | 6.85189E-05 | 2.70471E-03 | 1.91189E-04 | 1.48673E-04 | 3.10145E-04 | 5.28543E-06 | 8.18907E-05
 | Average | 9.33127E-04 | 9.08173E-05 | 1.85031E-05 | 2.94020E-06 | 1.85377E-05 | 6.76566E-05 | 4.06236E-05 | 5.34303E-06 | 1.11750E-07 | 3.89272E-06
 | Worst | 1.52772E-02 | 2.26981E-03 | 9.33315E-05 | 5.82931E-05 | 4.91653E-03 | 1.38281E-04 | 1.71999E-04 | 4.43030E-04 | 6.40626E-06 | 1.05383E-04
 | STD | 2.70891E-01 | 4.03955E-02 | 9.73885E-01 | 8.31629E-01 | 3.27138E-01 | 2.55365E-01 | 5.32473E-01 | 3.54824E-01 | 1.96931E-01 | 1.00000E+00
 | p-value | 0.00000E+00 | 1.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | NaN
 | h | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | NaN
 | Rank | 10 | 9 | 5 | 2 | 6 | 8 | 7 | 4 | 1 | 3
F7 | Best | 3.47280E-04 | 9.90233E-04 | 4.35350E-04 | 5.48256E-04 | 2.82678E-04 | 1.83194E-04 | 1.25068E-04 | 6.99226E-05 | 5.04918E-04 | 2.72650E-04
 | Average | 4.99788E-05 | 3.70056E-04 | 1.40243E-04 | 2.33139E-04 | 1.84139E-05 | 9.93907E-06 | 3.66172E-06 | 1.83871E-05 | 3.68446E-05 | 2.03631E-05
 | Worst | 2.83772E-04 | 5.32924E-04 | 2.76696E-04 | 3.65341E-04 | 3.98101E-04 | 2.90931E-04 | 1.88798E-04 | 5.08652E-05 | 5.03998E-04 | 3.01183E-04
 | STD | 7.30683E-01 | 5.74895E-02 | 4.56564E-01 | 2.88549E-01 | 9.69256E-01 | 6.84102E-01 | 4.38122E-01 | 2.32644E-01 | 4.58955E-01 | 1.00000E+00
 | p-value | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | NaN
 | h | 1.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | NaN
 | Rank | 7 | 10 | 8 | 9 | 4 | 2 | 1 | 3 | 6 | 5
F8 | Best | −3.33753E+03 | −4.17765E+03 | −4.17624E+03 | −4.18778E+03 | −4.18330E+03 | −4.18900E+03 | −4.18932E+03 | −4.18933E+03 | −4.18627E+03 | −4.18574E+03
 | Average | −4.18572E+03 | −4.18977E+03 | −4.18571E+03 | −4.18937E+03 | −4.18957E+03 | −4.18975E+03 | −4.18975E+03 | −4.18981E+03 | −4.18946E+03 | −4.18959E+03
 | Worst | 1.04094E+03 | 1.41633E+01 | 9.37572E+00 | 1.98355E+00 | 6.22921E+00 | 5.57735E-01 | 3.06828E-01 | 4.90764E-01 | 3.70456E+00 | 6.78176E+00
 | STD | 1.54297E-01 | 3.42727E-01 | 1.51816E-01 | 5.84733E-01 | 6.15420E-01 | 3.74269E-01 | 3.32571E-01 | 3.31794E-01 | 8.94952E-01 | 1.00000E+00
 | p-value | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | NaN
 | h | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | NaN
 | Rank | 9 | 2 | 10 | 8 | 6 | 4 | 3 | 1 | 7 | 5
F9 | Best | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00
 | Average | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00
 | Worst | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00
 | STD | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00 | 0.00000E+00
 | p-value | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN
 | h | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN
Rank1111111111
F10Best8.88178E-168.88178E-168.88178E-168.88178E-168.88178E-168.88178E-168.88178E-168.88178E-168.88178E-168.88178E-16
Average8.88178E-168.88178E-168.88178E-168.88178E-168.88178E-168.88178E-168.88178E-168.88178E-168.88178E-168.88178E-16
Worst8.88178E-168.88178E-168.88178E-168.88178E-168.88178E-168.88178E-168.88178E-168.88178E-168.88178E-168.88178E-16
STD0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00
p-valueNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN
hNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN
Rank1111111111
F11Best0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00
Average0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00
Worst0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00
STD0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00
p-valueNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN
hNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN
Rank1111111111
F12Best1.45873E-035.91449E-043.29651E-042.27011E-045.48360E-053.51156E-053.17201E-042.77007E-056.51837E-056.89155E-05
Average1.76914E-042.73523E-052.15711E-068.30106E-073.27842E-075.16702E-069.32996E-079.69316E-061.65680E-073.71279E-08
Worst1.70138E-031.07266E-033.69180E-043.06580E-041.03963E-042.84110E-054.39644E-043.16003E-057.06677E-051.26209E-04
STD1.54378E-013.70612E-012.29813E-013.77061E-018.68934E-016.20017E-013.19321E-015.49738E-019.60524E-011.00000E+00
p-value0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
h0.00000E+000.00000E+001.00000E+000.00000E+000.00000E+000.00000E+000.00000E+001.00000E+000.00000E+00NaN
Rank10964375821
F13Best5.53361E-032.39400E-032.11151E-043.13435E-046.90242E-042.63506E-044.24278E-041.77736E-041.50778E-041.82857E-05
Average3.27058E-056.49613E-087.24448E-053.18926E-058.11466E-051.57960E-052.06124E-054.22721E-061.28001E-051.58942E-06
Worst9.29554E-032.92219E-031.76388E-045.15306E-045.26253E-043.32359E-047.35246E-042.55784E-041.97214E-041.99712E-05
STD2.80214E-011.55087E-017.27651E-022.95945E-014.33793E-021.91184E-013.11904E-012.60246E-012.29740E-011.00000E+00
p-value0.00000E+000.00000E+000.00000E+000.00000E+001.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
h0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
Rank81971056342
Mean ranking | 7.4615 | 5.6923 | 6.2308 | 5.2308 | 4.4615 | 3.8462 | 3.3846 | 2.7692 | 3.2308 | 2.3077
Final ranking | 10 | 8 | 9 | 7 | 6 | 5 | 4 | 2 | 3 | 1
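The "Mean ranking" row averages each N setting's per-function rank over the 13 test functions, and "Final ranking" orders the settings by that mean. A minimal sketch of this aggregation (the two example rows are the F1 and F2 Rank rows from the table; tied means would need the same shared-rank treatment the paper's tables use):

```python
import numpy as np

# rank_table[f, j] = rank of the j-th solution-count setting on function f,
# taken from the "Rank" rows of Table 3 (here: F1 and F2).
rank_table = np.array([
    [10, 9, 8, 7, 6, 5, 3, 1, 4, 2],   # F1
    [10, 9, 8, 6, 7, 4, 5, 2, 3, 1],   # F2
])

mean_ranks = rank_table.mean(axis=0)           # the "Mean ranking" row
order = np.argsort(mean_ranks)                 # best (smallest mean) first
final = np.empty(len(order), dtype=int)
final[order] = np.arange(1, len(order) + 1)    # the "Final ranking" row
print(mean_ranks, final)
```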
Table 4. The results of the comparative algorithms using 13 problems, where the dimension was 10.
Fun | Measure | AOA | OSA | WOA | SCA | DA | GWO | PSO | RSA | AOA | GNDDMOA | GNDDMOA
F1Best3.66474E-876.28801E-053.21225E-283.76938E-055.84033E+027.12737E-195.02886E-041.83136E-041.76129E-232.30248E-029.15299E-927.65646E-100
Average1.06604E-1014.30307E-071.29228E-351.00926E-071.88201E+021.76120E-201.04061E-055.48359E-051.70961E-286.81896E-035.21550E-1503.74215E-113
Worst7.30950E-871.10433E-046.00539E-287.20066E-054.54283E+021.23466E-186.57368E-041.86657E-043.44694E-232.80925E-021.83060E-911.53129E-99
STD3.54699E-012.98206E-013.25850E-013.35452E-014.22649E-022.92178E-011.76890E-019.73873E-023.46231E-011.52280E-013.55918E-011.00000E+00
p-value0.00000E+000.00000E+000.00000E+000.00000E+001.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
h0.00000E+001.00000E+000.00000E+000.00000E+001.00000E+000.00000E+000.00000E+001.00000E+000.00000E+000.00000E+000.00000E+00NaN
Rank384712691051112
F2Best4.89724E-434.77454E+003.96956E-273.09663E-051.06677E+012.72535E-114.39431E-022.80992E+017.08887E-166.67952E-030.00000E+007.89116E-51
Average1.39561E-525.75179E-034.97945E-323.70599E-076.68627E+009.15704E-122.28442E-041.23367E+012.41130E-163.46259E-030.00000E+005.71912E-57
Worst8.05814E-436.37930E+007.93793E-275.36970E-054.47352E+002.40149E-114.47065E-021.14572E+016.61691E-163.79248E-030.00000E+001.57553E-50
STD2.69836E-011.85064E-013.55851E-012.92625E-013.09675E-036.36925E-029.69065E-022.69704E-037.58720E-021.24805E-023.55153E-011.00000E+00
p-value0.00000E+000.00000E+000.00000E+000.00000E+001.00000E+000.00000E+000.00000E+001.00000E+000.00000E+001.00000E+000.00000E+00NaN
h0.00000E+000.00000E+000.00000E+000.00000E+001.00000E+001.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
Rank310471168125912
F3Best4.79212E-923.32533E+025.42403E+033.57621E+002.46313E+033.81558E-071.68031E+003.18836E+031.14540E-102.93259E+023.04393E-623.00510E-95
Average1.83369E-1052.98617E+017.01273E+023.44413E-041.13059E+022.13525E-112.05461E-012.35602E+022.67403E-133.88981E+011.50114E-1167.28663E-111
Worst9.57369E-922.42220E+023.83278E+037.06544E+003.64272E+036.67044E-072.41695E+001.99860E+031.73836E-102.61200E+026.08786E-625.98440E-95
STD3.55706E-013.34816E-022.99445E-023.50457E-012.25005E-012.96192E-012.13779E-011.88221E-022.35652E-016.58496E-023.55918E-011.00000E+00
p-value0.00000E+001.00000E+001.00000E+000.00000E+000.00000E+000.00000E+000.00000E+001.00000E+000.00000E+000.00000E+000.00000E+00NaN
h0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
Rank381261057114912
F4Best1.82520E-458.56992E+002.10169E+014.76903E+009.85809E+005.86652E-062.46460E-011.52753E+011.62613E-082.77337E+011.44724E-342.32896E-48
Average7.02897E-631.71863E+004.88983E+001.30915E-036.65989E+007.75667E-071.46708E-016.37704E+005.71368E-102.33351E+011.00590E-557.30468E-58
Worst2.23750E-455.43767E+001.09404E+019.43397E+002.76842E+005.06159E-061.02946E-016.03220E+002.41392E-084.42340E+001.88990E-344.61544E-48
STD1.54356E-011.97631E-028.53843E-033.51021E-013.85582E-045.96078E-023.03736E-032.30027E-032.26535E-011.57363E-051.76515E-011.00000E+00
p-value0.00000E+001.00000E+001.00000E+000.00000E+001.00000E+000.00000E+001.00000E+001.00000E+000.00000E+001.00000E+000.00000E+00NaN
h0.00000E+001.00000E+000.00000E+000.00000E+001.00000E+000.00000E+000.00000E+000.00000E+000.00000E+001.00000E+000.00000E+00NaN
Rank189611571041232
F5Best7.63059E-021.41462E+028.67832E+008.42224E+005.76067E+048.17573E+009.82022E+005.01206E+027.69462E+002.00100E+018.07769E+001.11950E+00
Average1.11823E-048.50166E+008.19771E+007.49378E+001.33381E+037.18559E+002.42007E+009.14896E+007.14788E+008.64746E+007.96299E+001.97408E-02
Worst1.36045E-012.60882E+023.31564E-017.05617E-017.00093E+047.27739E-017.41683E+007.74006E+024.06853E-011.49972E+011.41420E-011.87549E+00
STD3.09681E-013.23327E-012.12504E-043.39807E-041.50937E-014.18569E-046.32655E-022.43822E-014.75339E-044.65449E-023.12976E-041.00000E+00
p-value0.00000E+000.00000E+001.00000E+001.00000E+000.00000E+001.00000E+000.00000E+000.00000E+001.00000E+001.00000E+001.00000E+00NaN
h0.00000E+001.00000E+001.00000E+001.00000E+000.00000E+001.00000E+001.00000E+000.00000E+001.00000E+000.00000E+001.00000E+00NaN
Rank198612531141072
F6Best7.16101E-046.35924E-058.53927E-017.62565E-019.86681E+026.28776E-016.81072E-041.25731E-045.04294E-041.40801E-022.92328E-014.68337E-02
Average3.68695E-061.84166E-083.54606E-013.81830E-012.11887E+022.50486E-011.50804E-062.29902E-058.51116E-055.92111E-031.71011E-011.65676E-04
Worst6.71783E-048.85427E-053.63647E-012.71609E-011.39224E+033.24074E-011.32683E-037.51142E-058.09430E-041.15208E-021.30737E-013.69398E-02
STD4.67495E-024.45456E-024.48949E-031.97194E-032.06160E-011.18082E-024.67053E-024.47482E-024.60417E-021.41414E-011.11768E-021.00000E+00
p-value1.00000E+001.00000E+001.00000E+001.00000E+000.00000E+001.00000E+001.00000E+001.00000E+001.00000E+000.00000E+001.00000E+00NaN
h0.00000E+000.00000E+000.00000E+001.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+001.00000E+00NaN
Rank311011129245786
F7Best9.51783E-044.78235E-021.61513E-022.22028E-023.14808E-012.74473E-037.40612E-025.23991E-011.77363E-031.16011E-012.94161E-042.42579E-03
Average3.33690E-042.05149E-022.60530E-034.94647E-036.80748E-028.92277E-042.46482E-021.93049E-012.55539E-041.45704E-022.18225E-042.12379E-05
Worst7.86901E-042.37821E-022.54077E-022.30969E-024.08434E-012.13145E-035.89461E-023.78711E-011.06952E-039.68720E-026.23943E-053.09651E-03
STD3.91750E-019.11752E-033.24732E-011.40555E-011.76983E-018.70836E-015.13582E-023.31013E-027.04304E-015.75394E-022.17823E-011.00000E+00
p-value0.00000E+001.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+001.00000E+000.00000E+000.00000E+000.00000E+00NaN
h0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+001.00000E+000.00000E+001.00000E+000.00000E+00NaN
Rank496711510123821
F8Best−2.08000E+03−2.62154E+03−2.62206E+03−2.05789E+03−2.07965E+03−2.21242E+03−1.43345E+03−1.83189E+03−2.83761E+03−2.59767E+03−2.16265E+03−2.72281E+03
Average−2.71395E+03−2.92588E+03−3.42047E+03−2.56213E+03−2.57480E+03−2.86014E+03−1.67210E+03−1.90989E+03−3.17102E+03−3.06245E+03−2.47148E+03−4.18837E+03
Worst4.51721E+023.55991E+026.33178E+023.63563E+025.50683E+024.99574E+022.71453E+025.19997E+013.52471E+024.91686E+023.14692E+021.11926E+03
STD3.27785E-018.68764E-018.80624E-013.01609E-013.42211E-014.36837E-016.64359E-021.62880E-018.51325E-018.44559E-013.72473E-011.00000E+00
p-value0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
h0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
Rank752986121134101
F9Best0.00000E+004.15395E+011.52453E+011.53654E+007.54795E+011.57739E+002.01489E+013.63158E+013.50552E+003.12363E+010.00000E+000.00000E+00
Average0.00000E+001.49244E+010.00000E+001.76607E-035.37763E+013.53708E-101.73102E+012.88537E+010.00000E+002.15035E+010.00000E+000.00000E+00
Worst0.00000E+002.43899E+011.81732E+011.70572E+001.65598E+011.07558E+002.76307E+008.30448E+005.19150E+001.44516E+010.00000E+000.00000E+00
STD0.00000E+001.43842E-021.44400E-011.21677E-019.79329E-052.61809E-026.51992E-061.23667E-042.25570E-014.96698E-030.00000E+000.00000E+00
p-valueNaN1.00000E+000.00000E+000.00000E+001.00000E+001.00000E+001.00000E+001.00000E+00NaN1.00000E+00NaNNaN
hNaN1.00000E+000.00000E+000.00000E+001.00000E+000.00000E+001.00000E+001.00000E+00NaN1.00000E+00NaNNaN
Rank181712691111011
F10Best3.33702E-107.17754E+007.10543E-153.13535E+001.35367E+013.57782E-103.33791E-011.36900E+012.26485E-137.53102E+008.88178E-168.88178E-16
Average8.88178E-162.01332E+004.44089E-151.13285E-064.19367E+009.52278E-111.62729E-021.16611E+011.86517E-144.88657E+008.88178E-168.88178E-16
Worst6.67402E-108.56325E+001.77636E-156.26793E+006.39114E+002.69934E-105.50823E-011.69705E+002.09269E-131.93049E+000.00000E+000.00000E+00
STD3.55918E-011.44683E-014.23483E-043.55721E-015.46334E-033.79837E-022.71080E-013.60470E-067.44812E-022.33727E-04
p-valueNaN0.00000E+001.00000E+000.00000E+001.00000E+001.00000E+000.00000E+001.00000E+000.00000E+001.00000E+00NaNNaN
hNaN0.00000E+001.00000E+000.00000E+001.00000E+001.00000E+001.00000E+000.00000E+001.00000E+000.00000E+00NaNNaN
Rank194710681251111
F11Best0.00000E+003.40834E-010.00000E+001.09996E-016.62537E+001.43005E-028.46895E+002.48031E-015.17670E-023.84902E+001.34036E-070.00000E+00
Average0.00000E+001.78351E-010.00000E+003.87472E-062.21392E+003.33067E-169.88754E-011.67731E-010.00000E+001.22778E+000.00000E+000.00000E+00
Worst0.00000E+001.87764E-010.00000E+001.17901E-017.17102E+001.69114E-029.56379E+006.69205E-029.71072E-022.29793E+002.67325E-070.00000E+00
STD 1.09591E-02 1.11310E-011.14135E-011.41747E-011.26940E-013.09843E-043.27352E-011.54197E-023.54673E-010.00000E+00
p-valueNaN1.00000E+00NaN0.00000E+000.00000E+000.00000E+000.00000E+001.00000E+000.00000E+001.00000E+000.00000E+00NaN
hNaN1.00000E+00NaN0.00000E+001.00000E+000.00000E+001.00000E+001.00000E+000.00000E+001.00000E+000.00000E+00NaN
Rank191712610811111
F12Best9.41720E-052.81765E+004.84000E-026.85336E-011.20032E+039.38656E-022.60954E-052.30800E+018.87834E-032.70998E+002.34046E-013.12292E-03
Average6.62188E-063.45206E-013.08534E-021.63366E-013.93375E+004.15713E-021.71903E-076.24429E+003.10498E-054.75743E-012.14231E-011.30755E-04
Worst1.42258E-041.80917E+001.47653E-028.06978E-012.38552E+035.10740E-023.02491E-051.73728E+011.70865E-022.72191E+002.76060E-022.72873E-03
STD6.84908E-022.08119E-029.38965E-041.41834E-013.53098E-011.20966E-026.36979E-023.76918E-025.30617E-019.38540E-022.99628E-061.00000E+00
p-value0.00000E+001.00000E+001.00000E+000.00000E+000.00000E+001.00000E+000.00000E+001.00000E+000.00000E+000.00000E+001.00000E+00NaN
h0.00000E+000.00000E+000.00000E+001.00000E+001.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+001.00000E+00NaN
Rank295711611231084
F13Best7.35740E-044.82548E+005.01857E-017.17458E-014.13579E+023.23160E-012.76664E-031.30596E+011.05096E-015.09412E-018.78013E-017.30682E-03
Average1.15998E-042.40508E-024.41271E-015.02719E-015.63352E+002.00039E-013.34724E-062.83289E-017.32404E-021.33868E-016.94561E-011.66131E-03
Worst6.28719E-049.51159E+008.36796E-023.06934E-016.64956E+021.32406E-015.51633E-031.08041E+013.64024E-024.34549E-011.30520E-016.14166E-03
STD7.73469E-023.50103E-012.25094E-053.59097E-032.59926E-013.10761E-033.13519E-015.21330E-021.83371E-036.02113E-021.10396E-051.00000E+00
p-value0.00000E+000.00000E+001.00000E+001.00000E+000.00000E+001.00000E+000.00000E+000.00000E+001.00000E+000.00000E+001.00000E+00NaN
h0.00000E+000.00000E+001.00000E+001.00000E+001.00000E+000.00000E+000.00000E+000.00000E+001.00000E+000.00000E+001.00000E+00NaN
Rank249101271856113
Mean ranking | 2.4615 | 7.4615 | 5.7692 | 7.4615 | 11.0769 | 6.0000 | 6.6923 | 10.1538 | 3.6923 | 9.0769 | 4.2308 | 2.1538
Final ranking | 2 | 8 | 5 | 8 | 12 | 6 | 7 | 11 | 3 | 10 | 4 | 1
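Throughout Tables 3–5, h = 1 flags a statistically significant difference from the proposed GNDDMOA at the 5% level, and the GNDDMOA column itself is NaN because it is the reference sample. The specific test is not restated in this part of the paper; the sketch below assumes a two-sample Wilcoxon rank-sum test over the per-run results:

```python
import numpy as np
from scipy.stats import ranksums

def significance_vs_reference(results, reference, alpha=0.05):
    """results: {algorithm name: per-run fitness array}; reference: GNDDMOA runs.
    Returns {name: (p-value, h)} with h = 1 when p < alpha."""
    out = {}
    for name, runs in results.items():
        _, p = ranksums(runs, reference)
        out[name] = (p, int(p < alpha))
    return out

# Illustrative data only (hypothetical run results, not from the paper).
rng = np.random.default_rng(0)
demo = {"GWO": rng.normal(1e-3, 1e-4, 30), "PSO": rng.normal(5e-3, 1e-3, 30)}
print(significance_vs_reference(demo, rng.normal(1e-6, 1e-7, 30)))
```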
Table 5. The results of the comparative algorithms using 10 problems.
Fun | Measure | AOA | OSA | WOA | SCA | DA | GWO | PSO | RSA | AOA | GNDDMOA | GNDDMOA
F14Best5.89337E+004.41598E+004.67799E+001.99025E+004.92345E+007.57879E+008.31929E+007.09673E+001.74254E+007.83864E+001.26705E+012.72869E+00
Average1.99203E+009.98019E-019.98247E-019.98005E-019.98004E-011.99203E+001.99203E+002.98211E+009.98004E-011.99204E+001.26705E+019.98004E-01
Worst4.81756E+005.58314E+004.24065E+001.14529E+004.21431E+005.89329E+005.03635E+005.68102E+009.49989E-014.82113E+004.26698E-112.03826E+00
STD7.02530E-011.00000E+009.42851E-014.27305E-018.89391E-014.65472E-013.39176E-015.25941E-013.81557E-013.89227E-012.53832E-025.90786E-01
p-value0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+001.00000E+00NaN
h0.00000E+001.00000E+000.00000E+000.00000E+001.00000E+000.00000E+000.00000E+000.00000E+001.00000E+001.00000E+000.00000E+00NaN
Rank956438711210121
F15Best7.58525E-041.09175E-021.27939E-031.63972E-031.88988E-029.54879E-041.18218E-032.50819E-031.17326E-035.93291E-037.27857E-041.03588E-02
Average4.53794E-047.46758E-047.11094E-047.07216E-041.45206E-024.86480E-049.31592E-041.11374E-035.72401E-047.50631E-045.13638E-043.10950E-04
Worst2.80338E-041.16861E-028.58094E-046.44638E-042.92380E-035.89001E-042.27794E-042.20428E-036.39236E-048.67370E-033.88492E-041.15603E-02
STD2.79600E-011.46932E-018.49304E-013.43761E-012.18920E-056.33241E-019.79867E-012.88880E-011.00000E+003.15707E-012.78679E-011.63668E-01
p-value0.00000E+000.00000E+000.00000E+000.00000E+001.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
h1.00000E+000.00000E+001.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+001.00000E+001.00000E+000.00000E+00NaN
Rank287612310115941
F16Best−1.02869E+00−1.03163E+00−1.02301E+00−1.03152E+00−1.03141E+00−1.03163E+00−1.03163E+00−1.03163E+00−1.02633E+00−8.27557E-01−1.03163E+00−1.03163E+00
Average−1.03126E+00−1.03163E+00−1.03163E+00−1.03162E+00−1.03163E+00−1.03163E+00−1.03163E+00−1.03163E+00−1.02901E+00−1.03163E+00−1.03163E+00−1.03163E+00
Worst2.58129E-039.87371E-151.72263E-021.27032E-044.36497E-047.60008E-081.81299E-165.38791E-132.68892E-034.08062E-014.99984E-070.00000E+00
STD2.52010E-017.61031E-037.16879E-018.39819E-039.77015E-037.61129E-037.61031E-037.61031E-031.00000E+003.67578E-017.61257E-037.61031E-03
p-value0.00000E+001.00000E+000.00000E+001.00000E+001.00000E+001.00000E+001.00000E+001.00000E+001.00000E+000.00000E+001.00000E+00NaN
h0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
Rank113510871412962
F17Best3.99415E-013.97887E-014.05365E-014.05378E-013.98127E-013.97903E-013.97887E-013.97887E-014.13041E-013.97896E-015.29235E-013.97887E-01
Average3.98035E-013.97887E-013.99112E-013.98206E-013.97888E-013.97896E-013.97887E-013.97887E-013.98193E-013.97888E-014.12552E-013.97887E-01
Worst1.44640E-034.14318E-131.04595E-027.85258E-034.22528E-049.59064E-060.00000E+001.25929E-121.47913E-026.52238E-061.45004E-014.01007E-12
STD1.16385E-018.63612E-024.29229E-013.95404E-019.04356E-028.66080E-028.63612E-028.63612E-021.00000E+008.65045E-021.61967E-018.63612E-02
p-value0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
h0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
Rank831110571496121
F18Best3.44851E+003.00000E+009.87565E+003.00141E+003.00065E+003.00029E+003.00000E+003.00000E+003.00000E+003.26884E+007.74330E+003.00001E+00
Average3.00407E+003.00000E+003.00000E+003.00008E+003.00000E+003.00000E+003.00000E+003.00000E+003.00000E+003.00737E+003.00000E+003.00000E+00
Worst7.76055E-011.41966E-121.36240E+011.70135E-039.35550E-042.85816E-044.39626E-151.33214E-111.43374E-073.85323E-019.41076E+001.54716E-05
STD6.92757E-012.12354E-013.69743E-012.14456E-012.13328E-012.12790E-012.12354E-012.12354E-012.12354E-011.00000E+003.78733E-012.12373E-01
p-value0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
h0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
Rank114710981521263
F19Best−3.84133E+00−3.86275E+00−3.64827E+00−3.84578E+00−3.83618E+00−3.86155E+00−3.86278E+00−3.70471E+00−3.86278E+00−3.86269E+00−3.63291E+00−3.86278E+00
Average−3.85218E+00−3.86278E+00−3.85240E+00−3.85263E+00−3.85932E+00−3.86267E+00−3.86278E+00−3.85473E+00−3.86278E+00−3.86278E+00−3.84963E+00−3.86278E+00
Worst1.69068E-023.22054E-053.72940E-015.31633E-031.81742E-029.35423E-042.56395E-161.03596E-012.61450E-071.46155E-044.01587E-012.83846E-06
STD4.04851E-022.24773E-027.80405E-013.46468E-024.65339E-022.31623E-022.24614E-021.00000E+002.24616E-022.25150E-027.40971E-012.24627E-02
p-value1.00000E+001.00000E+000.00000E+001.00000E+001.00000E+001.00000E+001.00000E+001.00000E+001.00000E+001.00000E+000.00000E+00NaN
h0.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
Rank115109761834122
F20Best−2.97869E+00−3.14205E+00−2.58288E+00−1.98700E+00−2.87412E+00−3.19269E+00−3.26255E+00−3.32200E+00−2.68840E+00−3.19684E+00−3.00536E+00−3.28445E+00
Average−3.13489E+00−3.17332E+00−3.06553E+00−2.96751E+00−3.23843E+00−3.32197E+00−3.32200E+00−3.32200E+00−2.96404E+00−3.23734E+00−3.09826E+00−3.32194E+00
Worst1.82769E-014.09992E-024.67708E-011.11711E+005.72002E-019.27180E-026.86430E-022.72436E-102.80164E-013.66425E-028.16237E-027.48147E-02
STD1.33310E-011.84986E-027.12059E-012.68939E-015.81004E-011.41845E-027.27554E-034.00462E-031.00000E+001.13806E-027.28254E-026.27961E-03
p-value0.00000E+001.00000E+000.00000E+000.00000E+000.00000E+001.00000E+001.00000E+001.00000E+001.00000E+001.00000E+000.00000E+00NaN
h1.00000E+001.00000E+000.00000E+000.00000E+001.00000E+001.00000E+001.00000E+001.00000E+001.00000E+001.00000E+000.00000E+00NaN
Rank871011532112694
F21Best−9.62511E+00−7.02251E+00−4.89861E+00−4.60773E-01−6.47855E+00−5.72322E+00−7.28270E+00−5.13043E+00−5.73656E+00−5.12949E+00−3.15941E+00−4.53735E+00
Average−1.01371E+01−1.01532E+01−9.72110E+00−4.97294E-01−9.46101E+00−1.01529E+01−9.61493E+00−1.01532E+01−1.01532E+01−1.01532E+01−4.54377E+00−1.01532E+01
Worst6.58852E-013.74736E+003.63166E+007.29407E-023.46228E+003.16672E+002.62111E+003.53432E+003.14963E+003.53518E+001.45724E+003.74398E+00
STD1.33721E-019.13113E-013.28002E-012.00827E-037.23835E-014.76751E-011.00000E+003.65724E-014.79018E-013.65597E-013.32996E-022.74866E-01
p-value0.00000E+000.00000E+000.00000E+001.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+001.00000E+00NaN
h0.00000E+000.00000E+000.00000E+001.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+001.00000E+00NaN
Rank728121069345111
F22Best−1.03070E+01−6.12946E+00−6.16600E+00−2.35260E+00−4.42101E+00−1.03762E+01−6.82053E+00−5.15436E+00−4.91111E+00−4.67134E+00−1.75685E+00−8.49019E+00
Average−1.03779E+01−9.28930E+00−1.03753E+01−4.54666E+00−8.11003E+00−1.04000E+01−1.04029E+01−1.04029E+01−1.04027E+01−1.04029E+01−2.51081E+00−1.04029E+01
Worst1.06205E-012.10659E+002.81174E+001.73039E+002.81545E+003.63122E-024.15562E+003.52810E+003.68926E+003.82108E+005.22559E-013.82550E+00
STD7.44184E-031.00000E+009.84079E-013.23873E-023.68715E-016.87013E-037.76729E-016.51855E-015.87104E-015.28764E-016.88621E-033.21168E-01
p-value1.00000E+000.00000E+000.00000E+001.00000E+000.00000E+001.00000E+000.00000E+000.00000E+000.00000E+000.00000E+001.00000E+00NaN
h1.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+001.00000E+000.00000E+000.00000E+001.00000E+001.00000E+00NaN
Rank798111061354122
F23Best−1.05001E+01−2.77796E+00−7.31803E+00−2.31496E+00−3.00200E+00−6.47273E+00−5.25031E+00−4.80523E+00−5.48013E+00−7.34055E+00−3.60167E+00−4.72881E+00
Average−1.05362E+01−3.83543E+00−9.85627E+00−3.90787E+00−3.69664E+00−1.05279E+01−1.05238E+01−1.05364E+01−1.05353E+01−9.99912E+00−5.26814E+00−1.05363E+01
Worst2.72183E-027.04984E-012.74611E+001.81341E+004.65029E-014.67868E+003.54893E+003.87825E+003.54674E+002.63311E+001.65106E+003.97103E+00
STD5.33147E-021.54645E-029.90938E-011.99713E-021.75727E-027.57449E-013.80667E-013.20932E-014.31891E-011.00000E+005.28599E-023.14994E-01
p-value0.00000E+001.00000E+000.00000E+001.00000E+001.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+000.00000E+00NaN
h0.00000E+000.00000E+001.00000E+001.00000E+001.00000E+000.00000E+000.00000E+001.00000E+000.00000E+000.00000E+001.00000E+00NaN
Rank311810125614792
Mean ranking | 7.7000 | 5.7000 | 8.0000 | 9.3000 | 8.1000 | 5.9000 | 3.9000 | 5.1000 | 5.8000 | 7.2000 | 9.3000 | 1.9000
Final ranking | 8 | 4 | 9 | 11 | 10 | 6 | 2 | 3 | 5 | 7 | 11 | 1
Table 6. UCI benchmark datasets.
Number | Dataset | Features No. | Instances No. | Classes No.
1 | Cancer | 9 | 683 | 2
2 | CMC | 10 | 1473 | 3
3 | Glass | 9 | 214 | 7
4 | Iris | 4 | 150 | 3
5 | Seeds | 7 | 210 | 3
6 | Heart | 13 | 270 | 2
7 | Vowels | 6 | 871 | 3
8 | Wine | 13 | 178 | 3
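The centroids reported in Tables 8–15 fall almost entirely inside [0, 1], which is consistent with column-wise min–max scaling of each UCI dataset before clustering; that preprocessing step is an assumption here, since it is not restated alongside the tables. A sketch:

```python
import numpy as np

def minmax_scale(X):
    """Scale each feature column of X to [0, 1]; constant columns map to 0."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    rng = np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero
    return (X - lo) / rng
```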
Table 7. The results of the comparative algorithms using eight data clustering problems.
Dataset | Metric | AO | PSO | GWO | AVOA | ESOA | RSA | GNDDMOA | GNDDMOA
Cancer | Worst | 3.1592E+03 | 1.2729E+03 | 2.9779E+03 | 3.4385E+03 | 3.6180E+03 | 3.5209E+03 | 3.5209E+03 | 3.5209E+03 | 8.4984E+02
Average | 2.8610E+03 | 8.6892E+02 | 2.7501E+03 | 3.0790E+03 | 3.3785E+03 | 3.1189E+03 | 3.1189E+03 | 3.1189E+03 | 4.9033E+02
Best | 2.5863E+03 | 5.9737E+02 | 2.4492E+03 | 2.7293E+03 | 3.2551E+03 | 2.8825E+03 | 2.8825E+03 | 2.8825E+03 | 2.9166E+02
STD | 2.2974E+02 | 2.5014E+02 | 2.6545E+02 | 3.4016E+02 | 1.3956E+02 | 2.7208E+02 | 2.7208E+02 | 2.7208E+02 | 2.3947E+02
p-value | 1.4395E-01 | 8.1589E-07 | 6.1859E-02 | 8.4297E-01 | 9.4260E-02 | 1.0000E+00 | 2.1026E-07 | 1.0000E+00 | NaN
h | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | NaN
Rank | 4 | 2 | 3 | 5 | 9 | 6 | 6 | 6 | 1
CMC | Worst | 3.3117E+02 | 9.4691E+01 | 3.1026E+02 | 3.3478E+02 | 3.3533E+02 | 3.3468E+02 | 3.3468E+02 | 3.3468E+02 | 8.8935E+01
Average | 3.3004E+02 | 7.9613E+01 | 3.0519E+02 | 3.3406E+02 | 3.3477E+02 | 3.3372E+02 | 3.3372E+02 | 3.3372E+02 | 7.0075E+01
Best | 3.2885E+02 | 5.7831E+01 | 3.0154E+02 | 3.3311E+02 | 3.3442E+02 | 3.3258E+02 | 3.3258E+02 | 3.3258E+02 | 5.3553E+01
STD | 9.0149E-01 | 1.5182E+01 | 4.0251E+00 | 6.8252E-01 | 4.2332E-01 | 1.0085E+00 | 1.0085E+00 | 1.0085E+00 | 1.5177E+01
p-value | 2.9606E-04 | 2.9013E-10 | 3.1825E-07 | 5.5507E-01 | 6.3321E-02 | 1.0000E+00 | 2.1574E-10 | 1.0000E+00 | NaN
h | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | NaN
Rank | 4 | 2 | 3 | 8 | 9 | 5 | 5 | 5 | 1
Glass | Worst | 3.1476E+01 | 3.4991E+01 | 3.0785E+01 | 3.5166E+01 | 3.4278E+01 | 3.4991E+01 | 3.4714E+00 | 3.4991E+01 | 1.3134E+01
Average | 3.0739E+01 | 3.4729E+01 | 2.9574E+01 | 3.4582E+01 | 3.3934E+01 | 3.4729E+01 | 1.8534E+00 | 3.4729E+01 | 6.7350E+00
Best | 3.0234E+01 | 3.4187E+01 | 2.7230E+01 | 3.3382E+01 | 3.3376E+01 | 3.4187E+01 | 0.0000E+00 | 3.4187E+01 | 0.0000E+00
STD | 4.7670E-01 | 3.1958E-01 | 1.5139E+00 | 6.9570E-01 | 3.4400E-01 | 3.1958E-01 | 1.6312E+00 | 3.1958E-01 | 5.8029E+00
p-value | 2.9230E-07 | 4.8640E-06 | 7.2749E-05 | 6.8049E-01 | 5.3690E-03 | 1.0000E+00 | 7.5434E-11 | 1.0000E+00 | NaN
h | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | NaN
Rank | 4 | 7 | 3 | 6 | 5 | 7 | 1 | 7 | 2
Iris | Worst | 2.1066E+01 | 8.1073E+00 | 2.1623E+01 | 2.4819E+01 | 2.4426E+01 | 2.4516E+01 | 2.4516E+01 | 2.4516E+01 | 4.6062E+00
Average | 1.9627E+01 | 5.9202E+00 | 1.6355E+01 | 2.4311E+01 | 2.3540E+01 | 2.4028E+01 | 2.4028E+01 | 2.4028E+01 | 3.5079E+00
Best | 1.8000E+01 | 3.9083E+00 | 1.3000E+01 | 2.3598E+01 | 2.1822E+01 | 2.3571E+01 | 2.3571E+01 | 2.3571E+01 | 1.9558E+00
STD | 1.3276E+00 | 1.6153E+00 | 3.5090E+00 | 4.5074E-01 | 1.1007E+00 | 4.6837E-01 | 4.6837E-01 | 4.6837E-01 | 9.7623E-01
p-value | 1.1362E-04 | 9.4435E-09 | 1.2775E-03 | 3.5987E-01 | 3.8755E-01 | 1.0000E+00 | 1.0597E-10 | 1.0000E+00 | NaN
h | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | NaN
Rank | 4 | 2 | 3 | 9 | 5 | 6 | 6 | 6 | 1
Seeds | Worst | 4.0056E+01 | 1.9804E+01 | 3.7522E+01 | 5.0098E+01 | 5.0353E+01 | 5.0100E+01 | 1.2279E+01 | 5.0100E+01 | 5.0100E+01
Average | 3.8947E+01 | 1.5515E+01 | 3.6491E+01 | 4.9271E+01 | 4.9714E+01 | 4.9480E+01 | 8.2223E+00 | 4.9480E+01 | 4.9480E+01
Best | 3.7281E+01 | 1.1480E+01 | 3.5300E+01 | 4.7756E+01 | 4.8833E+01 | 4.7975E+01 | 3.5244E+00 | 4.7975E+01 | 4.7975E+01
STD | 1.2860E+00 | 3.2178E+00 | 7.9416E-01 | 8.9025E-01 | 7.5150E-01 | 8.5910E-01 | 3.3228E+00 | 8.5910E-01 | 8.5910E-01
p-value | 3.4263E-07 | 1.4497E-08 | 7.4106E-09 | 7.1572E-01 | 6.5872E-01 | 1.0000E+00 | 3.9494E-09 | 1.0000E+00 | NaN
h | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | NaN
Rank | 4 | 2 | 3 | 5 | 9 | 6 | 1 | 6 | 6
Statlog (Heart) | Worst | 1.3896E+03 | 4.8655E+02 | 1.0333E+03 | 1.6978E+03 | 1.6419E+03 | 1.6260E+03 | 1.6260E+03 | 1.6260E+03 | 1.8679E+02
Average | 1.3459E+03 | 3.0815E+02 | 8.8430E+02 | 1.5825E+03 | 1.5420E+03 | 1.5799E+03 | 1.5799E+03 | 1.5799E+03 | 6.6915E+01
Best | 1.2824E+03 | 0.0000E+00 | 7.6551E+02 | 1.3121E+03 | 1.3846E+03 | 1.5203E+03 | 1.5203E+03 | 1.5203E+03 | 0.0000E+00
STD | 4.3464E+01 | 1.8412E+02 | 1.1016E+02 | 1.6301E+02 | 1.0442E+02 | 3.7995E+01 | 3.7995E+01 | 3.7995E+01 | 7.2880E+01
p-value | 1.7572E-05 | 3.6100E-07 | 9.4855E-07 | 9.7338E-01 | 4.6736E-01 | 1.0000E+00 | 1.3358E-10 | 1.0000E+00 | NaN
h | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | NaN
Rank | 4 | 2 | 3 | 9 | 5 | 6 | 6 | 6 | 1
Vowels | Worst | 1.4185E+02 | 1.5349E+02 | 1.3929E+02 | 1.5329E+02 | 1.5277E+02 | 1.5349E+02 | 1.7669E+01 | 1.5349E+02 | 1.4131E+01
Average | 1.3992E+02 | 1.5320E+02 | 1.3404E+02 | 1.5290E+02 | 1.5202E+02 | 1.5320E+02 | 1.3030E+01 | 1.5320E+02 | 8.6215E+00
Best | 1.3910E+02 | 1.5294E+02 | 1.2862E+02 | 1.5199E+02 | 1.5092E+02 | 1.5294E+02 | 9.6267E+00 | 1.5294E+02 | 0.0000E+00
STD | 1.1351E+00 | 2.1804E-01 | 5.0427E+00 | 5.6209E-01 | 9.4799E-01 | 2.1804E-01 | 3.1373E+00 | 2.1804E-01 | 5.2679E+00
p-value | 5.6372E-09 | 5.5617E-12 | 2.8378E-05 | 3.0031E-01 | 2.6088E-02 | 1.0000E+00 | 1.1470E-13 | 1.0000E+00 | NaN
h | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | NaN
Rank | 4 | 7 | 3 | 6 | 5 | 7 | 2 | 7 | 1
Wine | Worst | 3.4409E+03 | 1.5310E+03 | 3.0087E+03 | 3.9512E+03 | 3.9760E+03 | 3.9109E+03 | 3.9109E+03 | 3.9109E+03 | 7.9668E+02
Average | 3.2933E+03 | 1.1993E+03 | 2.7732E+03 | 3.7821E+03 | 3.8685E+03 | 3.8368E+03 | 3.8368E+03 | 3.8368E+03 | 4.8149E+02
Best | 3.0865E+03 | 9.4545E+02 | 2.5519E+03 | 3.5494E+03 | 3.7232E+03 | 3.6569E+03 | 3.6569E+03 | 3.6569E+03 | 2.4170E+02
STD | 1.4475E+02 | 2.4770E+02 | 1.9708E+02 | 1.6486E+02 | 1.0963E+02 | 1.0308E+02 | 1.0308E+02 | 1.0308E+02 | 2.1264E+02
p-value | 1.3237E-04 | 1.9363E-08 | 5.1348E-06 | 5.4670E-01 | 6.4986E-01 | 1.0000E+00 | 1.0540E-09 | 1.0000E+00 | NaN
h | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | NaN
Rank | 4 | 2 | 3 | 5 | 9 | 6 | 6 | 6 | 1
Mean ranking | 4 | 3.25 | 3 | 6.625 | 7 | 6.125 | 4.125 | 6.125 | 1.75
Final ranking | 4 | 3 | 2 | 8 | 9 | 6 | 5 | 6 | 1
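The Best/Average/Worst entries in Table 7 are clustering fitness values. A common objective when a metaheuristic is used for clustering, assumed here for illustration, encodes all k centroids in one flat decision vector and scores it by the summed distance from each instance to its nearest centroid:

```python
import numpy as np

def clustering_fitness(solution, X, k):
    """Decode a flat candidate vector into k centroids and score it as the
    total Euclidean distance from each instance to its nearest centroid."""
    n, d = X.shape
    centroids = np.asarray(solution, dtype=float).reshape(k, d)
    # Pairwise distances between the n instances and the k centroids.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.min(axis=1).sum()
```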
Table 8. Determining centroid of each cluster for the Cancer dataset.
Centroids | Att.1 | Att.2 | Att.3 | Att.4 | Att.5 | Att.6 | Att.7 | Att.8 | Att.9
Centroid 1 | 0.085426 | 0.044615 | 0.149162 | 0.014198 | 0.003553 | 0.037851 | 0.014524 | 0.023029 | 0.980817
Centroid 2 | 0.230961 | 0.105383 | 0.372021 | 0.031241 | 0.007654 | 0.096151 | 0.047016 | 0.038886 | 0.869978
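As a usage example, the two Cancer centroids above can serve directly as a nearest-centroid assignment rule for new samples (min–max scaled, per the assumption noted earlier); a sketch:

```python
import numpy as np

# The two computed centroids for the Cancer dataset (Table 8).
CANCER_CENTROIDS = np.array([
    [0.085426, 0.044615, 0.149162, 0.014198, 0.003553, 0.037851, 0.014524, 0.023029, 0.980817],
    [0.230961, 0.105383, 0.372021, 0.031241, 0.007654, 0.096151, 0.047016, 0.038886, 0.869978],
])

def assign_cluster(sample, centroids=CANCER_CENTROIDS):
    """Return the index of the nearest centroid under Euclidean distance."""
    return int(np.linalg.norm(centroids - np.asarray(sample), axis=1).argmin())
```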
Table 9. Determining centroid of each cluster for the CMC dataset.
Centroids | Att.1 | Att.2 | Att.3 | Att.4 | Att.5 | Att.6 | Att.7 | Att.8 | Att.9
Centroid 1 | 0.978987 | 0.088763 | 0.098835 | 0.07187 | 0.020524 | 0.01994 | 0.05078 | 0.091246 | 0.001044
Centroid 2 | 0.971464 | 0.067776 | 0.083852 | 0.15808 | 0.025304 | 0.021937 | 0.063297 | 0.080191 | 0.002949
Centroid 3 | 0.957527 | 0.13305 | 0.147809 | 0.059495 | 0.035547 | 0.031265 | 0.090951 | 0.128138 | 0.000842
Table 10. Determining centroid of each cluster for the Glass dataset.
Centroids | Att.1 | Att.2 | Att.3 | Att.4 | Att.5 | Att.6 | Att.7 | Att.8 | Att.9
Centroid 1 | 0.02061 | 0.18012 | 0.00540 | 0.03709 | 0.97118 | 0.07973 | 0.09762 | 0.00113 | 0.00588
Centroid 2 | 0.01999 | 0.19245 | 0.00296 | 0.02762 | 0.97056 | 0.00167 | 0.11523 | 0.01248 | 0.00063
Centroid 3 | 0.02082 | 0.22249 | 0.06071 | 0.03169 | 0.97711 | 0.01072 | 0.18637 | 0.02831 | 0.00620
Centroid 4 | 0.02077 | 0.19472 | 0.04711 | 0.03284 | 0.96811 | 0.02312 | 0.07461 | 0.01759 | 0.00514
Centroid 5 | 0.02054 | 0.17633 | 0.04710 | 0.01750 | 0.97625 | 0.00742 | 0.11419 | 0.00053 | 0.00095
Centroid 6 | 0.02060 | 0.15255 | 0.00487 | 0.02105 | 0.96373 | 0.00808 | 0.19967 | 0.00951 | 0.00413
Centroid 7 | 0.01983 | 0.18076 | 0.00618 | 0.01793 | 0.97014 | 0.00281 | 0.14893 | 0.00189 | 0.00092
Table 11. Determining centroid of each cluster for the Iris dataset.
Centroids | Att.1 | Att.2 | Att.3 | Att.4
Centroid 1 | 0.78841 | 0.54897 | 0.22388 | 0.03680
Centroid 2 | 0.80499 | 0.51071 | 0.25129 | 0.03061
Centroid 3 | 0.69699 | 0.32196 | 0.54347 | 0.18430
Table 12. Determining centroid of each cluster for the Seeds dataset.
Centroids | Att.1 | Att.2 | Att.3 | Att.4 | Att.5 | Att.6 | Att.7
Centroid 1 | 0.69118 | 0.61585 | 0.03447 | 0.23478 | 0.14110 | 0.09708 | 0.22704
Centroid 2 | 0.65015 | 0.64070 | 0.03880 | 0.24694 | 0.14448 | 0.15041 | 0.23314
Centroid 3 | 0.58337 | 0.65283 | 0.04172 | 0.25781 | 0.13984 | 0.24282 | 0.25227
Table 13. Determining centroid of each cluster for the Statlog (Heart) dataset.
Centroids | Att.1 | Att.2 | Att.3 | Att.4 | Att.5 | Att.6 | Att.7 | Att.8 | Att.9
Centroid 1 | 0.15692 | 0.07354 | 0.13146 | 0.25893 | 0.10874 | 0.01314 | 0.25914 | 0.08107 | 0.03341
Centroid 2 | 0.11763 | 0.06039 | 0.11620 | 0.23832 | 0.07511 | 0.01084 | 0.24164 | 0.03614 | 0.02834
Centroid 3 | 0.14010 | 0.06291 | 0.12439 | 0.27216 | 0.09148 | 0.01024 | 0.25197 | 0.05032 | 0.03041
Centroid 4 | 0.13728 | 0.07274 | 0.11291 | 0.37206 | 0.14937 | 0.04883 | 0.23642 | 0.06839 | 0.03261
Centroids | Att.10 | Att.11 | Att.12 | Att.13 | Att.14 | Att.15 | Att.16 | Att.17 | Att.18
Centroid 1 | 0.252633 | 0.296725 | 0.561287 | 0.284696 | 0.131203 | 0.010264 | 0.015766 | 0.333708 | 0.346391
Centroid 2 | 0.185945 | 0.26099 | 0.759601 | 0.250116 | 0.083575 | 0.00749 | 0.011078 | 0.213546 | 0.221326
Centroid 3 | 0.201656 | 0.283447 | 0.679536 | 0.235795 | 0.097656 | 0.005132 | 0.021145 | 0.277728 | 0.284671
Centroid 4 | 0.213622 | 0.3786 | 0.509401 | 0.299177 | 0.193147 | 0.003979 | 0.023087 | 0.26779 | 0.278233
Table 14. Determining centroid of each cluster for the Vowel dataset.
Centroids | Att.1 | Att.2 | Att.3 | Att.4 | Att.5 | Att.6 | Att.7
Centroid 1 | 0.05760 | 0.78271 | 0.10012 | −0.42006 | 0.06997 | −0.01682 | 0.15132
Centroid 2 | 0.00997 | 0.09704 | 0.02495 | −0.42639 | 0.26321 | −0.05620 | 0.07325
Centroid 3 | 0.07393 | 0.52595 | 0.07043 | −0.37640 | 0.05374 | −0.43633 | 0.11511
Centroid 4 | 0.01377 | 0.05108 | 0.11038 | −0.76299 | 0.18300 | −0.19724 | 0.33100
Centroid 5 | 0.01556 | 0.58891 | 0.06349 | −0.35810 | 0.27445 | −0.04310 | 0.00170
Centroid 6 | 0.03339 | 0.31454 | 0.03313 | −0.33500 | 0.16069 | 0.01696 | 0.10010
Centroid 7 | 0.07147 | 0.84126 | 0.04323 | −0.22272 | 0.16595 | −0.05670 | 0.02785
Centroid 8 | 0.09569 | 0.263526 | 0.030861 | −0.52748 | 0.047957 | −0.25703 | −0.11063
Centroid 9 | 0.101527 | 0.25605 | 0.091137 | −0.73373 | 0.320599 | −0.10329 | 0.042428
Centroid 10 | 0.083055 | 0.406116 | 0.087015 | −0.49877 | 0.454723 | −0.29118 | 0.267341
Centroids | Att.8 | Att.9 | Att.10 | Att.11 | Att.12 | Att.13
Centroid 1 | 0.03971 | 0.06178 | −0.04524 | −0.04 | −0.08181 | 0.047308
Centroid 2 | −0.11329 | 0.08876 | −0.01936 | 0.0451 | 0.063302 | −0.02938
Centroid 3 | −0.12579 | 0.43386 | 0.073236 | 0.192898 | −0.09094 | 0.002386
Centroid 4 | −0.00738 | 0.40670 | −0.02562 | 0.143418 | −0.09196 | 0.004399
Centroid 5 | −0.04679 | 0.03692 | 0.03327 | 0.053071 | −0.03467 | −0.03581
Centroid 6 | −0.07281 | 0.06521 | −0.041 | 0.091246 | −0.00231 | 0.021923
Centroid 7 | −0.00908 | 0.04410 | 0.009262 | 0.01729 | −0.02707 | 0.000293
Centroid 8 | 0.084824 | 0.160909 | 0.105406 | 0.237928 | −0.12448 | −0.18747
Centroid 9 | 0.078575 | 0.146684 | 0.048265 | 0.177586 | 0.077659 | 0.012697
Centroid 10 | −0.20269 | 0.240748 | 0.044437 | 0.094915 | −0.01742 | −0.02234
Table 15. Determining centroid of each cluster for the Wine dataset.
Centroids | Att.1 | Att.2 | Att.3 | Att.4 | Att.5 | Att.6 | Att.7
Centroid 1 | 0.02811 | 0.00537 | 0.004874 | 0.04579 | 0.211186 | 0.005178 | 0.00431
Centroid 2 | 0.01952 | 0.00384 | 0.003405 | 0.030266 | 0.1486 | 0.003232 | 0.00222
Centroid 3 | 0.01255 | 0.00160 | 0.002157 | 0.01533 | 0.092816 | 0.002458 | 0.00248
Centroids | Att.8 | Att.9 | Att.10 | Att.11 | Att.12 | Att.13
Centroid 1 | 0.001154 | 0.003444 | 0.008 | 0.002287 | 0.005873 | 0.975659
Centroid 2 | 0.000867 | 0.002059 | 0.008028 | 0.001282 | 0.003416 | 0.988142
Centroid 3 | 0.000339 | 0.001794 | 0.004894 | 0.001025 | 0.002843 | 0.99559