Article

Centroidous Method for Determining Objective Weights

by
Irina Vinogradova-Zinkevič
Department of Information Technologies, Vilnius Gediminas Technical University, 10223 Vilnius, Lithuania
Mathematics 2024, 12(14), 2269; https://doi.org/10.3390/math12142269
Submission received: 16 June 2024 / Revised: 10 July 2024 / Accepted: 17 July 2024 / Published: 20 July 2024
(This article belongs to the Special Issue Mathematical Methods for Decision Making and Optimization)

Abstract
When using multi-criteria decision-making methods in applied problems, an important aspect is the determination of the criteria weights. These weights represent the degree of each criterion's importance within a certain group. The process of determining weight coefficients from a dataset is described as an objective weighting method. The dataset considered here contains quantitative data representing measurements of the alternatives being compared, according to a previously determined system of criteria. The purpose of this study is to suggest a new method for determining objective criteria weights and estimating the proximity of the studied criteria to the centres of their groups. It is assumed that the closer a criterion is to the centre of the group, the more accurately it describes the entire group. The accuracy of the description of the entire group's priorities is interpreted as importance: the higher the value, the more significant the weight of the criterion. The Centroidous method suggested here evaluates the importance of each criterion in relation to the centre of the entire group of criteria. The stability of the Centroidous method is examined in relation to the Euclidean, Manhattan, and Chebyshev distance measures by slightly modifying the data in the original normalised data matrix by 5% and 10%, 100 and 10,000 times. A comparative analysis of the proposed Centroidous method against the entropy, CRITIC, standard deviation, mean, and MEREC methods was performed. Three datasets were generated for the comparative study of the methods: alternatives whose mean values differ weakly, alternatives whose mean values differ strongly, and criteria with a linear dependence. Additionally, an actual mobile phone dataset was used for the comparison.

1. Introduction

Multi-criteria decision-making (MCDM) methods have been used for applied problems, alongside other statistical algorithms and machine learning techniques, as illustrated by the number of research publications on this topic in the Web of Science scientific database [1]. Over the period of January 2022 to March 2024, a total of 4132 research papers were published in scientific journals, including survey research articles describing decision-making methods. MCDM methods are preferable in the context of small datasets.
Depending on the nature of the data, subjective or objective methods of determining weight coefficients can be applied; however, regardless of the MCDM method chosen, the condition that the sum of the criteria weights is equal to one remains constant. It should be noted that a subjective method involves expert assessments, whereas an objective method analyses quantitative data. An objective approach analyses the properties of the alternatives under study, such as the measurements, values of technological parameters, duration of the process (time), and cost [2].
In some decision-making situations, extracting subjective preferences is either difficult or inappropriate [3]. According to Pala, when a decision-maker lacks the relevant experience and has no established point of view regarding the aspect of the problem to be solved, objective weights may be advisory rather than mandatory [4].
The range of methods that have been used to study the data structure in order to obtain objective criteria weights is wide and diverse. Objective weights are calculated using the criteria importance through intercriteria correlation (CRITIC) method, which captures both the contrast intensity of the alternatives' performance on a given criterion and the conflict arising from the contradictory nature of the criteria. The contrast intensity of the corresponding criterion is characterised by its standard deviation [3]. The modelling of the contradictory relationships between the criteria in the CRITIC method has been improved by Krishnan et al. using distance correlation in the distance-CRITIC (D-CRITIC) method [5]. The correlation coefficient and standard deviation (CCSD) method applies an integrated approach to the standard deviation and correlation coefficient [6]. The authors of the CRITIC-M method, Žižović et al., suggest changing the normalisation in CRITIC, arguing that this leads to lower standard deviation values and new insights into the data in the original decision matrix [7]. Statistical measures of standard deviation are applied in the simultaneous evaluation of criteria and alternatives (SECA) method to describe supporting points by studying the variation within and between criteria. This multipurpose nonlinear mathematical model aims to maximise the overall efficiency of each alternative while minimising the deviation of the criteria weights from the supporting points. The SECA method simultaneously calculates general estimates of the effectiveness of the alternatives and the criteria weights [8]. The robustness, correlation, and standard deviation (ROCOSD) method distributes weight values with the aim of minimising the total maximum deviation from the ratio of criteria, using calculated standard deviations and correlation coefficients [4]. The simple statistical variance (SV) method is also applied to determine objective criteria weights [9].
The indicator of entropy is often applied to measure the discrepancy in the estimates [4]. As a rule, the concept of uncertainty is considered a synonym for the term entropy [10]. The entropy method was introduced in 1947 by Shannon and Weaver and was later highlighted by Zeleny [11]. Entropy is an important concept in both the social and physical sciences. Podvezko et al. reported that the entropy method is one of the most popular objective methods in this group, as it represents the degree of heterogeneity of criteria values [12]. It has been found that the lower the entropy of a criterion, the more valuable the information it contains [11,13]. The integrated entropy-correlated criterion (EWM-CORR) approach allows for the redistribution of the weights obtained using the entropy method for correlating criteria [13].
The method based on the removal effects of criteria (MEREC) studies the impact of removing each criterion on the effectiveness of the alternatives, which is determined using a simple logarithmic indicator with equal weights [14]. The logarithmic percentage change-driven objective weighting (LOPCOW) method calculates the mean square value of each criterion as a percentage of its standard deviation. The values of the criteria are standardised using min–max normalisation [15]. The CILOS method takes into account the loss of each criterion’s effect when one of the other criteria receives its optimal (highest or lowest) value [16].
Odu has pointed out that neither the subjective nor the objective approach is ideal. The integrated method of estimating weights overcomes the shortcomings inherent in both approaches and may therefore be the most appropriate for determining the weight coefficients of the criteria [17]. The integrated determination of objective criteria weights (IDOCRIW) method combines the objective weights of several methods, drawing on the characteristics of the entropy and CILOS methods [16]. Bayes' theorem has been applied to recalculate the criteria weights based on a combination of subjective and objective views [18]. To reduce the subjectivity of peer assessments, continuous cases of Bayes' rule were applied using the accumulated experience of assessments expressed by a prior probability distribution; each assessment is adjusted based on the function of the average posterior probability, according to the expert's competence [19]. Mukhametzyanov has also expressed support for comprehensive assessment while voicing concerns about the objectivity of assessments made in a technically established way and has highlighted the problems arising from different data structures and degrees of uncertainty. He stressed the importance of a preliminary analysis of the results when choosing the final solution, with due attention to the specific aspects of the applied problem [13]. In some cases, when all criteria can be assumed to be of equal importance, weights are calculated using the mean weight (MW) method [20]. When there is a large number of criteria, they are usually broken down into smaller subgroups (of five to seven criteria) [2] to form a hierarchy or system of criteria. Proper structuring of complex issues and explicit consideration of various rules result in better-informed and more effective solutions [21].
Statistics covering the entire period available in the Web of Science scientific database [1] indicate that the 10 most popular areas for the use of objective assessment methods are as follows: medicine (general, internal), clinical neurology, surgery, public environmental and occupational health, electrical and electronic engineering, obstetrics and gynaecology, pharmacology and pharmacy, psychiatry, paediatrics, and oncology.
Distance is a fundamental concept in machine learning and neural networks because it enables one to assess the degree of similarity or difference between objects. Euclidean distance is the most preferred distance measure in practice [22]. Other distance measures include Minkowski, Manhattan ("Taxicab norm" or "City-block"), cosine, Chebyshev ("Maximum Value", "Lagrange", and "Chessboard"), Hellinger, angular, Kullback–Leibler, etc. [22,23]. The use of distance is common in various areas of machine learning, including regression, classification, and clustering. Distance measures are actively used in methods such as the support vector machine (SVM) for regression [24] and classification [25] and the singular value decomposition (SVD) method [26,27], as well as in neural networks, where distance serves as a loss function [28]. Distances are employed in automatic recommender systems [29] to assess the similarity between rating vectors.
In machine learning, the process of breaking down a dataset into subsets with shared characteristics is called clustering. A clustering method aims to ensure that objects in one subset are not similar to objects in other subsets [30]; these subsets are called clusters or groups. The most popular non-hierarchical clustering method is K-means, as it uses a simple and rapid approach to data processing [31]. This method was developed by MacQueen in 1967 [32]. The value of K indicates the number of groups into which the objects are to be broken down and is pre-determined by an expert. The K-means algorithm minimises the sum of the distances from each of the objects to their cluster centres [33]. In the classic version, the cluster centre, or centroid, is calculated as the average value of the objects. Each data object is assigned to a cluster based on its proximity to the cluster centre, where proximity is usually measured by the Euclidean distance [31].
This research article suggests a new MCDM method for determining criteria weights by estimating a data array. The importance of criteria is evaluated in relation to the centre of the entire group of criteria. It is assumed that the closer the criterion is to the centre of the group, the more accurately it describes the entire group. This concept is also applied in the case of clustering, since the similarity between the objects is inversely proportional to the distance between them, so the greater the degree of similarity, the smaller the distance [31]. The accuracy of the description of the entire group’s priorities is interpreted as the importance or weight of the criterion.
Although the suggested Centroidous method and the K-means method are similar, they differ greatly from one another. The main difference is the method's intended use. K-means divides the objects into an optimal number of clusters with similar characteristics; the elbow or silhouette methods are frequently used to determine the number of clusters [34]. The MCDM Centroidous method, in turn, sets the weights of the criteria, bearing in mind that within the task statement's structure, the hierarchy of criteria has already been established. The weights in each of the distinct criterion subgroups are determined by the Centroidous method. Accordingly, the implementation of the methods differs: since grouping objects into clusters is an iterative process, K-means must be programmed, whereas programming is not necessary to determine the criteria weights with the Centroidous method. A literature review of MCDM methods indicated that the Centroidous method is a novel way to determine the objective weights of criteria.
The theoretical foundation of the proposed Centroidous method echoes the mathematical reasoning behind other algorithms. According to this method, the core elements that are more important in the context of the group accumulate at the centre of the group of criteria. The Centroidous method's logic is predicated on the idea that a criterion's weight is inversely related to its distance from the group's centre. Dudani proposed the distance-weighted k-nearest-neighbour rule (WKNN), in which objects (neighbours) that are closer together, as in the Centroidous method, receive a higher weight than those that are more distant [35,36]. The geoinformation sciences frequently use the inverse distance weighting (IDW) interpolation method, which employs similar concepts. By considering the values at the closest known points, the IDW method calculates the value of an unknown point. The interpolated value is the weighted average of the observed values, and the weight given to an observed value is inversely proportional to the distance from the interpolated point to the known point, raised to a power [37]. In weighted linear regression (WLR), the weights are calculated from the error covariance matrix [38]; WLR typically uses the inverse of the diagonal elements of the covariance error matrix [39]. Therefore, the smaller the error variance, the more weight an observation receives. In the Centroidous method, objects that are closer to the group centre are given higher weights, and the group's centre lies closer to a denser cluster of points with smaller dispersion. The density-based spatial clustering of applications with noise (DBSCAN) algorithm uses two parameters to determine higher point density: the number of objects and the distance (the radius of a local neighbourhood) within which the core cluster elements must be located. The cluster's border objects are defined by the condition of having at least one core element in their neighbourhood. The Centroidous method, similar to the minimum distance classifier, is robust to variance because it classifies input vectors by calculating their distances/similarity relative to the class centroids (the means of the class input vectors) [22].

2. Materials and Methods

We consider a data array $a_{ij}$ describing a group of criteria $i$, where $i = 1, \ldots, n$, characterising the alternatives $j$ ($j = 1, \ldots, m$).
The data are normalised as follows, since the distances between the $n$ vectors of the $m$-dimensional space are calculated later:

$$\tilde{a}_{ij} = \frac{a_{ij}}{\sum_{j=1}^{m} a_{ij}} \quad (1)$$
When the scales of the criteria differ, normalisation is necessary before computing the distance between data points. Distance-based methods are sensitive to the scales (measurement units) of the criteria: criteria with large values dominate criteria with smaller values, which can lead to incorrect results, whereas data normalisation supports the accurate determination of the centre of the criterion group.
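For illustration, Equation (1) can be expressed in a few lines of Python (the language later used for the stability tests); the layout with criteria as rows and alternatives as columns is an assumption of this sketch, carried through the sketches below:

```python
import numpy as np

def sum_normalise(A: np.ndarray) -> np.ndarray:
    """Equation (1): scale each criterion (row) so that its values over
    the alternatives (columns) sum to one."""
    return A / A.sum(axis=1, keepdims=True)
```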
We now calculate the centre of gravity of a separate group of $n$ criteria from the normalised data array $\tilde{a}_{ij}$. The centre of the group is a vector $c_j$ of $m$ elements, calculated as the average of the corresponding criteria $i$ over all $j$ columns of the matrix $\tilde{a}_{ij}$, as follows:

$$c_j = \frac{1}{n} \sum_{i=1}^{n} \tilde{a}_{ij} \quad (2)$$
The Euclidean distance from the centre of the group to each criterion is calculated using the following formula:

$$d_i = \sqrt{\sum_{j=1}^{m} \left( \tilde{a}_{ij} - c_j \right)^2} \quad (3)$$
In the case in which the $i$-th vector of the $m$-dimensional space $\tilde{a}_{ij}$ coincides with the vector $c_j$ describing the centre of the criteria group, $\tilde{d}_i = 1$. Other distance measures can also be used in the Centroidous method. Section 3.1 thoroughly explores the topic of distance selection, examining how the use of the Euclidean, Manhattan, and Chebyshev distances in Formula (3) affects the stability of the Centroidous method.
The principle of the Centroidous method lies in the fact that criteria that are closer to the centre of the group to which they belong more accurately reflect the fundamental aspects of this group. That is, the smaller the distance to the centre of the group, the greater the criterion weight. Mathematically, this is expressed as follows:
$$\tilde{d}_i = \frac{\min_i (d_i)}{d_i}, \quad i = 1, \ldots, n, \quad \tilde{d}_i \in [0; 1] \quad (4)$$
The criteria weights are calculated using the following formula:

$$w_i = \frac{\tilde{d}_i}{\sum_{i=1}^{n} \tilde{d}_i} \quad (5)$$
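Taken together, Equations (1) through (5) amount to a short computation. The following Python sketch is one possible implementation; the function name and the criteria-as-rows layout are illustrative assumptions:

```python
import numpy as np

def centroidous_weights(A: np.ndarray) -> np.ndarray:
    """Centroidous weights per Equations (1)-(5); A holds criteria as rows
    and alternatives as columns (an assumed data layout)."""
    A_tilde = A / A.sum(axis=1, keepdims=True)      # Equation (1): sum normalisation
    c = A_tilde.mean(axis=0)                        # Equation (2): centre of the group
    d = np.sqrt(((A_tilde - c) ** 2).sum(axis=1))   # Equation (3): Euclidean distances
    d_tilde = d.min() / d                           # Equation (4): closest criterion gets 1
    return d_tilde / d_tilde.sum()                  # Equation (5): weights sum to one

# Illustrative call with made-up data for 3 criteria and 4 alternatives:
# w = centroidous_weights(np.array([[4.0, 2.0, 3.0, 1.0],
#                                   [10.0, 9.0, 12.0, 11.0],
#                                   [0.5, 0.7, 0.6, 0.4]]))
```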

3. Results

To demonstrate how this method works, publicly accessible statistical data on mobile phones were collected from the official website of the Tele2 telecommunications company [40], as shown in Table 1. In the experiments carried out in this research, the phone models were designated as $A_j$, $j = 1, \ldots, 7$, and the criteria as $Cr_i$, $i = 1, \ldots, 8$. Since the purpose of this research is not to advertise mobile phones, we do not specify the names of the models. Mobile phones are compared based on the following criteria:
Cr1. Price means the full price of the phone in EUR.
Cr2. Storage means the auxiliary memory available to store user data, specified in GB. A larger memory capacity provides the ability to store a larger number of files, photos, and documents.
Cr3. Operational memory means the memory that ensures the mobile phone's operation and in which the executed machine code is stored, specified in GB. A larger memory capacity ensures the faster operation of the device.
Cr4. Battery capacity (in mAh) means the ability of the battery to provide autonomous operation without additional charging.
Cr5. Processor sum of frequency means the processor's frequency, i.e., the speed at which the processor executes instructions. This affects the overall performance of the phone, the speed of calculations, and multitasking. It is specified in GHz.
Cr6. Front camera determines the image quality and sharpness of the image taken by the front camera. A megapixel (MP) is one million pixels (the dots that make up an image).
Cr7. Rear primary camera refers to the image quality of the rear primary camera in MP.
Cr8. Second rear camera refers to the image quality of the second rear camera in MP.
Since the criteria used are expressed in different measurement units, the data from Table 1 need to be normalised before comparison and calculations. For this purpose, we use Equation (1). After normalisation, the sum of the alternative values for each individual criterion is equal to one (Table 2).
The centre of the criteria group is determined using Equation (2) as a vector, c j , where j is the number of alternatives (Table 3).
The next task is to calculate the distances from the centre to each of the criteria. There are eight criteria, and the distances to each are specified in Table 4. We use the Euclidean distance in the suggested algorithm (Equation (3)).
The smaller the distance from the centre of the group, the more important the criterion. We use Equation (4) to calculate the inversely proportional values of the criteria weights and Equation (5) to normalise the values of the criteria weights. After normalisation, the sum of the criteria weights is equal to one (Table 5).
We can then rank the results for the criteria importance (from more to less important) as Cr3 ≻ Cr2 ≻ Cr5 ≻ Cr1 ≻ Cr4 ≻ Cr7 ≻ Cr8 ≻ Cr6. By interpreting the ranked results, we can see that the most important factors are related to the phone's performance (operational memory (Cr3), memory for storing documents (Cr2), processor sum of frequency (Cr5), price of the device (Cr1), and battery capacity (Cr4) for the longer operation of the phone). The three criteria describing the phone's cameras are of lesser importance, i.e., the rear primary camera (Cr7) ≻ second rear camera (Cr8) ≻ front camera (Cr6).
We can display the obtained results graphically by using the principal component analysis (PCA) method to reduce the m-dimensional space (where m is the number of alternatives) to a two-dimensional one. The PCA method was developed by Pearson in 1901 [41]. By reducing the space to two principal components, the PCA method can convey the main behaviour, but due to data loss, there may be inaccuracies in the image. We note that Figure 1 gives only an approximate understanding of the locations of the group points, and the exact distance $d_i$ or criterion weight $w_i$ cannot be determined from the image.
The numbers of the points in Figure 1 correspond to the criterion numbers (Cr). The red point represents the centre of the criteria group. Figure 1 shows the distance from the centre to the points and their positions relative to each other, determining the centre of a group of criteria near a dense collection of points.
From an analysis of the graphical results, we see that the criteria are divided into the following two subgroups: the first contains Cr1 (price), Cr5 (processor sum of frequency), Cr3 (operational memory), and Cr4 (battery capacity), while the second contains Cr2 (storage), Cr7 (rear primary camera), and Cr8 (second rear camera). Criterion Cr6 (front camera) seems to be an anomaly, as it is located far from the centre and the other criteria. A semantic interpretation of the depicted points in Figure 1 indicates that the first subgroup represents technically important criteria, i.e., the basic characteristics of the phone that determine its fast operation, as well as the rather important criterion of price. The second subgroup of criteria relates to user-friendliness, document storage memory, and the quality of the primary cameras. The image quality of the front camera (Cr6) is an individual point that is of the least importance when choosing a mobile phone.
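For readers who wish to reproduce a Figure-1-style plot, the following sketch uses scikit-learn's PCA; the random matrix here is only a stand-in for the normalised phone data of Table 2, and the resulting picture is approximate for the reasons noted above:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
A = rng.uniform(1.0, 10.0, size=(8, 7))        # stand-in for the 8x7 phone data (Table 1)
A_tilde = A / A.sum(axis=1, keepdims=True)     # Equation (1)
c = A_tilde.mean(axis=0)                       # Equation (2)

# Project the 8 criterion vectors and the group centre onto two principal components.
xy = PCA(n_components=2).fit_transform(np.vstack([A_tilde, c]))
plt.scatter(xy[:-1, 0], xy[:-1, 1])
for k, (x, y) in enumerate(xy[:-1], start=1):
    plt.annotate(f"Cr{k}", (x, y))
plt.scatter(xy[-1, 0], xy[-1, 1], color="red", label="centre of the criteria group")
plt.legend()
plt.show()
```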
In Section 3.2, which compares objective weighting methods, the final phone selection is determined using the entropy, Centroidous, CRITIC, SD, mean, and MEREC methods.

3.1. Examination of the Stability Dependence of the Centroidous Method on the Distance Measure

If a mathematical model or method is robust to changes in its parameters, it can be used in practice [2]. Statistical modelling with a sequence of random numbers from a given distribution is used to test the model [42]. The stability of the Centroidous method is examined in this subsection in relation to the chosen distance measure.
The Euclidean distance (3), conventionally used in the K-means clustering method, is studied alongside the Manhattan (6) and Chebyshev (7) distances.
The Manhattan distance is calculated as the sum of the absolute differences between the criterion vectors and the group's centre of gravity $c_j$, as follows:

$$d_i = \sum_{j=1}^{m} \left| \tilde{a}_{ij} - c_j \right| \quad (6)$$
The Chebyshev distance takes the maximum over the coordinates of the absolute difference between the criterion's vector and the group's centre of gravity, as follows:

$$d_i = \max_{j} \left| \tilde{a}_{ij} - c_j \right| \quad (7)$$
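The three distance measures differ only in how the deviations from the centre are aggregated, as the following sketch makes explicit (a hypothetical helper; the comments refer to Formulas (3), (6), and (7)):

```python
import numpy as np

def distances_to_centre(A_tilde: np.ndarray, metric: str = "euclidean") -> np.ndarray:
    """Distance from each criterion (row) to the group centre,
    per Formulas (3), (6), and (7)."""
    diff = A_tilde - A_tilde.mean(axis=0)
    if metric == "euclidean":
        return np.sqrt((diff ** 2).sum(axis=1))   # Formula (3)
    if metric == "manhattan":
        return np.abs(diff).sum(axis=1)           # Formula (6)
    if metric == "chebyshev":
        return np.abs(diff).max(axis=1)           # Formula (7)
    raise ValueError(f"unknown metric: {metric!r}")
```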
The stability of the method is investigated in the following way:
Step 1. Data processing: the dataset $a_{ij}$ is read. The data are normalised using Formula (1), resulting in the matrix $\tilde{a}_{ij}$.
Step 2. Using Formula (2), the centre of gravity $c_j$ of the group of criteria is calculated.
Step 3. Distances $d_i$ from the group centre $c_j$ to each criterion are calculated. The Manhattan (6), Euclidean (3), and Chebyshev (7) formulas are used to determine the distance. The result is the following three distance vectors: d_euc, d_man, and d_ceb.
Step 4. Next, applying Formulas (4) and (5) to the obtained distances d_euc, d_man, and d_ceb, the criteria weights $w_i$ are determined. The following three vectors of criteria weights are created as a result: w_euc, w_man, and w_ceb.
Step 5. New matrices $\hat{a}_{ij}^{\varsigma}$ are created ($\varsigma$ is the iteration number, $\varsigma = 1, \ldots, s$), and the criteria weights $\hat{w}_i^{\varsigma}$ are calculated. The entries of the matrix $\tilde{a}_{ij}$ are changed slightly, increasing or decreasing by q%, thus creating a new matrix $\hat{a}_{ij}$.
Using a statistical modelling technique, quasi-random numbers for the matrices $\hat{a}_{ij}^{\varsigma}$ are drawn from a uniform distribution on the interval $[\tilde{a}_{ij} - \tilde{a}_{ij} \cdot q/100, \; \tilde{a}_{ij} + \tilde{a}_{ij} \cdot q/100]$ for each entry. The stability is checked with q equal to 5% and 10%.
Step 6. The actions outlined in Steps 2 through 4 are then carried out in a repeated loop of s iterations (s = 100, s = 10,000). The previously obtained weight results are combined with the new ones into the matrices W_euc, W_man, and W_ceb.
The matrices have s + 1 columns: the first column holds the initial weights $w_i$ determined from the original data, and the remaining s columns hold the weights computed from the generated matrices.
Step 7. A quality assessment of the Centroidous method using the Euclidean, Manhattan, and Chebyshev distance calculations is carried out. The estimate is based on the obtained weight values in W_euc, W_man, and W_ceb, where $\varsigma = 1, \ldots, s$ and $i = 1, \ldots, 8$. The MRE, RRM-BR, and RRM-AR metrics are used in the quality assessment of the model.
• The mean relative error (MRE) metric indicates how much, on average, the weights $\hat{w}_i^{\varsigma}$ obtained from the slightly modified matrices deviate from the primary weights $w_i$. The metric is not defined at $w_i = 0$ and may inflate at very small values of the initial weights. The MRE is determined as follows:

$$MRE_i = \frac{1}{s} \sum_{\varsigma=1}^{s} \frac{\left| \hat{w}_i^{\varsigma} - w_i \right|}{w_i}$$

Summarising the obtained MRE results over all criteria $i$, the mean and maximum error values are calculated.
• The rank repeatability metric (RRM) evaluates how often the ranks of the criteria weights obtained from the initial matrix $\tilde{a}_{ij}$ recur. One practice is to record the recurrence frequency of the best rank (RRM-BR) [2,43], which in this instance corresponds to the greatest criterion weight. Another metric, the rank repeatability metric of all ranks (RRM-AR) [2], tracks the repetition of all the ranks: the RRM is first calculated for each criterion and then averaged into a single RRM-AR value.
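Steps 1 through 7 can be condensed into a compact Monte-Carlo sketch. The function names are illustrative, and the exact averaging used for RRM-AR is one plausible reading of [2], flagged in the comments:

```python
import numpy as np

def weights_from(A_tilde: np.ndarray, metric: str) -> np.ndarray:
    """Steps 2-4: centre, distances, inverse-distance weights."""
    diff = A_tilde - A_tilde.mean(axis=0)
    d = {"euclidean": np.sqrt((diff ** 2).sum(axis=1)),
         "manhattan": np.abs(diff).sum(axis=1),
         "chebyshev": np.abs(diff).max(axis=1)}[metric]
    d_tilde = d.min() / d
    return d_tilde / d_tilde.sum()

def stability_test(A_tilde, metric="euclidean", q=5.0, s=100, seed=0):
    """Steps 5-7: perturb every entry of the normalised matrix within +/- q%,
    recompute the weights s times, and report MRE, RRM-BR, and RRM-AR."""
    rng = np.random.default_rng(seed)
    w0 = weights_from(A_tilde, metric)               # initial weights
    order0 = np.argsort(-w0)                         # initial ranking, best first
    W = np.array([
        weights_from(A_tilde * rng.uniform(1 - q / 100, 1 + q / 100,
                                           size=A_tilde.shape), metric)
        for _ in range(s)
    ])
    mre = np.mean(np.abs(W - w0) / w0, axis=0)       # MRE per criterion
    rrm_br = np.mean(W.argmax(axis=1) == order0[0])  # share of runs keeping the best rank
    # RRM-AR read here as the average share of rank positions that recur:
    rrm_ar = np.mean(np.argsort(-W, axis=1) == order0)
    return mre, rrm_br, rrm_ar
```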
The stability of the proposed Centroidous method is tested on real data describing mobile phones (Table 1 and Table 2). The stability-testing calculations were performed in Python (version 3.10.12) using Google Colaboratory.
The criteria weights w_euc, w_man, and w_ceb established by the Centroidous method using the Euclidean, Manhattan, and Chebyshev distances (Step 3) are presented in Table 6. The smallest standard deviation (based on the entire population) of the criterion weights was observed for the Manhattan distance: 0.019. The greatest spread between the criterion weights is observed when using the Euclidean (0.04) and Chebyshev (0.053) distances.
The weight of the Cr3 criterion is most significant when using the Euclidean and Manhattan distances. The ranked results of the weights for criteria Cr4 and Cr5 differ when these distances are used. Since the weights of the criteria Cr1, Cr2, Cr4, and Cr5 differ only by hundredths, it can be argued that the results for the Euclidean and Manhattan distances are very similar. In contrast, when using the Chebyshev distance, the results are significantly different; Cr3 ranks third, while Cr5 ranks first. When comparing the ranked results of the Chebyshev and Euclidean distances, the criteria Cr4 and Cr7 match.
After the initial criteria weights were obtained (Table 6), new matrices were created in a cycle of s repetitions, changing the initial matrix by q% (Steps 5 and 6). Table 7 provides the results of the mean relative error (MRE) metric for 10 repetitions of the stability test of the Centroidous method using the Euclidean distance, with q = 5% and s = 100. The smaller the MRE value, the more reliable and stable the method. Hereafter, the min–max intervals over all ten repetitions are reported.
Table 8 provides MRE intervals when checking the stability of the Centroidous method for 100 iterations (q = 5%) using Euclidean, Manhattan, and Chebyshev distances. A comparison of the mean and maximum MRE values shows that the smallest error is when using the Manhattan distance, although the difference between the former and the Euclidean is negligible. The maximum MRE values when using the Chebyshev distance are almost twice as high as those for other distance measures.
We note that with a larger number of iterations, s = 10,000 (q = 5%), the MRE error interval narrows (Table 9). The trend remains similar to that in Table 8: the use of the Euclidean and Manhattan distances ensures a more stable behaviour of the method, while the use of the Chebyshev distance significantly increases the MRE.
Next, the data change interval was increased to 10% (Table 10 and Table 11). Comparing the mean MRE values at s = 100 and q = 10% (Table 10) with those at q = 5% (Table 8), the error increased by 49–55% when using the Euclidean distance, 51–52% when using the Manhattan distance, and 51–56% when using the Chebyshev distance. A comparison of Table 9 and Table 11, at s = 10,000, shows that the average MRE values increased by 51% across all distance measures.
From Table 10 and Table 11, we can see the difference in using the Manhattan and Euclidean distances in the Centroidous method more clearly. As the iterations increase, the error interval narrows. The average MRE error is smaller when using the Manhattan distance; this results in a more stable behaviour of the method.
Summarising the MRE results on the stability of the Centroidous method with different distance measures, it can be noted that the Chebyshev distance showed high error values. In the stability testing with a 5% change interval q, the error results for the Euclidean and Manhattan distances are similar. After the data change interval q is increased to 10%, the lowest MRE is observed using the Manhattan distance.
Table 12 and Table 13 use other metrics of the method’s quality. Higher values of the RRM-BR and RRM-AR metrics indicate better method stability. The RRM-BR metric records the repetition of the rank of the best criterion obtained from the initial data matrix (Table 2).
According to the RRM-BR metrics (Table 12), the method is more stable when using the Manhattan distance. A high stability rate is also observed when using the Euclidean distance, above 91%. The use of the Chebyshev distance showed poor results. The stability values of the method are higher with a smaller change in data q. As the verification iterations s increase to 10,000, the stability interval of the method changes by up to 3%.
The RRM-AR metric records the repetition of ranks of all criteria obtained from the initial data matrix. At q = 5%, the best stability result of the method is observed when using the Chebyshev distance. The stability of the Centroidous method when using the Euclidean and Manhattan distances is similar. At q = 10%, the method is more stable when using the Euclidean and Manhattan distances.
Based on the RRM-BR metric values, the Centroidous method is more stable when using the Manhattan and Euclidean distances. During the assessment of stability by repetition of all ranks of criteria (RRM-AR metric), the Chebyshev distance showed the best value at q = 5%. At q = 10%, the method is more stable when using the Euclidean and Manhattan distances.

3.2. Comparison of Centroidous with Other Methods for Calculating Objective Weights

In this study, Centroidous is compared with the following methods: entropy, criteria importance through intercriteria correlation (CRITIC), standard deviation (SD), mean, and the method based on the removal effects of criteria (MEREC).
The entropy method was proposed by Shannon in 1948 within the framework of information theory and is also used to determine objective criterion weights. The method evaluates the structure of the data array and its heterogeneity [16]. According to information theory, the lower the information entropy of a criterion, the greater the amount of information the criterion represents, i.e., the greater the weight this criterion has. For the initial data array, sum normalisation is used. This normalisation cannot convert negative numbers to positive values, and the logarithm ln is undefined for negative numbers and zero. Therefore, the entropy method is limited in handling negative numbers and zeros in the original dataset.
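A minimal sketch of the entropy weighting described above, under the assumption of strictly positive data and the criteria-as-rows layout used earlier:

```python
import numpy as np

def entropy_weights(A: np.ndarray) -> np.ndarray:
    """Entropy weights (criteria as rows, alternatives as columns).
    Requires strictly positive data: ln is undefined for zeros and negatives."""
    m = A.shape[1]
    P = A / A.sum(axis=1, keepdims=True)            # sum normalisation
    E = -(P * np.log(P)).sum(axis=1) / np.log(m)    # entropy per criterion, in [0, 1]
    d = 1.0 - E                                     # divergence: lower entropy, more information
    return d / d.sum()
```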
The CRITIC method determines the weights of the criteria by analysing the contrast intensity and the conflicting character of the evaluation criteria [3]. Accordingly, the standard deviation and the correlation between the criteria are calculated. The method uses min–max normalisation for the initial data array.
The SD method determines the weights of criteria based on their standard deviations [3]. In order to obtain criterion weights, the values of criteria deviations are divided by the sum of these deviations.
In this study, the mean method determines the weights of the criteria based on their mean values. In order to obtain criterion weights, the average values of the criteria are divided by the sums of these values.
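Minimal sketches of these three statistical weightings (CRITIC, SD, and mean), again assuming criteria as rows; the SD and mean sketches operate on the matrix as given, which is one possible reading of the descriptions above:

```python
import numpy as np

def critic_weights(A: np.ndarray) -> np.ndarray:
    """CRITIC: contrast intensity (standard deviation) combined with
    conflict (one minus pairwise correlation), after min-max normalisation."""
    X = (A - A.min(axis=1, keepdims=True)) / np.ptp(A, axis=1, keepdims=True)
    sigma = X.std(axis=1)                  # contrast intensity of each criterion
    R = np.corrcoef(X)                     # criterion-by-criterion correlation matrix
    C = sigma * (1.0 - R).sum(axis=1)      # information content
    return C / C.sum()

def sd_weights(A: np.ndarray) -> np.ndarray:
    """SD: each criterion's standard deviation over the sum of deviations."""
    s = A.std(axis=1)
    return s / s.sum()

def mean_weights(A: np.ndarray) -> np.ndarray:
    """Mean: each criterion's mean value over the sum of means."""
    m = A.mean(axis=1)
    return m / m.sum()
```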
The MEREC method determines the importance of a criterion by temporarily excluding it and analysing the changes in the results. A criterion whose exclusion has a greater impact is determined to have greater significance. The MEREC method uses a data transformation (like the SAW method) that maps the best value to 1. If there are negative values in a maximising criterion or zeros in the initial dataset, the use of the MEREC method is limited due to the undefined values of the logarithm ln.
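A sketch following the MEREC formulation in [14]; note the transposed layout (alternatives as rows), the boolean benefit mask marking the maximising criteria, and the strictly-positive-data assumption discussed above:

```python
import numpy as np

def merec_weights(X: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """MEREC sketch per [14]; alternatives as rows, criteria as columns.
    Strictly positive data are assumed (ln is undefined otherwise)."""
    # Ratio transformation from [14]: values scaled into (0, 1].
    N = np.where(benefit, X.min(axis=0) / X, X / X.max(axis=0))
    lnN = np.abs(np.log(N))
    S = np.log1p(lnN.mean(axis=1))                 # overall performance of alternatives
    E = np.empty(X.shape[1])
    for j in range(X.shape[1]):                    # effect of removing criterion j
        S_j = np.log1p(np.delete(lnN, j, axis=1).mean(axis=1))
        E[j] = np.abs(S_j - S).sum()
    return E / E.sum()
```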
In order to compare methods for the determination of objective weights, a mobile phone dataset was used (Table 1). The following criterion weight values were obtained with the use of the entropy, Centroidous, CRITIC, SD, mean, and MEREC methods (Table 14). A graphic representation of the weights is presented in Figure 2.
The determined weights of the criteria vary significantly (Figure 2). Entropy assigned the highest weight to the Cr8 criterion (second rear camera), and MEREC identified this criterion as the second most important. In other methods, such as Centroidous, CRITIC, and mean, this criterion turned out to be second to last in importance, and the SD method identified Cr8 as the criterion with the least weight. Centroidous gave the Cr3 criterion (operational memory) the highest weight, the mean method identified it as the second most important, and the CRITIC and SD methods identified it as the fifth most important. The CRITIC and SD methods assigned the highest weight to the Cr6 criterion (front camera). The mean method assigned the greatest weight to the Cr4 criterion (battery capacity), while MEREC assigned it the least weight. MEREC assigned the highest weight to the Cr1 criterion (price). The SD method produced identical weights for two criteria (Cr1 and Cr2).
The standard deviation of criterion weights (Figure 3) shows the spread of values. The widest spread of criterion weight values is found in the entropy, CRITIC, and MEREC methods. The SD method has the smallest spread, and the weights differ little from each other (Table 14).
Correlation values can indicate the similarity of the algorithms of the methods used. Table 15 shows a high correlation between the MEREC and entropy methods (0.708), as well as CRITIC and SD methods (0.879). There is also a weak correlation of 0.29 between the mean and Centroidous methods.
Next, the weights of the criteria (Table 14) were used to determine the best alternative, i.e., a mobile phone, by calculation with the simple additive weighting (SAW) method [10]. The values of the SAW method are presented in Table 16; the ranked results of the evaluations are in Table 17.
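A minimal SAW sketch, under one common normalisation convention (the specific normalisation used in the study is not restated here):

```python
import numpy as np

def saw_rank(X: np.ndarray, w: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """SAW sketch: alternatives as rows, criteria as columns. Benefit criteria
    are normalised as x / max(x); cost criteria as min(x) / x (an assumption)."""
    N = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
    scores = N @ w                 # weighted sum per alternative
    return np.argsort(-scores)     # alternative indices, best first
```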
The best alternative, A1, is clearly identified using the weights obtained by all of the objective weighting methods (Table 17). This is because alternative A1's values on the maximising criteria dominate those of the other alternatives. The entropy, CRITIC, SD, and mean methods placed the A5 alternative in second place. The ranked results of the Centroidous method are similar to those of the mean method.
In this example, each method showed its uniqueness. The ranked results completely matched only for the CRITIC and SD methods, the weights of which showed a high correlation.
Three sets of data were artificially generated (Table 18) for a more thorough study of the behaviour of the methods. They reflect different problematic issues identified during data analysis. A linear relationship and high correlation were identified among criteria C1, C2, C3, and C4 (Table 19) in the first generated dataset (Table 18).
Next, the weights of the criteria are established (Table 20) using the entropy, Centroidous, CRITIC, SD, mean, and MEREC methods with the artificially generated data-1. The entropy and MEREC methods assign the highest weight to criterion C5, the Centroidous and SD methods to criterion C3, and the CRITIC and mean methods to criterion C6 (Figure 4). All criterion weights are clearly defined; there are no zero-weight criteria.
To trace the dependence between the criteria weights, we analyse the correlations of the weights presented in Table 21. A high correlation was found between the results of the entropy and MEREC methods (0.894). An average correlation was found between entropy and CRITIC (0.656), Centroidous and SD (0.687), and CRITIC and mean (0.503).
The entropy, MEREC, and Centroidous methods had the largest standard deviation of criterion weights. The mean and SD methods had the smallest deviation between the weights (Figure 5).
The second artificially generated dataset, data-2, has slight differences in the means of the alternatives (Table 22). The maximum difference between the means of the alternatives is 1.43.
Table 23 shows the weights of the criteria established using the entropy, Centroidous, CRITIC, SD, mean, and MEREC methods using data array-2. The highest weight was assigned to criterion C3 using the entropy, CRITIC, and MEREC methods, and the Centroidous and mean methods assigned the highest weight to criterion C7. The mean and SD methods have the same criterion values (C1 and C2, C5 and C6).
Figure 6 clearly shows the dominance of criterion C4. The entropy and MEREC methods indicate this more accurately. The remaining criterion values are less prominent.
There is a strong correlation between the values of the criteria of the entropy and MEREC methods—0.871—and the CRITIC and SD methods—0.748. The correlation of the results between the Centroidous and SD methods is average (0.35), and the correlation between the CRITIC and entropy methods is weak (0.146) (see Table 24).
A high value of the standard deviation of criterion weights is observed in the entropy, MEREC, and Centroidous methods (Figure 7). These methods clearly define the weights of the criteria. The weights obtained using the SD method differ little from each other.
The third artificially generated data array, data-3, has large differences in the means of the alternatives (Table 25). The maximum difference between the means of the alternatives is 672.43.
Table 26 shows the weights of the criteria established using the entropy, Centroidous, CRITIC, SD, mean, and MEREC methods using the data-3 array. The greatest weight is assigned to the C1 criterion by the entropy and MEREC methods. The other methods determined the greatest weight based on different criteria, as follows: Centroidous—C6, CRITIC—C4, SD—C5, mean—C2. All weights are defined precisely; there is no repetition of the values of the weights of criteria.
The C1 and C5 criteria, as determined using the entropy and MEREC methods, stand out in Figure 8. The outline of the mean method is noticeable as well. A large standard deviation is observed in the entropy and MEREC methods (Figure 9). The other methods distributed the importance between the criteria fairly evenly, even though the differences in the means of the initial data (Table 25) were large and the differences in the standard deviations were not (Figure 9).
The analysis of correlations of weights for all criteria showed a strong dependence between the entropy and MEREC methods (0.992) and the Centroidous and mean methods (0.731). The average correlation was identified between the CRITIC and mean methods (0.438) and the SD and MEREC methods (0.399) (see Table 27).
The entropy, Centroidous, CRITIC, SD, mean, and MEREC methods have shown their uniqueness when determining the criterion weights. Criterion weights are clearly identified in all datasets. A high correlation between the results of the entropy and MEREC methods (0.71, 0.84, 0.87, 0.99) was revealed in all examples. A high correlation was also observed for the CRITIC and SD methods (0.88, 0.75), and an average correlation was observed for the entropy and CRITIC methods (0.66, 0.15), CRITIC and mean methods (0.44), entropy and SD methods (0.5, 0.48), and SD and MEREC methods (0.4). There were high and average correlations between the results of the Centroidous and mean methods (0.731, 0.29) and the Centroidous and SD methods (0.69, 0.35).
The largest deviations in criterion weights are observed in the entropy, MEREC, Centroidous, and CRITIC methods. Weights with the same values may appear in the mean and SD methods in cases in which the average values of the initial data alternatives differ little from each other.

4. Discussion and Conclusions

At the time of writing, statistical data from the Web of Science database indicated that objective weighting methods are most widely applied in the context of clinical trials. An analysis of previous research papers dedicated to methods of determining objective weights shows that in most cases, these algorithms calculate statistical indicators, such as the correlation (CRITIC, D-CRITIC, CCSD, CRITIC-M, ROCOSD, EWM-CORR), standard deviation (CCSD, CRITIC-M, SECA, ROCOSD), entropy (entropy, EWM-CORR, IDOCRIW), or a logarithmic measure (MEREC, LOPCOW).
The Centroidous method presented in this study represents a new perspective in the area of MCDM approaches. Centroidous is theoretically substantiated and relies on the concept of centroid clustering. Clustering methods are well-known in machine learning as a group of unsupervised learning methods. It is assumed that each cluster consists of a set of values that are fairly well approximated by the cluster centre [44]. When determining the weights of the criteria, the distance from the criterion to the centre of the cluster is taken into consideration; criteria located near the centre of the group are interpreted as more important, and the most distant criteria of the group have the smallest weights. In a broader context, “single” objects located far from the centre are perceived as outliers.
A literature analysis confirmed the mathematical justification of the Centroidous method proposed in this paper. The idea that the core elements, which are the most important in the context of a given group of criteria, accumulate in the centre of the criteria group resonates with the mathematical justification of other methods. Vaghefi claims that WLR typically uses the reciprocal of the diagonal element of the error covariance matrix [39]. It follows that the smaller the error variance, the more weight an observation receives. In the case of the Centroidous method, greater weight is determined for objects that are closer to the centre of the group. The centre of the group itself is defined as closer to a denser cluster of points, the dispersion of which is smaller. The logic of the Centroidous method, assuming that the weight of a criterion is inversely proportional to the distance of this criterion to the centre of the group, is also confirmed in the WKNN and IDW methods.
The use of distance is common in various areas of machine learning, including regression, classification, and clustering. Distance is a basic concept in the fundamental sciences that allows us to determine the degree of similarity or difference between objects. Elen and Avuçlu argue that the Euclidean distance is the most favoured distance measure in practice [22]. Based on the literature reviewed, the Centroidous method adopts the Euclidean distance as its preferred measure.
In this study, a demonstration of the application of the new Centroidous method has been presented based on an example in which eight criteria for choosing a mobile phone were assessed. We note the ease of implementation of the Centroidous method and the possibility of the semantic interpretation of the results. The proposed Centroidous method can be used to calculate objective criteria weights based on a previously defined system of criteria.
In this research, the use of different distance measures in the Centroidous method was examined more thoroughly. For this purpose, the stability of the Centroidous method was tested using the following distance measures: Euclidean, Manhattan, and Chebyshev. A comparison of distance measures was examined on a real mobile phone dataset collected by the author of the paper.
Using the Manhattan distance, the obtained values of the criteria weights were found to be very similar; the standard deviation of the values was 0.019. In the case of using Euclidean and Chebyshev distances, the weights are determined more precisely; there are no equal criteria weights.
The method stability verification indicated Manhattan and Euclidean distances as more preferred measures to be used in the Centroidous method. The RRM-AR metric at q = 10% and s = 10,000 showed 58.59% to 59.11% stability using Euclidean distance and 58.87% to 59.27% using Manhattan distance. According to RRM-BR metric values using Manhattan distance (when q = 5%, s = 10,000), the stability of the Centroidous method is 100%, while using Euclidean distance, the stability is 95.71–96.38%. Comparing the mean and maximum MRE values, the smallest error occurs using the Manhattan distance, although the difference between that and Euclidean is not significant. As the number of iterations increases, the MRE error interval narrows, meaning that the values of the metric are almost unchanged when rechecked.
This paper provides a comprehensive comparison of the Centroidous method with the established objective weighting methods: entropy, CRITIC, SD, mean, and MEREC. The comparative analysis is performed on a real mobile phone dataset. With the weights obtained from all the methods used in the comparative analysis, the alternatives were evaluated using the SAW method. The entropy, CRITIC, SD, and mean methods all identified A5 as the second-best (rank 2) alternative, whereas the Centroidous method ranked alternative A5 third.
In order to scrutinise the entropy, Centroidous, CRITIC, SD, mean, and MEREC methods, three datasets were artificially generated, which reflect different problematic points detectable in the data analysis. The first generated dataset shows a linear relationship of the four criteria, and data-2 has slight differences in the mean values of the alternatives, the maximum difference of which is 1.43. The third artificially generated dataset, data-3, has large differences in the mean values of the alternatives, the maximum difference between which is 672.43.
The criteria weights were clearly determined using the entropy, Centroidous, CRITIC, SD, mean, and MEREC methods on all datasets. The methods showed their uniqueness in determining the criterion weights; the results of the weights did not have many coincidences. However, a high correlation between the entropy and MEREC methods was found across all datasets (0.71–0.99).
The correlation of the criteria weights determined by the CRITIC and SD methods (0.88, 0.75) on the mobile phone and data-2 datasets should also be noted. The Centroidous method's results correlate with the weights determined by the mean method on the data-3 dataset (0.73) and by the SD method (0.69) on the data-1 dataset.
The entropy, MEREC, Centroidous, and CRITIC methods have the highest standard deviation of the criterion weights. In the mean and SD methods, the weights have small deviations, and this may result in weights with the same values when the mean values of the original data of the alternatives differ slightly.
In previous studies, the author of this article analysed and compared methods for determining subjective weights for AHP and FAHP [42], as well as a number of FAHP methods [2] involving stability calculations. Hence, one area for further research will be a comparative analysis of methods for determining objective weights using a stability verification algorithm, including the Centroidous method. Another area for further research is the division of one large group of criteria into subgroups using clustering methods. In general, criteria are broken down into subgroups by experts, according to their semantic interpretation. It is assumed that the automatic division of criteria into subgroups based on the distance between the criteria or the density can provide helpful information for a more accurate determination of the criteria weights.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Web of Science. Clarivate. Available online: https://webofscience.clarivate.cn/wos/woscc/basic-search (accessed on 28 May 2024).
2. Vinogradova-Zinkevič, I. Comparative sensitivity analysis of some fuzzy AHP methods. Mathematics 2023, 11, 4984.
3. Diakoulaki, D.; Mavrotas, G.; Papayannakis, L. Determining objective weights in multiple criteria problems: The CRITIC method. Comput. Oper. Res. 1995, 22, 763–770.
4. Pala, O. A new objective weighting method based on robustness of ranking with standard deviation and correlation: The ROCOSD method. Inf. Sci. 2023, 636, 118930.
5. Krishnan, A.R.; Kasim, M.M.; Hamid, R.; Ghazali, M.F. A modified CRITIC method to estimate the objective weights of decision criteria. Symmetry 2021, 13, 973–992.
6. Wang, Y.M.; Luo, Y. Integration of correlations with standard deviations for determining attribute weights in multiple attribute decision making. Math. Comput. Model. 2010, 51, 1–12.
7. Žižović, M.; Miljković, B.; Marinković, D. Objective methods for determining criteria weight coefficients: A modification of the CRITIC method. Decis. Mak. Appl. Manag. Eng. 2020, 3, 149–161.
8. Keshavarz-Ghorabaee, M.; Amiri, M.; Zavadskas, E.K.; Turskis, Z.; Antucheviciene, J. Simultaneous evaluation of criteria and alternatives (SECA) for multi-criteria decision-making. Informatica 2018, 29, 265–280.
9. Liu, S.; Chan, F.T.S.; Ran, W. Decision making for the selection of cloud vendor: An improved approach under group decision-making with integrated weights and objective/subjective attributes. Expert Syst. Appl. 2016, 55, 37–47.
10. Hwang, C.L.; Yoon, K. Methods for multiple attribute decision making. In Multiple Attribute Decision Making; Springer: Berlin/Heidelberg, Germany, 1981; pp. 58–191.
11. Wu, R.M.X.; Zhang, Z.; Yan, W.; Fan, J.; Gou, J.; Liu, B.; Gide, E.; Soar, J.; Shen, B.; Fazal-e-Hasan, S.; et al. A comparative analysis of the principal component analysis and entropy weight methods to establish the indexing measurement. PLoS ONE 2022, 17, e0262261.
12. Podvezko, V.; Zavadskas, E.K.; Podviezko, A. An extension of the new objective weight assessment methods CILOS and IDOCRIW to fuzzy MCDM. Econ. Comput. Econ. Cybern. Stud. Res. 2020, 2, 59–75.
13. Mukhametzyanov, I. Specific character of objective methods for determining weights of criteria in MCDM problems: Entropy, CRITIC and SD. Decis. Mak. Appl. Manag. Eng. 2021, 4, 76–105.
14. Keshavarz-Ghorabaee, M.; Amiri, M.; Zavadskas, E.K.; Turskis, Z.; Antucheviciene, J. Determination of objective weights using a new method based on the removal effects of criteria (MEREC). Symmetry 2021, 13, 525–545.
15. Ecer, F.; Pamucar, D. A novel LOPCOW-DOBI multi-criteria sustainability performance assessment methodology: An application in developing country banking sector. Omega 2022, 112, 102690.
16. Zavadskas, E.K.; Podvezko, V. Integrated determination of objective criteria weights in MCDM. Int. J. Inf. Technol. Decis. Mak. 2016, 15, 267–283.
17. Odu, G.O. Weighting methods for multi-criteria decision making technique. J. Appl. Sci. Environ. Manag. 2019, 23, 1449–1457.
18. Vinogradova, I.; Podvezko, V.; Zavadskas, E.K. The recalculation of the weights of criteria in MCDM methods using the Bayes approach. Symmetry 2018, 10, 205.
19. Vinogradova-Zinkevič, I. Application of Bayesian approach to reduce the uncertainty in expert judgments by using a posteriori mean function. Mathematics 2021, 9, 2455.
20. Jahan, A.; Mustapha, F.; Sapuan, S.M.; Ismail, M.Y.; Bahraminasab, M. A framework for weighting of criteria in ranking stage of material selection process. Int. J. Adv. Manuf. Technol. 2012, 58, 411–420.
21. Yazdani, M.; Zaraté, P.; Zavadskas, E.K.; Turskis, Z. A combined compromise solution (CoCoSo) method for multi-criteria decision-making problems. Manag. Decis. 2019, 57, 2501–2519.
22. Elen, A.; Avuçlu, E. Standardized Variable Distances: A distance-based machine learning method. Appl. Soft Comput. 2021, 98, 106855.
23. Bigi, B. Using Kullback-Leibler distance for text categorization. In Advances in Information Retrieval (ECIR 2003); Sebastiani, F., Ed.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2633.
24. Zhang, F.; O'Donnell, L.J. Support vector regression. In Machine Learning; Mechelli, A., Vieira, S., Eds.; Academic Press: Cambridge, MA, USA, 2020; Chapter 7; pp. 123–140.
25. Lee, L.H.; Wan, C.H.; Rajkumar, R.; Isa, D. An enhanced Support Vector Machine classification framework by using Euclidean distance function for text document categorization. Appl. Intell. 2012, 37, 80–99.
26. Zhang, H.; Liu, Y.; Lei, H. Localization from incomplete Euclidean distance matrix: Performance analysis for the SVD–MDS approach. IEEE Trans. Signal Process. 2019, 67, 2196–2209.
27. Zhang, X.; Lu, W.; Pan, Y.; Wu, H.; Wang, R.; Yu, R. Empirical study on tangent loss function for classification with deep neural networks. Comput. Electr. Eng. 2021, 90, 107000.
28. Torres-Huitzil, C.; Girau, B. Fault and error tolerance in neural networks: A review. IEEE Access 2017, 5, 17322–17341.
29. Sharma, S.; Rana, V.; Malhotra, M. Automatic recommendation system based on hybrid filtering algorithm. Educ. Inf. Technol. 2022, 27, 1523–1538.
30. Frahling, G.; Sohler, C. A fast K-means implementation using coresets. Int. J. Comput. Geom. Appl. 2008, 18, 605–625.
31. Erisoglu, M.; Calis, N.; Sakallioglu, S. A new algorithm for initial cluster centers in K-means algorithm. Pattern Recognit. Lett. 2011, 32, 1701–1705.
32. MacQueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of the Berkeley Symposium on Mathematical Statistics and Probability; Le Cam, L.M., Neyman, J., Eds.; University of California Press: Berkeley, CA, USA, 1967; Volume 1, pp. 281–297.
33. Barakbah, A.R.; Kiyoki, Y. A pillar algorithm for K-means optimization by distance maximization for initial centroid designation. In Proceedings of the IEEE Symposium on Computational Intelligence and Data Mining, Nashville, TN, USA, 30 March–2 April 2009; pp. 61–68.
34. Saputra, D.M.; Saputra, D. Effect of distance metrics in determining K-value in K-means clustering using elbow and silhouette method. In Proceedings of the Sriwijaya International Conference on Information Technology and Its Applications (SICONIAN 2019), Palembang, Indonesia, 16 November 2019; Atlantis Press: Amsterdam, The Netherlands, 2020; pp. 341–346.
35. Dudani, S. The distance-weighted k-nearest-neighbor rule. IEEE Trans. Syst. Man Cybern. 1975, 6, 325–327.
36. Gou, J.; Du, L.; Zhang, Y.; Xiong, T. A new distance-weighted k-nearest neighbor classifier. J. Inf. Comput. Sci. 2012, 9, 1429–1436.
37. Lu, G.Y.; Wong, D.W. An adaptive inverse-distance weighting spatial interpolation technique. Comput. Geosci. 2008, 34, 1044–1055.
38. Kay, S. Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory; Prentice Hall PTR: Hoboken, NJ, USA, 1993.
39. Vaghefi, R. Weighted Linear Regression. Towards Data Science, 4 February 2021. Available online: https://towardsdatascience.com/weighted-linear-regression-2ef23b12a6d7 (accessed on 4 July 2024).
40. Mobile Telephones. Available online: https://tele2.lt/privatiems/mobilieji-telefonai (accessed on 8 April 2024).
41. Pearson, K. On lines and planes of closest fit to systems of points in space. Philos. Mag. 1901, 2, 559–572.
42. Vinogradova-Zinkevič, I.; Podvezko, V.; Zavadskas, E.K. Comparative assessment of the stability of AHP and FAHP methods. Symmetry 2021, 13, 479.
43. Vinogradova, I. Multi-attribute decision-making methods as a part of mathematical optimization. Mathematics 2019, 7, 915.
44. Ding, H.; Huang, R.; Liu, K.; Yu, H.; Wang, Z. Randomized greedy algorithms and composable coreset for k-center clustering with outliers. arXiv 2023, arXiv:2301.02814.
Figure 1. Criteria weights and the centre of the criteria group in two-dimensional space.
Figure 2. Graphical representation of mobile phone criteria weights determined using different methods.
Figure 3. Standard deviation of the criteria weights from Table 14, determined using different methods.
Figure 4. Graphical representation of data-1 criteria weights determined using different methods.
Figure 5. Standard deviation of the criteria weights from Table 20, determined using different methods.
Figure 6. Graphical representation of data-2 criteria weights determined using different methods.
Figure 7. Standard deviation of the criteria weights from Table 23, determined using different methods.
Figure 8. Graphical representation of data-3 criteria weights determined using different methods.
Figure 9. Standard deviation of the criteria weights from Table 26, determined using different methods.
Table 1. Initial data of mobile phones.

No.  Criterion                         A1       A2       A3      A4      A5      A6      A7
Cr1  Price, EUR                        1498.84  1099.00  698.92  718.84  529.00  368.92  158.92
Cr2  Storage, GB                       512      256      128     128     256     128     64
Cr3  Operational memory, GB            12       12       8       8       8       8       4
Cr4  Battery capacity, mAh             5000     4900     4500    3900    5000    5000    5000
Cr5  Processor sum of frequency, GHz   11.59    10.65    7.10    8.16    4.75    4.40    4.30
Cr6  Front camera, MP                  12       12       10      12      32      32      13
Cr7  Rear primary camera, MP           200      50       50      50      50      50      50
Cr8  Second rear camera, MP            50       10       12      10      12      12      2
Table 2. Normalised data of mobile phones.

Criterion  A1     A2     A3     A4     A5     A6     A7
Cr1        0.295  0.217  0.138  0.142  0.104  0.073  0.031
Cr2        0.348  0.174  0.087  0.087  0.174  0.087  0.043
Cr3        0.200  0.200  0.133  0.133  0.133  0.133  0.067
Cr4        0.150  0.147  0.135  0.117  0.150  0.150  0.150
Cr5        0.227  0.209  0.139  0.160  0.093  0.086  0.084
Cr6        0.098  0.098  0.081  0.098  0.260  0.260  0.106
Cr7        0.400  0.100  0.100  0.100  0.100  0.100  0.100
Cr8        0.463  0.093  0.111  0.093  0.111  0.111  0.019
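The passage from Table 1 to Table 2 is consistent with plain sum normalisation: each criterion row is divided by its row total (for example, 1498.84/5072.44 ≈ 0.295). A minimal Python sketch of that step follows; it assumes this normalisation rule, and the variable names X and R are illustrative.

    import numpy as np

    # Raw decision matrix from Table 1: rows = criteria Cr1..Cr8, columns = phones A1..A7.
    X = np.array([
        [1498.84, 1099.00, 698.92, 718.84, 529.00, 368.92, 158.92],  # Cr1 price, EUR
        [512, 256, 128, 128, 256, 128, 64],                          # Cr2 storage, GB
        [12, 12, 8, 8, 8, 8, 4],                                     # Cr3 operational memory, GB
        [5000, 4900, 4500, 3900, 5000, 5000, 5000],                  # Cr4 battery capacity, mAh
        [11.59, 10.65, 7.10, 8.16, 4.75, 4.40, 4.30],                # Cr5 sum of processor frequencies, GHz
        [12, 12, 10, 12, 32, 32, 13],                                # Cr6 front camera, MP
        [200, 50, 50, 50, 50, 50, 50],                               # Cr7 rear primary camera, MP
        [50, 10, 12, 10, 12, 12, 2],                                 # Cr8 second rear camera, MP
    ], dtype=float)

    # Sum normalisation: divide each criterion row by its total (reproduces Table 2).
    R = X / X.sum(axis=1, keepdims=True)
    print(R.round(3))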
Table 3. Vector c_j (centre of the criteria group).

      A1     A2     A3     A4     A5     A6     A7
c_j   0.273  0.155  0.116  0.116  0.141  0.125  0.075
Table 4. Distance d_i from the criteria to the centre of the group c_j.

d_1    d_2    d_3    d_4    d_5    d_6    d_7    d_8
0.107  0.106  0.090  0.148  0.107  0.262  0.150  0.212
Table 5. Distances d̃_i and criteria weights w_i.

d̃_1    d̃_2    d̃_3    d̃_4    d̃_5    d̃_6    d̃_7    d̃_8
0.842  0.851  1.000  0.611  0.847  0.344  0.600  0.426

w_1    w_2    w_3    w_4    w_5    w_6    w_7    w_8
0.152  0.154  0.181  0.111  0.153  0.062  0.109  0.077
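Tables 3–5 can then be reproduced in a few lines. One caveat: the transform d̃_i = d_min/d_i is inferred from the printed values (0.090/0.107 ≈ 0.842, with the closest criterion Cr3 receiving 1.000) rather than quoted from the method's text, so the sketch below should be read as a plausible reconstruction, not the author's reference implementation.

    import numpy as np

    def centroidous(R):
        # R: normalised matrix, rows = criteria, columns = alternatives (Table 2).
        c = R.mean(axis=0)                 # centre of the criteria group (Table 3)
        d = np.linalg.norm(R - c, axis=1)  # Euclidean distance of each criterion to the centre (Table 4)
        d_tilde = d.min() / d              # inverse-distance transform; the closest criterion gets 1.000
        return d_tilde / d_tilde.sum()     # weights summing to one (Table 5)

    w = centroidous(R)   # R from the previous sketch
    print(w.round(3))    # [0.152 0.154 0.181 0.111 0.153 0.062 0.109 0.077]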
Table 6. Criterion weights determined using the Centroidous method applying different distance measures.

Distance    Cr1    Cr2    Cr3    Cr4    Cr5    Cr6    Cr7    Cr8
Euclidean   0.152  0.154  0.181  0.111  0.153  0.062  0.109  0.077
Manhattan   0.130  0.132  0.159  0.131  0.130  0.088  0.121  0.108
Chebyshev   0.182  0.150  0.155  0.092  0.208  0.064  0.089  0.059
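Swapping the distance measure changes only the distance step of the sketch above. The variant below reproduces all three rows of Table 6, with one inferred detail: the Manhattan row matches the printed weights only if a square root is also applied to the summed absolute differences (d = √Σ|r_ij − c_j|); the textbook Manhattan distance yields different weights (e.g., for Cr3), so this convention is an assumption read off the table, not a statement from the text.

    import numpy as np

    DISTANCES = {
        "euclidean": lambda D: np.sqrt((D ** 2).sum(axis=1)),
        "manhattan": lambda D: np.sqrt(np.abs(D).sum(axis=1)),  # square root inferred from Table 6
        "chebyshev": lambda D: np.abs(D).max(axis=1),
    }

    def centroidous_metric(R, metric="euclidean"):
        c = R.mean(axis=0)                 # centre of the criteria group
        d = DISTANCES[metric](R - c)       # distance of each criterion (row) to the centre
        d_tilde = d.min() / d
        return d_tilde / d_tilde.sum()

    for name in DISTANCES:                 # R from the first sketch
        print(name, centroidous_metric(R, name).round(3))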
Table 7. MRE of the criteria weights, d_i calculated using the Euclidean distance, q = 5%, 100 iterations, 10 repetitions.

       1       2       3       4       5       6       7       8       9       10
Cr1    0.0340  0.0310  0.0330  0.0300  0.0320  0.0320  0.0340  0.0340  0.0370  0.0320
Cr2    0.0470  0.0500  0.0480  0.0510  0.0470  0.0450  0.0520  0.0520  0.0510  0.0440
Cr3    0.0380  0.0410  0.0400  0.0420  0.0430  0.0430  0.0450  0.0400  0.0390  0.0420
Cr4    0.0240  0.0230  0.0250  0.0220  0.0220  0.0250  0.0220  0.0240  0.0190  0.0260
Cr5    0.0310  0.0290  0.0330  0.0340  0.0300  0.0320  0.0320  0.0330  0.0330  0.0330
Cr6    0.0160  0.0190  0.0190  0.0200  0.0200  0.0190  0.0190  0.0190  0.0180  0.0190
Cr7    0.0460  0.0420  0.0460  0.0460  0.0440  0.0530  0.0460  0.0480  0.0490  0.0420
Cr8    0.0400  0.0420  0.0400  0.0360  0.0420  0.0420  0.0390  0.0390  0.0410  0.0440
mean   0.0345  0.0346  0.0355  0.0351  0.0350  0.0364  0.0361  0.0361  0.0359  0.0353
max    0.0470  0.0500  0.0480  0.0510  0.0470  0.0530  0.0520  0.0520  0.0510  0.0440
Table 8. MRE interval of the criteria weights, q = 5%, 100 iterations, 10 repetitions.

       Euclidean    Manhattan    Chebyshev
Cr1    0.03–0.037   0.033–0.04   0.066–0.075
Cr2    0.044–0.052  0.029–0.04   0.085–0.1
Cr3    0.038–0.045  0.036–0.049  0.052–0.061
Cr4    0.019–0.026  0.026–0.031  0.032–0.041
Cr5    0.029–0.034  0.029–0.033  0.057–0.068
Cr6    0.016–0.02   0.016–0.021  0.025–0.031
Cr7    0.042–0.053  0.029–0.037  0.06–0.08
Cr8    0.036–0.044  0.026–0.031  0.048–0.06
mean   0.035–0.036  0.030–0.033  0.057–0.06
max    0.044–0.053  0.037–0.049  0.085–0.1
Table 9. MRE interval of the criteria weights, q = 5%, 10,000 iterations, 10 repetitions.

       Euclidean    Manhattan    Chebyshev
Cr1    0.032–0.033  0.035–0.036  0.069–0.07
Cr2    0.047–0.048  0.036–0.037  0.091–0.093
Cr3    0.04–0.041   0.04–0.041   0.057–0.058
Cr4    0.024–0.024  0.029–0.03   0.035–0.036
Cr5    0.032–0.033  0.032–0.033  0.062–0.063
Cr6    0.019–0.02   0.019–0.019  0.028–0.029
Cr7    0.046–0.047  0.032–0.032  0.068–0.07
Cr8    0.042–0.043  0.029–0.03   0.057–0.058
mean   0.036–0.036  0.032–0.032  0.059–0.059
max    0.047–0.048  0.04–0.041   0.091–0.093
Table 10. MRE interval of the criteria weights, q = 10%, 100 iterations, 10 repetitions.

       Euclidean    Manhattan    Chebyshev
Cr1    0.055–0.07   0.065–0.08   0.103–0.119
Cr2    0.088–0.103  0.06–0.085   0.17–0.213
Cr3    0.069–0.089  0.07–0.088   0.091–0.124
Cr4    0.042–0.054  0.048–0.059  0.063–0.078
Cr5    0.051–0.07   0.054–0.071  0.099–0.126
Cr6    0.034–0.041  0.034–0.041  0.047–0.058
Cr7    0.08–0.099   0.053–0.067  0.116–0.147
Cr8    0.072–0.093  0.052–0.063  0.102–0.124
mean   0.064–0.074  0.057–0.065  0.102–0.118
max    0.088–0.103  0.071–0.088  0.17–0.213
Table 11. MRE interval of the criteria weights, q = 10%, 10,000 iterations, 10 repetitions.

       Euclidean    Manhattan    Chebyshev
Cr1    0.064–0.065  0.069–0.07   0.115–0.118
Cr2    0.092–0.094  0.072–0.074  0.191–0.194
Cr3    0.079–0.08   0.077–0.078  0.112–0.114
Cr4    0.047–0.048  0.056–0.057  0.069–0.07
Cr5    0.063–0.065  0.064–0.066  0.108–0.111
Cr6    0.038–0.039  0.037–0.038  0.054–0.055
Cr7    0.093–0.095  0.064–0.064  0.141–0.144
Cr8    0.085–0.086  0.057–0.058  0.115–0.117
mean   0.071–0.071  0.062–0.063  0.114–0.115
max    0.093–0.095  0.077–0.078  0.191–0.194
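Tables 7–11 can be approximated with a small Monte Carlo loop. The exact perturbation scheme is not restated in this back matter, so the sketch below assumes multiplicative uniform noise of at most ±q on each entry of the normalised matrix, followed by re-normalisation; mre_experiment and the noise model are illustrative assumptions, and the sketch reuses centroidous_metric from the previous example.

    import numpy as np

    rng = np.random.default_rng(0)

    def mre_experiment(R, q=0.05, iterations=100, metric="euclidean"):
        # Perturb each entry of R by a factor drawn uniformly from [1 - q, 1 + q],
        # re-normalise, recompute the weights, and average the relative errors.
        w0 = centroidous_metric(R, metric)              # baseline weights
        errors = np.empty((iterations, R.shape[0]))
        for t in range(iterations):
            Rq = R * rng.uniform(1 - q, 1 + q, size=R.shape)
            Rq = Rq / Rq.sum(axis=1, keepdims=True)     # keep the rows sum-normalised
            errors[t] = np.abs(centroidous_metric(Rq, metric) - w0) / w0
        return errors.mean(axis=0)                      # MRE per criterion, cf. Table 7

    print(mre_experiment(R, q=0.05, iterations=100).round(4))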
Table 12. RRM-BR metric interval, 10 repetitions.

q     Iterations  Euclidean      Manhattan      Chebyshev
5%    100         91–98%         100–100%       77–85%
      10,000      95.71–96.38%   100–100%       77.37–79.34%
10%   100         63–77%         97–100%        33–53%
      10,000      69.89–71.03%   98.68–98.93%   44.41–46.26%
Table 13. RRM-AR metric interval, 10 repetitions.

q     Iterations  Euclidean      Manhattan      Chebyshev
5%    100         64–70.88%      64–68.5%       66.12–71.88%
      10,000      65.83–66.3%    66.11–66.56%   69.38–69.98%
10%   100         56.75–62.88%   56.62–63.25%   49.88–54.75%
      10,000      58.59–59.11%   58.87–59.27%   51.62–52.17%
Table 14. Objective weights of mobile phone criteria determined using different methods.

Method       Cr1    Cr2    Cr3    Cr4    Cr5    Cr6    Cr7    Cr8
Optimum      min    max    max    max    max    max    max    max
Entropy      0.147  0.171  0.041  0.003  0.063  0.106  0.172  0.297
Centroidous  0.152  0.154  0.181  0.111  0.153  0.062  0.109  0.077
CRITIC       0.082  0.068  0.089  0.175  0.127  0.290  0.094  0.075
SD           0.114  0.114  0.116  0.129  0.139  0.152  0.127  0.110
Mean         0.129  0.100  0.174  0.238  0.125  0.105  0.044  0.085
MEREC        0.286  0.124  0.036  0.010  0.055  0.132  0.170  0.187
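As a cross-check on the Entropy row of Table 14, the classical Shannon entropy weighting w_j ∝ 1 − H_j applied to the normalised matrix of Table 2 reproduces the printed values to within rounding; a sketch of that textbook formula follows. The comparison methods CRITIC, SD, Mean, and MEREC are specified in the cited sources [13,14] rather than in this back matter, so they are not reconstructed here.

    import numpy as np

    def entropy_weights(R):
        # R: sum-normalised matrix, rows = criteria (Table 2); no zero entries assumed.
        m = R.shape[1]                                   # number of alternatives
        H = -(R * np.log(R)).sum(axis=1) / np.log(m)     # Shannon entropy per criterion
        d = 1.0 - H                                      # degree of divergence
        return d / d.sum()

    print(entropy_weights(R).round(3))   # [0.147 0.171 0.041 0.003 0.063 0.106 0.172 0.297]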
Table 15. Correlation of weights of mobile phone criteria determined using different methods.

             Entropy  Centroidous  CRITIC  SD      Mean    MEREC
Entropy      1.000    −0.392       −0.407  −0.457  −0.760  0.708
Centroidous  −0.392   1.000        −0.588  −0.415  0.290   −0.193
CRITIC       −0.407   −0.588       1.000   0.879   0.203   −0.267
SD           −0.457   −0.415       0.879   1.000   0.015   −0.331
Mean         −0.760   0.290        0.203   0.015   1.000   −0.609
MEREC        0.708    −0.193       −0.267  −0.331  −0.609  1.000
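The matrices in Tables 15, 19, 21, 24, and 27 are ordinary Pearson correlations and can be verified directly. The sketch below recomputes Table 15 from the weight vectors of Table 14; since the inputs are the rounded printed values, the last digit may differ slightly.

    import numpy as np

    # Weight vectors from Table 14, rows in the order Entropy, Centroidous, CRITIC, SD, Mean, MEREC.
    W = np.array([
        [0.147, 0.171, 0.041, 0.003, 0.063, 0.106, 0.172, 0.297],
        [0.152, 0.154, 0.181, 0.111, 0.153, 0.062, 0.109, 0.077],
        [0.082, 0.068, 0.089, 0.175, 0.127, 0.290, 0.094, 0.075],
        [0.114, 0.114, 0.116, 0.129, 0.139, 0.152, 0.127, 0.110],
        [0.129, 0.100, 0.174, 0.238, 0.125, 0.105, 0.044, 0.085],
        [0.286, 0.124, 0.036, 0.010, 0.055, 0.132, 0.170, 0.187],
    ])

    print(np.corrcoef(W).round(3))   # pairwise Pearson correlations, cf. Table 15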
Table 16. Estimates of mobile phones using the SAW method.

Method       A1     A2     A3     A4     A5     A6     A7
Entropy      0.306  0.115  0.102  0.099  0.137  0.130  0.110
Centroidous  0.233  0.144  0.113  0.113  0.136  0.130  0.130
CRITIC       0.201  0.131  0.108  0.111  0.165  0.163  0.121
SD           0.235  0.136  0.110  0.111  0.146  0.141  0.122
Mean         0.207  0.144  0.116  0.114  0.145  0.142  0.132
MEREC        0.244  0.106  0.100  0.099  0.140  0.145  0.166
Table 17. Ranked values of alternative estimates determined using the SAW method.

Method       A1  A2  A3  A4  A5  A6  A7
Entropy      1   4   6   7   2   3   5
Centroidous  1   2   7   6   3   4   5
CRITIC       1   4   7   6   2   3   5
SD           1   4   7   6   2   3   5
Mean         1   3   6   7   2   4   5
MEREC        1   5   6   7   4   3   2
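Tables 16 and 17 follow from SAW aggregation of the weighted normalised values. One detail is inferred rather than stated in this excerpt: the printed scores (e.g., 0.233 for A1 under the Centroidous weights) are reproduced only if the cost criterion Cr1 is first converted to benefit form via min(x)/x before sum normalisation, so that conversion is assumed below; saw_scores and cost_rows are illustrative names.

    import numpy as np

    def saw_scores(X, w, cost_rows=()):
        # Convert cost criteria to benefit form with min(x)/x (assumed convention),
        # sum-normalise each criterion row, then aggregate with the criteria weights.
        Xb = X.astype(float).copy()
        for i in cost_rows:
            Xb[i] = Xb[i].min() / Xb[i]
        Rb = Xb / Xb.sum(axis=1, keepdims=True)
        return w @ Rb                                  # one SAW score per alternative

    w_centroidous = np.array([0.152, 0.154, 0.181, 0.111, 0.153, 0.062, 0.109, 0.077])
    s = saw_scores(X, w_centroidous, cost_rows=(0,))   # X from the first sketch; Cr1 is the cost criterion
    print(s.round(3))                                  # [0.233 0.144 0.113 0.113 0.136 0.130 0.130]
    print(np.argsort(-s).argsort() + 1)                # ranks, 1 = best, cf. Table 17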
Table 18. Artificially generated data-1 with high correlation between criteria.

     A1     A2     A3     A4     A5     A6     A7
C1   1000   2000   2500   999    1970   1150   2100
C2   50     150    200    49.9   147    65     160
C3   7.07   12.25  14.14  7.06   12.12  8.06   12.65
C4   53.70  60.18  91.44  36.03  47.52  47.17  100.00
C5   23.00  39.35  1.40   1.74   29.79  7.13   23.70
C6   0.89   0.66   0.59   0.39   0.60   0.39   0.03
Table 19. Correlation between data criteria in data-1.

     C1      C2      C3      C4      C5     C6
C1   1.000   1.000   0.998   0.752   0.230  −0.176
C2   1.000   1.000   0.998   0.752   0.230  −0.176
C3   0.998   0.998   1.000   0.735   0.279  −0.192
C4   0.752   0.752   0.735   1.000   0.028  −0.399
C5   0.230   0.230   0.279   0.028   1.000  0.225
C6   −0.176  −0.176  −0.192  −0.399  0.225  1.000
Table 20. Objective weights of data-1 criteria determined using different methods.

Method       C1     C2     C3     C4     C5     C6
Entropy      0.075  0.158  0.045  0.078  0.440  0.203
Centroidous  0.243  0.155  0.295  0.148  0.074  0.086
CRITIC       0.123  0.123  0.127  0.163  0.215  0.249
SD           0.176  0.176  0.181  0.163  0.168  0.136
Mean         0.162  0.162  0.173  0.148  0.157  0.198
MEREC        0.113  0.167  0.081  0.159  0.382  0.098
Table 21. Correlation of weights of data-1 criteria determined using different methods.

             Entropy  Centroidous  CRITIC  SD      Mean    MEREC
Entropy      1.000    −0.771       0.656   −0.255  −0.002  0.894
Centroidous  −0.771   1.000        −0.813  0.687   −0.104  −0.611
CRITIC       0.656    −0.813       1.000   −0.864  0.503   0.359
SD           −0.255   0.687        −0.864  1.000   −0.625  0.075
Mean         −0.002   −0.104       0.503   −0.625  1.000   −0.444
MEREC        0.894    −0.611       0.359   0.075   −0.444  1.000
Table 22. Artificially generated data-2 with a slight difference between mean values.

      A1    A2    A3    A4    A5    A6    A7    A8    A9
C1    4     5     4     3     2     5     4     3     5
C2    5     4     4     2     4     5     4     4     3
C3    3     3     4     4     5     5     4     3     4
C4    4     2     3     5     2     4     4     4     5
C5    5     5     4     4     5     5     5     3     3
C6    4     3     5     3     3     5     4     4     5
C7    4     5     4     5     5     5     4     3     4
Mean  4.14  3.86  4.00  3.71  3.71  4.86  4.14  3.43  4.14
Table 23. Objective weights of data-2 criteria determined using different methods.

Method       C1     C2     C3     C4     C5     C6     C7
Entropy      0.197  0.156  0.101  0.251  0.106  0.119  0.069
Centroidous  0.131  0.133  0.156  0.095  0.149  0.167  0.169
CRITIC       0.122  0.115  0.140  0.170  0.160  0.156  0.136
SD           0.133  0.117  0.148  0.141  0.164  0.164  0.134
Mean         0.154  0.154  0.109  0.136  0.163  0.122  0.163
MEREC        0.161  0.160  0.154  0.208  0.092  0.136  0.089
Table 24. Correlation of weights of data-2 criteria determined using different methods.

             Entropy  Centroidous  CRITIC  SD      Mean    MEREC
Entropy      1.000    −0.944       0.146   −0.293  −0.041  0.871
Centroidous  −0.944   1.000        −0.184  0.350   −0.055  −0.812
CRITIC       0.146    −0.184       1.000   0.748   −0.278  0.060
SD           −0.293   0.350        0.748   1.000   −0.343  −0.330
Mean         −0.041   −0.055       −0.278  −0.343  1.000   −0.451
MEREC        0.871    −0.812       0.060   −0.330  −0.451  1.000
Table 25. Artificially generated data-3 with large differences in mean values.

      A1      A2    A3      A4    A5    A6      A7    A8    A9
C1    400     5     4000    3     2     5000    40    3     50
C2    5       4     4       2     4     5       4     4     3
C3    3       3     4       4     5     5       4     3     4
C4    4       2     3       5     2     4       4     4     5
C5    700     5     670     4     5     700     5     3     3
C6    4       3     5       3     3     5       4     4     5
C7    40      5     45      5     5     55      4     3     47
Mean  165.14  3.86  675.86  3.71  3.71  824.86  9.29  3.43  16.71
Table 26. Objective weights of data-3 criteria determined using different methods.

Method       C1     C2     C3     C4     C5     C6     C7
Entropy      0.449  0.010  0.006  0.015  0.354  0.007  0.158
Centroidous  0.070  0.170  0.156  0.138  0.097  0.192  0.177
CRITIC       0.109  0.133  0.183  0.200  0.142  0.122  0.112
SD           0.140  0.109  0.138  0.132  0.174  0.153  0.154
Mean         0.069  0.206  0.145  0.182  0.108  0.163  0.127
MEREC        0.450  0.026  0.029  0.041  0.309  0.020  0.124
Table 27. Correlation of weights of data-3 criteria determined using different methods.

             Entropy  Centroidous  CRITIC  SD      Mean    MEREC
Entropy      1.000    −0.859       −0.495  0.477   −0.901  0.992
Centroidous  −0.859   1.000        0.044   −0.223  0.731   −0.896
CRITIC       −0.495   0.044        1.000   −0.230  0.438   −0.460
SD           0.477    −0.223       −0.230  1.000   −0.596  0.399
Mean         −0.901   0.731        0.438   −0.596  1.000   −0.895
MEREC        0.992    −0.896       −0.460  0.399   −0.895  1.000