Article

SAPEVO-PC: Integrating Multi-Criteria Decision-Making and Machine Learning to Evaluate Navy Ships

by
Igor Pinheiro de Araújo Costa
1,2,*,
Arthur Pinheiro de Araújo Costa
3,
Miguel Ângelo Lellis Moreira
1,2,
Marcos Alexandre Castro Junior
4,
Daniel Augusto de Moura Pereira
5,
Carlos Francisco Simões Gomes
2 and
Marcos dos Santos
2,3
1
Operational Research Department, Naval Systems Analysis Center (CASNAV), Rio de Janeiro 20091-000, Brazil
2
Production Engineering Department, Fluminense Federal University (UFF), Niteroi 24210-346, Brazil
3
Systems and Computing Department, Military Institute of Engineering (IME), Rio de Janeiro 22290-270, Brazil
4
Postgraduate Department of Accounting Sciences, State University of Rio de Janeiro (UERJ), Rio de Janeiro 20950-000, Brazil
5
Production Engineering Department, Federal University of Campina Grande (UFCG), Campina Grande 58428-830, Brazil
*
Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(8), 1444; https://doi.org/10.3390/jmse12081444
Submission received: 16 July 2024 / Revised: 7 August 2024 / Accepted: 18 August 2024 / Published: 21 August 2024
(This article belongs to the Section Marine Environmental Science)

Abstract
The selection of a navy ship is essential to guarantee a country’s sovereignty, deterrence capabilities, and national security, especially in the face of possible conflicts and diplomatic instability. This paper proposes the integration of concepts related to the multi-criteria decision making (MCDM) methodology and machine learning, creating the Simple Aggregation of Preferences Expressed by Ordinal Vectors—Principal Components (SAPEVO-PC) method. The proposed method represents an evolution of the SAPEVO family, allowing the inclusion of qualitative preferences, and adds concepts from Principal Component Analysis (PCA), aiming to simplify the decision-making process while maintaining precision and reliability. We carried out a case study analyzing 32 warships and ten quantitative criteria, demonstrating the practical application and effectiveness of the method. The generated rankings reflected both subjective perceptions and the quantitative performance data of each ship. This innovative integration of qualitative data with a quantitative machine learning algorithm ensures comprehensive and robust analyses, facilitating informed and strategic decisions. The results showed a high degree of consistency and reliability, with the top and bottom rankings remaining stable across different decision-makers’ perspectives. This study highlights the potential of SAPEVO-PC to improve decision-making efficiency in complex, multi-criteria environments, contributing to the field of marine science.

1. Introduction

Military research and development in the modern world are not only two of the main factors in building the military power of a state, strengthening its strategic independence, and ensuring technological advantages, but they also represent essential factors in the scientific, technological, and economic progress of the state [1].
The relationship between the arms race and strategic stability has been significant for states seeking to increase their military power, with consequent effects on their rivalries with other states [2]. According to [3], armaments are fundamental to military power, with emphasis on large naval, air, and land weapons, nuclear weapons, and ballistic missile capability. Also, Gilkova [1] has stated that the countries leading in spending on military research and development not only dominate the creation and development of weapons, military equipment, and ammunition, but are also currently the most developed countries in the world in the fields of robotics, information, and other types of civilian technology.
In this context, selecting a warship is crucial for any nation with significant maritime interests, as this choice directly reflects the country’s defense capabilities, power projection, and national security [4]. This process requires the meticulous analysis of strategic, budgetary, and technological demands.
Additionally, choosing a warship has implications beyond the purely military scope, influencing a country’s diplomatic and geopolitical relations [5], mainly due to the enormous growth in maritime traffic activities [6]. The decision may signal a commitment to maintaining regional peace and stability, or reflect a nation’s capability to deter potential threats. Thus, selecting a warship becomes a strategic and political issue of considerable magnitude, with long-lasting implications for national security and prestige, as discussed by [7], which emphasizes how naval capabilities can impact international perceptions and a nation’s status. Therefore, choosing a warship involves technical, economic, political, and strategic considerations that require a holistic and well-grounded approach.
Given the complexity surrounding most military problems, operations research (OR) emerged in the early days of World War II as a scientific approach to decision-making in the military sphere [8]. Within the armed forces (AF), a routinely employed OR technique is the multi-criteria decision making (MCDM) methodology, a term that refers to a collection of formal methodologies aiming to explicitly incorporate multiple criteria in assisting stakeholders and groups in exploring significant decisions [9]. These decisions typically involve conflicting objectives, ambiguous and non-repeatable uncertainties, and the allocation of costs and benefits among various individuals, businesses, groups, and organizations [10,11].
Greco et al. [12] have stated that the fundamental elements of MCDM consist of a set of alternatives (or observations), a minimum of two criteria (or variables), and the involvement of at least one decision-maker (DM). When considering a finite set of alternatives, the DM may encounter three primary types of problem: choice problems entail selecting a subset that contains the best alternatives, ranking problems involve arranging the alternatives from the best to the worst, and sorting problems entail categorizing the alternatives into pre-defined and ordered groups [13].
The complexity of the decision is increased by the need to balance multiple criteria, such as combat capability, operational range, and cost efficiency. This characterizes the problem as a typical multi-criteria decision scenario, which can be effectively addressed by applying MCDM concepts. These methods are designed to structure and solve complex decision-making problems in which conflicting criteria must be evaluated simultaneously [14].
Costa et al. [15] carried out an extensive review of the literature on MCDM in the military field, and highlighted several recent studies, such as the choice of a military hospital care vessel to support the fight against the COVID-19 pandemic [4], ranking policing strategies [16,17], the selection of drones to be acquired for use in naval warfare [18] and public security [19], and the classification of aircraft [20,21], military bases [22] and military personnel [23].
A complicating factor in decision problems is usually the volume of data generated, which has grown exponentially with new communication technologies such as 5G and the rapid development of the Internet of things (IoT) [24]. Thus, according to [25], it is necessary to develop tools that provide a rapid analysis of large volumes of data to support the decision-making process.
The explosion of data in today’s era, driven by technology such as 5G and the Internet of things, underscores the need for agile tools to analyze vast amounts of information and support complex decision-making processes [25]. In this sense, it is increasingly imperative that researchers develop and apply methodologies that facilitate the analysis of vast datasets. This quest requires integrating more advanced and efficient data analysis techniques to extract meaningful insights and support complex decision-making processes.
In this scenario, principal component analysis (PCA) gains much importance, as it allows the analysis of observations described by quantitative interdependent variables to extract meaningful information, representing them as a set of new orthogonal variables [26]. PCA is an unsupervised machine learning (ML) technique [27] that aims to identify a relatively small number of factors representing the behavior set of original interdependent variables [28].
Still, in this context, several authors have pointed out that the number of pairwise evaluations represents a significant challenge in multi-criteria methods. Saaty [29], known for developing the Analytic Hierarchy Process (AHP) method, recognizes that directly comparing criteria and alternatives in a large set can be costly and susceptible to inconsistencies. Likewise, Greco et al. [12], in a comprehensive analysis of multi-criteria approaches, highlighted the practical and cognitive difficulty of dealing with a large volume of comparisons, emphasizing the need for methods that effectively deal with this complexity.
As a contribution to mitigating this problem that is so relevant today, this paper proposes a new MCDM method called Simple Aggregation of Preferences Expressed by Ordinal Vectors—Principal Components (SAPEVO-PC), which integrates concepts from multi-criteria analysis and advanced ML techniques. The proposed method reduces the number of pairwise evaluations to obtain the criteria weights, using PCA concepts to evaluate the performance of the alternatives. The methodology above can be useful for dealing with vast datasets and extracting significant insights.
The SAPEVO-PC method stands out for its ability to handle large volumes of data efficiently and its capacity to provide robust and reliable decision support. Integrating advanced ML techniques into the MCDM framework permits an analysis that can uncover deeper insights and patterns within the data. The method enables the analysis of qualitative data (obtaining criteria weights) with less cognitive effort and quantitative data (analysis of the alternatives’ performance), using each approach’s advantages.
Given the above, this paper proposes a methodology for evaluating warships based on concepts related to MCDM and ML. We analyzed data from the military, economic, and strategic spheres of 32 ships from different countries’ navies and evaluated them using 10 quantitative criteria. The SAPEVO-PC method stands out for the following characteristics related to the decision-making context:
  • Criteria weights through expert opinion: Allow the determination of criteria weights considering the opinions of multiple decision-makers, with a reduced number of pairwise comparisons required;
  • Identification of Correlations: By identifying correlations between the original variables, the method verifies correlated military capabilities, providing insights into how different attributes of warships relate to each other;
  • Data Reduction: Creating factors that represent the original criteria provides structural data reduction, simplifying complex datasets into more manageable and interpretable components without losing essential information;
  • Performance Indicators: Developing consolidated rankings by creating performance indicators based on generated factors guarantees a comprehensive and robust evaluation framework. It allows for a more accurate assessment of warship capabilities.
By proposing and applying the SAPEVO-PC method, this research offers a scientifically robust and replicable methodology that can be applied to various tactical, operational, and strategic challenges in the maritime domain and broader scientific fields. The ability to incorporate the opinions of multiple decision-makers and reduce the number of assessments required increases the efficiency and effectiveness of the decision-making process, making it a valuable tool for both academic research and practical applications in defense strategy and naval engineering.
SAPEVO-PC provides an innovative approach to warship evaluation by combining MCDM and ML techniques, and improving the selection process’s accuracy and reliability. This method stands out, especially in the context of military problems, as it can deal with complex criteria and strategic considerations involved in choosing warships.
This work can contribute to academia and society by presenting a model that integrates MCDM and ML concepts. The proposed framework has a robust dual nature and can be replicated in tactical, operational, and strategic applications for military and civil environments.

2. Materials and Methods

MCDM methodologies constitute a comprehensive set of formal approaches designed to explicitly integrate multi-criteria consideration into problem-solving, supporting stakeholders and groups in exploring important decisions [9]. These methodologies involve essential elements, such as actions (alternatives), at least two criteria, and a DM. The main issues addressed include the selection, ranking, or sorting of alternatives [12].
Over the years, several MCDM models have been proposed, concentrated in two major groups of methodologies: aggregation methods using a single synthesis/compensatory criterion (American school), and outranking/non-compensatory methods (French/European school) [30].
Based on utility theory, the first group considers two types of relations between alternatives: preference and indifference [31]. In this specific scenario, relations of incomparability are not considered, and transitivity between preferences is assumed [5]. This means that if alternative A is preferred over alternative B and B is preferred over C, then A is preferred over C. Among the main methodologies belonging to this group, we highlight:
  • AHP, proposed by [29]: method which structures complex problems into a hierarchy of objectives, criteria, and sub-criteria, allowing pairwise comparisons of alternatives and the synthesis of priorities;
  • MACBETH (Measuring Attractiveness by a Categorical-Based Evaluation Technique) [32]: this uses qualitative judgments to construct an attractiveness scale for different alternatives;
  • MAUT (Multiple Attribute Utility Theory), developed by [10]: this method uses utility functions to model the decision-makers’ preferences across multiple criteria, helping to identify the best alternative based on the maximization of expected utility;
  • SMART (Simple Multi-Criteria Attribute Rating Technique), proposed by [33]: this employs rating scales to score and weigh alternatives against criteria, facilitating decision-making;
  • TODIM (Multi-criteria Interactive Decision-making), presented by [34]: this interactive method considers the prospective behavior of decision-makers, utilizing prospect theory;
  • TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution), implemented by [35]: this method identifies the best alternative based on its proximity to the ideal solution; and
  • UTA (Additive Utility Theory) [36]: this method uses additive utility functions to model decision-makers’ preferences.
The second group of MCDM methods is characterized by the outranking relationship between alternatives, marked by non-transitivity relationships between preferences. The methods of this group extend a basic set of situations of preference relations based on four types: indifference relationship, weak preference, strict preference, and incomparability relationship [37]. The two main approaches that make up this class are considered method families:
  • ELECTRE (Elimination and Choice Translating Reality), proposed by [38]: it is used to solve complex decision problems by constructing outranking matrices, which help compare alternatives based on multiple criteria. The various versions of the ELECTRE method, such as ELECTRE I, II, III, IV, and TRI, are adapted to different types of problems and contexts; and
  • PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation), developed by [39]: the method is based on constructing preference flows, allowing direct comparison between alternatives. PROMETHEE I and II are the best-known versions, with the former used for partial ranking and the latter for complete ranking.

2.1. SAPEVO Method

The Simple Aggregation of Preferences Expressed by Ordinal Vectors (SAPEVO), introduced by [40], takes an approach that handles input data in an ordinal manner when evaluating variables in a specific context. This method facilitates the processing of data that reflects the subjectivity of the decision-maker. To deal with multiple decision-makers, the methodology called Simple Aggregation of Preferences Expressed by Ordinal Vectors—Multi Decision-makers (SAPEVO-M), proposed by [41], offers a scenario-focused assessment with the participation of multiple decision-makers. In addition to improving the previous model, this approach reinforces its consistency.
The distinguishing feature of this model lies in its ability to transform preferences in an ordinal manner. This process is used to derive preference relationships between the alternatives in each criterion and to determine the criteria weights, resulting in the definition of the corresponding degrees of priority [41]. SAPEVO-M procedures are conducted in two stages:
  • Transform ordinal preferences of criteria into a vector of criteria weights; and
  • Integrate the vector criteria of different decision-makers.
Due to the relevance of the method, several variants have emerged over time, configuring a family of MCDM methods. Table 1 presents the SAPEVO family’s methods and their main characteristics.
Compensatory methods allow compensation between criteria, which means that a high performance in one criterion can compensate for the low performance of an alternative in another criterion. In contrast, non-compensatory methods, such as ELECTRE and PROMETHEE, do not allow such compensations. Non-compensatory logic establishes outranking relationships between pairs of alternatives, in which alternative A outranks alternative B if it is at least as good as B in the criterion under consideration; this is the central idea expressed by the concept of outranking [44].
Ordination methods rank alternatives from best to worst; classification methodologies distribute alternatives into pre-defined and ordered categories [13]. Data used in MCDM can also be classified as cardinal or ordinal. The first are those that have information about the magnitude of the differences between alternatives, allowing arithmetic operations such as addition, subtraction, and multiplication to be carried out. Ordinal data only indicate the order or ranking of alternatives without providing information about the differences’ magnitude.
The data in Table 1 show that the methods belonging to the SAPEVO family constitute a comprehensive set of approaches encompassing the main typologies of MCDM problems, such as sorting, selection, and classification. However, despite the diversity of applications and variations within the SAPEVO family, all these methods share a fundamental characteristic: the need to perform pairwise evaluations of both the criteria and the alternatives.
Equation (1) shows the number of pairwise evaluations required for the application of the original SAPEVO methods:
P = \frac{n \cdot (n - 1)}{2}
P represents the number of pairwise evaluations required to obtain the weights of each criterion and the scores of each alternative based on the qualitative criteria; n corresponds to the number of criteria or alternatives analyzed, with P being calculated for each analysis (of the criteria and alternatives, separately).
There is a notable increase in pairwise evaluations in decision scenarios involving many criteria and alternatives. This increase can result in considerable cognitive strain on the decision-makers, making the decision-making process more challenging and subject to judgment inconsistencies. The accumulation of pairwise evaluations can overwhelm decision-makers, making it difficult to maintain consistency and accuracy in comparisons. In this sense, an important innovation of SAPEVO-PC is the significant reduction in pairwise comparisons required during the assessment process.
By incorporating the SAPEVO-PC method, the evaluation process becomes more efficient, reducing the cognitive load of decision-makers and increasing the overall reliability of the results. This efficiency is achieved without compromising the comprehensiveness and accuracy of assessments.
Another significant innovation is the integration of ML algorithms with MCDM techniques. This combination improves analytical capabilities and provides a more robust framework for decision support in complex problems.
Another innovative feature of SAPEVO-PC is the integration of ML algorithms into MCDM analysis. This integration helps reduce the complexity of high-dimensional data while preserving essential patterns and relationships. Furthermore, the methodology proposed in this article ensures that both qualitative expert opinions and quantitative data-based insights are used effectively in the decision-making process. The result is a more reliable and accurate assessment of alternatives, providing a comprehensive evaluation framework.
The SAPEVO-PC method stands out for its ability to effectively deal with complex and multidimensional data sets. This feature is crucial in the maritime domain, where decision-making often involves multiple criteria and diverse stakeholder perspectives. Combining qualitative and quantitative analyses, the method’s two-phase approach guarantees a balanced and complete assessment. These innovations fill existing gaps in the literature and offer a useful tool for decision support in complex problems.

2.2. SAPEVO-PC Method

This topic describes the concepts and steps for applying the SAPEVO-PC method. The method has two distinct phases: predominantly qualitative and quantitative. The first phase consists of obtaining the criteria weights, in which the qualitative opinions of multiple decision-makers are considered to determine the relative importance of each criterion. This phase ensures that expert opinions and preferences are effectively integrated into decision-making. The second phase involves evaluating the performance of alternatives using ML algorithms, specifically PCA. This quantitative phase provides data-driven insights to create performance indicators, providing a robust and objective assessment of each alternative. By combining these two phases, SAPEVO-PC offers a comprehensive and balanced approach to multi-criteria decision-making, integrating qualitative input from experts with quantitative analytical rigor.
In the qualitative phase, we must define the criteria called Positive Ideal Element (PIE) and Negative Ideal Element (NIE) to determine the criteria weights. We emphasize that the PIE is the criterion considered the most important, while the NIE is the least important, according to the decision-maker’s point of view.
The pairwise assessments are carried out only by comparing the PIE with each of the other criteria and, similarly, comparing the NIE with each other. The SAPEVO-PC method is based on a seven-point ordinal scale, as shown in Table 2. This scale ranges from 1 (Equality) to 7 (Absolutely best/worst), allowing the decision-maker to establish preferences qualitatively, according to their subjective evaluations.
To make the pairwise comparisons of the criteria with respect to the NIE, values symmetrical to those indicated in Table 2 will be assigned, considering that this criterion is the worst among all the criteria under analysis (all grades attributed to the NIE relative to the others will therefore be negative).
A significant reduction is observed in the number of pairwise evaluations in SAPEVO-PC compared to the other methods of the SAPEVO family and AHP. Equation (2) represents the number of pairwise evaluations (K) required to obtain the criteria weights and evaluate the alternatives against qualitative criteria using the SAPEVO-PC method.
K = 2n - 3
where n is the number of criteria or alternatives being evaluated.
For comparison purposes, Table 3 shows the number of comparisons required for the application of the classic SAPEVO and SAPEVO-PC methods, considering the number n of criteria:
It is observed that SAPEVO-PC becomes advantageous from 4 evaluation criteria, and this advantage increases as the number of criteria and alternatives increases.
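For illustration, the small R sketch below tabulates both counts and makes the crossover at n = 4 explicit. The function names are ours and serve only to reproduce Equations (1) and (2); they are not part of the method's published implementation.

```r
# Pairwise-evaluation counts: classic SAPEVO (Equation (1)) versus
# SAPEVO-PC (Equation (2)), for n criteria or alternatives.
sapevo_classic <- function(n) n * (n - 1) / 2
sapevo_pc      <- function(n) 2 * n - 3

n <- 3:12
data.frame(n, classic = sapevo_classic(n), sapevo_pc = sapevo_pc(n))
```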

2.2.1. Phase 1: Qualitative Evaluation for Obtaining Criteria Weights

Step 1. Define a set of decision criteria. In this step, the criteria C = {C1; C2;...; Cn} will be assessed in the decision-making process.
Step 2. Identify the criteria considered the best (most desirable, most important) and the worst (least desirable, least important). At this stage, the decision-maker determines which criterion is best or worst without pairwise comparisons.
Step 3. Evaluate the preference of the best overall criterion (PIE) over the other criteria (A+) using the 7-point scale presented in Table 2. The result of the evaluation of the precedence of the best criterion (PIE) over the others is presented through the vector represented in Equation (3):
A^{+} = \left( a_{P1},\ a_{P2},\ \ldots,\ a_{Pn} \right)
where a_{Pj} represents the preference of the best criterion (PIE) over a given criterion j, and n represents the number of criteria. Note that the element a_{PP} = 1 because it represents the comparison of the element PIE with itself.
Step 4. Establish the preference of all criteria over the NIE (A−) using the symmetric values (multiplied by −1) of Table 2. The result of the evaluations of precedence of the other criteria over the NIE forms the vector represented in Equation (4):
A^{-} = \left( a_{N1},\ a_{N2},\ \ldots,\ a_{Nn} \right)
where a_{Nj} indicates the preference of criterion j over the worst criterion (NIE). We emphasize that the element a_{NN} = 1 because it corresponds to the comparison of the NIE with itself.
Step 5. Calculate the element A*, represented by the element-wise sum of A− and A+ for each criterion, according to Equation (5):
A^{*} = \left( (a_{N1} + a_{P1}),\ (a_{N2} + a_{P2}),\ \ldots,\ (a_{Nn} + a_{Pn}) \right)
This term represents the overall level of precedence for each of the criteria.
Step 6. Calculate the normalized AN precedence value for each criterion according to the vector represented in Equation (6):
A_{N} = \left( \frac{a_{1}^{*} - \max_{j} a_{j}^{*}}{\min_{j} a_{j}^{*} - \max_{j} a_{j}^{*}},\ \frac{a_{2}^{*} - \max_{j} a_{j}^{*}}{\min_{j} a_{j}^{*} - \max_{j} a_{j}^{*}},\ \ldots,\ \frac{a_{n}^{*} - \max_{j} a_{j}^{*}}{\min_{j} a_{j}^{*} - \max_{j} a_{j}^{*}} \right)
where a_{j}^{*} indicates the sum of the elements a_{Nj} and a_{Pj}. It should be noted that the normalized values obtained range from 0 to 1, where the weight of the NIE equals 0 and the PIE receives a maximum weight of 1. However, as in the classic SAPEVO models, this normalization can result in criteria with weights equal to 0, which conceptually makes no sense, as such a criterion would not be considered in the analysis.
Gomes et al. [41] suggested assigning a weight equal to 1% of the next-lowest value to circumvent this limitation. However, these values remain very close to 0 in practice. In this paper, we propose assigning a weight of 20% of the next-lowest value to the least important criterion. The results represent cardinal scores of the criteria and of the alternatives against the qualitative criteria, clarifying the preference relationships for each variable in each evaluation context.
Step 7. Calculate the values of the criteria weights, WS, represented on a percentage scale, according to the vector presented by Equation (7):
W_{S} = \left( \frac{A_{N1}}{\sum_{j=1}^{n} A_{Nj}},\ \frac{A_{N2}}{\sum_{j=1}^{n} A_{Nj}},\ \ldots,\ \frac{A_{Nn}}{\sum_{j=1}^{n} A_{Nj}} \right)
Steps 1 to 7 are performed for each DM involved in decision-making.
Step 8. Calculate the Group Decision Value, WG, represented on a percentage scale, according to Equation (8):
W_{G} = \left( \frac{\sum_{d=1}^{D} W_{S1}^{d}}{D},\ \frac{\sum_{d=1}^{D} W_{S2}^{d}}{D},\ \ldots,\ \frac{\sum_{d=1}^{D} W_{Sn}^{d}}{D} \right)
where W_G is the vector of group decision values for the criteria; D is the total number of decision-makers; and W_{Si}^{d} is the weight of the i-th criterion assigned by the d-th decision-maker.
In this step, we compute the average of the decision-makers’ evaluations to derive the group decision value. It involves taking the mean of the individual weights assigned by each decision-maker to each criterion. The group decision value provides a collective assessment that incorporates the perspectives of all decision-makers, ensuring a balanced and representative evaluation.
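The R sketch below condenses Steps 3 to 8 for a single decision-maker and for the group aggregation. It is an illustrative implementation under our reading of Equations (3) to (8), including the reconstructed normalization of Equation (6) and the 20% replacement rule described above; the function and object names (phase1_weights, group_weights, a_plus, a_minus) are ours, not the authors' published code.

```r
# Phase 1 sketch: a_plus is the PIE-versus-others vector (Table 2 scale,
# with a_PP = 1); a_minus is the others-versus-NIE vector (symmetric,
# negative values, with a_NN = 1).
phase1_weights <- function(a_plus, a_minus) {
  a_star <- a_plus + a_minus                                 # Equation (5)
  a_norm <- (a_star - max(a_star)) / (min(a_star) - max(a_star))  # Equation (6)
  # replace the zero weight of the NIE by 20% of the next-lowest value
  a_norm[a_norm == 0] <- 0.20 * min(a_norm[a_norm > 0])
  a_norm / sum(a_norm)                                       # Equation (7)
}

# Equation (8): group value as the mean of the individual weight vectors
group_weights <- function(ws_list) Reduce(`+`, ws_list) / length(ws_list)
```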

2.2.2. Phase 2: Quantitative Evaluation Integrating MCDM and ML

The second phase of the SAPEVO-PC method is conducted using PCA principles. The concepts and details of PCA will be explained and detailed in this section, as well as the integration of this ML technique with the algorithm proposed in this article.
Initially, we have to address factor analysis, which is useful when seeking to analyze variables that present relatively high correlation coefficients, with the establishment of new variables that capture the collective behavior of the original variables [28], summarizing the data and generating hypotheses [45]. Fávero and Belfiore [46] define each new variable as a factor grouping variables based on established criteria. Therefore, factor analysis is a multivariate technique that seeks to identify several factors representing the behavior set of interdependent original variables [46].
Among the techniques for determining factors, PCA is the most widely used since it assumes that uncorrelated factors can be extracted from linear combinations of the original variables. In other words, PCA allows, based on a set of original variables correlated with each other, another set of variables (factors) to be determined, resulting from the linear combination of the original set [28].
In a historical context, Pearson [47] and Spearman [48] are the founders of factor analysis. While the first author developed the mathematical model of what is conventionally called correlation, the second proposed a methodology to evaluate the inter-relationships between variables. Decades later, Hotelling [49] proposed the term Principal Component Analysis as the analysis that establishes components from the maximization of the variance of original data.
In order to mathematically implement PCA, Fávero and Belfiore [28] initially suggested a database with several observations, n, and, for each observation, i (i = 1,..., n), values corresponding to each of the k metric variables X. To extract factors from the k variables, it is necessary to define the matrix of ρ correlations with the values of Pearson’s linear correlation between each pair of variables, as illustrated in the matrix (9).
\rho = \begin{bmatrix} 1 & \rho_{12} & \cdots & \rho_{1k} \\ \rho_{21} & 1 & \cdots & \rho_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{k1} & \rho_{k2} & \cdots & 1 \end{bmatrix}
The correlation matrix ρ is symmetric about the main diagonal, whose values are equal to 1. The term \rho_{12} represents the Pearson correlation between X1 and X2, calculated using Equation (10) [28]:
\rho_{12} = \frac{\sum_{i=1}^{n} (X_{1i} - \bar{X}_{1}) \cdot (X_{2i} - \bar{X}_{2})}{\sqrt{\sum_{i=1}^{n} (X_{1i} - \bar{X}_{1})^{2}} \cdot \sqrt{\sum_{i=1}^{n} (X_{2i} - \bar{X}_{2})^{2}}}
where \bar{X}_{1} and \bar{X}_{2} correspond to the means of the variables X1 and X2.
Pearson’s correlation consists of a measure of the degree of the linear relationship between two metric variables, ranging from −1 to 1 [50]. Values close to one of these limits correspond to the existence of a linear relationship between the two variables under analysis. Thus, they can significantly contribute to extracting a single factor [46]. In contrast, a value close to 0 indicates that the linear relationship between the two variables is practically non-existent [51].
For a correct extraction of factors based on original variables, the matrix of ρ correlations must present relatively high and statistically significant values [46,52]. According to Hair et al. [53], if a considerable amount of values is below 0.30, there is a preliminary indication that factor analysis may be inappropriate. In this sense, Fávero and Belfiore [46] have stated that the researcher must use the Kaiser–Meyer–Olkin (KMO) statistic and Bartlett’s sphericity test to verify the overall adequacy of factor extraction.
The KMO statistic ranges from 0 to 1, providing the proportion of variance common to all variables in the sample, possibly attributed to a common factor [54]. According to [28], values closer to 1 indicate that the variables share a very high percentage of variance with high Pearson correlations. On the other hand, values close to 0 result from low Pearson correlation indices, which may indicate that factor analysis will not be adequate. Obtaining the KMO statistic proposed by Kaiser [55] is possible using Equation (11):
KMO = \frac{\sum_{l=1}^{k} \sum_{c=1}^{k} \rho_{lc}^{2}}{\sum_{l=1}^{k} \sum_{c=1}^{k} \rho_{lc}^{2} + \sum_{l=1}^{k} \sum_{c=1}^{k} \varphi_{lc}^{2}}, \quad l \neq c
where l and c correspond to the rows and columns of the correlation matrix ρ, respectively, and φ represents the partial correlation coefficients between two variables [28]. Pearson’s correlation coefficients ρ are also known as zero-order correlation coefficients, while the φ coefficients are called higher-order correlation coefficients: coefficients involving three variables are first-order correlation coefficients, those involving four variables are second-order coefficients, and so on [46].
When we want to know the correlation between two variables, we use the partial correlation coefficients, disregarding the effects of the other variables present in the sample under analysis [56]. For example, considering three variables X1, X2, and X3, the first-order correlation coefficients can be defined by Equation (12) [28]:
\varphi_{ab,c} = \frac{\rho_{ab} - \rho_{ac} \cdot \rho_{bc}}{\sqrt{(1 - \rho_{ac}^{2}) \cdot (1 - \rho_{bc}^{2})}}
where φab,c represents the correlation between Xa and Xb, keeping Xc constant, where a, b, and c can assume values 1, 2, or 3, corresponding to the three variables X1, X2, and X3 under analysis. Equation (13) represents the general expression of a given partial correlation coefficient (second order) for a situation with four variables [28]:
\varphi_{ab,cd} = \frac{\varphi_{ab,c} - \varphi_{ad,c} \cdot \varphi_{bd,c}}{\sqrt{(1 - \varphi_{ad,c}^{2}) \cdot (1 - \varphi_{bd,c}^{2})}}
where φab,cd corresponds to the correlation between Xa and Xb, with Xc and Xd constant, where a, b, c, and d can take the values 1, 2, 3, or 4, corresponding to the four variables under analysis. According to [28], obtaining correlation coefficients of higher orders (five or more variables) will be based on determining partial correlation coefficients of lower orders. The authors also point out that even if Pearson’s correlation coefficient between two variables is 0, the partial correlation coefficient between them may not be equal to 0 since this depends on the values of Pearson’s correlation coefficients between each of these variables and the others present in the database.
Factor analysis is appropriate if the partial correlation coefficients between the variables are low [46], which means that such variables share a high percentage of variance. Disregarding one or more of them can impair the quality of the extraction of the analyzed factors. In this sense, Table 4 presents, according to [28], a relationship between KMO statistics and the global adequacy of factor analysis.
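As a hedged illustration, the KMO statistic of Equation (11) can be computed in R as below, assuming (as is common practice) that the partial correlations φ are obtained from the inverse of the Pearson correlation matrix (the anti-image approach). X is the observations-by-variables data matrix and the function name is ours.

```r
# KMO statistic (Equation (11)); partial correlations taken from the
# inverse of the Pearson correlation matrix (anti-image approach).
kmo_statistic <- function(X) {
  R   <- cor(X)                                  # correlation matrix rho
  P   <- solve(R)                                # its inverse
  phi <- -P / sqrt(outer(diag(P), diag(P)))      # partial correlations
  off <- upper.tri(R)                            # only l != c terms
  sum(R[off]^2) / (sum(R[off]^2) + sum(phi[off]^2))
}
```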
The sphericity test proposed by Bartlett [57] compares Pearson’s correlation matrix with an identity matrix I of equal dimension. If the differences between the two matrices are not statistically different from 0 at a given significance level, the extraction of the factors will be inadequate. In this case, according to [28], the Pearson correlations between each pair of variables are statistically equal to 0, making it impossible to extract factors from the original variables. Therefore, expression (14) defines the null (H0) and alternative (H1) hypotheses of Bartlett’s sphericity test:
H_{0}: \rho = \begin{bmatrix} 1 & \rho_{12} & \cdots & \rho_{1k} \\ \rho_{21} & 1 & \cdots & \rho_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{k1} & \rho_{k2} & \cdots & 1 \end{bmatrix} = I = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}
H_{1}: \rho = \begin{bmatrix} 1 & \rho_{12} & \cdots & \rho_{1k} \\ \rho_{21} & 1 & \cdots & \rho_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{k1} & \rho_{k2} & \cdots & 1 \end{bmatrix} \neq I = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}
The statistic corresponding to this test is a χ2 statistic, represented by Equation (15):
\chi_{Bartlett}^{2} = -\left[ (n - 1) - \frac{2k + 5}{6} \right] \cdot \ln |D|
with \frac{k(k-1)}{2} degrees of freedom, where n is the sample size and k is the number of variables. Moreover, D is the determinant of the correlation matrix ρ [46].
According to [28], Bartlett’s sphericity test therefore allows us to verify, for a given level of statistical significance and number of degrees of freedom, whether the value of the χ²_Bartlett statistic is greater than the critical value of the statistic. If this happens, we state that the Pearson correlations between the pairs of variables are statistically different from 0, which means that factors can be extracted based on the original variables.
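A minimal R sketch of Bartlett's sphericity test as given by Equation (15) follows; the function name is ours and the code is an illustrative reading of the formula above.

```r
# Bartlett's sphericity test, Equation (15): n observations, k variables.
bartlett_sphericity <- function(X) {
  n <- nrow(X); k <- ncol(X)
  chi2 <- -((n - 1) - (2 * k + 5) / 6) * log(det(cor(X)))
  df   <- k * (k - 1) / 2
  list(chi2 = chi2, df = df,
       p.value = pchisq(chi2, df, lower.tail = FALSE))
}
```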
After verifying the global adequacy of the factor analysis, we must start by defining the factors by principal components, determining the eigenvalues and eigenvectors of the matrix of ρ correlations, and calculating the factor scores [46].
A factor represents the linear combination of original variables. In this sense, for k variables, a maximum number of k factors (F1, F2,..., Fk) is analogous to the maximum number of clusters that can be defined based on a sample with n observations. Thus, for k variables, we have Equation (16):
F_{1i} = s_{11} \cdot X_{1i} + s_{21} \cdot X_{2i} + \cdots + s_{k1} \cdot X_{ki}
F_{2i} = s_{12} \cdot X_{1i} + s_{22} \cdot X_{2i} + \cdots + s_{k2} \cdot X_{ki}
\vdots
F_{ki} = s_{1k} \cdot X_{1i} + s_{2k} \cdot X_{2i} + \cdots + s_{kk} \cdot X_{ki}
The terms s_{jk} are called factorial scores, representing the parameters of a linear model that relates a given factor to the original variables [28]. According to the expression, it is possible to obtain these scores by determining the eigenvalues and eigenvectors of the correlation matrix ρ, shown in (17).
\rho = \begin{bmatrix} 1 & \rho_{12} & \cdots & \rho_{1k} \\ \rho_{21} & 1 & \cdots & \rho_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{k1} & \rho_{k2} & \cdots & 1 \end{bmatrix}
in which the k × k correlation matrix, according to Fávero and Belfiore [28], presents k eigenvalues \lambda^{2} (\lambda_{1}^{2} \geq \lambda_{2}^{2} \geq \cdots \geq \lambda_{k}^{2}), obtained through Equation (18):
\det(\lambda^{2} I - \rho) = 0, \quad \text{with} \quad \lambda_{1}^{2} + \lambda_{2}^{2} + \cdots + \lambda_{k}^{2} = k
where I is the identity matrix, also with dimensions k × k.
Thus, Fávero and Belfiore [46] define the matrix of eigenvalues \Lambda^{2} according to expression (19):
\Lambda^{2} = \begin{bmatrix} \lambda_{1}^{2} & 0 & \cdots & 0 \\ 0 & \lambda_{2}^{2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_{k}^{2} \end{bmatrix}
To define the eigenvectors of the matrix ρ based on the eigenvalues, we must solve the system of Equation (20) for each eigenvalue \lambda_{k}^{2} (\lambda_{1}^{2} \geq \lambda_{2}^{2} \geq \cdots \geq \lambda_{k}^{2}):
\begin{bmatrix} \lambda_{k}^{2} - 1 & -\rho_{12} & \cdots & -\rho_{1k} \\ -\rho_{21} & \lambda_{k}^{2} - 1 & \cdots & -\rho_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ -\rho_{k1} & -\rho_{k2} & \cdots & \lambda_{k}^{2} - 1 \end{bmatrix} \cdot \begin{bmatrix} v_{1k} \\ v_{2k} \\ \vdots \\ v_{kk} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \;\Rightarrow\; \begin{cases} (\lambda_{k}^{2} - 1) \cdot v_{1k} - \rho_{12} \cdot v_{2k} - \cdots - \rho_{1k} \cdot v_{kk} = 0 \\ -\rho_{21} \cdot v_{1k} + (\lambda_{k}^{2} - 1) \cdot v_{2k} - \cdots - \rho_{2k} \cdot v_{kk} = 0 \\ \quad \vdots \\ -\rho_{k1} \cdot v_{1k} - \rho_{k2} \cdot v_{2k} - \cdots + (\lambda_{k}^{2} - 1) \cdot v_{kk} = 0 \end{cases}
Thus, Favero and Belfiore [28] state that the factorial scores of each factor are calculated by determining the eigenvalues and eigenvectors of the matrix ρ. It is possible to define the vectors of the factorial scores by the expression (21) for the k-th factor [46].
S_{k} = \begin{bmatrix} s_{1k} \\ s_{2k} \\ \vdots \\ s_{kk} \end{bmatrix} = \begin{bmatrix} v_{1k} / \sqrt{\lambda_{k}^{2}} \\ v_{2k} / \sqrt{\lambda_{k}^{2}} \\ \vdots \\ v_{kk} / \sqrt{\lambda_{k}^{2}} \end{bmatrix}
According to [28], the factorial scores of each factor are standardized by the respective eigenvalues. Therefore, the factors calculated by Equation (16) should be obtained by multiplying each factorial score by the corresponding original variable, standardized using the Z-scores procedure. Thus, it is possible to calculate the k-th factor by Equation (22):
F_{ki} = \frac{v_{1k}}{\sqrt{\lambda_{k}^{2}}} \cdot ZX_{1i} + \frac{v_{2k}}{\sqrt{\lambda_{k}^{2}}} \cdot ZX_{2i} + \cdots + \frac{v_{kk}}{\sqrt{\lambda_{k}^{2}}} \cdot ZX_{ki}
where ZX_{i} corresponds to the standardized value of each variable X for a given observation i. All the extracted factors present, among themselves, Pearson correlations equal to 0; that is, they are orthogonal to each other [28].
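Equations (16) to (22) can be reproduced with a plain eigendecomposition of the correlation matrix. The R sketch below follows that reading (latent root criterion, factors computed on the standardized variables); names such as extract_factors are illustrative and the code is not the authors' published implementation.

```r
# Factor extraction by principal components (Equations (16)-(22)).
extract_factors <- function(X) {
  Z    <- scale(X)                                  # standardized variables (Z-scores)
  eig  <- eigen(cor(X))                             # eigenvalues lambda^2 and eigenvectors v
  keep <- which(eig$values > 1)                     # latent root (Kaiser) criterion
  V    <- eig$vectors[, keep, drop = FALSE]
  S    <- sweep(V, 2, sqrt(eig$values[keep]), "/")  # factorial scores, Eq. (21)
  list(factors  = Z %*% S,                          # F_ki, Eq. (22)
       loadings = sweep(V, 2, sqrt(eig$values[keep]), "*"),
       variance = eig$values[keep] / ncol(X))       # shared-variance proportion
}
```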
The innovative integration of the WS weights, obtained in phase 1, with the PCA values occurs in calculating the factor scores. The original criteria weights (WS) are combined with the PCA-derived values in this process. Specifically, each weight of the original criteria arising from phase 1 is multiplied by the corresponding value, ensuring that qualitative expert opinions and strictly quantitative data-based values are incorporated into the final assessment.
This integration allows for a more comprehensive and accurate assessment of alternatives, taking advantage of the strengths of both methods. Considering qualitative and quantitative data, the factor equation can be represented by Equation (23):
F_{ki} = \frac{v_{1k}}{\sqrt{\lambda_{k}^{2}}} \cdot ZX_{1i} \cdot W_{S1} + \frac{v_{2k}}{\sqrt{\lambda_{k}^{2}}} \cdot ZX_{2i} \cdot W_{S2} + \cdots + \frac{v_{kk}}{\sqrt{\lambda_{k}^{2}}} \cdot ZX_{ki} \cdot W_{Sk}
where WS represents the weight of each criterion obtained in phase 1; k represents the number of original criteria under analysis.
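Under this reading of Equation (23), the Phase 1 weights simply scale the standardized criteria before the factor score coefficients are applied, as in the R sketch below (reusing the eigendecomposition of the previous sketch; the function name is ours).

```r
# Weighted factors of Equation (23): ws (group weights W_S from Phase 1)
# multiplies the standardized criteria before projection on the factors.
weighted_factors <- function(X, ws) {
  Zw   <- sweep(scale(X), 2, ws, "*")               # Z X_ki * W_Sk
  eig  <- eigen(cor(X))
  keep <- which(eig$values > 1)
  S    <- sweep(eig$vectors[, keep, drop = FALSE], 2,
                sqrt(eig$values[keep]), "/")
  Zw %*% S                                          # weighted F_ki of Eq. (23)
}
```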
Fávero and Belfiore [46] point out that it is fundamental to establish a criterion to obtain the appropriate amount of factors representing the original variables. Although there are several ways to obtain such a quantity, an analysis must be made based on the magnitude of the eigenvalues calculated from the matrix of ρ correlations [28].
Considering that the eigenvalues represent the percentage of variance shared by the original variables in the formation of each factor, Fávero and Belfiore [28] point out that factors extracted from smaller eigenvalues are formed by lower percentages of variance shared by the original variables. In this sense, factors extracted from eigenvalues of less than one cannot represent the behavior of even one original variable [46].
Among the ways of determining the number of factors, Ruscio and Roche [58] highlight the latent root criterion, or Kaiser criterion, in which only the factors corresponding to eigenvalues greater than one are considered. In this work, we used PCA factor analysis, in which the first factor F1, formed by the highest percentage of variance shared by the original variables, is also called the principal factor [28].
After establishing the factors, it is possible to define the factor loadings, represented by Pearson correlations between the original variables and each factor generated [59]. Table 5 presents the factor loadings for each variable–factor pair.
Based on the latent root criterion (only eigenvalues greater than 1), we assume that the factor loadings between the factors corresponding to eigenvalues of less than one and all the original variables are low, as they will have already presented higher loads with factors previously extracted from higher eigenvalues [46]. Similarly, original variables that share a small portion of variance with the other variables will only present high factor loadings in a single factor. If this occurs for all original variables, there will be no significant differences between the correlation matrices ρ and identity I, making the χ2Bartlett statistic very low and, therefore, the factor analysis will be inappropriate [28].
Since the factor loadings are the Pearson correlations between each variable and each factor, the sum of the squares of these loads in each row of Table 5 will always equal one because each variable shares part of its percentage of variance with all k factors. In addition, the sum of the variance percentages will be 100% [28].
However, if several factors less than k are extracted based on the latent root criterion, each row’s sum of squares of the factor loadings will not equal 1. In this case, the sum is called commonality, which corresponds to the total shared variance of each variable in all factors extracted from eigenvalues greater than 1 [26]. Therefore, Fávero and Belfiore [28] present the Equation (24):
c_{k1}^{2} + c_{k2}^{2} + \cdots = \text{commonality of } X_{k}
The analysis of commonalities aims to verify if any variable does not share a significant percentage of variance with the extracted factors [46]. Fávero and Belfiore [28] state that, although no cutoff point defines a given commonality as being high or low because the sample size can interfere with this judgment, the verification of considerably low commonalities compared to the others may suggest the reconsideration of the inclusion of the respective variable in the factor analysis.
Thus, after defining the factors based on the scores, the factor loadings will equal the estimated parameters of a multiple linear regression model, having as dependent variables a standardized ZX and, as explanatory variables, the factors themselves. The adjustment coefficient R2 of each model will equal the commonality of the original variable [28].
On the other hand, the sum of the squares of the factor loadings in each column of Table 5 will be equal to the respective eigenvalue. This happens because the ratio between each eigenvalue and the total number of variables corresponds to the percentage of variance shared by all k original variables in forming each factor [28], as illustrated by Equation (25):
c_{1k}^{2} + c_{2k}^{2} + \cdots + c_{kk}^{2} = \lambda_{k}^{2}
Finally, it is possible to create a performance ranking between observations. In this sense, a widely accepted procedure is the criterion of weighted sum and ordering, based on the sum, for each observation, of the values obtained from all factors that have eigenvalues greater than 1, weighted by the respective percentages of shared variance [28]. The equation for the final score (Pi) of an alternative i is given by Equation (26):
P_{i} = \sum_{j=1}^{k} F_{ij} \cdot Var_{j}
where P_{i} is the final score of alternative i; F_{ij} is the value of factor j for alternative i; Var_{j} is the percentage of shared variance of factor j; and k is the total number of factors considered (those with eigenvalues greater than 1). This criterion considers the performance relative to all the original variables, because considering only the principal factor could fail to capture, for example, the favorable performance obtained in a given variable that shares a considerable percentage of variance with the second factor [46].
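A short R sketch of Equation (26), producing the final ordering, is given below; argument and function names are illustrative.

```r
# Final score P_i = sum_j F_ij * Var_j and ranking of the alternatives.
final_ranking <- function(factors, variance_share, ids) {
  p <- as.vector(factors %*% variance_share)
  data.frame(alternative = ids, score = p)[order(-p), ]
}
```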
The flowchart presented in Figure 1 summarizes the SAPEVO-PC methodology.
In summary, a set of data containing alternatives and criteria is initially used. Then, the qualitative criteria are evaluated by multiple decision-makers (represented by “N” DMs). The results of these assessments are integrated with the factor scores, allowing the extraction of factors using statistical analyses such as the KMO Test and the Bartlett Sphericity Test to ensure data adequacy. Finally, a performance assessment of the alternatives is carried out, considering the factors calculated by integrating qualitative and quantitative assessments.
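Tying the sketches above together, a hypothetical end-to-end run on a 32 × 10 decision matrix `ships` and a list of decision-maker weight vectors `ws_dm` could look like the following. All object names are assumed for illustration, and the 0.5 KMO threshold and 5% significance level are common choices rather than prescriptions of the method.

```r
# Illustrative pipeline: Phase 1 weights, adequacy tests, Phase 2 factors,
# weighted factors, and final ranking.
ws <- group_weights(ws_dm)                        # Phase 1, Equation (8)
stopifnot(kmo_statistic(ships) > 0.5,             # global adequacy (KMO)
          bartlett_sphericity(ships)$p.value < 0.05)
fa <- extract_factors(ships)                      # Phase 2, PCA factors
Fw <- weighted_factors(ships, ws)                 # Equation (23)
final_ranking(Fw, fa$variance, rownames(ships))   # Equation (26)
```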

2.3. Related Studies

The selection of warships is a complex process requiring advanced decision-making tools. Recent studies have extensively utilized MCDM methodologies to address this complexity [60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76]. Vukić et al. [60] used the AHP method to analyze and prioritize the selection criteria and types of vehicle to monitor navigation safety. Tenório et al. [62] introduced the THOR method to aid the Brazilian Navy in selecting warships amidst budget constraints, effectively quantifying uncertainties. Similarly, Manap and Voulvoulis [63] developed a risk-based framework for selecting sediment-dredging strategies, integrating environmental, economic, and social risks.
The importance of sustainability in naval decisions is emphasized by Pesce et al. [64], who applied MCDM to select sustainable alternatives for cruise ships and assess environmental sustainability in maritime engineering. Gumusay et al. [65] used AHP and GIS to evaluate site suitability for marina construction, combining environmental, economic, and social factors. Maceiras et al. [61] applied a multi-criteria methodology to identify the most promising alternative fuel among those under development for possible implementation on military ships.
Hybrid MCDM models have been extensively studied by several authors. Balin et al. [66] combined fuzzy AHP and TOPSIS to select critical gas turbine components, demonstrating the robustness of hybrid approaches. Similarly, Monprapussorn et al. [67] integrated MCDM and GIS to optimize hazardous waste-transport routes. Bayraktar and Nuran [68] applied the TOPSIS method to improve marine energy efficiency, emphasizing the consideration of multiple criteria such as fuel consumption and emissions.
The application of MCDM to maritime search and rescue operations was explored by Wielgosz and Malyszko [69], and Malyszko [70], highlighting criteria such as effectiveness and safety. Santos et al. [71] employed AHP-Gaussian and Pearson’s correlation to rank warships for the Brazilian navy, aligning with strategic defense programs.
Considering environmental and safety criteria, Stavrou et al. [72] applied MCDM to locate ship-to-ship transfer operations. Malyszko [73] proposed a multi-criteria assessment for search and rescue units, emphasizing the role of decision-support systems. Hansson et al. [74] evaluated the need for alternative marine fuels to reduce greenhouse gas emissions from maritime transport. They employed life cycle analysis (LCA) to assess the environmental impact of alternative fuels, highlighting the importance of sustainable fuel options but not specifically applying MCDM for ship selection. Similarly, Ren and Lützen [75] described a methodology for applying LCA in the maritime sector to evaluate the environmental performance of shipping operations, focusing on environmental impacts without directly addressing MCDM for ship selection. Baesens et al. [76] applied deep learning convolutional neural networks to classify a set of warship images automatically.
The reviewed studies consistently underscore the efficacy of MCDM methods for warship selection and other strategic decisions in naval engineering. The integration of hybrid techniques, such as AHP, TOPSIS, and fuzzy logic, is widely recognized as a robust way of handling complexity and uncertainties in decision-making processes [62,63,66].
The literature review reveals that MCDM approaches significantly enhance the decision-making processes for warship selection and other strategic defense decisions by systematically integrating multiple criteria. Adopting hybrid techniques and fuzzy logic increases the robustness and precision of decision models.
The literature review reveals that previous applications predominantly utilize classical MCDM methods such as AHP, THOR, and various hybrid MCDM approaches. However, these studies lack the integration of deeper mathematical and statistical analyses through ML algorithms to provide more reliability and robustness to the decision-making process. In this context, the methodology proposed in this paper is innovative, as it proposes integrating ML techniques and MCDM models into a real and complex problem related to maritime science and engineering.
Our methodology uniquely combines the qualitative strengths of the MCDM approach with the quantitative rigor of ML techniques, especially PCA. Analyzing real data from 32 military ships provides a robust framework for comprehensive and accurate assessments, filling a gap in the literature.
Studies by [59,61] and others have demonstrated the utility of MCDM methods in various maritime and defense-related applications. However, integrating ML principles, particularly PCA, adds a new dimension to our approach. PCA reduces data complexity while preserving essential standards, facilitating more accurate performance assessments of warships. This integration addresses the limitations of traditional MCDM methods by increasing analytical depth and providing an understanding of the factors that influence ship performance.
The SAPEVO-PC method is aligned with the literature trend of developing decision-support methods that integrate PCA and MCDM techniques, as cited in [77,78]. However, SAPEVO-PC offers innovative advantages, such as integrating state-of-the-art MCDM and ML techniques, while allowing the DM’s opinion to be considered in the decision-making process. This feature promotes more accurate and coherent decision-making aligned with the institutions’ strategic objectives.
Our proposed method stands out for its ability to effectively deal with complex and multidimensional datasets, which is crucial in the maritime domain. This innovative approach increases the accuracy of decision-making and provides a replicable model for addressing tactical, operational, and strategic challenges in marine science and engineering.

3. Methodology

According to the classification proposed by [79], this research can be classified as quantitative, combining case studies and mathematical modelling [80]. This study focuses on evaluating and establishing a ranking of warships, considering the primary vessels available in the world’s navies. This economic–military evaluation incorporates tactical, operational, and strategic aspects to ensure a comprehensive assessment. By utilizing robust mathematical and statistical methods, this research aims to provide a replicable and objective framework for analyzing the capabilities of different warships.
The warship dataset analyzed in this study is highly relevant as it has been meticulously compiled from reliable sources that provide detailed and accurate data on military ships, such as [81,82], ensuring the accuracy and relevance of information. These sources were selected for their credibility and the comprehensiveness of the data offered.
Quantitative data on warships were manually extracted from the identified sources. This process involved reading reports in detail and carefully transcribing relevant information into a central spreadsheet. After initial collection, the data were validated by military experts to ensure accuracy and reliability. This validation process included cross-checking information with multiple sources and correcting any inconsistencies identified.
The dataset includes a comprehensive range of criteria such as displacement, speed, armament, and technological capabilities, ensuring a robust and complete assessment of each warship. By leveraging these reliable sources, the dataset reflects the current state of naval capabilities and provides a solid basis for analyzing and ranking the world’s most advanced warships. Table 6 presents the criteria analyzed in our study, as well as a description of each of them.
These 10 criteria for evaluating warships were carefully considered to cover a wide range of capabilities and operational characteristics, allowing for a robust and diverse analysis. The criteria presented in Table 6 were chosen based on their relevance and direct impact on ship operations and capabilities.
The proposed methodology aims to provide a comprehensive and flexible vision, as each navy may have different objectives when employing a ship, such as operations against natural disasters, social assistance, patrolling, combating piracy, or other tactical, operational, and strategic jobs. For example, payload capacity and displacement are critical for humanitarian assistance operations, while missile range and weapons classification are essential for combat missions. Including diverse criteria allows the method to be adapted to different scenarios and specific needs.
Furthermore, the choice of criteria was limited by the availability of reliable quantitative data in the analyzed databases. We recognize that a complete assessment of a ship’s military strength requires additional, more specific, and confidential information, both quantitative and qualitative. However, the selected criteria provide a solid basis for the comparative analysis proposed in this study.
In this paper, we did not present the specific names of the ships in the analysis based on several important considerations, especially related to the sensitive nature of military data. Military data, including detailed specifications of warships, are highly sensitive and confidential. Disclosing specific information can compromise the operational security of armed forces and expose strategic vulnerabilities. The ships’ explicit identification can reveal details about the capabilities and disposition of naval fleets from different countries.
In this sense, many nations have strict confidentiality policies regarding their military capabilities. Using generic names for the ships respects these policies, ensuring no confidential information is inadvertently disclosed. While the specific names of the ships are not disclosed, the data used in the analysis are real and based on verified information from reliable sources. They include technical and operational specifications such as displacement, speed, cargo capacity, armament, cost, range, crew, year of commissioning, missile range, and radar range.
In addition, this study uses the Soft Systems Methodology (SSM) [83], with the CATWOE analysis depicted in Figure 2, encapsulating the key elements of evaluating Navy Ships using ML and MCDM techniques.
The CATWOE structured critical analysis around the study, focusing on Customers (C), Actors (A), Transformations (T), Worldview (W), Owners (O), and Environmental constraints (E).
The primary objective of this analysis is to present a scientific methodology that can be replicated in this case and various tactical, operational, and strategic problems in the maritime sphere and scientific fields. Using generic names allows the focus to remain on the evaluation criteria and the methodology employed without compromising sensitive information.
Table 7 presents 32 real warships from navies worldwide, evaluated according to the 10 quantitative criteria previously described and presented.
Although a comparison of ships of the same class is more direct and specific, we chose to evaluate a variety of ships of different sizes and characteristics, providing a more comprehensive and generalizable analysis. This study aims to provide a methodology that can be applied to different ships and missions, from large-scale ocean operations to coastal patrols in difficult-to-access areas. The analysis presented here can be adapted to different scenarios by adjusting the weights to specific mission requirements and expanding the criteria to cover additional aspects of ship operation.
The complexity of decisions within the military context necessitates a comprehensive analysis incorporating various facets of military operations, including strategy, operations, logistics, technology, and intelligence. In this regard, integrating opinions from advisors with diverse military profiles is crucial to addressing the complex nature of military operations and high-level decision-making. With this objective, we consulted five distinct profiles of military decision-makers, each tailored to specific specializations within the military framework, as depicted in Table 8:
The aggregation of experts from distinct domains offers a breadth of perspectives and experiences that are pivotal for a thorough understanding of the potential impacts of each decision. Each expert, whether specialized in operational tactics, logistics, intelligence, technology, or strategic planning, provides invaluable insights that aid in mitigating the risks associated with individual biases in the decision-making process.
Figure 3 illustrates the proposed methodology in this paper for assessing warships.
Phase 1 begins with a dataset of warships from navies around the world, followed by qualitative criteria evaluation by DMs across various domains such as Operational Tactics, Logistics and Supply Chain, Intelligence and Security, Technology and Innovations, and Strategic Planning and Policy, culminating in group consensus. Phase 2 involves adapting PCA concepts, conducting validation tests, integrating qualitative and quantitative data, performing a performance evaluation of ships, ranking the ships, and conducting a sensitivity analysis considering the DMs’ individual and consensus opinions, leading to the final decision.
We used the R programming language to implement the SAPEVO-PC method, as illustrated in the next section.

4. Results and Discussion

This section details the results obtained with the proposed methodology, following the SAPEVO-PC steps presented previously. We begin with the qualitative phase of the analysis.

4.1. Obtaining Criteria Weights Using a Qualitative Scale

The qualitative analysis began with Step 1, determining the criteria to be considered in the decision, with preferences expressed on the conceptual scale shown in Table 2. All preference assignments presented in this section were obtained through interviews with military experts. Initially, the Positive Ideal Element (PIE) and Negative Ideal Element (NIE) were identified through a strategic analysis of the criteria. DM 1, an Operational Tactics Advisor, defined the armament as PIE and the cost as NIE.
Table 9 summarizes his preferences. The A+ column represents the preference values assigned by DM 1 to the Armament criterion relative to the others. For this decision-maker, Armament was considered moderately more important (degree 3) than displacement, considerably more important (degree 4) than speed, slightly more important (degree 2) than cargo capacity, equally important (degree 1) to itself, absolutely more important (degree 7) than cost, of great importance (degree 5) relative to range, of very great importance (degree 6) relative to crew, and moderately more important (degree 3) than the criteria year of commissioning, missile range, and radar range.
On the other hand, the A column represents how much the NIE, the cost criterion, is less important than the others, following the same scale of preferences but with symmetric values.
The values A*, AN, and WS were obtained respectively by Equations (4) to (6). In the analysis of the weights assigned to the criteria (WS) by the SAPEVO-PC method, it was observed that the Operational Tactics Advisor (DM 1) significantly prioritized aspects crucial for effectiveness and readiness on the battlefield. Armament, with the highest weight of 16.71%, stood out as the criterion of greatest importance. This focus on armament reflects the advisor’s priority to maximize offensive and defensive capabilities, which is essential for military operations that rely on technological superiority and firepower to achieve tactical objectives effectively. The emphasis on this criterion suggests an approach favoring investment in advanced technologies and weapon systems that can offer a decisive advantage during confrontations.
Besides armament, the Missile Range and Radar Range criteria, with weights of 13.13% and 11.93%, respectively, also receive considerable attention. These criteria are fundamental for the capability of force projection over long distances and advanced surveillance, allowing the tactical advisor to plan and execute operations with a comprehensive understanding of the theatre of operations. Missile Range ensures that forces can precisely strike strategic targets, while Radar Range secures early threat detection, which is essential for maintaining operational security and rapid response capability in conflict environments.
On the other hand, the Cost criterion, with the lowest weight of just 0.95%, reflects the advisor’s stance that cost considerations, while important, are secondary to maintaining superior military capability. This low weight indicates a willingness to invest in cutting-edge equipment and technology, regardless of costs, to ensure that the units under his command have the best tools available. This approach is typical in contexts where operational effectiveness and tactical supremacy are valued more than resource economization, aligning defense investments with long-term strategic goals and readiness requirements.
The results obtained with the SAPEVO-PC method corroborate its reliability in generating weights that effectively represent and quantify the decision-maker’s perceptions. The percentage weights assigned align with the decision-maker’s strategic priorities and tactical considerations, particularly in areas critical to military operations, such as armament, missile range, and radar capabilities. By accurately reflecting these preferences in a quantifiable form, the SAPEVO-PC method increases reliability and adds significant relevance to the decision-making process. This alignment ensures that the derived weights are not arbitrary but are based on the decision-maker’s strategic knowledge and operational realities, thus validating the usefulness and accuracy of the method in complex decision environments.
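To make this step concrete, the listing below gives a minimal R sketch of how the weights in Table 9 can be derived from DM 1's ordinal preferences. It assumes a specific reading of Equations (4) to (6), namely A* = A+ + A, an inverted min–max normalization for AN, and WS as the share of each AN in the total; the treatment of the null value produced for the NIE is a simplification and may differ from the original formulation.

```r
# Minimal R sketch of the qualitative phase for DM 1 (Table 9).
# Assumptions: A* = A+ + A, AN obtained by inverted min-max normalization,
# and WS = AN / sum(AN); the handling of the zero value produced for the
# NIE is simplified here and may differ from Equations (4)-(6).
criteria <- c("DSP", "SPD", "CAP", "ARM", "CST", "RNG", "CRE", "YOC", "MSL", "RAD")
a_plus   <- c(3, 4, 2, 1, 7, 5, 6, 3, 3, 3)            # PIE (Armament) vs. each criterion
a_minus  <- c(-3, -5, -4, -7, 1, -4, -2, -5, -6, -5)   # NIE (Cost) vs. each criterion

a_star <- a_plus + a_minus                              # aggregated preference
an     <- (max(a_star) - a_star) / (max(a_star) - min(a_star))  # normalized utility
ws     <- an / sum(an)                                  # criteria weights

data.frame(criteria, a_plus, a_minus, a_star,
           an = round(an, 2), ws_pct = round(100 * ws, 2))
```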
Continuing with the analysis using the SAPEVO-PC method, the same procedures were carried out for the other DMs, with the A+, A, and Ws values consolidated in Table 10.
Analyzing the weights obtained with SAPEVO-PC for the other decision-makers shows that the values are consistent with each profile. The WS results for DM 2 reflect a clear prioritization of carrying capacity (16.91%), indicative of a logistics and supply chain consultant's focus on resource management and distribution efficiency. This consultant also gave significant weight to speed (14.49%) and displacement (13.29%), suggesting an appreciation of logistic vehicles or vessels' rapid deployment and physical capabilities. Lower weights for cost (6.04%) and crew (4.83%) imply a strategic focus on operational capacity over economic or labor considerations, aligning with a logistics-oriented approach where ensuring material availability and logistical readiness are key criteria.
For DM 3, the radar range received the most weight (16.29%), highlighting the intelligence and security consultant's emphasis on surveillance and early threat-detection capabilities. It was closely followed by speed and range (13.78% each), reflecting the importance of rapid response and broad operational reach in intelligence and security operations. DM 4, with a solid background in technology, attributed the greatest weight to armament (17.11%), illustrating a significant focus on integrating advanced weapons systems, probably reflecting the value attributed to technological superiority. The year of commissioning also receives a high weight (14.67%), indicating a preference for newer and possibly more technologically advanced platforms. The relatively balanced weights in other criteria, such as missile range (12.22%) and radar range (11.00%), demonstrate a comprehensive approach that values offensive capabilities and surveillance technology.
The strategic planning and policy consultant (DM 5) emphasized costs (16.71%), suggesting a strategic priority on fiscal efficiency and budget management. It was followed by the crew (13.13%), focusing on human resource management and perhaps the importance of well-trained personnel in implementing strategic plans. Lower weights for critical operational capabilities such as missile range (4.77%) and radar range (4.77%) may indicate a broader, more policy-oriented approach where specific technical details are less prioritized against overarching strategic and political considerations.
Aggregating the individual weights, we arrive at the weights obtained by the group. In general, armament has the highest group weight (13.71%), indicating a collective recognition of the importance of firepower and defense capabilities in operational scenarios. It is closely followed by range (11.32%) and the year of commissioning (11.29%), suggesting a shared value placed on operational reach and modernity. Consistency in the weights assigned by different consultants to criteria such as radar range (11.60%) and payload capacity (11.01%) further highlights common strategic considerations that transcend individual specializations, underscoring a balanced approach to capabilities and readiness in group decision-making.
To finalize the analysis of the weights obtained through the SAPEVO-PC method, it is essential to highlight the multi-decision-maker characteristic that the method facilitates. This feature allows for individualized analysis and group aggregation, significantly enhancing the decision-making process’s reliability, transparency, and flexibility. By employing this method, individual decision-makers can input their specific preferences and priorities, ensuring that each one’s perspective is thoroughly represented and considered. Subsequently, these individual assessments are aggregated to form a group consensus, which reflects a collective decision.
This dual approach not only ensures that each decision-maker's views are considered, thus increasing the personal relevance and acceptance of the decision outcomes, but also leverages the diverse expertise and insights of the group to form a more balanced and comprehensive decision. Such a methodology is particularly beneficial in complex decision environments where multiple stakeholders are involved and integrating diverse viewpoints can significantly enhance strategic outcomes.
Overall, the SAPEVO-PC method’s capacity to accommodate multiple decision-makers strengthens the decision-making process and fosters a collaborative environment where decisions are not only shared but are also more likely to be supported and implemented effectively due to the inclusive process. This approach inherently boosts the decision’s legitimacy and efficacy, making it a robust tool in strategic decision-making.

4.2. Quantitative Analysis

Before analyzing the alternatives, it is important to evaluate the correlation matrix of the original variables. Correlation analysis allows us to identify which variables are highly correlated and can therefore be combined into a single factor [84]. In this sense, the heat map of correlations is a valuable tool for visualizing these correlations and identifying which variables are highly correlated. Figure 4 shows the heat map of the correlations of the 10 variables evaluated in this study.
The correlation matrix uses a color scale to represent the magnitude of correlations, with lighter colors indicating stronger correlations (positive or negative). We highlight the high correlation (0.916) between displacement and load capacity by analyzing the main patterns observed. This correlation makes sense, as ships with larger displacement tend to have greater carrying capacity for weapons, supplies, and other equipment needed for prolonged operations. Other correlations occur between the crew criterion and the displacement (0.93) and load capacity (0.93) criteria, as larger ships generally require more crew for operation and maintenance, justifying the correlation.
The cost is highly correlated with the load capacity. Ships with greater cargo capacity tend to be more expensive due to the need for more materials, advanced technology, and infrastructure to support the greater weight and volume. Furthermore, range and load capacity show a high, positive correlation. This is justified because ships with greater cargo capacity can carry more fuel and supplies, allowing greater autonomy on long-range missions. Speed shows a weak negative correlation with displacement, which may indicate that heavier ships tend to be slower due to their weight and inertia.
The insights generated by the heatmap make much sense when evaluating the characteristics of warships, which gives relevance and reliability to the dataset analyzed in this article. The strong positive correlation between variables such as displacement, cargo capacity, armament, and crew aligns with the expectation that larger ships are better equipped and require more personnel for operation and maintenance. Likewise, the correlations observed between cost, autonomy, and cargo capacity reflect an intuitive understanding that more robust ships with greater autonomy require significant investments. These coherent correlation patterns provide a solid basis for applying SAPEVO-PC, ensuring that the factors formed are representative and meaningful. This overview validates data quality and improves confidence in subsequent analysis, resulting in robust and applicable insights for strategic decision-making in naval planning.
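For reference, a minimal R sketch of this correlation step is shown below. It assumes the decision matrix of Table 7 is available as a data frame named ships (a hypothetical name) whose columns are the 10 quantitative criteria; the base-R heat map is only an approximation of Figure 4.

```r
# Correlation matrix and heat map of the 10 criteria (cf. Figure 4).
# `ships` is a hypothetical data frame holding the decision matrix of Table 7.
num <- ships[, c("DSP", "SPD", "CAP", "ARM", "CST", "RNG", "CRE", "YOC", "MSL", "RAD")]
R   <- cor(num, method = "pearson")      # Pearson correlations (10 x 10)
round(R["DSP", "CAP"], 3)                # e.g., displacement vs. cargo capacity

# Simple base-R heat map; the colour palette is illustrative only
heatmap(R, symm = TRUE, col = hcl.colors(20, "Blue-Red"), margins = c(6, 6))
```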
This diagnosis is a preliminary indication of how factor analysis can group variables into a given factor without substantial loss of their variances. Next, the general adequacy of the factor analysis is verified by applying KMO statistics and Bartlett’s sphericity test. These statistics are essential in determining whether factor analysis is appropriate for the data and whether the variables are highly correlated enough to form significant factors.
Based on the result of the KMO statistic obtained (0.77), the overall adequacy of the factor analysis can be considered satisfactory [54]. The next step is to evaluate the result of Bartlett’s sphericity test, which verifies the null hypothesis that the correlation matrix is an identity matrix.
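Both adequacy tests can be reproduced with standard R tooling; the sketch below uses the psych package as one possible implementation, operating on the correlation matrix R computed above.

```r
# KMO statistic and Bartlett's sphericity test (one possible implementation).
library(psych)

kmo <- KMO(R)                                # Kaiser-Meyer-Olkin measure
kmo$MSA                                      # overall KMO (0.77 reported in this study)

bart <- cortest.bartlett(R, n = nrow(num))   # Bartlett's test against the identity matrix
c(chisq = bart$chisq, df = bart$df, p.value = bart$p.value)
```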
From the result of Bartlett's statistic, with a significance level of 0.05 and 45 degrees of freedom, we can say that the Pearson correlation matrix is statistically different from the identity matrix with the same dimension because the p-value of Bartlett's test (2.47 × 10⁻³⁷) was lower than the established significance level. Thus, we rejected the null hypothesis, meaning a significant correlation exists between the observed variables. Figure 5 shows how much each original variable contributes to forming the generated principal components (PCs).
The graph is divided into facets, each representing a main component (PC1 to PC10). Each facet bar represents an original variable’s contribution to that principal component.
Analyzing PC1, we observed that variables such as Crew, Autonomy, Displacement, and Capacity had significant factor loadings, indicating that PC1 can be interpreted as a component that captures the dimension of size and operational capacity of ships. Furthermore, we verified that, except for speed and year of commissioning, the other variables are represented by PC1.
On the other hand, when evaluating PC2, Armament and Year of Commissioning had positive and significant loadings, suggesting that PC2 captures aspects related to the ships' military power and technological level. PC3 presents a highlighted loading for Speed, a criterion that does not present significant loadings on PC1 or PC2. In our study, components beyond the third (PC3) were considered less significant in explained variance, although they may still contain specific insights into less evident variations in the data.
The first principal components (PC1, PC2, and PC3) explain most of the variance in the data (around 80%), being the most important for understanding the main dimensions of variation between ships.
The factor-loading graph clearly shows how each variable contributes to the main components. This visualization makes identifying key dimensions of variation in ship data easier and highlights important relationships between variables. Analysis of factor loadings helps to interpret the main components and provides a solid basis for comparative assessment of ships based on multiple criteria.
The next step is to define how many significant components to consider in the analysis. In this work, we used the Kaiser criterion to select eigenvalues greater than 1. In this context, Table 11 presents the eigenvalues and variances associated with each Principal Component.
Evaluating the eigenvalues and considering the Kaiser criterion, we observed that only the first three components (PC1, PC2, and PC3) have eigenvalues greater than or equal to 1. From this result, we conclude that the first three components are relevant to explain the variability of the data. Therefore, we can proceed with extracting the factors from these components. Next, we obtained the factorial scores, as shown in Table 12.
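A minimal R sketch of this extraction step, using the correlation matrix R defined earlier, is given below; the eigenvalues correspond to Table 11, and the Kaiser criterion retains the components with eigenvalues of at least 1.

```r
# Eigenvalue extraction and Kaiser criterion (cf. Table 11).
eig         <- eigen(R)
eigenvalues <- eig$values
prop_var    <- eigenvalues / sum(eigenvalues)   # proportion of variance per component
cum_var     <- cumsum(prop_var)                 # cumulative explained variance

keep <- which(eigenvalues >= 1)                 # Kaiser criterion: PC1-PC3 in this study
data.frame(PC = paste0("PC", seq_along(eigenvalues)),
           eigenvalue = round(eigenvalues, 3),
           cum_var_pct = round(100 * cum_var, 1))
```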
The factor scores generate the factors as linear combinations of the original variables. The scores are the eigenvectors divided by the square root of the respective eigenvalues, and the correlations between the original variables and the factors are the factor loadings. In general, the values of the factor scores corroborate and quantify the graphical analyses performed in Figure 5.
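Following the same definitions, the factor score coefficients, factor loadings, and communalities reported in Tables 12, 14 and 15 can be sketched as follows (the signs of the eigenvectors may differ between software runs).

```r
# Factor score coefficients, loadings, and communalities for the retained components.
V          <- eig$vectors[, keep]                           # eigenvectors of PC1-PC3
score_coef <- sweep(V, 2, sqrt(eigenvalues[keep]), "/")     # scores = eigenvectors / sqrt(eigenvalue)
loadings   <- sweep(V, 2, sqrt(eigenvalues[keep]), "*")     # loadings = eigenvectors * sqrt(eigenvalue)

Z       <- scale(num)                                       # standardized decision matrix
factors <- Z %*% score_coef                                 # orthogonal factor values per ship

communality <- rowSums(loadings^2)                          # variance shared by each variable
round(cbind(loadings, communality), 3)
```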
At this point, the qualitative data (the weights obtained in phase 1) are integrated with the factor scores calculated in the quantitative phase, according to Equation (23). To illustrate the multi-decision-maker nature of the method, let us consider the group weights (WG) presented in Table 10. With these, we can calculate the values of the three factors for each observation, combining the weights obtained by SAPEVO-PC in the qualitative phase with the factor scores from the quantitative phase. Table 13 shows these results.
We emphasize that all the extracted factors present, among themselves, Pearson correlations equal to 0, that is, they are orthogonal to each other [28]. After establishing the factors, it is possible to define the factor loadings, which correspond to the Pearson correlation coefficients between the original variables and each factor. Table 14 presents the factor loadings of the variables analyzed in the present study.
It is possible to observe that 6 of the 10 variables present higher correlations with the first factor, while the criteria Armament and Speed present higher correlations with the second and third factors, respectively. This confirms the need for three factors so that all variables share significant percentages of variance. With the factor loadings obtained, we can calculate the communalities, which correspond to the percentage of variance shared by the 10 original variables in the formation of each factor. Table 15 presents the communalities of each original variable of the dataset:
Evaluating the communality values for the three extracted factors, we observed relatively high values for 8 of the 10 criteria (above 74%). If the factor loadings of all factors were considered, all communalities would be equal to 1. We can also draw a graph of the factor loadings, known as a loading plot, in which the loadings of each variable are plotted on orthogonal axes. Thus, Figure 6 shows factors PC1 and PC2, represented on the x- and y-axes, respectively.
The analysis of the loading plot makes the behavior of the correlations apparent. Except for Speed and Year of Commissioning, all other variables presented a correlation above 0.5 with PC1 (x-axis) or PC2 (y-axis). The Armament criterion, in turn, presents the highest correlation with PC2 (the ordinate axis).
To obtain the final ranking of alternatives, the SAPEVO-PC method uses a weighted-sum and ordering criterion, according to Equation (26). Table 16 presents the ranking of the 32 ships analyzed in this study, considering the 10 quantitative criteria and the opinions of five military experts with different profiles.
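Since the full forms of Equations (23) and (26) are not reproduced here, the sketch below illustrates the aggregation under two explicit assumptions: the group weights (WG) scale the standardized criteria before projection onto the factor-score coefficients, and the final score is the sum of the factors weighted by each component's share of explained variance.

```r
# Hedged sketch of the weighted-sum aggregation and final ordering.
# The group weights below reproduce the WG values discussed in Section 4.1.
wg <- c(DSP = 0.0807, SPD = 0.1205, CAP = 0.1101, ARM = 0.1371, CST = 0.0675,
        RNG = 0.1132, CRE = 0.0497, YOC = 0.1129, MSL = 0.0660, RAD = 0.1160)

Zw    <- sweep(Z, 2, wg[colnames(Z)], "*")        # criteria weighted by SAPEVO-PC (assumed form of Eq. (23))
Fw    <- Zw %*% score_coef                        # weighted factor values per ship
score <- as.vector(Fw %*% prop_var[keep])         # variance-weighted sum (assumed form of Eq. (26))

ranking <- data.frame(ship = paste("Ship", seq_len(nrow(Zw))), score = round(score, 4))
ranking[order(-ranking$score), ]                  # final ordering of the 32 alternatives
```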
Evaluating the results obtained, we verified that Ship 2, which occupies the first position, presents exceptional performance across several key criteria. It is extremely large, with a displacement of 100,000 tons, providing a substantial payload capacity of 9000 tons and a good armament rating of 90. These characteristics contribute significantly to its high performance score. Furthermore, Ship 2 has a remarkable speed of 31.5 knots and an excellent range of 12,000 nautical miles, making it highly versatile and capable of long-duration missions. Although it has a relatively high cost and a large crew, the ship's capabilities justify the investment.
Ship 6 ranks second, mainly due to its high speed of 32 knots and excellent range of 11,000 nautical miles. Its armament rating of 80 is also strong, contributing to its defensive and offensive potential. Ship 7, which occupies third position, stands out for its speed of 34 knots, one of the highest among the larger ships, combined with a moderate displacement of 12,500 tons and a carrying capacity of 4100 tons. Its armament rating of 80 is robust, complementing its combat capabilities. Despite the high cost, the balance between speed, good range, and missile range secures its position in the ranking.
Ship 8, in fourth position, is notable for its very strong armament rating of 112, the highest among the ships analyzed. It maintains good operational flexibility with a moderate displacement of 12,000 tons and a speed of 30 knots. Its 3800-ton payload and range of 9000 nautical miles are adequate, and the relatively low cost and small crew are advantageous. Being a modern ship commissioned in 2019, it likely incorporates advanced technology. Ship 5, in fifth place, combines a high displacement with good speed and load capacity. Additionally, it has a range of 8000 nautical miles. Despite the high cost and large crew, its general capabilities, including a good missile range, justify its high position in the ranking.
Regarding the bottom of the ranking, we highlight Ship 31, which occupies the last position, as it presents poor performance in terms of displacement, speed, load capacity, and armament. Despite being a new ship, it has relatively low range and low missile and radar ranges, which justify its poor performance score. Ship 25, in the 31st position, shows limited displacement and cargo capacity. Despite good speed, its armament rating and range are relatively low. Ship 28, in the 30th position, has low displacement, armament, load capacity, and speed, despite an advantageous cost.
Analysis of the final ranking reveals that the top-ranked ships exhibit strong performance across several criteria, especially displacement, carrying capacity, armament, speed, and range. These ships justify their high costs and large crews through their robust capabilities and advanced features. On the other hand, the ships in the last positions demonstrate limited capabilities in key areas such as displacement, cargo capacity, armament, and range. The PCA-based methodology, incorporating the weighted-sum and ordering criterion, provides a comprehensive and reliable ship performance assessment. This approach ensures that the assessment is grounded in quantifiable data and robust statistical analysis, making it a valuable tool for strategic decision-making in naval engineering and military planning.
A sensitivity analysis was carried out to evaluate the robustness and reliability of the results obtained, considering the weights obtained for each decision-maker, detailed in Table 9 and Table 10. This procedure allows the checking of the consistency and stability of the final ranking in the face of variations in the criteria weights, ensuring that decisions are well founded and reflect multiple perspectives. Table 17 presents the individual and consensus rankings, considering the weights obtained by SAPEVO-PC.
The analysis results show high coherence between the decision-makers and consensus rankings. It is observed that Ship 2 consistently maintained the first position in all rankings, indicating its superior performance in critical criteria, regardless of variations in the weights assigned by decision-makers. Likewise, Ship 6 remained second in all rankings, standing out as another high-performance alternative. Ships 7, 8, and 5 also showed small variations in their positions but consistently remained among the best classified, reinforcing the robustness of the evaluation criteria used.
On the other hand, the ships in the last positions also showed remarkable stability. Ship 31 maintained the last position in all rankings, followed closely by Ships 28, 27, and 25, which were also consistently among the last positions. This stability at both ends of the ranking suggests that, regardless of variations in criteria weights, there is a clear separation between the ships that outperform and those that underperform. These observations give relevance and reliability to the proposed methodology, showing that the assessments are robust and well founded, even from different decision-makers' perspectives.
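The sensitivity check can be scripted in the same framework: the sketch below recomputes the ordering with each decision-maker's weight vector and compares it with the consensus ranking, assuming dm_weights is a hypothetical matrix whose rows hold the DM 1 to DM 5 weights over the 10 criteria.

```r
# Sensitivity analysis sketch: one ranking per decision-maker versus the consensus.
# `dm_weights` is a hypothetical 5 x 10 matrix (rows = DM 1..DM 5, columns = criteria codes).
rank_with <- function(w) {
  Fw <- sweep(Z, 2, w[colnames(Z)], "*") %*% score_coef
  rank(-as.vector(Fw %*% prop_var[keep]))          # position of each ship (1 = best)
}

positions <- apply(dm_weights, 1, rank_with)       # 32 x 5 matrix of positions
consensus <- rank_with(wg)

# Agreement between each individual ranking and the consensus (Spearman correlation)
apply(positions, 2, function(p) cor(p, consensus, method = "spearman"))
```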
The individual and group rankings were presented to military experts and validated by them. According to experts, the methodology proposed in this article effectively represented the ranking by merging their subjective perceptions with quantitative performance data for each ship. This validation reinforces the reliability and relevance of the SAPEVO-PC method, highlighting its practical usefulness in complex decision-making scenarios in marine science and engineering.
SAPEVO-PC provides a solid framework for warship selection, although it is most useful in scenarios with a significant amount of data and requiring expert input, which may not always be feasible. Compared to classical methods such as AHP or TOPSIS, SAPEVO-PC provides a more integrated and reliable approach by combining MCDM and ML, which increases decision accuracy but at a higher cost in terms of complexity and resource requirements.

5. Conclusions

This paper explored the complexity inherent in problems related to marine science and engineering, highlighting the need for effective approaches to support decision-making in challenging environments. The SAPEVO-PC methodology, presented as an evolution of the SAPEVO family, demonstrated its significant usefulness in evaluating military ships, providing an innovative way of assigning weights to criteria from the decision-maker's perspective. Introducing the Positive Ideal Element (PIE) and the Negative Ideal Element (NIE) was crucial to this approach, allowing the consideration of qualitative preferences in complex scenarios.
The application of the SAPEVO-PC method, focusing on simplifying the comparison process and substantially reducing the number of peer assessments, proved valuable in dealing with the wide range of criteria and alternatives involved in the proposed case study, consisting of 32 alternatives and 10 criteria. The efficiency and effectiveness of these methodologies offer a notable contribution to the field of decision-making in challenging contexts where precision and agility are imperative. This approach addresses the limitations inherent in traditional multi-criteria methods, such as the high number of peer assessments required, which can be costly and prone to inconsistencies.
Furthermore, integrating MCDM with ML algorithms has significantly advanced analysis and decision-making in complex marine science and engineering contexts. PCA concepts made it possible to reduce the dimensionality of the data, transforming ten original variables into three main factors without losing the essence of the information. This combined approach simplifies the assessment process and increases the accuracy and objectivity of results, facilitating interpretation and practical application. Integrating qualitative data with quantitative ML algorithms offers a more robust and comprehensive analysis, allowing military decision-makers to evaluate the various criteria involved in ship selection with greater confidence and efficiency.
However, it is essential to emphasize that no single metric can fully encompass ship performance, as a comprehensive assessment requires the analysis of multiple factors that influence the quality and tactical employment of these naval assets. We also highlight that only quantitative performance data were considered by applying the SAPEVO-PC method, excluding aspects such as the technological level of these means and the qualification of the crew, for example. Future research could expand the analyzed criteria to include operability in adverse weather conditions, robustness and redundancy of vital systems, damage control capabilities, and a more detailed analysis of each ship’s armaments.
Based on the results obtained, it can be stated that the SAPEVO-PC method has vast potential that can be explored in several other areas besides naval science and engineering. Future studies could focus on adapting and optimizing SAPEVO-PC for different contexts and types of data, exploring its applicability in emerging areas such as Artificial Intelligence, smart cities, and IoT. Furthermore, studies can investigate the combination of SAPEVO-PC with other ML and Big Data analytic techniques to further improve the accuracy and efficiency of MCDM decisions.
In short, this study makes a significant contribution through the methodology developed, which has the potential to be applied in different situations, going beyond the exclusive emphasis on the resulting ranking. The methodology represents a significant advance in promoting more informed, faster, and reliable decisions, contributing to the optimization of resources and the effectiveness of operations in decision-making environments.

Author Contributions

Conceptualization, I.P.d.A.C., A.P.d.A.C., M.Â.L.M., M.A.C.J. and D.A.d.M.P.; methodology, M.d.S. and C.F.S.G.; software, I.P.d.A.C., A.P.d.A.C., and M.Â.L.M.; validation, D.A.d.M.P. and M.d.S.; formal analysis, I.P.d.A.C.; investigation, I.P.d.A.C., A.P.d.A.C. and M.A.C.J.; resources, I.P.d.A.C.; data curation, A.P.d.A.C.; writing—original draft preparation, I.P.d.A.C.; writing—review and editing, A.P.d.A.C., M.A.C.J., D.A.d.M.P., and C.F.S.G.; visualization, I.P.d.A.C.; supervision, M.d.S. and C.F.S.G.; project administration, C.F.S.G.; All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gilkova, O.N. Military Research and Development as a Driving Force for the Economy of the Future. In International Scientific and Practical Conference Operations and Project Management: Strategies and Trends; Springer International Publishing: Cham, Switzerland, 2022; Volume 380, pp. 185–189. [Google Scholar]
  2. Ali, I.; Sidhu, J.S. Strategic Dynamics of the Arms Race in South Asia. J. Asian Afr. Stud. 2023. [Google Scholar] [CrossRef]
  3. Souva, M. Material Military Power: A Country-Year Measure of Military Power, 1865–2019. J. Peace Res. 2022, 60, 1002–1009. [Google Scholar] [CrossRef]
  4. Costa, I.P.d.A.; Maêda, S.M.d.N.; Teixeira, L.F.H.d.S.D.B.; Gomes, C.F.S.; dos Santos, M. Choosing a Hospital Assistance Ship to Fight the COVID-19 Pandemic. Rev. Saude Publica 2020, 54, 79. [Google Scholar] [CrossRef] [PubMed]
  5. Santos, M.; Costa, I.P.d.A.; Gomes, C.F.S. Sensitivity Analysis of Multicriteria Decision between Standard Deviation and Average in the Selection of Construction of Warships: A New Approach to the AHP Method. Int. J. Anal. Hierarchy Process 2021. [Google Scholar]
  6. Chen, X.; Dou, S.; Song, T.; Wu, H.; Sun, Y.; Xian, J. Spatial-Temporal Ship Pollution Distribution Exploitation and Harbor Environmental Impact Analysis via Large-Scale AIS Data. J. Mar. Sci. Eng. 2024, 12, 960. [Google Scholar] [CrossRef]
  7. Rees, W. The Anglo-American Military Relationship: Arms across the Ocean; Oxford University Press: Oxford, UK, 2024; ISBN 0198884648. [Google Scholar]
  8. Hillier, F.; Lieberman, G. Introduction to Operations Research, 11th ed.; McGraw-Hill Education: New York, NY, USA, 2012. [Google Scholar]
  9. Belton, V.; Stewart, T. Multiple Criteria Decision Analysis: An Integrated Approach; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2002; ISBN 079237505X. [Google Scholar]
  10. Keeney, R.L.; Raiffa, H.; Meyer, R.F. Decisions with Multiple Objectives: Preferences and Value Trade-Offs; Cambridge University Press: Cambridge, UK, 1993; ISBN 0521438837. [Google Scholar]
  11. Pereira, R.C.A.; da Silva, O.S.; de Mello Bandeira, R.A.; dos Santos, M.; de Souza Rocha, C.; Castillo, C.d.S.; Gomes, C.F.S.; de Moura Pereira, D.A.; Muradas, F.M. Evaluation of Smart Sensors for Subway Electric Motor Escalators through AHP-Gaussian Method. Sensors 2023, 23, 4131. [Google Scholar] [CrossRef] [PubMed]
  12. Greco, S.; Figueira, J.; Ehrgott, M. Multiple Criteria Decision Analysis: State of Art Surveys; Springer: Berlin/Heidelberg, Germany, 2016; Volume 37. [Google Scholar]
  13. Corrente, S.; Greco, S.; Słowiński, R. Multiple Criteria Hierarchy Process for ELECTRE Tri Methods. Eur. J. Oper. Res. 2016, 252, 191–203. [Google Scholar]
  14. Tenorio, F.M.; dos Santos, M.; Gomes, C.F.S.; Araujo, J.D.C.; De Almeida, G.P. THOR 2 Method: An Efficient Instrument in Situations Where There Is Uncertainty or Lack of Data. IEEE Access 2021, 9, 161794–161805. [Google Scholar] [CrossRef]
  15. Costa, I.P.d.A.; Costa, A.P.d.A.; Sanseverino, A.M.; Gomes, C.F.S.; dos Santos, M. Bibliometric Studies on Multi-Criteria Decision Analysis (MCDA) Methods Applied in Military Problems. Pesqui. Oper. 2022, 42, e249414. [Google Scholar] [CrossRef]
  16. Basilio, M.P.; Pereira, V.; de Oliveira, M.W.C.; da Costa Neto, A.F. Ranking Policing Strategies as a Function of Criminal Complaints: Application of the PROMETHEE II Method in the Brazilian Context. J. Model. Manag. 2021, 16, 1185–1207. [Google Scholar] [CrossRef]
  17. Basilio, M.P.; Pereira, V. Operational Research Applied in the Field of Public Security. J. Model. Manag. 2020, 15, 1227–1276. [Google Scholar] [CrossRef]
  18. Moreira, M.Â.L.; Costa, I.P.d.A.; Pereira, M.T.; dos Santos, M.; Gomes, C.F.S.; Muradas, F.M. PROMETHEE-SAPEVO-M1 a Hybrid Approach Based on Ordinal and Cardinal Inputs: Multi-Criteria Evaluation of Helicopters to Support Brazilian Navy Operations. Algorithms 2021, 14, 140. [Google Scholar] [CrossRef]
  19. Moreira, M.Â.; Lellis; Gomes, C.F.S.; dos Santos, M.; Basilio, M.P.; de Araújo Costa, I.P.; de Souza Rocha Junior, C.; José Jardim, R.R.-A. Evaluation of Drones for Public Security: A Multicriteria Approach by the PROMETHEE-SAPEVO-M1 Systematic. Procedia Comput. Sci. 2022, 199, 125–133. [Google Scholar] [CrossRef]
  20. Costa, I.P.d.A.; Sanseverino, A.M.; Barcelos, M.R.d.S.; Belderrain, M.C.N.; Gomes, C.F.S.; dos Santos, M. Choosing Flying Hospitals in the Fight against the COVID-19 Pandemic: Structuring and Modeling a Complex Problem Using the VFT and ELECTRE-MOr Methods. IEEE Lat. Am. Trans. 2021, 19, 1099–1106. [Google Scholar] [CrossRef]
  21. Maêda, S.M.d.N.; Costa, I.P.d.A.; Castro Junior, M.A.P.; Fávero, L.P.; Costa, A.P.d.A.; Corriça, J.V.d.P.; Gomes, C.F.S.; dos Santos, M. Multi-Criteria Analysis Applied to Aircraft Selection by Brazilian Navy. Production 2021, 31, e20210011. [Google Scholar] [CrossRef]
  22. De Almeida, I.D.P.; Corriça, J.V.d.P.; Costa, A.P.d.A.; Costa, I.P.d.A.; Maêda, S.M.d.N.; Gomes, C.F.S.; dos Santos, M. Study of the Location of a Second Fleet for the Brazilian Navy: Structuring and Mathematical Modeling Using SAPEVO-M and VIKOR Methods. In International Conference of Production Research–Americas; Communications in Computer and Information Science; Springer International Publishing: Cham, Switzerland, 2021; Volume 1408, pp. 113–124. [Google Scholar] [CrossRef]
  23. Costa, I.P.d.A.; Terra, A.V.; Moreira, M.Â.L.; Pereira, M.T.; Fávero, L.P.L.; dos Santos, M.; Gomes, C.F.S. A Systematic Approach to the Management of Military Human Resources through the ELECTRE-MOr Multicriteria Method. Algorithms 2022, 15, 422. [Google Scholar] [CrossRef]
  24. Gou, F.; Wu, J. Triad Link Prediction Method Based on the Evolutionary Analysis with IoT in Opportunistic Social Networks. Comput. Commun. 2022, 181, 143–155. [Google Scholar] [CrossRef]
  25. Helmick, H.; Nanda, G.; Ettestad, S.; Liceaga, A.; Kokini, J.L. Applying Text Mining to Identify Relevant Literature in Food Science: Cold Denaturation as a Case Study. J. Food Sci. 2021, 86, 4851–4864. [Google Scholar] [CrossRef]
  26. Abdi, H.; Williams, L.J. Principal Component Analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459. [Google Scholar] [CrossRef]
  27. Younes, K.; Kharboutly, Y.; Antar, M.; Chaouk, H.; Obeid, E.; Mouhtady, O.; Abu-Samha, M.; Halwani, J.; Murshid, N. Application of Unsupervised Machine Learning for the Evaluation of Aerogels’ Efficiency towards Ion Removal—A Principal Component Analysis (PCA) Approach. Gels 2023, 9, 304. [Google Scholar] [CrossRef]
  28. Fávero, L.P.; Belfiore, P. Manual de Análise de Dados: Estatística e Machine Learning Com Excel®, SPSS®, Stata®, R® e Python®, 2nd ed.; Grupo Gen: Barueri, Brazil, 2024; ISBN 9788595159921. [Google Scholar]
  29. Saaty, T.L. A Scaling Method for Priorities in Hierarchical Structures. J. Math. Psychol. 1977, 15, 234–281. [Google Scholar]
  30. Wątróbski, J.; Jankowski, J. Guideline for MCDA Method Selection in Production Management Area. In New Frontiers in Information and Production Systems Modelling and Analysis; Springer: Berlin/Heidelberg, Germany, 2016; Volume 98, ISBN 978-3-319-23337-6. [Google Scholar]
  31. Moreira, M.Â.L.; Junior, M.A.P.d.C.; Costa, I.P.d.A.; Gomes, C.F.S.; dos Santos, M.; Basilio, M.P.; Pereira, D.A.d.M. Consistency Analysis Algorithm for the Multi-Criteria Methods of SAPEVO Family. Procedia Comput. Sci. 2022, 214, 133–140. [Google Scholar] [CrossRef]
  32. Bana e Costa, C.A.; Vansnick, J.-C. MACBETH—An Interactive Path towards the Construction of Cardinal Value Functions. Int. Trans. Oper. Res. 1994, 1, 489–500. [Google Scholar]
  33. Edwards, W.; Newman, J. Multiattribute Evaluation. Beverly Hills 1982, 1, 96. [Google Scholar]
  34. Gomes, L.F.A.M. Comparing Two Methods for Multicriteria Ranking of Urban Transportation System Alternatives. J. Adv. Transp. 1989, 23, 217–219. [Google Scholar]
  35. Hwang, C.; Yoon, K. Multiple Attribute Decision Making: Methods and Application, 1st ed.; Springer: New York, NY, USA, 1981. [Google Scholar]
  36. Jacquet-Lagreze, E.; Siskos, J. Assessing a Set of Additive Utility Functions for Multi-Criteria Decision Making: The UTA Method. Eur. J. Oper. Res. 1982, 10, 151–164. [Google Scholar]
  37. Costa, I.P.d.A.; Moreira, M.Â.L.; de Araújo Costa, A.P.; de Souza de Barros Teixeira, L.F.H.; Gomes, C.F.S.; dos Santos, M. Strategic Study for Managing the Portfolio of IT Courses Offered by a Corporate Training Company: An Approach in the Light of the ELECTRE-MOr Multicriteria Hybrid Method. Int. J. Inf. Technol. Decis. Mak. 2022, 21, 351–379. [Google Scholar] [CrossRef]
  38. Roy, B. Classement et Choix En Présence de Points de Vue Multiples. Rev. Française D’informatique Rech. Opérationnelle 1968, 2, 57–75. [Google Scholar]
  39. Brans, J.P.; Vincke, P.; Mareschal, B. How to Select and How to Rank Projects: The Promethee Method. Eur. J. Oper. Res. 1986, 24, 228–238. [Google Scholar] [CrossRef]
  40. Gomes, L.F.A.M.; Mury, A.R.; Gomes, C.F.S. Multicriteria Ranking with Ordinal Data. Syst. Anal. Model. Simul. 1997, 27, 139–145. [Google Scholar]
  41. Gomes, C.F.S.; dos Santos, M.; Teixeira, L.F.H.d.S.d.B.; Sanseverino, A.M.; Barcelos, M.R.d.S. SAPEVO-M: A Group Multicriteria Ordinal Ranking Method. Pesqui. Oper. 2020, 40, e226524. [Google Scholar] [CrossRef]
  42. Maêda, S.M.d.N.; Basílio, M.P.; Costa, I.P.d.A.; Moreira, M.Â.L.; dos Santos, M.; Gomes, C.F.S. The SAPEVO-M-NC Method. In Modern Management Based on Big Data II and Machine Learning and Intelligent Systems III; IOS Press: Amsterdam, The Netherlands, 2021; pp. 89–95. [Google Scholar] [CrossRef]
  43. Moreira, M.Â.L.; Gomes, C.F.S.; Pereira, M.T.; dos Santos, M. SAPEVO-H2 a Multi-Criteria Approach Based on Hierarchical Network: Analysis of Aircraft Systems for Brazilian Navy. In Lecture Notes in Mechanical Engineering; Springer: Berlin/Heidelberg, Germany, 2023; pp. 61–74. [Google Scholar]
  44. Almeida-Dias, J.; Figueira, J.R.; Roy, B. A Multiple Criteria Sorting Method Where Each Category Is Characterized by Several Reference Actions: The Electre Tri-NC Method. Eur. J. Oper. Res. 2012, 217, 567–579. [Google Scholar]
  45. Budaev, S.V. Using Principal Components and Factor Analysis in Animal Behaviour Research: Caveats and Guidelines. Ethology 2010, 116, 472–480. [Google Scholar] [CrossRef]
  46. Fávero, L.P.; Belfiore, P. Data Science for Business and Decision Making; Academic Press Elsevier: Cambridge, MA, USA, 2019; ISBN 0128112174. [Google Scholar]
  47. Pearson, K. VII. Mathematical Contributions to the Theory of Evolution.—III. Regression, Heredity, and Panmixia. Philos. Trans. R. Soc. London. Ser. A Contain. Pap. A Math. Phys. Character 1896, 187, 253–318. [Google Scholar]
  48. Spearman, C. “General Intelligence” Objectively Determined and Measured. Am. J. Psychol. 1904, 15, 201–293. [Google Scholar]
  49. Hotelling, H. Analysis of a Complex of Statistical Variables into Principal Components. J. Educ. Psychol. 1933, 24, 417–441. [Google Scholar] [CrossRef]
  50. Egghe, L.; Leydesdorff, L. The Relation between Pearson’s Correlation Coefficient r and Salton’s Cosine Measure. J. Am. Soc. Inf. Sci. Technol. 2009, 60, 1027–1036. [Google Scholar]
  51. Smith, R. A Mutual Information Approach to Calculating Nonlinearity. Stat 2015, 4, 291–303. [Google Scholar]
  52. Yong, A.G.; Pearce, S. A Beginner’s Guide to Factor Analysis: Focusing on Exploratory Factor Analysis. Tutor. Quant. Methods Psychol. 2013, 9, 79–94. [Google Scholar]
  53. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E.; Tatham, R.L. Análise Multivariada de Dados; Bookman Editora: Porto Alegre, Brazil, 2009; ISBN 8577805344. [Google Scholar]
  54. Shrestha, N. Factor Analysis as a Tool for Survey Analysis. Am. J. Appl. Math. Stat. 2021, 9, 4–11. [Google Scholar]
  55. Kaiser, H.F. A Second Generation Little Jiffy. Psychometrika 1970, 35, 401–415. [Google Scholar] [CrossRef]
  56. Gujarati, D.N.; Porter, D.C. Econometria Básica, 5th ed.; McGraw-Hill: New York, NY, USA, 2008. [Google Scholar]
  57. Bartlett, M.S. A Note on the Multiplying Factors for Various χ2 Approximations. J. R. Stat. Soc. Ser. B 1954, 16, 296–298. [Google Scholar] [CrossRef]
  58. Ruscio, J.; Roche, B. Determining the Number of Factors to Retain in an Exploratory Factor Analysis Using Comparison Data of Known Factorial Structure. Psychol. Assess. 2012, 24, 282. [Google Scholar]
  59. Garrido, L.E.; Abad, F.J.; Ponsoda, V. A New Look at Horn’s Parallel Analysis with Ordinal Variables. Psychol. Methods 2013, 18, 454. [Google Scholar]
  60. Vukić, L.; Vidov, J.; Karin, I. Method in Selecting Vehicles for Interventions and Surveillance of Navigation Safety at Sea. J. Mar. Sci. Eng. 2024, 12, 979. [Google Scholar] [CrossRef]
  61. Maceiras, R.; Alfonsin, V.; Alvarez-Feijoo, M.A.; Llopis, L. Assessment of Selected Alternative Fuels for Spanish Navy Ships According to Multi-Criteria Decision Analysis. J. Mar. Sci. Eng. 2023, 12, 77. [Google Scholar] [CrossRef]
  62. Malyszko, M. Fuzzy Logic in Selection of Maritime Search and Rescue Units. Appl. Sci. 2022, 12, 21. [Google Scholar]
  63. dos Santos, M.; Costa, I.P. de A.; Gomes, C.F.S. Multicriteria Decision-Making in the Selection of Warships: A New Approach to the AHP Method. Int. J. Anal. Hierarchy Process 2021, 13. [Google Scholar] [CrossRef]
  64. Stavrou, D.I.; Siskos, Y.; Ventikos, N.P. Locating Ship-to-Ship (STS) Transfer Operations via Multi-Criteria Decision Analysis (MCDA): A Case Study; Multiple Criteria Decision Making; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  65. Malyszko, M. Multi-Criteria Assessment of Search and Rescue Units for SAR Action at Sea; Communications in Computer and Information Science; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  66. Hansson, J.; Månsson, S.; Brynolf, S.; Grahn, M. Alternative Marine Fuels: Prospects Based on Multi-Criteria Decision Analysis Involving Swedish Stakeholders. Biomass Bioenergy 2019, 126, 159–173. [Google Scholar] [CrossRef]
  67. Ren, J.; Lützen, M. Fuzzy Multi-Criteria Decision-Making Method for Technology Selection for Emissions Reduction from Shipping under Uncertainties. Transp. Res. Part D Transp. Environ. 2015, 40, 43–60. [Google Scholar] [CrossRef]
  68. Baesens, B.; Adams, A.; Pacheco-Ruiz, R.; Baesens, A.-S.; Broucke, S. Vanden Explainable Deep Learning to Classify Royal Navy Ships. IEEE Access 2024, 12, 1774–1785. [Google Scholar] [CrossRef]
  69. Tenório, F.M.; dos Santos, M.; Gomes, C.F.S.; Araujo, J.d.C. Navy Warship Selection and Multicriteria Analysis: The THOR Method Supporting Decision Making. In International Joint Conference on Industrial Engineering and Operations Management; Springer: Berlin/Heidelberg, Germany, 2020; pp. 27–39. [Google Scholar]
  70. Manap, N.; Voulvoulis, N. Risk-Based Decision-Making Framework for the Selection of Sediment Dredging Option. Sci. Total Environ. 2014, 496, 607–623. [Google Scholar] [CrossRef]
  71. Pesce, M.; Terzi, S.; Al-Jawasreh, R.I.M.; Bommarito, C.; Calgaro, L.; Fogarin, S.; Russo, E.; Marcomini, A.; Linkov, I. Selecting Sustainable Alternatives for Cruise Ships in Venice Using Multi-Criteria Decision Analysis. Sci. Total Environ. 2018, 642, 668–678. [Google Scholar] [CrossRef] [PubMed]
  72. Gumusay, M.U.; Bakirman, G.; Bakirman, T. An Assessment of Site Suitability for Marina Construction in Istanbul, Turkey, Using GIS and AHP Multicriteria Decision Analysis. Environ. Monit. Assess. 2016, 188, 677. [Google Scholar]
  73. Balin, A.; Demirel, H.; Alarcin, F. A Novel Hybrid MCDM Model Based on Fuzzy AHP and Fuzzy TOPSIS for the Most Affected Gas Turbine Component Selection by the Failures. J. Mar. Eng. Technol. 2016, 15, 69–78. [Google Scholar] [CrossRef]
  74. Monprapussorn, S.; Watts, D.J.; Banomyong, R.; Thaitakoo, D. Multi Criteria Decision Analysis and Geographic Information System Framework for Hazardous Waste Transport Sustainability. J. Appl. Sci. 2009, 9, 268–277. [Google Scholar]
  75. Bayraktar, M.; Nuran, M. Multi-Criteria Decision Making Using TOPSIS Method for Battery Type Selection in Hybrid Propulsion System. Trans. Marit. Sci. 2022, 11, 45–53. [Google Scholar]
  76. Wielgosz, M.; Malyszko, M. Multi-Criteria Selection of Surface Units for Sar Operations at Sea Supported by Ais Data. Remote Sens. 2021, 13, 3151. [Google Scholar] [CrossRef]
  77. Costa, A.P.d.A.; Choren, R.; Pereira, D.A.d.M.; Terra, A.V.; Costa, I.P.d.A.; Junior, C.d.S.R.; dos Santos, M.; Gomes, C.F.S.; Moreira, M.Â.L. Integrating Multicriteria Decision Making and Principal Component Analysis: A Systematic Literature Review. Cogent Eng. 2024, 11, 2374944. [Google Scholar] [CrossRef]
  78. Costa, A.P.d.A.; Terra, A.V.; de Souza Rocha Junior, C.; de Araújo Costa, I.P.; Moreira, M.Â.L.; dos Santos, M.; Gomes, C.F.S.; da Silva, A.S. Optimization of Obstructive Sleep Apnea Management: Novel Decision Support via Unsupervised Machine Learning. Informatics 2024, 11, 22. [Google Scholar] [CrossRef]
  79. Creswell, J.W.; Creswell, J.D. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches; Sage Publications: Thousand Oaks, CA, USA, 2017; ISBN 1506386717. [Google Scholar]
  80. Bertrand, J.W.M.; Fransoo, J.C. Operations Management Research Methodologies Using Quantitative Modeling. Int. J. Oper. Prod. Manag. 2002, 22, 241–264. [Google Scholar]
  81. Janes. Jane’s Fighting Ships Yearbook 23/24; 2024; ISBN 978-0-7106-3428-3. Available online: https://shop.janes.com/fighting-ships-23-24-yearbook-6541-3000230021 (accessed on 18 June 2024).
  82. Global Security Rest-of-World Ships Resources. Available online: https://www.globalsecurity.org/military/world/links-navy.htm (accessed on 18 June 2024).
  83. Checkland, P.B. Systems Thinking, Systems Practice; Wiley: Chichester, UK, 1981. [Google Scholar]
  84. Fabrigar, L.R.; Wegener, D.T.; MacCallum, R.C.; Strahan, E.J. Evaluating the Use of Exploratory Factor Analysis in Psychological Research. Psychol. Methods 1999, 4, 272. [Google Scholar]
Figure 1. Flowchart summarizing the SAPEVO-PC methodology. Source: authors.
Figure 2. CATWOE analysis. Source: authors.
Figure 3. Flowchart of the proposed methodology. Source: authors.
Figure 4. Heat map of the correlations of the 10 variables evaluated in this study. Source: authors.
Figure 5. Contributions of the eigenvectors to the formation of each component. Source: authors.
Figure 6. Factors PC1 and PC2. Source: authors.
Table 1. Methods of the SAPEVO family and their characteristics.
Method | Compensatory | No Compensatory | Ordinal | Cardinal | Ordination | Classification | Group | Reduction of Pairwise Comparisons | ML
SAPEVO [40] X X X
SAPEVO-M [41]X X X X
PROMETHEE-SAPEVO-M1 [18] X XXX
ELECTRE-MOr [20] X XX XX
SAPEVO-M-NC [42] XX X X
SAPEVO-H2 [43] X XXX X
SAPEVO-PC, proposed in this paperX XXX XXX
Source: authors.
Table 2. Conceptual scale of the SAPEVO-PC method.
Degree | Description
1 | The PIE and the other criterion are equally important
2 | The PIE is slightly more important than the other criterion
3 | The PIE has moderate importance when compared with the other criterion
4 | The PIE has considerable importance when compared with the other criterion
5 | The PIE has great importance when compared with the other criterion
6 | The PIE has very great importance when compared with the other criterion
7 | The PIE has absolute importance when compared with the other criterion
Source: authors.
Table 3. Comparison between the number of peer-to-peer evaluations.
Number of Criteria or Alternatives (n) | SAPEVO Classic (P) | SAPEVO-PC (K)
2 | 1 | 1
3 | 3 | 3
4 | 6 | 5
5 | 10 | 7
6 | 15 | 9
7 | 21 | 11
8 | 28 | 13
9 | 36 | 15
10 | 45 | 17
11 | 55 | 19
12 | 66 | 21
13 | 78 | 23
14 | 91 | 25
15 | 105 | 27
Source: authors.
Table 4. Relationship between KMO statistics and the global adequacy of factor analysis.
KMO Statistics | Global Adequacy of Factor Analysis
Between 1.00 and 0.90 | Very good
Between 0.90 and 0.80 | Good
Between 0.80 and 0.70 | Average
Between 0.70 and 0.60 | Reasonable
Between 0.60 and 0.50 | Bad
Less than 0.50 | Unacceptable
Source: [28].
Table 5. Factor loadings between original variables and factors.
Variable \ Factor | F1 | F2 | … | Fk
X1 | c11 | c12 | … | c1k
X2 | c21 | c22 | … | c2k
Xk | ck1 | ck2 | … | ckk
Source: [28].
Table 6. Criteria used in the analysis.
Criterion | Trigram | Unit of Measurement | Description | Monotonicity
Displacement | DSP | Tons (t) | Refers to the total weight of the ship, measured in tons. Displacement includes the weight of the ship's structure, fuel, cargo, crew, and other components. A higher displacement generally indicates a larger and more robust ship. | Maximization
Speed | SPD | Knots | The maximum speed the ship can achieve, measured in knots (nautical miles per hour). Speed is crucial for quick maneuvers, threat evasion, and strategic positioning in combat situations. | Maximization
Cargo Capacity | CAP | Tons (t) | The amount of useful cargo the ship can carry, measured in tons. It includes armaments, ammunition, vehicles, supplies, and other equipment. A higher cargo capacity allows the ship to be more versatile and self-sufficient during extended operations. | Maximization
Armament | ARM | Number of weapons | The number of weapon systems onboard the ship, including guns, missiles, torpedoes, and other armaments. A ship's firepower is critical to its defense and attack capabilities, directly influencing combat effectiveness. | Maximization
Cost | CST | US Dollars (USD) | The total construction cost of the ship, measured in US dollars. Cost is a significant factor in military planning and budgeting, affecting the economic feasibility of building and maintaining a fleet. | Minimization
Range | RNG | Nautical miles | The maximum distance the ship can travel without refueling, measured in nautical miles. A greater range allows the ship to operate for longer periods in open waters without relying on frequent logistical support. | Maximization
Crew | CRE | Number of personnel | The total number of personnel required to operate the ship, including officers, sailors, and technical staff. The size of the crew directly impacts operational efficiency and the ship's capacity to perform various tasks. | Minimization
Year of Commissioning | YOC | Year | The year the ship was completed and commissioned. Newer ships tend to incorporate more advanced technology and designs, offering better performance and capabilities. | Maximization
Missile Range | MSL | Miles | The maximum distance the missiles onboard the ship can reach, measured in miles. A longer missile range allows the ship to strike targets at greater distances, providing a strategic advantage. | Maximization
Radar Range | RAD | Miles | The maximum range of the ship's radar system, measured in miles. A long-range radar improves the ship's ability to detect and track threats at greater distances, enhancing both defensive and offensive effectiveness. | Maximization
Source: authors.
Table 7. The decision matrix presenting the 32 alternatives evaluated on 10 criteria.
Ships | DSP | SPD | CAP | ARM | CST | RNG | CRE | YOC | MSL | RAD
Ship 1 | 14,564 | 30 | 1000 | 80 | −4500 | 9000 | −158 | 2016 | 100 | 300
Ship 2 | 100,000 | 31.5 | 9000 | 90 | −8000 | 12,000 | −5500 | 1975 | 800 | 350
Ship 3 | 65,000 | 25 | 5000 | 50 | −4100 | 10,000 | −1600 | 2017 | 600 | 320
Ship 4 | 45,400 | 28 | 4000 | 60 | −2300 | 7000 | −1600 | 2013 | 450 | 280
Ship 5 | 58,600 | 29 | 4500 | 70 | −3200 | 8000 | −2000 | 1990 | 700 | 300
Ship 6 | 28,000 | 32 | 4200 | 80 | −5000 | 11,000 | −1500 | 1988 | 900 | 350
Ship 7 | 12,500 | 34 | 4100 | 80 | −4500 | 10,000 | −1200 | 1982 | 800 | 320
Ship 8 | 12,000 | 30 | 3800 | 112 | −1500 | 9000 | −310 | 2019 | 600 | 340
Ship 9 | 3500 | 40 | 1200 | 50 | −600 | 6000 | −75 | 2008 | 400 | 250
Ship 10 | 10,000 | 30 | 1300 | 96 | −1400 | 8300 | −300 | 2007 | 500 | 270
Ship 11 | 5520 | 30 | 1400 | 80 | −1000 | 4500 | −300 | 2002 | 250 | 260
Ship 12 | 6000 | 30 | 2000 | 68 | −800 | 6000 | −200 | 2012 | 500 | 270
Ship 13 | 5690 | 29 | 1800 | 60 | −900 | 4000 | −230 | 2003 | 400 | 280
Ship 14 | 7000 | 30 | 2000 | 68 | −1000 | 5000 | −230 | 2017 | 500 | 300
Ship 15 | 5800 | 29 | 1800 | 68 | −900 | 4000 | −230 | 2004 | 400 | 280
Ship 16 | 7200 | 30 | 1800 | 64 | −950 | 4000 | −190 | 2013 | 300 | 310
Ship 17 | 6600 | 30 | 1900 | 70 | −700 | 8000 | −300 | 2011 | 500 | 290
Ship 18 | 3200 | 28 | 1500 | 70 | −500 | 7000 | −180 | 2008 | 400 | 270
Ship 19 | 5290 | 26 | 1600 | 72 | −600 | 5000 | −146 | 2006 | 500 | 290
Ship 20 | 3100 | 44 | 1200 | 60 | −700 | 6000 | −80 | 2010 | 300 | 240
Ship 21 | 5400 | 30 | 1800 | 70 | −1000 | 8000 | −200 | 2013 | 600 | 280
Ship 22 | 8000 | 32 | 2200 | 48 | −1200 | 7000 | −190 | 2010 | 450 | 320
Ship 23 | 6050 | 29 | 1600 | 40 | −1200 | 5000 | −200 | 2002 | 500 | 290
Ship 24 | 5800 | 28 | 1700 | 48 | −1000 | 4500 | −200 | 2002 | 500 | 290
Ship 25 | 4200 | 29 | 1200 | 30 | −800 | 4000 | −180 | 1990 | 300 | 250
Ship 26 | 9500 | 30 | 2000 | 90 | −1800 | 4000 | −350 | 1989 | 700 | 310
Ship 27 | 4100 | 29 | 1200 | 40 | −600 | 4000 | −180 | 1977 | 400 | 260
Ship 28 | 3600 | 25 | 1000 | 40 | −300 | 5000 | −150 | 1992 | 300 | 230
Ship 29 | 21,300 | 18.8 | 3000 | 68 | −1800 | 10,800 | −177 | 2006 | 700 | 340
Ship 30 | 12,490 | 32 | 2500 | 80 | −2000 | 8000 | −485 | 1983 | 800 | 330
Ship 31 | 2600 | 25 | 760 | 30 | −370 | 4500 | −110 | 2020 | 280 | 230
Ship 32 | 7350 | 30 | 1600 | 48 | −1200 | 7000 | −190 | 2009 | 400 | 320
Source: authors.
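Before the PCA step, the decision matrix in Table 7 must be organized and standardized. The sketch below (Python with pandas) is a minimal illustration, not the authors' implementation; only the first three alternatives are typed out. It keeps the sign convention visible in the table, where the minimization criteria CST and CRE are entered as negative values, and then applies z-score standardization so that criteria measured in very different units become comparable.

import pandas as pd

# Columns follow the trigrams of Table 6; CST and CRE are entered as negative
# values because they are minimization criteria (lower cost and smaller crew are better).
columns = ["DSP", "SPD", "CAP", "ARM", "CST", "RNG", "CRE", "YOC", "MSL", "RAD"]
rows = {
    "Ship 1": [14564, 30.0, 1000, 80, -4500, 9000, -158, 2016, 100, 300],
    "Ship 2": [100000, 31.5, 9000, 90, -8000, 12000, -5500, 1975, 800, 350],
    "Ship 3": [65000, 25.0, 5000, 50, -4100, 10000, -1600, 2017, 600, 320],
    # ... the remaining 29 alternatives follow the same pattern (Table 7)
}
X = pd.DataFrame.from_dict(rows, orient="index", columns=columns)

# z-score standardization: each criterion gets zero mean and unit variance,
# the usual preparation for a correlation-based PCA.
Z = (X - X.mean()) / X.std(ddof=1)
print(Z.round(2))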
Table 8. Description of the profiles of the five experts consulted.
ID | Profile | Background | Specialization | Function
DM 1 | Operational Tactics Advisor | Experience in land or naval combat operations | Specialized in the tactical deployment of troops and equipment in various combat scenarios | Provides insights into effective combat strategies, enemy engagement tactics, and real-time decision-making in battlefield conditions
DM 2 | Logistics and Supply Chain Advisor | Senior officer with extensive experience in military logistics | Specialist in supply chain management, logistics planning, and resource allocation | Ensures efficient distribution of resources, from munitions to medical supplies, essential for sustained military operations
DM 3 | Intelligence and Security Advisor | Former intelligence officer with field experience in intelligence gathering and analysis | Specialized in cybersecurity, counter-intelligence, and the acquisition of actionable intelligence | Advises on threat assessment, enemy capabilities, and security protocols to protect sensitive information and operations
DM 4 | Technology and Innovations Advisor | Military engineer focused on emerging technologies | Expert in integrating cutting-edge technology into military operations | Guides the adaptation of new technologies to enhance operational effectiveness and strategic advantage
DM 5 | Strategic Planning and Policy Advisor | Senior military strategist with experience in defense policy formulation | Skilled in long-term strategic planning, policy development, and military diplomacy | Provides high-level strategic guidance and policy recommendations, and develops frameworks for international co-operation
Source: authors.
Table 9. Establishment of preferences for obtaining weights by DM 1.
Criteria | A+ (ARM) | A− (CST) | A* | AN | WS
Displacement | 3 | −3 | 0 | 0.57 | 9.55%
Speed | 4 | −5 | −1 | 0.64 | 10.74%
Cargo Capacity | 2 | −4 | −2 | 0.71 | 11.93%
Armament | 1 | −7 | −6 | 1.00 | 16.71%
Cost | 7 | 1 | 8 | 0.06 | 0.95%
Range | 5 | −4 | 1 | 0.50 | 8.35%
Crew | 6 | −2 | 4 | 0.29 | 4.77%
Year of Commissioning | 3 | −5 | −2 | 0.71 | 11.93%
Missile Range | 3 | −6 | −3 | 0.79 | 13.13%
Radar Range | 3 | −5 | −2 | 0.71 | 11.93%
Source: authors.
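Reading Table 9, the A+ column appears to record DM 1's ordinal comparison of each criterion against the most preferred one (Armament) and the A− column against the least preferred one (Cost); A* is their sum, AN is a min–max rescaling of A* in which the best criterion receives 1.00, and WS is AN normalized to sum to 100%. The sketch below approximately reproduces that transformation under these assumptions. It is an illustrative reconstruction, not the authors' reference implementation, and the small positive value substituted where the rescaled score would be zero (0.06 for Cost) simply mirrors the value printed in the table.

import numpy as np

criteria = ["DSP", "SPD", "CAP", "ARM", "CST", "RNG", "CRE", "YOC", "MSL", "RAD"]
a_plus = np.array([3, 4, 2, 1, 7, 5, 6, 3, 3, 3])            # column A+ of Table 9
a_minus = np.array([-3, -5, -4, -7, 1, -4, -2, -5, -6, -5])  # column A- of Table 9

a_star = a_plus + a_minus                                    # column A*
# Min-max inversion: the most preferred criterion gets 1.0, the least preferred gets 0.0.
an = (a_star.max() - a_star) / (a_star.max() - a_star.min())
# A zero weight would eliminate the criterion altogether; Table 9 shows 0.06 for Cost,
# so that value is substituted here (an assumption about the exact replacement rule).
an = np.where(an == 0, 0.06, an)

ws = an / an.sum()                                           # column WS, close to Table 9 up to rounding
for name, w in zip(criteria, ws):
    print(f"{name}: {100 * w:.2f}%")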
Table 10. Weights obtained by SAPEVO-PC, considering individual and group opinions.
Criteria | DM 2 (A+, A−, Ws) | DM 3 (A+, A−, Ws) | DM 4 (A+, A−, Ws) | DM 5 (A+, A−, Ws) | Group (WG)
Displacement | 2, −5, 13.29% | 3, −3, 8.77% | 4, −3, 8.56% | 7, 1, 0.95% | 8.07%
Speed | 2, −6, 14.49% | 2, −6, 13.78% | 4, −6, 12.22% | 5, −2, 5.97% | 12.05%
Cargo Capacity | 1, −7, 16.91% | 3, −4, 10.03% | 6, −4, 7.33% | 3, −4, 10.74% | 11.01%
Armament | 3, −5, 12.08% | 2, −5, 12.53% | 1, −7, 17.11% | 2, −6, 14.32% | 13.71%
Cost | 2, 1, 6.04% | 7, −4, 5.01% | 7, 1, 0.98% | 1, −7, 16.71% | 6.75%
Range | 3, −4, 10.87% | 2, −6, 13.78% | 3, −4, 11.00% | 5, −3, 7.16% | 11.32%
Crew | 6, −2, 4.83% | 6, 1, 1.00% | 6, −2, 4.89% | 2, −5, 13.13% | 4.97%
Year of Commissioning | 4, −5, 10.87% | 3, −5, 11.28% | 2, −6, 14.67% | 4, −3, 8.35% | 11.29%
Missile Range | 7, 1, 0.97% | 4, −3, 7.52% | 2, −4, 12.22% | 5, −1, 4.77% | 6.60%
Radar Range | 5, −5, 9.66% | 1, −7, 16.29% | 2, −3, 11.00% | 6, −2, 4.77% | 11.60%
Source: authors.
Table 11. Eigenvalues and variances of each Principal Component.
Principal Components | PC1 | PC2 | PC3 | PC4 | PC5 | PC6 | PC7 | PC8 | PC9 | PC10
Eigenvalues | 5.55 | 1.23 | 1.12 | 0.91 | 0.44 | 0.32 | 0.28 | 0.08 | 0.05 | 0.02
Proportion of Variance | 0.56 | 0.12 | 0.11 | 0.09 | 0.04 | 0.03 | 0.03 | 0.01 | 0.00 | 0.00
Cumulative Proportion of Variance | 0.56 | 0.68 | 0.79 | 0.88 | 0.93 | 0.96 | 0.99 | 0.99 | 1.00 | 1.00
Source: authors.
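Table 11 is the output of a standard PCA on the standardized decision matrix: the ten eigenvalues of the correlation matrix sum to 10, and the three components with eigenvalues above 1 (PC1–PC3, roughly 79% of the total variance) are the ones carried forward into Tables 12–15. A minimal sketch of this step, assuming the standardized matrix Z from the earlier sketch and using scikit-learn (not necessarily the toolchain used by the authors), might look like:

import numpy as np
from sklearn.decomposition import PCA

pca = PCA()                                   # keep all 10 components to inspect the full spectrum
component_scores = pca.fit_transform(Z)       # Z: 32 ships x 10 standardized criteria

eigenvalues = pca.explained_variance_         # "Eigenvalues" row of Table 11
prop_var = pca.explained_variance_ratio_      # "Proportion of Variance"
cum_var = np.cumsum(prop_var)                 # "Cumulative Proportion of Variance"

n_retained = int((eigenvalues > 1.0).sum())   # eigenvalue-greater-than-one rule keeps three components here
print(np.round(eigenvalues, 2))
print(np.round(cum_var, 2), n_retained)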
Table 12. Factorial scores.
Criteria | PC1 | PC2 | PC3
Displacement | 0.155423 | −0.24365 | −0.20246
Speed | −0.0036 | −0.0486 | 0.805706
Cargo Capacity | 0.171826 | −0.10539 | −0.07351
Armament | 0.093602 | 0.511918 | 0.267572
Cost | −0.16459 | 0.093697 | −0.02715
Range | 0.145684 | 0.240196 | −0.05238
Crew | −0.15912 | 0.305348 | 0.01627
Year of Commissioning | −0.07837 | 0.411382 | −0.31889
Missile Range | 0.137907 | 0.151027 | 0.101244
Radar Range | 0.137278 | 0.351931 | −0.08195
Source: authors.
Table 13. Factors extracted for each observation.
Ships | PC1 | PC2 | PC3
Ship 1 | 0.021127 | 0.099273 | 0.004603
Ship 2 | 0.355221 | −0.20351 | 0.088186
Ship 3 | 0.096416 | −0.05724 | −0.13233
Ship 4 | 0.037867 | −0.05422 | −0.06278
Ship 5 | 0.102727 | −0.08085 | 0.017531
Ship 6 | 0.153825 | 0.012533 | 0.080186
Ship 7 | 0.126845 | −0.01277 | 0.119095
Ship 8 | 0.053404 | 0.271553 | 0.049097
Ship 9 | −0.071 | −0.0403 | 0.083082
Ship 10 | −0.00152 | 0.15239 | 0.060968
Ship 11 | −0.04307 | 0.039131 | 0.03912
Ship 12 | −0.0378 | 0.047597 | −0.00613
Ship 13 | −0.04902 | −0.02064 | −0.01387
Ship 14 | −0.03501 | 0.065949 | −0.01822
Ship 15 | −0.04395 | 0.012582 | −4.1 × 10⁻⁵
Ship 16 | −0.04903 | 0.033232 | −0.02167
Ship 17 | −0.02388 | 0.074605 | −0.00556
Ship 18 | −0.04433 | 0.056257 | −0.01864
Ship 19 | −0.04093 | 0.058816 | −0.03
Ship 20 | −0.06909 | −0.0094 | 0.14294
Ship 21 | −0.02175 | 0.080846 | −0.00443
Ship 22 | −0.02988 | −0.00997 | −0.02792
Ship 23 | −0.05059 | −0.08381 | −0.05002
Ship 24 | −0.04925 | −0.05504 | −0.04558
Ship 25 | −0.08199 | −0.18225 | −0.04257
Ship 26 | 0.014868 | 0.067119 | 0.091879
Ship 27 | −0.06599 | −0.16873 | 0.005674
Ship 28 | −0.08625 | −0.13163 | −0.0716
Ship 29 | 0.035279 | 0.099727 | −0.13419
Ship 30 | 0.047499 | 0.04325 | 0.098805
Ship 31 | −0.11303 | −0.09539 | −0.14878
Ship 32 | −0.03773 | −0.00911 | −0.04684
Source: authors.
Table 14. Factor loadings of variables.
Criteria | PC1 | PC2 | PC3
Displacement | 0.86302136 | −0.30061206 | −0.22759255
Speed | −0.01998727 | −0.05995903 | 0.90574198
Cargo Capacity | 0.95410681 | −0.13003317 | −0.08263133
Armament | 0.51974595 | 0.63160448 | 0.30079325
Cost | −0.91390286 | 0.11560292 | −0.03051812
Range | 0.80894478 | 0.29635372 | −0.05888141
Crew | −0.88355602 | 0.37673853 | 0.01828981
Year of Commissioning | −0.43514459 | 0.50756273 | −0.35848847
Missile Range | 0.76576306 | 0.18633711 | 0.11381451
Radar Range | 0.76226830 | 0.43421274 | −0.09212555
Source: authors.
Table 15. Communalities of each original variable.
Criteria | Communalities
Displacement | 0.8869719
Speed | 0.8243631
Cargo Capacity | 0.9340564
Armament | 0.7595367
Cost | 0.8495138
Range | 0.7456842
Crew | 0.9229377
Year of Commissioning | 0.5754847
Missile Range | 0.6340683
Radar Range | 0.7780808
Source: authors.
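Tables 12, 14 and 15 are linked by standard PCA identities: the factor loadings in Table 14 are the eigenvectors scaled by the square roots of the eigenvalues, the factorial scores in Table 12 equal the loadings divided by the corresponding eigenvalues (regression-type score coefficients), and the communalities in Table 15 are the sums of squared loadings over the three retained components (for Displacement, 0.863² + 0.301² + 0.228² ≈ 0.887). A short sketch of these relationships, assuming the pca object and the standardized matrix Z from the previous sketches:

import numpy as np

k = 3                                           # components retained (eigenvalue > 1)
eigvals = pca.explained_variance_[:k]
eigvecs = pca.components_[:k].T                 # shape (10 criteria, 3 components);
                                                # component signs may be flipped relative to the published tables

loadings = eigvecs * np.sqrt(eigvals)           # Table 14: criterion-component correlations
score_coef = loadings / eigvals                 # Table 12: factor score coefficients
communalities = (loadings ** 2).sum(axis=1)     # Table 15: variance of each criterion captured by PC1-PC3

ship_scores = Z.to_numpy() @ score_coef         # per-ship component scores (cf. Table 13, up to scaling)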
Table 16. Ranking of ships considering the opinion of five military experts with different profiles.
Ships | DSP | SPD | CAP | ARM | CST | RNG | CRE | YOC | MSL | RAD | p | Ranking
Ship 1 | 14,564 | 30 | 1000 | 80 | −4500 | 9000 | −158 | 2016 | 100 | 300 | 0.024 | 10
Ship 2 | 100,000 | 31.5 | 9000 | 90 | −8000 | 12,000 | −5500 | 1975 | 800 | 350 | 0.182 | 1
Ship 3 | 65,000 | 25 | 5000 | 50 | −4100 | 10,000 | −1600 | 2017 | 600 | 320 | 0.032 | 7
Ship 4 | 45,400 | 28 | 4000 | 60 | −2300 | 7000 | −1600 | 2013 | 450 | 280 | 0.007 | 12
Ship 5 | 58,600 | 29 | 4500 | 70 | −3200 | 8000 | −2000 | 1990 | 700 | 300 | 0.049 | 5
Ship 6 | 28,000 | 32 | 4200 | 80 | −5000 | 11,000 | −1500 | 1988 | 900 | 350 | 0.096 | 2
Ship 7 | 12,500 | 34 | 4100 | 80 | −4500 | 10,000 | −1200 | 1982 | 800 | 320 | 0.082 | 3
Ship 8 | 12,000 | 30 | 3800 | 112 | −1500 | 9000 | −310 | 2019 | 600 | 340 | 0.069 | 4
Ship 9 | 3500 | 40 | 1200 | 50 | −600 | 6000 | −75 | 2008 | 400 | 250 | −0.035 | 26
Ship 10 | 10,000 | 30 | 1300 | 96 | −1400 | 8300 | −300 | 2007 | 500 | 270 | 0.025 | 9
Ship 11 | 5520 | 30 | 1400 | 80 | −1000 | 4500 | −300 | 2002 | 250 | 260 | −0.015 | 16
Ship 12 | 6000 | 30 | 2000 | 68 | −800 | 6000 | −200 | 2012 | 500 | 270 | −0.016 | 17
Ship 13 | 5690 | 29 | 1800 | 60 | −900 | 4000 | −230 | 2003 | 400 | 280 | −0.031 | 25
Ship 14 | 7000 | 30 | 2000 | 68 | −1000 | 5000 | −230 | 2017 | 500 | 300 | −0.013 | 15
Ship 15 | 5800 | 29 | 1800 | 68 | −900 | 4000 | −230 | 2004 | 400 | 280 | −0.023 | 21
Ship 16 | 7200 | 30 | 1800 | 64 | −950 | 4000 | −190 | 2013 | 300 | 310 | −0.026 | 23
Ship 17 | 6600 | 30 | 1900 | 70 | −700 | 8000 | −300 | 2011 | 500 | 290 | −0.005 | 14
Ship 18 | 3200 | 28 | 1500 | 70 | −500 | 7000 | −180 | 2008 | 400 | 270 | −0.020 | 19
Ship 19 | 5290 | 26 | 1600 | 72 | −600 | 5000 | −146 | 2006 | 500 | 290 | −0.019 | 18
Ship 20 | 3100 | 44 | 1200 | 60 | −700 | 6000 | −80 | 2010 | 300 | 240 | −0.023 | 22
Ship 21 | 5400 | 30 | 1800 | 70 | −1000 | 8000 | −200 | 2013 | 600 | 280 | −0.003 | 13
Ship 22 | 8000 | 32 | 2200 | 48 | −1200 | 7000 | −190 | 2010 | 450 | 320 | −0.021 | 20
Ship 23 | 6050 | 29 | 1600 | 40 | −1200 | 5000 | −200 | 2002 | 500 | 290 | −0.044 | 28
Ship 24 | 5800 | 28 | 1700 | 48 | −1000 | 4500 | −200 | 2002 | 500 | 290 | −0.039 | 27
Ship 25 | 4200 | 29 | 1200 | 30 | −800 | 4000 | −180 | 1990 | 300 | 250 | −0.073 | 31
Ship 26 | 9500 | 30 | 2000 | 90 | −1800 | 4000 | −350 | 1989 | 700 | 310 | 0.027 | 8
Ship 27 | 4100 | 29 | 1200 | 40 | −600 | 4000 | −180 | 1977 | 400 | 260 | −0.057 | 29
Ship 28 | 3600 | 25 | 1000 | 40 | −300 | 5000 | −150 | 1992 | 300 | 230 | −0.072 | 30
Ship 29 | 21,300 | 18.8 | 3000 | 68 | −1800 | 10,800 | −177 | 2006 | 700 | 340 | 0.017 | 11
Ship 30 | 12,490 | 32 | 2500 | 80 | −2000 | 8000 | −485 | 1983 | 800 | 330 | 0.043 | 6
Ship 31 | 2600 | 25 | 760 | 30 | −370 | 4500 | −110 | 2020 | 280 | 230 | −0.091 | 32
Ship 32 | 7350 | 30 | 1600 | 48 | −1200 | 7000 | −190 | 2009 | 400 | 320 | −0.027 | 24
WG | 8.07% | 12.05% | 11.01% | 13.71% | 6.75% | 11.32% | 4.97% | 11.29% | 6.60% | 11.60% | |
Source: authors.
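The aggregate score p in Table 16 is consistent with a variance-weighted combination of the three retained component scores: multiplying each ship's PC1–PC3 values from Table 13 by the proportions of variance in Table 11 and summing reproduces the published p values to within rounding (for Ship 2, 0.56 × 0.355 + 0.12 × (−0.204) + 0.11 × 0.088 ≈ 0.18), and ships are ranked by p in descending order. The sketch below illustrates that aggregation on a handful of ships; it is a reconstruction consistent with the published tables, not the authors' code, and the role of the SAPEVO-PC group weights (the WG row) is defined in the method section rather than repeated here.

import pandas as pd

# Factor scores of a few alternatives, copied from Table 13 (PC1, PC2, PC3).
factors = pd.DataFrame(
    {
        "PC1": [0.021127, 0.355221, 0.153825, -0.11303],
        "PC2": [0.099273, -0.20351, 0.012533, -0.09539],
        "PC3": [0.004603, 0.088186, 0.080186, -0.14878],
    },
    index=["Ship 1", "Ship 2", "Ship 6", "Ship 31"],
)
prop_var = [0.56, 0.12, 0.11]                    # proportions of variance of PC1-PC3 (Table 11)

p = (factors * prop_var).sum(axis=1)             # aggregate score; close to column "p" (proportions are rounded)
ranking = p.rank(ascending=False).astype(int)    # 1 = best among the ships included here
print(pd.DataFrame({"p": p.round(3), "rank": ranking}))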
Table 17. Individual and consensus rankings, considering the weights obtained by SAPEVO-PC.
Ships | DM 1 | DM 2 | DM 3 | DM 4 | DM 5 | Consensus
Ship 1 | 11 | 8 | 11 | 13 | 10 | 11
Ship 2 | 1 | 1 | 1 | 1 | 1 | 1
Ship 3 | 8 | 7 | 8 | 10 | 7 | 7
Ship 4 | 15 | 12 | 15 | 14 | 12 | 14
Ship 5 | 6 | 5 | 6 | 6 | 5 | 6
Ship 6 | 2 | 2 | 2 | 2 | 2 | 2
Ship 7 | 3 | 3 | 3 | 4 | 3 | 3
Ship 8 | 4 | 4 | 4 | 3 | 4 | 4
Ship 9 | 23 | 20 | 23 | 25 | 26 | 25
Ship 10 | 10 | 9 | 10 | 8 | 9 | 10
Ship 11 | 25 | 21 | 25 | 22 | 16 | 23
Ship 12 | 19 | 19 | 19 | 17 | 17 | 18
Ship 13 | 26 | 26 | 26 | 26 | 25 | 26
Ship 14 | 17 | 17 | 17 | 15 | 15 | 16
Ship 15 | 24 | 25 | 24 | 23 | 21 | 24
Ship 16 | 21 | 22 | 21 | 24 | 23 | 22
Ship 17 | 12 | 13 | 12 | 12 | 14 | 12
Ship 18 | 22 | 23 | 22 | 20 | 19 | 21
Ship 19 | 20 | 24 | 20 | 18 | 18 | 20
Ship 20 | 18 | 14 | 18 | 19 | 22 | 17
Ship 21 | 13 | 15 | 13 | 11 | 13 | 13
Ship 22 | 14 | 16 | 14 | 16 | 20 | 15
Ship 23 | 28 | 28 | 28 | 28 | 28 | 28
Ship 24 | 27 | 27 | 27 | 27 | 27 | 27
Ship 25 | 30 | 30 | 30 | 30 | 31 | 30
Ship 26 | 9 | 10 | 9 | 7 | 8 | 8
Ship 27 | 29 | 29 | 29 | 29 | 29 | 29
Ship 28 | 31 | 31 | 31 | 31 | 30 | 31
Ship 29 | 7 | 11 | 7 | 9 | 11 | 9
Ship 30 | 5 | 6 | 5 | 5 | 6 | 5
Ship 31 | 32 | 32 | 32 | 32 | 32 | 32
Ship 32 | 16 | 18 | 16 | 21 | 24 | 19
Source: authors.