Article

A MCDM-Based Analysis Method of Testability Allocation for Multi-Functional Integrated RF System

1 Department of Integrated Technology and Control Engineering, School of Aeronautics, Northwestern Polytechnical University, Xi’an 710072, China
2 National Key Laboratory of Aircraft Design, Xi’an 710072, China
3 Shanghai Civil Aviation Control and Navigation System Co., Ltd., Shanghai 201100, China
4 The 6th Research Institute of China Electronics Corporation, Beijing 100083, China
5 China Electronic Product Reliability and Environmental Testing Institute, Guangzhou 510610, China
* Authors to whom correspondence should be addressed.
Electronics 2024, 13(18), 3618; https://doi.org/10.3390/electronics13183618
Submission received: 19 August 2024 / Revised: 6 September 2024 / Accepted: 9 September 2024 / Published: 12 September 2024

Abstract:
The multi-functional integrated RF system (MIRFS) is a crucial component of aircraft onboard systems. In the testability design process, traditional methods cannot effectively deal with the inevitable differences between system designs and usage requirements. By considering the MIRFS’s full lifecycle characteristics, a new testability allocation method based on multi-criteria decision-making (MCDM) is proposed in this paper. Firstly, the testability framework was constructed and more than 100 indicators were given, covering both different system-level and inter-system indicators. Secondly, to manage parameter diversity and calculation complexity, the 12 basic testability indicators were optimized through the Analytic Hierarchy Process and Technique for Order Preference by Similarity to Ideal Solution (AHP-TOPSIS) method. Thirdly, the detailed testability parameters were obtained by using the Decision-Making Trial and Evaluation Laboratory and Analytic Network Process (DEMATEL-ANP) to reduce subjectivity and uncertainty. Finally, an example was utilized, and the results show that the MCDM method is significantly better than traditional methods in terms of accuracy and effectiveness, which will provide a more scientific basis for the MIRFS testability design process.

1. Introduction

Multi-functional integrated RF systems (MIRFSs) [1] play a pivotal role in various applications, including communication, radar, and electronic warfare. These systems [2], which integrate multiple functions into a single unit, are essential for modern electronic warfare and communication systems [3]. However, their complexity presents significant challenges in terms of testability and evaluation. Ensuring the reliability and performance of MIRFSs throughout their lifecycle requires comprehensive testability strategies that can address their unique requirements and complexities.
The current state of research on MIRFSs highlights the necessity of effective testability modeling to ensure system reliability and performance. Researchers have concentrated on developing methods and tools to predict, analyze, and improve the ability to detect and diagnose faults within these complex systems. Traditional testability models often fall short due to the intricate nature of MIRFSs, which includes diverse functionalities and interdependencies between components. For instance, MIRFSs are characterized by the integration of multiple RF functions [4], each with their own set of performance parameters and failure modes, making fault isolation and diagnosis particularly challenging. Therefore, more advanced and tailored modeling approaches are necessary to effectively evaluate and improve the testability of these systems, ensuring that faults can be accurately detected and diagnosed to maintain system performance.
In the field of multi-criteria decision-making for complex engineering systems [5], significant advancements have been made to address the challenges [6] posed by the diversity and complexity of testability parameters [7]. This paper applies MCDM methods to the testability analysis of multi-functional integrated RF systems (MIRFSs), leveraging the advantages of MCDM models in the decision-making process of complex system design and testability to prioritize and select the most critical parameters that impact system performance. These models facilitate the evaluation of trade-offs between different design and testability parameters [8], enabling the identification of optimal solutions that balance performance, cost, and reliability [9]. However, single MCDM methods [10] often struggle to integrate and balance the multifaceted requirements of such complex systems [11]. This is due to the dynamic interactions between various system parameters [12] and the need to consider the interdependencies between different subsystems. Therefore, this paper proposes an integrated MCDM model to provide a comprehensive assessment of system-level performance and testability.
The innovative contributions of our work can be summarized as follows:
(1) A comprehensive testability framework applicable to the entire lifecycle and system level of MIRFS has been proposed, enabling testability analysis to fully cover all stages and critical links. This approach ensures that testability is not limited to isolated components but rather spans the entire system, allowing for a more thorough evaluation of MIRFS performance. The framework includes all stages, from design to deployment and operation, supporting continuous assessment and enhancement of system testability and reliability.
(2) To address the diversity and complexity of testability parameters, this study constructs a basic testability parameter system for MIRFSs from both usage requirements and system requirements. This parameter system provides a scientifically effective method and tool for quantitatively evaluating the testability performance of MIRFSs, ensuring that all critical aspects are adequately measured and assessed. By establishing a comprehensive set of parameters that reflect both operational and technical requirements, the system allows for a detailed and accurate evaluation of the MIRFS, supporting better decision-making and optimization.
(3) Based on the integrated MCDM model, this study develops a parameter indicator allocation model that considers the mutual influence between units. The determined system-level testability indicators are refined and allocated to each subsystem, ensuring that the testability requirements are met throughout the entire system. This approach enhances the precision and effectiveness of testability analysis in MIRFSs, leading to more reliable and robust systems. By taking into account the interdependencies between subsystems, the model ensures that the allocation of testability indicators is optimized, supporting improved fault detection and diagnosis capabilities across the entire system.
The remainder of this paper is organized as follows. Section 2 presents a qualitative analysis of the testability framework that comprehensively spans the entire lifecycle and all system levels. Section 3 offers a detailed explanation of the proposed method, including the testability indicator framework as well as the testability indicator allocation framework. Section 4 uses illustrative examples to validate the proposed methods. Finally, Section 5 provides the concluding remarks and future research directions.

2. Literature Review

In the process of conducting testability allocation for MIRFSs, multi-criteria decision-making (MCDM) provides an effective tool for addressing challenges involving multiple conflicting indicators and complex decision-making environments [13]. Testability allocation is an important step in ensuring system reliability, maintainability, and availability.
In the testability modeling and allocation of a MIRFS, the application of multi-criteria decision-making (MCDM) can significantly improve the scientific rigor [14,15] and effectiveness [16,17] of decision-making. The Analytic Hierarchy Process (AHP) can effectively allocate and rank the weights of various indicators in testability allocation by constructing a hierarchical structure model and paired comparison matrices [18], thereby optimizing testability strategies [19]. The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) ranks schemes by calculating their relative closeness to the ideal solution [20], helping to select the optimal testability scheme. The Analytic Network Process (ANP) considers the interdependence and feedback relationships between indicators [21], making the decision-making process more comprehensive and accurate [22]. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method identifies and analyzes the interactions and degrees of influence among various factors by constructing an impact relationship diagram [23], ensuring more reasonable testability allocation decisions in complex systems [24]. These methods have shown significant advantages in MIRFS testability modeling and allocation, helping decision-makers make scientifically sound choices under multiple indicators and constraints.
Specifically, the hierarchical structure analysis and weight allocation of the AHP [25,26] make complex decision-making problems transparent and operable [27,28,29], making it suitable for handling priority issues of multiple testability indicators in MIRFSs. The TOPSIS, when evaluating the relative advantages and disadvantages of multiple solutions, is based on the concepts of ideal solutions and negative ideal solutions [30], making the decision-making process intuitive and easy to understand, and effectively identifying the testability plan closest to the ideal state. The ANP considers the interdependence [31] and feedback between various indicators [32,33], making the decision model more realistic [34,35] and able to handle complex correlation relationships in MIRFS testability modeling, improving the accuracy and reliability of decision-making. DEMATEL helps identify key factors [36] and main influencing paths [37] by constructing and analyzing factor impact diagrams [38], enabling the better understanding and optimization of testability allocation schemes in complex systems [39].
Recent advancements in cross-domain fault diagnosis have also demonstrated significant potential for enhancing testability in complex systems. For example, subgraph convolutional networks (SGCNs) [40,41] have been effectively utilized to manage and analyze complex, interconnected data structures in systems with multi-source sensor data. These networks can capture the complex dependencies and nonlinear patterns that often exist in heterogeneous data from various sensors, making them highly applicable for MIRFS testability modeling and allocation, where multi-source data fusion is a critical challenge [42,43]. Integrating these advanced approaches with existing MCDM frameworks, such as the AHP and DEMATEL, could provide a more comprehensive analysis of a MIRFS’s complex system structure and interactions, leading to more optimized testability allocation strategies [44]. This integrated method would not only enhance the precision of resource allocation but also leverage deep learning capabilities to process large volumes of multi-source data, improving system reliability and stability.
The combination of these methods not only improves the scientific rigor and comprehensiveness of MIRFS testability modeling and allocation [45], but also significantly reduces the risks and costs caused by decision errors [46], ensuring efficient operation and maintenance of the system throughout its entire lifecycle [47]. The comprehensive application of MCDM methods can better cope with the complexity [48] and uncertainty in MIRFS testability allocation, thereby achieving better system performance.

3. The Proposed Methodology

Up to now, over a hundred types of testability parameters have been defined, and it is inevitable that issues such as strong professional specificity, ambiguous or overlapping meanings, and difficulties in verification arise. Therefore, it is crucial to carefully select parameters that accurately reflect the testability characteristics of the MIRFS from the many available options, and subsequently establish a robust foundational testability parameter framework for the MIRFS.
The testability design process requires coordination and cooperation between the manufacturer and the ordering user. The contractor needs to carry out testability design work according to the system requirements, while the ordering user conducts a trade-off analysis based on the specific usage of the system engineering project, proposes testability design suggestions reflecting the usage requirements, and hands them over to the contractor for implementation. Therefore, testability requirements must not only reflect the system requirements but also meet the usage requirements of the system.
Therefore, this article will comprehensively analyze and meticulously determine the testability index framework of MIRFSs from the perspectives of both system requirements and practical usage needs, ensuring a holistic and well-rounded approach.
The MIRFS utilizes channel synthesis, aperture synthesis, and software synthesis to eliminate the isolation between subsystems that characterizes traditional modular systems. Component units can effectively serve multiple subsystems, helping them efficiently complete specific functions and various tasks. Aperture synthesis enables multiple RF functions to switch freely, rapidly, and seamlessly between different apertures.
Channel synthesis involves the re-partitioning and integration of analog and digital circuits between antennas and integrated processing units, achieving channel- and resource-sharing, as well as the construction of a reconfigurable and universal transmission and reception system. Software integration provides a unified and scalable platform for various signal-processing, data-handling, and storage applications within the system, thereby enhancing the information-sharing capabilities among the MIRFS’s functions. The combination of these three technologies has significantly strengthened the interconnection between units, rendering traditional testability allocation methods insufficient to meet the evolving trend of functional structure integration in MIRFSs.
The MCDM method has its unique advantages over other existing methods in the testability modeling and allocation process of MIRFSs.
(1) Multiple criteria can be handled: MCDM is essentially designed to handle multiple (usually conflicting or influencing) criteria. In the whole life process of a MIRFS, decision-makers are allowed to consider various factors such as cost, reliability, maintainability, and testability at the same time, and fully weigh the impact of various factors on the system. It has obvious advantages over the traditional methods that cannot effectively capture multiple influencing factors.
(2) It can cope with the complexity of the system: The MIRFS contains multiple subsystems, which are interdependent. DEMATEL, the ANP, and other methods can model this interdependence well and capture the hierarchical structure or network characteristics of the complex system. By comparison, fault tree analysis (FTA) and failure mode and effect analysis (FMEA), although they can more easily identify key fault points, cannot fully capture the multi-dimensional dependencies between components.
(3) Fully integrates the preferences of stakeholders: The preferences of stakeholders (customer requirements) need to be fully considered in the whole life process of a MIRFS. The MCDM method can deal with the different priorities of multiple stakeholders (such as engineers, managers, and customers) in the testability aspect of the system design process, while the traditional method does not consider the preferences of stakeholders.
According to the above analysis, this paper applies the MCDM method to the testability modeling and allocation of a MIRFS, so as to meet the testability requirements across its whole lifecycle. Therefore, a testability allocation method based on MCDM is proposed, as shown in Figure 1. This method is divided into two parts. The first part screens a large number of testability indicators to obtain the MIRFS basic parameter framework. The second part establishes a MIRFS testability allocation model and allocates testability indicators to the parameters in the basic parameter framework obtained in the first part, verifying the rationality of the allocation model.
In the first part, the AHP and TOPSIS are integrated. The AHP is used to determine the weight of each factor, and the TOPSIS uses these weights to rank the candidate schemes. This combination compensates for the AHP's weakness in ranking and enhances the accuracy of the decision-making. The AHP relies more on the subjective judgment of experts, while the TOPSIS can use objective data for calculation. Combining the two methods allows problems to be considered more comprehensively and avoids the limitations of a single method.
In the second part, the ANP and DEMATEL are integrated. DEMATEL is used to identify and quantify the interaction between factors, and the ANP can make decision analyses based on these relationships. This combination is more in line with the actual situation, because the factors in many decision-making problems are not completely independent. The ANP considers the interdependence between factors, and DEMATEL can help identify which factors have the greatest impact on other factors, making the network structure of the ANP more reasonable and effective.
AHP-TOPSIS and DEMATEL-ANP can be used to solve different levels of problems in MIRFS testability modeling and allocation, respectively. The former is suitable for determining priorities and ranking, and the latter is suitable for analyzing the complex relationships and dependencies between factors.
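As a concrete illustration of the DEMATEL step described above, the following sketch computes the total relation matrix from a direct-influence matrix; the 4×4 matrix and its expert scores are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical direct-influence matrix among four factors (expert scores 0-4);
# the factor count and values are illustrative only.
D = np.array([
    [0, 3, 2, 1],
    [1, 0, 3, 2],
    [2, 1, 0, 3],
    [1, 2, 1, 0],
], dtype=float)

# Normalize by the largest row sum (a common DEMATEL convention).
N = D / D.sum(axis=1).max()

# Total relation matrix T = N (I - N)^(-1), accumulating indirect influence.
T = N @ np.linalg.inv(np.eye(len(D)) - N)

# Row sums r (influence exerted) and column sums c (influence received);
# r + c is a factor's prominence, r - c separates causes from effects.
r, c = T.sum(axis=1), T.sum(axis=0)
prominence, relation = r + c, r - c
```

The `relation` vector is what would feed the ANP network construction: factors with large positive values are net causes, which suggests where influence links belong in the ANP supermatrix.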

3.1. Construction of Testability Indicator Framework

To align with the testability requirements analysis concept across the entire framework and lifecycle, it is essential to establish a comprehensive testability indicator framework for the MIRFS grounded in the foundational testability parameter framework.
(1) Testability indicator framework under different testability methods
Automatic Test Equipment (ATE), Built-In Test (BIT), and manual testing are currently the three main testability methods for system maintenance in the field of aviation engineering. The testability methods for system equipment vary across maintenance environments and task stages, and the testability parameter indicators that each method focuses on also vary. The BIT-specific parameters Mean Time Between False Alarms (MTBFA), Mean Time to Repair for BIT (MTTRB), and Mean Time Between Repairs for BIT (MTBRB) can be included in the basic testability parameters when BIT testability is required. However, BIT detection and isolation execute quickly, so their test time can be ignored; therefore, BIT need not consider Mean Fault Detection Time (MFDT) and Mean Fault Isolation Time (MFIT).
(2) The testability index framework of multi-level structure in MIRFS
The testability of structural properties and functional differences between units at various levels of the MIRFS necessitates testability parameters that cannot be generalized. In other words, the testability of any system unit should not be defined by a fixed set of testability parameter indicators. Instead, it should be refined and expanded upon the basic testability parameter indicators to accommodate the specific testability requirements of individual units.
Figure 2 shows a sample diagram of the MIRFS’s multi-level testability parameter correlation framework. Taking the example of a three-tier maintenance system, assume the foundational testability parameter set for the MIRFS is $T = (p_1, p_2, p_3)$. The testability parameter sets for subsystems 1 to 3 are denoted as $S_1$, $S_2$, and $S_3$, respectively.
Based on the system testability parameters, new parameters are included according to specific requirements. At this point, $S_1 = (p_1, p_2, p_3, p_4)$, $S_2 = (p_1, p_2, p_3, p_5)$, and $S_3 = (p_1, p_2, p_3, p_6, p_7)$. Unlike traditional systems, synthesis enables an LRU of the MIRFS to serve multiple subsystems under algorithmic allocation. The parameter set corresponding to $LRU_{mn}$ is denoted as $L_{mn}$, which needs to retain the parameters of subsystem $m$. The SRU-layer parameter set $R_{mnk}$ is determined to be consistent with the subsystem layer. For example, in the three-level maintenance system, $LRU_{13}$ and $LRU_{31}$ are also referred to as $LRU_{22}$ and $LRU_{23}$, meaning that $L_{13}$ and $L_{31}$ should retain the parameters of subsystems 1 and 3, respectively, while also retaining the subsystem 2 parameters.
The multi-level parameter framework of the MIRFS can be expressed in set language as Equation (1).
$$\begin{cases} T \subseteq S_m \subseteq L_{mn} \supseteq R_{mnk} \\ T = \bigcap_{m=1}^{m_{\max}} S_m \\ L_{mn} = \bigcup_{k=1}^{k_{\max}} R_{mnk} \end{cases}$$
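The set relations in Equation (1) can be checked mechanically. The sketch below encodes the example sets from this subsection ($p_1$–$p_7$) in Python; the union used for the shared LRU is an illustrative reading, not the paper's notation.

```python
# Basic system-level parameter set and the three subsystem sets of the example.
T  = {"p1", "p2", "p3"}
S1 = {"p1", "p2", "p3", "p4"}
S2 = {"p1", "p2", "p3", "p5"}
S3 = {"p1", "p2", "p3", "p6", "p7"}

# The basic set is exactly what all subsystem sets share.
assert T == S1 & S2 & S3

# A shared LRU retains the parameters of every subsystem it serves;
# here LRU_13 (also acting as LRU_22) keeps subsystem 1 and 2 parameters.
L13 = S1 | S2
assert S1 <= L13 and S2 <= L13
```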
(3) A testability index framework for the full design process of MIRFS
The testability indicator requirements outlined in Figure 3 summarize the different engineering phases, serving as the theoretical foundation for designing a comprehensive testability indicator framework that spans the entire MIRFS design process.
In different engineering phases, the requirements for the same parameter indicator often vary. During the validation phase, analysts review system requirements and propose usage indicators, which can be divided into threshold values and target values. The threshold values reflect the minimum acceptable usage requirements, while the target values represent the desired expectations for usage indicators. In the design phase, the research and production department conducts an implementability analysis based on these usage indicators. Feedback is then provided to the validation department, leading to modifications and adjustments. Contract indicators are subsequently discussed and agreed upon in consultation with the design department. These contract indicators consist of minimum acceptable values and specified values, where the former represents the minimum contractual requirements and the latter indicates the desired performance expectations.
During the research and production phase, constrained by technical defects, slightly higher design indicators are often proposed to control and guide the contract indicators, ensuring that the completed design can meet the contractual requirements.
(4) MIRFS testability index framework for the whole system lifecycle
The testability indicator framework constructed from the first three perspectives is closely interrelated and permeates throughout the entire system design process. The MIRFS testability design process should comprehensively incorporate all three key aspects and establish a three-dimensional testability indicator framework that is applicable to various testability methods, the complete system lifecycle, and the overall system structural hierarchy, as clearly illustrated in Figure 4.

3.2. Construction of Parameter Indicator Framework

The usage requirements reflect the subjective demands put forward by the ordering user based on their past experience in similar equipment development and actual engineering needs. They cover specific requirements such as reliability, maintenance assurance, and limiting constraints. Only by comprehensively considering these usage requirements and building a testability index framework on this basis can the system characteristics and requirements be truly integrated into the system design. System requirements are a collection of various requirements, including performance, functional structure, etc., established from the task objectives that need to be achieved. From a usage perspective, system requirements are objective requirements, and using them to evaluate testability parameters can, to some extent, avoid the influence of subjective experience on parameter selection. This section first uses the Analytic Hierarchy Process (AHP) to prioritize the testability candidate parameter set from the usage-requirement dimension. It then uses the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to prioritize the testability candidate parameter set from the system-requirement dimension, thereby assisting in the comprehensive construction of the testability parameter framework.

3.2.1. Evaluation and Analysis of Alternative Parameters Based on Usage Requirements

After the model construction is completed, the solution process must be executed according to the hierarchical structure. The specific steps are as follows:
Step 1.1 Establish the inter-level relationship judgment matrices.
Relying on the relationships between adjacent levels, use the nine-point scale method to construct judgment matrices indicating the relative importance of elements within the same level with respect to an element from the previous level. The value of the matrix element $x_{ij}$ is defined as follows:
  • 1—factor $i$ is equally important as factor $j$.
  • 3—factor $i$ is slightly more important than factor $j$.
  • 5—factor $i$ is noticeably more important than factor $j$.
  • 7—factor $i$ is strongly more important than factor $j$.
  • 9—factor $i$ is extremely more important than factor $j$.
The remaining numbers indicate importance levels between the adjacent judgment values above, and $x_{ij}$ is the reciprocal of $x_{ji}$.
Step 1.2 Consistency check.
When comparing the importance of elements within the same level to avoid logical conflicts in judgment, it is essential to perform a consistency check on each judgment matrix. The degree of consistency is measured by calculating the Consistency Ratio ( C R ), as shown in Equation (2):
$$CR = \frac{CI}{RI}$$
$$CI = \frac{\lambda_{\max} - n}{n - 1}$$
where $CI$ is the consistency index and $RI$ is the random consistency index. The $CI$ calculation is shown in Equation (3), where $\lambda_{\max}$ is the maximum eigenvalue of the judgment matrix and $n$ is the matrix order.
When there is inconsistency, the eigenvalue adjustment method is first adopted to minimize the C I while preserving the original data as much as possible. If the direct adjustment is difficult, the standard can be reevaluated and discussed with experts to refine the judgment and optimize the original data.
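Steps 1.1–1.2 can be sketched numerically. The judgment matrix below is hypothetical, and $RI = 0.90$ is the commonly tabulated random index for a 4×4 matrix; neither value comes from the paper.

```python
import numpy as np

# Hypothetical 4x4 nine-point-scale judgment matrix, reciprocal by construction.
X = np.array([
    [1,   3,   5,   7],
    [1/3, 1,   3,   5],
    [1/5, 1/3, 1,   3],
    [1/7, 1/5, 1/3, 1],
])

n = X.shape[0]
lam_max = np.linalg.eigvals(X).real.max()   # maximum eigenvalue
CI = (lam_max - n) / (n - 1)                # Equation (3)
RI = 0.90                                   # tabulated random index for n = 4
CR = CI / RI                                # Equation (2)
# CR < 0.1 is the usual acceptance threshold; otherwise the matrix
# should be adjusted as described above.
```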
Step 1.3 Calculation of weight vectors for adjacent levels.
The method used to calculate weights can lead to different results. Therefore, this study uses the arithmetic mean method, geometric mean method, and eigenvalue method to calculate weights separately, and takes their average as the final weight. The weights $\omega_i^1$ and $\omega_i^2$ obtained by the arithmetic mean and geometric mean methods are as follows:
$$\omega_i^1 = \frac{1}{n} \sum_{j=1}^{n} \frac{x_{ij}}{\sum_{k=1}^{n} x_{kj}}$$
$$\omega_i^2 = \frac{\left( \prod_{j=1}^{n} x_{ij} \right)^{\frac{1}{n}}}{\sum_{k=1}^{n} \left( \prod_{j=1}^{n} x_{kj} \right)^{\frac{1}{n}}}$$
Step 1.4 Calculation of priority for the scheme layer.
Taking the average weight vectors of the scheme layer with respect to the criterion layer obtained in Step 1.3 and arranging them in order, a weight matrix $W_M$ is formed. According to Equation (6), the comprehensive weight vector of the indicator layer with respect to the goal layer is obtained. In other words, using the MIRFS usage requirements as evaluation criteria, the priorities of the testability alternative parameters are determined as $\mathrm{Pr}_A$:
$$\mathrm{Pr}_A = \omega_P W_M$$
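The three weighting schemes of Step 1.3 can be compared on a small example. The 3×3 matrix below is hypothetical and deliberately consistent, so all three methods should agree.

```python
import numpy as np

# Illustrative reciprocal judgment matrix (consistent: x13 = x12 * x23).
X = np.array([[1,   2,   4],
              [1/2, 1,   2],
              [1/4, 1/2, 1]])
n = X.shape[0]

# Arithmetic-mean method: average the column-normalized entries (Equation (4)).
w1 = (X / X.sum(axis=0)).mean(axis=1)

# Geometric-mean method (Equation (5)).
gm = X.prod(axis=1) ** (1 / n)
w2 = gm / gm.sum()

# Eigenvalue method: normalized principal eigenvector.
vals, vecs = np.linalg.eig(X)
w3 = np.abs(vecs[:, vals.real.argmax()].real)
w3 /= w3.sum()

# Step 1.3 takes the average of the three estimates as the final weight;
# stacking such vectors into W_M and multiplying by the criterion weights
# then yields the scheme-layer priorities Pr_A of Step 1.4.
w = (w1 + w2 + w3) / 3
```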

3.2.2. Analysis of System Requirement Dimension Parameter Indicators

Step 1.5 Building a testability parameter evaluation matrix.
Quantitatively analyze the correlation between multiple system requirements and alternative testability parameters to obtain a correlation evaluation matrix C . Nine-scale fuzzy numbers are used to reflect the correlation between system requirements and indicator parameters. Larger values indicate stronger correlations, while a value of zero signifies no correlation.
Step 1.6 Calculation of the entropy weights of the system requirements.
$$z_{ij} = \frac{c_{ij}}{\sqrt{\sum_{i=1}^{n} c_{ij}^2}}$$
$$r_{ij} = \frac{z_{ij} - \min_n (z_{ij})}{\max_n (z_{ij}) - \min_n (z_{ij})}$$
For the evaluation matrix $C^T = \{c_{ij}\}$ ($m = 9$, $n = 10$), standardize it sequentially through Equations (7) and (8). After obtaining the membership matrix $R = \{r_{ij}\}_{m \times n}$, use Equation (9) to calculate the entropy values $H_i$. For each requirement, the entropy weight $w_{si}$ is calculated by Equation (10).
$$H_i = -\frac{\sum_{j=1}^{m} f_{ij} \ln f_{ij}}{\ln m}, \quad f_{ij} = \frac{r_{ij} + 1}{\sum_{i=1}^{m} (r_{ij} + 1)}$$
$$w_{si} = \frac{1 - H_i}{\sum_{i=1}^{m} (1 - H_i)}$$
Step 1.7 Calculation of the scores for system testability parameters.
From matrix $R$, take the column-wise maxima and minima to form the ideal solution vector $R^+$ and the worst solution vector $R^-$; both are calculated by Equation (11).
$$R^+ = \left( \max\{R_{11}, R_{21}, \ldots, R_{n1}\}, \max\{R_{12}, R_{22}, \ldots, R_{n2}\}, \ldots, \max\{R_{1m}, R_{2m}, \ldots, R_{nm}\} \right)$$
$$R^- = \left( \min\{R_{11}, R_{21}, \ldots, R_{n1}\}, \min\{R_{12}, R_{22}, \ldots, R_{n2}\}, \ldots, \min\{R_{1m}, R_{2m}, \ldots, R_{nm}\} \right)$$
Subsequently, use Equation (12) to calculate the distances $D_i^+$ and $D_i^-$ to the positive and negative ideal solutions, respectively; the relative closeness $D_i$ is then calculated by Equation (13).
$$D_i^+ = \sqrt{\sum_{j=1}^{m} w_{sj} \left( Z_j^+ - z_{ij} \right)^2}, \quad D_i^- = \sqrt{\sum_{j=1}^{m} w_{sj} \left( Z_j^- - z_{ij} \right)^2}$$
$$D_i = \frac{D_i^-}{D_i^+ + D_i^-}$$
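Steps 1.5–1.7 can be sketched end-to-end. The 4×3 evaluation matrix below (four candidate parameters scored against three system requirements) is hypothetical; the row/column roles follow the description above, with entropy weights computed per requirement.

```python
import numpy as np

# Hypothetical correlation scores: 4 candidate parameters (rows) x
# 3 system requirements (columns), on a nine-scale basis.
C = np.array([[7, 5, 3],
              [3, 7, 5],
              [5, 3, 7],
              [1, 3, 5]], dtype=float)

# Equations (7)-(8): vector normalization, then min-max membership per column.
Z = C / np.sqrt((C ** 2).sum(axis=0))
R = (Z - Z.min(axis=0)) / (Z.max(axis=0) - Z.min(axis=0))

# Equations (9)-(10): entropy per requirement (entries shifted by +1 to
# avoid log 0), then entropy weights over the requirements.
F = (R + 1) / (R + 1).sum(axis=0)
H = -(F * np.log(F)).sum(axis=0) / np.log(C.shape[0])
w = (1 - H) / (1 - H).sum()

# Equations (11)-(13): weighted distances to the ideal (column max) and worst
# (column min) solutions, then the relative closeness of each parameter.
d_pos = np.sqrt((w * (R.max(axis=0) - R) ** 2).sum(axis=1))
d_neg = np.sqrt((w * (R.min(axis=0) - R) ** 2).sum(axis=1))
closeness = d_neg / (d_pos + d_neg)
ranking = np.argsort(-closeness)   # candidate parameters, best first
```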

3.2.3. Integration of Evaluation Results

Step 1.8 Calculation of system testability parameter score.
Using the method of seeking equilibrium solutions in game theory, the evaluation results $\mathrm{Pr}_A$ and $\mathrm{Pr}_D$ are balanced and integrated. Firstly, a linear combination of $\mathrm{Pr}_A$ and $\mathrm{Pr}_D$ forms the comprehensive parameter evaluation vector $\mathrm{Pr}$, as shown in Equation (14), where $\lambda_1$ and $\lambda_2$ are the combination coefficients to be solved, with the objective of minimizing the sum of deviations of $\mathrm{Pr}$ from $\mathrm{Pr}_A$ and from $\mathrm{Pr}_D$. The constraint conditions are shown in Equation (15). By the fundamental principle of differentiation, the first-order conditions for the optimal solution of the above objective function are shown in Equation (16).
$$\mathrm{Pr} = \lambda_1 \mathrm{Pr}_A + \lambda_2 \mathrm{Pr}_D$$
$$\min \left( \left\| \mathrm{Pr} - \mathrm{Pr}_A \right\|^2 + \left\| \mathrm{Pr} - \mathrm{Pr}_D \right\|^2 \right) \quad \text{s.t.} \quad \lambda_1 + \lambda_2 = 1, \; \lambda_1, \lambda_2 \geq 0$$
$$\begin{cases} \lambda_1 \mathrm{Pr}_A \mathrm{Pr}_A^{T} + \lambda_2 \mathrm{Pr}_A \mathrm{Pr}_D^{T} = \mathrm{Pr}_A \mathrm{Pr}_A^{T} \\ \lambda_1 \mathrm{Pr}_D \mathrm{Pr}_A^{T} + \lambda_2 \mathrm{Pr}_D \mathrm{Pr}_D^{T} = \mathrm{Pr}_D \mathrm{Pr}_D^{T} \end{cases}$$
By solving Equation (16) and normalizing the absolute values, $\lambda_1$ and $\lambda_2$ are obtained. Substituting these values into Equation (14) yields the comprehensive importance of the alternative parameters. The top three parameters are then selected based on priority to further develop the testability parameter indicator system for the airborne MIRFS.
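The game-theoretic combination of Step 1.8 reduces to a 2×2 linear system. The sketch below uses two hypothetical priority vectors in place of $\mathrm{Pr}_A$ and $\mathrm{Pr}_D$.

```python
import numpy as np

# Hypothetical priority vectors from the usage-requirement (AHP) and
# system-requirement (TOPSIS) evaluations; values are illustrative.
PrA = np.array([0.45, 0.30, 0.15, 0.10])
PrD = np.array([0.35, 0.35, 0.20, 0.10])

# First-order conditions of Equation (16): a 2x2 system in (lambda1, lambda2).
A = np.array([[PrA @ PrA, PrA @ PrD],
              [PrD @ PrA, PrD @ PrD]])
b = np.array([PrA @ PrA, PrD @ PrD])
lam = np.linalg.solve(A, b)

# Normalize the absolute values so the coefficients sum to one, then combine
# the two rankings via Equation (14).
lam = np.abs(lam) / np.abs(lam).sum()
Pr = lam[0] * PrA + lam[1] * PrD
```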

3.3. MIRFS Testability Indicator Allocation Model

The previous section completed the screening of testability parameters for the MIRFS, and now parameter allocation is carried out for each parameter.
Step 2.1 Constructing a testability allocation model for MIRFS.
Select typical influencing factors as criterion-layer elements ($P_i$) and divide the MIRFS into three subsystems, communication, radar, and electronic countermeasures, according to their functions, as network-layer element groups ($C_1$–$C_3$). $C_1$–$C_3$ consist of reconfigurable antenna units ($E_{11}$, $E_{21}$, $E_{31}$), integrated transceiver units ($E_{12}$, $E_{22}$, $E_{32}$), and data comprehensive processing units ($E_{13}$, $E_{23}$, $E_{33}$). The detailed MIRFS testability allocation model is shown in Figure 5.
The elements within a group belong to the same subsystem and are interconnected through circuits, leading to cascading effects, which manifest as self-feedback within the element group. Structural reconstruction of similar elements across different element groups is significantly influenced by the sharing of units between various subsystems, which results from the functional switching and dynamic reallocation of resources.
Step 2.2 Solving element unit weight vectors.
Using an element P s ( s = 1 , 2 , … , M ) in the control layer as the criterion and an element e j l ( l = 1 , 2 , … , n j ) in element group C j ( j = 1 , 2 , … , N ) of the network layer as the sub-criterion, the dominance of the influence of the elements e i k ( k = 1 , 2 , … , n i ) of element group C i ( i = 1 , 2 , … , N ) on e j l is compared. The influence under different criteria can be quantified through data collection or expert scoring. The Saaty scale is used for pairwise comparison, and the results are entered into the comparison matrix according to the experts' judgments. By adjusting the weights, a balance can be struck among multiple testability indicators so that the indicator allocation is reasonable. The specific values of the Saaty scale and their meanings are as follows:
  • 1 (equally important)—the two factors have the same contribution or influence on the goal;
  • 3 (slightly important)—one factor is slightly more important than another;
  • 5 (obviously important)—one factor is obviously more important and more influential than another;
  • 7 (very important)—one factor is very strongly more important than another;
  • 9 (extremely important)—one factor is absolutely more important than another;
  • 2, 4, 6, 8 (intermediate value)—these values are used to indicate the intermediate degree between the above adjacent judgments. For example, 2 is between 1 (equally important) and 3 (slightly important), which is used to indicate that it is slightly more important than “equally important”;
  • 1/3, 1/5, 1/7, 1/9 (reverse judgment scale value)—these values are used to indicate reverse judgment. For example, if one factor is slightly less important than another, use 1/3; if significantly unimportant, use 1/5; and so on. These values are the reciprocal of the original scale value and are used to reflect the relative importance at the symmetrical position of the comparison matrix.
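To make the scale concrete, the sketch below builds a small Saaty comparison matrix and derives priority weights from its principal eigenvector, together with the usual consistency ratio check; the matrix entries are hypothetical, not the paper's expert judgments.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix on the Saaty scale:
# factor 1 vs. 2 -> 3 (slightly important), 1 vs. 3 -> 5 (obviously important)
a = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 3.0],
              [1 / 5, 1 / 3, 1.0]])

# Priority weights: normalized principal eigenvector of the matrix
vals, vecs = np.linalg.eig(a)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Consistency ratio CR = CI / RI with CI = (lambda_max - n)/(n - 1);
# CR < 0.1 is conventionally considered acceptable
n = a.shape[0]
ci = (vals.real[k] - n) / (n - 1)
cr = ci / 0.58  # random index RI = 0.58 for n = 3
```

For this matrix the weights come out near {0.64, 0.26, 0.10} with CR ≈ 0.03, i.e., an acceptably consistent judgment.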
After comparison, the feature influence matrices W i j can be constructed and combined by blocks into the supermatrix W ¯ s , as given by Equations (17) and (18). Then, using P s as the criterion and C j ( j = 1 , 2 , 3 ) as sub-criteria, the influence of the element groups is compared to obtain the weight influence matrix A s between subsystems. Weighting the supermatrix W ¯ s with A s yields the weighted supermatrix W s , which is calculated by Equation (19).
$W_{ij} = \begin{bmatrix} w_{i1}^{(j1)} & w_{i1}^{(j2)} & \cdots & w_{i1}^{(jn_j)} \\ w_{i2}^{(j1)} & w_{i2}^{(j2)} & \cdots & w_{i2}^{(jn_j)} \\ \vdots & \vdots & \ddots & \vdots \\ w_{in_i}^{(j1)} & w_{in_i}^{(j2)} & \cdots & w_{in_i}^{(jn_j)} \end{bmatrix} \tag{17}$
$\bar{W}_s = \begin{bmatrix} W_{11} & W_{12} & \cdots & W_{1N} \\ W_{21} & W_{22} & \cdots & W_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ W_{N1} & W_{N2} & \cdots & W_{NN} \end{bmatrix} \tag{18}$
$W_s = \bar{W}_s A_s \tag{19}$
Theorem 1.
When a layer of a cyclic system is internally dependent, the limit of its supermatrix exists; that is, the limit of W s exists, and each column of the limit matrix is a normalized eigenvector associated with eigenvalue 1. According to the principle of limit-vector sorting, the jth column of the limit of W s is the relative priority vector of the network-layer elements under P s . Therefore, the weighted supermatrix W s is raised to successive powers until the entries in each column stabilize; the columns of the resulting matrix give the weights of the network-layer elements under the P s criterion, from which the weight vector k of each element under P s is solved.
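The limit described in Theorem 1 can be computed by repeated multiplication, as in the sketch below; the column-stochastic weighted supermatrix is hypothetical, standing in for the W s built in Step 2.2.

```python
import numpy as np

# Hypothetical column-stochastic weighted supermatrix (each column sums
# to 1, which is what makes the limit of Theorem 1 exist)
w = np.array([[0.2, 0.5, 0.3],
              [0.3, 0.1, 0.4],
              [0.5, 0.4, 0.3]])

# Raise to successive powers until the entries in each column stabilize
w_limit = w.copy()
for _ in range(200):
    nxt = w_limit @ w
    if np.allclose(nxt, w_limit, atol=1e-12, rtol=0.0):
        break
    w_limit = nxt
```

All columns of the limit matrix coincide; that common column is the priority vector of the elements under the chosen criterion.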
Step 2.3 Solving the comprehensive importance of elements.
The comprehensive importance calculation method can be divided into linear and nonlinear approaches. The linear calculation equation is shown as Equation (20).
$K = \alpha_\lambda k_\lambda + \alpha_F k_F + \alpha_M k_M + \alpha_C k_C \tag{20}$
The weights α λ , α F , α M , and α C are determined by the relative importance vector of the criteria with respect to the target and can be adjusted according to the design focus, provided that they sum to 1. For convenience of calculation, this article adopts the linear solution method.
Step 2.4 Obtaining a comprehensive impact matrix.
Specifically, through expert discussions, questionnaire surveys, and other methods, the impact relationship between elements is analyzed pairwise. A 4-level scale (0, 1, 2, 3) is used to measure the degree of influence between indicators. The value of the matrix element d i j is defined as follows:
  • 0—Factor i has no effect on factor j .
  • 1—Factor i has little effect on factor j , which can be ignored.
  • 2—Factor i has a certain impact on factor j , which needs to be considered.
  • 3—Factor i has a significant impact on factor j and is an important consideration in the decision-making process.
Through this scale, experts score the direct impact of each pair of factors to form the initial direct-relation matrix D = [ d i j ] . After the direct-relation matrix is constructed, the data are standardized so that all matrix entries lie between 0 and 1, which ensures the stability and consistency of the subsequent calculation. The normalized direct-relation matrix N is calculated as shown in Equation (21):
$N = S \times D \tag{21}$
where S is a scaling factor, and its calculation method is shown as Equation (22):
$S = \min \left( \frac{1}{\max_i \sum_{j} d_{ij}}, \ \frac{1}{\max_j \sum_{i} d_{ij}} \right) \tag{22}$
After obtaining the standardized direct relation matrix N , calculate the comprehensive influence matrix T . The comprehensive impact matrix T includes direct and indirect impacts, which are the basis for constructing the impact relationship diagram. The calculation equation is shown as Equation (23):
$T = N + N^2 + N^3 + \cdots = N (I - N)^{-1} \tag{23}$
The specific process is illustrated in Figure 6.
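The steps above (scaling, normalization, and the series summation of Equation (23)) can be sketched as follows; the 4-factor direct-relation matrix is hypothetical, scored on the 0–3 scale.

```python
import numpy as np

# Hypothetical direct-relation matrix D scored by experts on the 0-3 scale
d = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

# Scaling factor S (Equation (22)): reciprocal of the larger of the
# maximum row sum and maximum column sum
s = min(1 / d.sum(axis=1).max(), 1 / d.sum(axis=0).max())
n = s * d  # normalized direct-relation matrix (Equation (21))

# Comprehensive influence matrix T = N(I - N)^(-1) (Equation (23)),
# capturing both direct and indirect influence
t = n @ np.linalg.inv(np.eye(len(d)) - n)
```

Because T = N + N² + N³ + …, it satisfies T = N + NT, which is a convenient sanity check on the computation.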
Step 2.5 Construction of MIRFS testability indicator allocation framework.
To avoid significant differences between K i and K j , which could lead to considerable variations in the detection rate and isolation rate between units i and j , a sigmoid function is introduced. This ensures that K i remains within the range of (0, 1), with the sigmoid slope decreasing as K i increases, thereby mitigating unreasonable allocation caused by large gaps in K i . The calculated K is then used to allocate the fault detection rate, fault isolation rate, and false alarm rate. The fault detection rate, fault isolation rate, and false alarm rate can be calculated by Equations (24) and (25).
$\gamma_{\mathrm{FD}i} = 1 - \frac{\lambda_S (1 - \gamma_{\mathrm{FDS}}) f(K_i)}{\sum_{i=1}^{n} \lambda_i f(K_i)}, \quad \gamma_{\mathrm{FI}i} = 1 - \frac{\lambda_{DS} (1 - \gamma_{\mathrm{FIS}}) f(K_i)}{\sum_{i=1}^{n} \lambda_{Di} f(K_i)}, \quad f(K_i) = \frac{1}{1 + e^{K_i}} \tag{24}$
$\gamma_{\mathrm{FAR}i} = \frac{\lambda_S \gamma_{\mathrm{FAR}} f(K_i)}{\sum_{i=1}^{n} \lambda_i f(K_i)} \tag{25}$
where K i represents the ith element of K , λ S is the system failure rate, λ D S denotes the system detected-failure rate, λ i is the failure rate of unit i , λ D i is the detected-failure rate of unit i , and γ FDS and γ FIS are the required values of the system fault detection rate and fault isolation rate.
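A minimal sketch of the allocation in Equations (24) and (25), assuming the sigmoid exactly as printed, f(Ki) = 1/(1 + e^Ki), and hypothetical uniform unit failure rates; the K vector is the K1 result from the worked example in Section 4.2.

```python
import numpy as np

def allocate(k, lam, lam_d, g_fds, g_fis, g_far):
    """Allocate unit-level FDR/FIR/FAR from comprehensive weights k.

    lam / lam_d: unit failure rates and detected-failure rates;
    g_fds, g_fis, g_far: required system-level FDR, FIR, and FAR.
    """
    f = 1.0 / (1.0 + np.exp(k))          # sigmoid as printed in Eq. (24)
    lam_s, lam_ds = lam.sum(), lam_d.sum()
    fdr = 1.0 - lam_s * (1.0 - g_fds) * f / (lam * f).sum()
    fir = 1.0 - lam_ds * (1.0 - g_fis) * f / (lam_d * f).sum()
    far = lam_s * g_far * f / (lam * f).sum()
    return fdr, fir, far

# K_1 from the worked example; the failure rates here are hypothetical
k1 = np.array([0.0951, 0.1014, 0.1093, 0.1118, 0.1208,
               0.1146, 0.1108, 0.1137, 0.1225])
lam = np.full(9, 1.0)
fdr, fir, far = allocate(k1, lam, 0.9 * lam, 0.96, 0.94, 0.06)
```

By construction, the failure-rate-weighted averages of the allocated unit rates reproduce the required system-level values, e.g., Σ λi (1 − γFDi) = λS (1 − γFDS).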

4. Result and Discussion

4.1. Parameter Screening and Construction of Parameter Framework

(1) Preliminary screening of testability parameters
Testability parameters can be roughly divided into two categories: characteristic parameters and capability parameters. The physical characteristic parameters reflect the inherent properties of the test equipment, including its physical volume, the number of components it contains, and its deployment location within the system; they also encompass the structural design and material quality, which are crucial to the equipment's robustness. The use characteristics, on the other hand, are reflected through metrics such as reliability, maintainability, and supportability, which together influence the long-term performance and efficiency of the equipment. The technical level and experience of the maintenance personnel are the most common subjective capability parameters, which can vary greatly with training and expertise. The objective capability parameters can be further divided into three categories: range parameters, which define operational limits; time parameters, which assess the duration of certain processes; and ratio parameters, which indicate the efficiency and effectiveness of operations. Range parameters include test coverage, ambiguity group size, etc.; time parameters include fault detection time, BIT fault interval time, etc.; and ratio parameters mainly include the false alarm rate, fault isolation rate, non-reproducibility rate, etc. The classification diagram of testability parameters is shown in Figure 7.
Through the preliminary classification and analysis of testability parameters, the range of parameters is initially narrowed, yet a considerable number of optional parameters remain, and the issue of repeated correlation between them persists. To further narrow the range of alternative parameters for MIRFS testability, a set of scientific and reasonable parameter-screening criteria, covering both quantitative and qualitative considerations to ensure applicability across different scenarios, is proposed as the foundation for more effective decision-making. The specific criteria are as follows:
  • Clarity: The selected testability parameters should have clear and definite definitions and well-established mathematical calculation methods, as specified in the existing outline standards or technical manuals.
  • Universality: Try to avoid selecting testability parameters based on subjective needs and experiences to ensure the universality of the index system.
  • Reflexivity: Testability parameters should not only clearly reflect testability performance characteristics but also accurately reflect their relationship with other performance-related characteristics and attributes.
  • Comprehensiveness: Give priority to the testability parameters that are representative, that is, can comprehensively reflect the test characteristics and level, and realize the simplification of the testability index system structure.
  • Independence: It is quite difficult to completely avoid the correlation between testability parameters, which is generally reflected by the set relationships of adjacency, complementarity, inclusion, and others. Only by carefully selecting parameters that are relatively independent of each other’s meaning and unique characteristics can the parameter architecture be effectively optimized.
  • Testability: According to the actual test conditions, the testability parameters that are easy to measure and obtain, are easy to quantify and compare, and can be accurately measured are selected.
  • Verifiability: The testability parameter value can be calculated according to the data obtained in the design stage, and then the subsequent assessment, evaluation, and verification related to the parameter index can be completed.
  • Convertibility: Prefer testability parameters that can be converted from usage parameters into contract parameters; parameters that cannot be verified can only serve as usage parameters and cannot be further converted into contract parameters.
Based on the above model construction principles, the target layer (T) is to screen the basic testability parameters of the airborne MIRFS. Combining the literature and analysis of the actual use process of the MIRFS, 12 usage requirements, including accurate reporting of system status (G1), continuous uninterrupted operation (G2), and downtime caused by faults (G3), are ultimately selected as the criteria layer (G1–G12). At the solution layer, ten distinct alternative parameters have been defined, including fault detection rate (FDR), fault isolation rate (FIR), false alarm rate (FAR), Mean Fault Detection Time (MFDT), Mean Fault Isolation Time (MFIT), Built-In Test Mean Time Between Failures (MTBEB), Built-In Test Mean Time to Repair (MTTRB), Non-Reproducibility Rate (CNDR), Retest Qualification Rate (RTOKR), and Mean Effective Operating Time (MBRT). These parameters are designated as the solution layer indicators (M1~M10).
(2) Calculation of the importance of alternative parameters based on usage requirements
As shown in the hierarchical relationship in Figure 5, the target-layer-to-criterion-layer judgment matrix is denoted as P , and the twelve criterion-layer-to-indicator-layer judgment matrices are denoted as Q 1 – Q 12 . Among them, P , Q 1 , and Q 8 are shown in Table 1, Table 2 and Table 3. The average weight vector of the criterion layer with respect to the target layer is calculated from the judgment matrix P as ω p = {0.0582,0.925,0.1765,0.0294,0.263,0.0244,0.1939,0.0349,0.341,0.0497,0.0510,0.0291}.
Similarly, for Q1 to Q12, the average weight vector of the scheme layer over the criterion layer, ω M 1 ω M 12 , is obtained. By taking the transposes of ω M 1 ω M 12 and arranging them in sequence, a weight matrix WM is formed. The comprehensive weight vector from the indicator layer to the target layer is then derived from Equation (10), resulting in the priority Pr A of the test candidate parameters based on the MIRFS usage requirements. The values for Pr A are given by the following set: {0.3510, 0.1735, 0.1278, 0.0948, 0.0600, 0.0369, 0.0383, 0.0261, 0.0578, 0.0338}.
(3) Calculation of the relative importance of candidate parameters based on the system requirement dimension criteria
For the evaluation matrix C T = c i j m × n ( m = 9 , n = 10 ) , R = r i j m × n is obtained according to Equations (11) and (12). The weight vector composed of the entropy weights w s i of each system requirement is calculated using Equations (13) and (14). The results are shown in Table 4, with W S = {0.0974,0.1103,0.1324,0.1289,0.1155,0.0667,0.1077,0.1474,0.0937}. Subsequently, R is calculated and normalized to obtain the evaluation score of the candidate parameters against the framework requirements. The values for Pr D are given by the following set: {0.2059,0.2048,0.1523,0.0912,0.1132,0.0184,0.0184,0.0672,0.0639,0.0646}.
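The entropy weights w_si can be reproduced with the standard entropy-weight method sketched below; the small evaluation matrix is hypothetical (the paper's 9 × 10 matrix C^T would be used in its place), and the exact normalization in Equations (11)–(14) may differ in detail.

```python
import numpy as np

# Hypothetical evaluation matrix: rows are system requirements,
# columns are candidate parameters (all scores strictly positive)
c = np.array([[7.0, 5.0, 6.0, 4.0, 8.0],
              [3.0, 8.0, 5.0, 6.0, 2.0],
              [6.0, 4.0, 7.0, 5.0, 6.0]])
m, n = c.shape

# Row-wise proportions and the information entropy of each requirement
p = c / c.sum(axis=1, keepdims=True)
e = -(p * np.log(p)).sum(axis=1) / np.log(n)

# Entropy weights: requirements whose scores vary more carry more weight
w_s = (1.0 - e) / (1.0 - e).sum()
```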
According to game theory, the comprehensive importance of the candidate parameters is obtained from Equation (15). Based on priority, the top three parameters—fault detection rate (FDR), fault isolation rate (FIR), and false alarm rate (FAR)—are selected as the basic testability parameter indicators for the airborne MIRFS. These parameters are used to construct the MIRFS testability parameter indicator framework.

4.2. Establishment and Verification of Testability Allocation Method

In the actual engineering situation, four typical influencing factors (failure rate, failure importance, mean time to repair, and cost) are selected as criterion-layer elements (P1–P4). The MIRFS is divided into communication, radar, and electronic countermeasure subsystems as the network-layer element groups ( C 1 – C 3 ). C 1 – C 3 are composed of antenna units ( E 11 , E 21 , E 31 ), integrated transceiver units ( E 12 , E 22 , E 32 ), and data comprehensive processing units ( E 13 , E 23 , E 33 ) that can be integrated and reconfigured by the system. The elements in each element group belong to the same subsystem and are interconnected through circuits, producing a certain cascade effect that manifests as self-feedback of the element group; the structural reconfiguration of similar elements in different element groups due to function switching reflects unit-sharing among the subsystems. In this example, γ FDS = 0.96 , γ FIS = 0.94 , and γ FAR = 0.06 are assumed.
First, the control weight of the corresponding control layer is calculated. According to the expert score, the control criterion weight W s ( s = 1 , 2 , 3 , 4 ) and the element weight v m ( m = 1 , 2 , 3 , 4 ) of the scheme layer are obtained. According to the control criterion integration equation, as shown in Equation (26), the control weight is obtained. The control weight is used to appropriately weight the hypermatrix. The calculation results of W s and v m are provided in Table 5 and Table 6, respectively.
$f_x = W_1 v_1 + W_2 v_2 + W_3 v_3 + W_4 v_4 \tag{26}$
Taking P s ( s = 1 , 2 , 3 , 4 ) as the criterion and E j l ( l = 1 , 2 , 3 ) in C j ( j = 1 , 2 , 3 ) as the sub-criterion, the influence of the elements of each element group on E j l is compared. The influence under different criteria can be quantified through data collection or the expert scoring method. After comparison, the characteristic influence matrices W i j can be constructed and combined by blocks into the unweighted supermatrix.
Take P 1 as an example. With P 1 as the criterion and C j ( j = 1 , 2 , 3 ) as the sub-criterion, the influence of the element groups is compared to obtain the weight influence matrix A1 between subsystems, as shown in Table 7, and all local priority vectors are integrated into a supermatrix to obtain the unweighted supermatrix W 1 , as shown in Table 8. The weighted supermatrix, shown in Table 9, is obtained by weighting W1 with the integrated weight f x . Multiple power operations are performed on the weighted supermatrix, as in Equation (27), until the matrix converges; the resulting matrix is called the limit supermatrix, as shown in Table 10.
$W_{\mathrm{limit}} = \lim_{k \to \infty} W_{\mathrm{weighted}}^{k} \tag{27}$
The column corresponding to this criterion is extracted from the limit supermatrix; it is the final weight vector of each element under the control criterion P1, denoted k λ . This process is repeated to obtain the weight vectors k F , k M , and k C corresponding to criteria P 2 , P 3 , and P 4 , whose weighted supermatrices are shown in Table 11, Table 12 and Table 13. The calculation results of k λ , k F , k M , and k C are as follows:
k λ = {0.0602,0.0599,0.0773,0.2441,0.1243,0.2176,0.0389,0.0859,0.0917}; k F = {0.0745,0.1774,0.2061,0.0305,0.0233,0.0361,0.1631,0.1232,0.1659}; k M = {0.0288,0.0359,0.0609,0.1527,0.2217,0.0969,0.1628,0.0859,0.1543}; k C = {0.0582,0.0437,0.1111,0.0427,0.0251,0.0272,0.2985,0.1664,0.2271}.
Establish an optimization model to solve the optimized comprehensive importance vector. It can be obtained that W = {0.0515,0.0661,0.1046,0.1109,0.1069,0.0837,0.1874,0.1189,0.1700}. From Table 14, the mixed weight K can be obtained by using the mixed weight matrix calculation in Equation (28).
$K = TW + W = (I + T) W \tag{28}$
Based on the analysis, it can be determined that the values for K 1 are as follows: {0.0951,0.1014,0.1093,0.1118,0.1208,0.1146,0.1108,0.1137,0.1225}.
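The hybrid step of Equation (28) simply propagates the ANP importance vector through the DEMATEL influence matrix; below is a sketch with hypothetical T and W values, reading the "1" in Equation (28) as the identity matrix I.

```python
import numpy as np

# Hypothetical DEMATEL comprehensive influence matrix T and ANP
# comprehensive importance vector W for three units
t = np.array([[0.10, 0.25, 0.15],
              [0.20, 0.12, 0.30],
              [0.18, 0.22, 0.08]])
w = np.array([0.30, 0.45, 0.25])

# Mixed weight K = TW + W = (I + T)W, then normalized to sum to 1
k = (np.eye(3) + t) @ w
k /= k.sum()
```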
After calculating K 1 , the fault detection rate and isolation rate can be allocated. The weight values obtained from the traditional methods and from the MCDM method are substituted into Equations (24) and (25) to derive the allocation results for the fault detection rate (FDR), fault isolation rate (FIR), and false alarm rate (FAR).

4.3. Comparison and Analysis of Results

To comprehensively evaluate the effectiveness of the MCDM method in the allocation of testability indicators, three comparison methods are introduced: Simple Weighted Method (SWM), Equal Distribution Method (EDM), and Historical Data-Based Allocation (HDBA). By comparing these methods, the performance of the MCDM method in different testability indicators (fault detection rate, fault isolation rate, and false alarm rate) can be more clearly understood, ensuring that the research results are persuasive and practically valuable. Below are brief descriptions of these three different methods along with their respective calculation results.
(1) Simple Weighted Method (SWM)
The Simple Weighted Method allocates testability indicators proportionally based on the failure rates and importance of each system component. The weighting factors are simply weighted according to the importance and failure rates of the components, ultimately obtaining the fault detection rate (FDR), fault isolation rate (FIR), and false alarm rate (FAR) for each component.
$K_i = \frac{\lambda_i}{\sum_{i=1}^{n} \lambda_i} \tag{29}$
According to Equation (29), the weight factor K i can be calculated. The failure rates of each individual unit are clearly shown in Table 15 below. It can be obtained that K 2 = {0.095,0.095,0.1,0.1,0.105,0.105,0.11,0.11,0.115}.
(2) Equal Distribution Method (EDM)
The Equal Distribution Method is a method of evenly distributing testability indicators among all system components. This method assumes that each component is equally important to the system’s testability, resulting in the same FDR, FIR, and FAR for each component. It can be obtained that K 3 = {0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1}.
(3) Historical Data-Based Allocation (HDBA)
The Historical Data-Based Allocation method utilizes historical failure data to allocate testability indicators. By calculating the failure rates of each component based on historical data, the FDR, FIR, and FAR are allocated accordingly. It can be obtained that K 4 = {0.093,0.099,0.107,0.109,0.118,0.112,0.108,0.111,0.120}.
The obtained weight factors are substituted into Equations (24) and (25) to derive the allocation results of the testability indicators, as shown in Table 16.
Quantitative Analysis: The MCDM-based method consistently outperforms other methods in terms of fault detection rate (FDR) and fault isolation rate (FIR) across all test elements (E11 to E33). For example, the highest FDR achieved by the MCDM method is 0.9898, compared to a maximum of about 0.9666 for other methods. Similarly, the FIR values for the MCDM method are higher, with a maximum of 0.9783, significantly exceeding those of the other methods. Additionally, the false alarm rate (FAR) for the MCDM method is notably lower, with a minimum value of 0.0309, which is substantially lower than the corresponding values for the other methods. These results indicate that the MCDM method is more effective at reducing false alarms while maintaining high fault detection and isolation capabilities.
Qualitative Analysis: Compared to traditional methods, the MCDM method offers several qualitative advantages. It provides a more comprehensive approach by considering multiple factors, particularly those that are often conflicting, leading to a more holistic and optimized decision-making process. Unlike traditional methods that may rely heavily on experience or a single criterion, the MCDM method integrates both expert judgment and quantitative data, enhancing the accuracy and reliability of the decisions. Moreover, the MCDM method demonstrates greater adaptability and robustness when handling various metrics simultaneously, allowing it to maintain stable performance under different conditions, providing more reliable outcomes in real-world applications.

5. Conclusions

MIRFSs present significant challenges due to their complexity and the diversity of their testability parameters. The intricate interactions among their components and their wide range of functionalities can result in instability and false alarms, which can substantially affect the reliability and performance of electronic systems. This study introduces a comprehensive testability framework and an integrated MCDM model specifically designed for MIRFSs, addressing the complexities of fault detection and system testability. A series of comparative and ablation experiments were conducted to validate the effectiveness of the proposed method. Based on the results, the main conclusions are as follows:
(1) The proposed testability framework, suitable for the entire lifecycle and system level of MIRFS, ensures comprehensive testability coverage across all stages and key links of the system. This holistic approach significantly enhances the ability to detect and diagnose faults, improving overall system reliability.
(2) By constructing a foundational testability parameter framework based on both usage requirements and system requirements, a scientifically sound method for quantitatively evaluating the testability performance of a MIRFS has been established. This framework enables detailed and precise assessments, thereby supporting better decision-making and optimizing system performance.
(3) The parameter indicator allocation model, developed using the integrated MCDM model, accounts for the mutual influence between units and refines system-level testability indicators for each subsystem. This approach ensures that testability requirements are consistently fulfilled across the entire system, thereby significantly improving the overall precision and effectiveness of the testability analysis.
Despite these strengths, there are several limitations to the current approach. The reliance on expert judgment for assigning weights and determining priorities within the MCDM framework may introduce subjectivity, potentially leading to biases in the decision-making process. Additionally, the computational complexity associated with integrating multiple MCDM methods could pose challenges, particularly for large-scale systems. Future work could focus on developing more efficient algorithms and thoroughly exploring data-driven methods to significantly reduce dependency on expert input and further enhance model scalability and flexibility.
The application of the proposed MCDM framework and testability methods in real-world MIRFSs will be explored in future work to achieve practical implementation and validation. To further enhance the testability framework and the integrated MCDM model, future research could explore the integration of recent advancements in subgraph convolutional networks (SGCNs). Subgraph convolutional networks have shown promising results in handling complex, interconnected data structures, which could be particularly beneficial for the MIRFS, given its intricate component interactions and dependencies. By incorporating a SGCN, the model could better capture the hierarchical and relational information within the MIRFS, allowing for more precise fault detection and diagnosis. Furthermore, integrating a SGCN with the existing MCDM-based approach could provide a dual advantage: leveraging the MCDM model’s decision-making capabilities while enhancing it with the deep learning abilities of the SGCN to analyze complex graph structures. This integration would enable a more refined and comprehensive analysis of the MIRFS’s subsystem interactions, potentially identifying novel testability indicators and further refining existing ones based on continuous dynamic data analysis techniques.
In conclusion, the proposed testability framework and MCDM model offer a robust foundation for enhancing the MIRFS’s testability. However, by actively exploring these advanced research directions, the model can be further refined and effectively adapted to better meet the evolving demands of complex integrated systems, thereby ensuring continued improvements in both reliability and performance.

Author Contributions

Conceptualization, C.Z. and D.Z.; methodology, C.Z. and Y.H.; validation, Y.H.; formal analysis, S.H.; investigation, Y.H.; resources, C.Z.; data curation, C.Z.; writing—original draft preparation, Y.H.; writing—review and editing, C.Z. and Z.D.; supervision, C.Z. and Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the National Key Scientific Research Projects of China (JSZL2022607B002, JSZL202160113001 and JCKY2021608B018), the Fundamental Research Funds for the Central Universities (HYGJXM202310, HYGJXM202311 and HYGJXM202312), and the Ministry of Industry and Information Technology Project (CEICEC-2022-ZM02-0249). The authors also gratefully acknowledge the helpful comments and suggestions of the reviewers which have improved the presentation.

Data Availability Statement

No data were used for the research described in the article.

Conflicts of Interest

Author Dingyu Zhou was employed by the company Shanghai Civil Aviation Control and Navigation System Co., Ltd. Author Zhijie Dong was employed by the company The 6th Research Institute of China Electronics Corporation. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

  24. Jeong, J.S.; González-Gómez, D. A web-based tool framing a collective method for optimizing the location of a renewable energy facility and its possible application to sustainable STEM education. J. Clean. Prod. 2020, 251, 119747. [Google Scholar] [CrossRef]
  25. Zhang, L.; Wang, Y.; Zhang, J.; Zhang, S.; Guo, Q. Rockfall hazard assessment of the slope of Mogao Grottoes, China based on AHP, F-AHP and AHP-TOPSIS. Environ. Earth Sci. 2022, 81, 377. [Google Scholar] [CrossRef]
  26. Naeem, M.; Farid, H.U.; Madni, M.A.; Albano, R.; Inam, M.A.; Shoaib, M.; Shoaib, M.; Rashid, T.; Dilshad, A.; Ahmad, A. GIS-Based Analytical Hierarchy Process for Identifying Groundwater Potential Zones in Punjab, Pakistan. ISPRS Int. J. Geo-Inf. 2024, 13, 317. [Google Scholar] [CrossRef]
  27. Eriş, M.B.; Sezer, E.D.G.; Ocak, Z. Prioritization of the factors affecting the performance of clinical laboratories using the AHP and ANP techniques. Netw. Model. Anal. Health Inform. Bioinform. 2022, 12, 5. [Google Scholar] [CrossRef]
  28. Bukar, U.A.; Sayeed, M.S.; Razak, S.F.; Yogarayan, S.; Sneesl, R. Prioritizing Ethical Conundrums in the Utilization of ChatGPT in Education through an Analytical Hierarchical Approach. Educ. Sci. 2024, 14, 959. [Google Scholar] [CrossRef]
  29. Hoang, T.-T.; Huang, Y.-F.; Do, M.-H. The Key Factors of Watching Live Streaming in Taiwanese Manufacturing Sectors Identified by the Analytic Hierarchy Process. Eng. Proc. 2024, 74, 29. [Google Scholar] [CrossRef]
  30. Haidar, A.M.A.; Sharip, M.R.M.; Ahfock, T. An integrated decision-making approach for managing transformer tap changer operation while optimizing renewable energy storage allocation using ANP-entropy and TOPSIS. Electr. Eng. 2024, 106, 2407–2423. [Google Scholar] [CrossRef]
  31. Liu, Y.; Song, P. Research on the Application Maturity of Enterprises’ Artificial Intelligence Technology Based on the Fuzzy Evaluation Method and Analytic Network Process. Appl. Sci. 2024, 14, 7804. [Google Scholar] [CrossRef]
  32. Membele, G.M.; Naidu, M.; Mutanga, O. Application of analytic network process (ANP), local and indigenous knowledge in mapping flood vulnerability in an informal settlement. Nat. Hazards 2024, 120, 2929–2951. [Google Scholar] [CrossRef]
  33. Qi, C.; Zou, Q.; Cao, Y.; Ma, M. Hazardous Chemical Laboratory Fire Risk Assessment Based on ANP and 3D Risk Matrix. Fire 2024, 7, 1465–1469. [Google Scholar] [CrossRef]
  34. Li, X.; Ran, Y.; Fafa, C.; Zhu, X.; Wang, H.; Zhang, G. A maintenance strategy selection method based on cloud DEMATEL-ANP. Soft Comput. 2023, 27, 18843–18868. [Google Scholar] [CrossRef]
  35. Li, M.; Yang, C.; Zhang, L.; Fan, R. Research on Sustainable Development Strategy of Energy Internet System in Xiongan New Area of China Based on PEST-SWOT-ANP Model. Sustainability 2024, 16, 6395. [Google Scholar] [CrossRef]
  36. Tsai, B.-H. Applying Fuzzy Decision-Making Trial and Evaluation Laboratory and Analytic Network Process Approaches to Explore Green Production in the Semiconductor Industry. Sustainability 2024, 16, 7163. [Google Scholar] [CrossRef]
  37. Yadav, A.; Sachdeva, A.; Garg, R.K.; Qureshi, K.M.; Mewada, B.G.; Qureshi, M.R.; Mansour, M. Achieving Net-Zero in the Manufacturing Supply Chain through Carbon Capture and LCA: A Comprehensive Framework with BWM-Fuzzy DEMATEL. Sustainability 2024, 16, 6972. [Google Scholar] [CrossRef]
  38. Jamali, A.; Robati, M.; Nikoomaram, H.; Farsad, F.; Aghamohammadi, H. Urban Resilience Assessment Using Hybrid MCDM Model Based on DEMATEL-ANP Method (DANP). J. Indian Soc. Remote Sens. 2023, 51, 893–915. [Google Scholar] [CrossRef]
  39. Zhang, S.; Liu, J.; Li, Z.; Xiahou, X.; Li, Q. Analyzing Critical Factors Influencing the Quality Management in Smart Construction Site: A DEMATEL-ISM-MICMAC Based Approach. Buildings 2024, 14, 2400. [Google Scholar] [CrossRef]
  40. Yu, Y.; He, Y.; Karimi, H.R.; Gelman, L.; Cetin, A.E. A two-stage importance-aware subgraph convolutional network based on multi-source sensors for cross-domain fault diagnosis. Neural Netw. 2024, 179, 106518. [Google Scholar] [CrossRef]
  41. Zhang, Q.; Sun, Y.; Hu, Y.; Wang, S.; Yin, B. A subgraph sampling method for training large-scale graph convolutional network. Inf. Sci. 2023, 649, 119661. [Google Scholar] [CrossRef]
  42. Li, T.; Zhou, Z.; Li, S.; Sun, C.; Yan, R.; Chen, X. The emerging graph neural networks for intelligent fault diagnostics and prognostics: A guideline and a benchmark study. Mech. Syst. Signal Process. 2022, 168, 108653. [Google Scholar] [CrossRef]
  43. Dong, Y.; Tang, Y.; Cheng, X.; Yang, Y.; Wang, S. SedSVD: Statement-level software vulnerability detection based on Relational Graph Convolutional Network with subgraph embedding. Inf. Softw. Technol. 2023, 158, 107168. [Google Scholar] [CrossRef]
  44. Yu, Y.; Karimi, H.R.; Shi, P.; Peng, R.; Zhao, S. A new multi-source information domain adaption network based on domain attributes and features transfer for cross-domain fault diagnosis. Mech. Syst. Signal Process. 2024, 211, 111194. [Google Scholar] [CrossRef]
  45. Gamal, A.; Abdel-Basset, M.; Hezam, I.M.; Sallam, K.M.; Alshamrani, A.M.; Hameed, I.A. A computational sustainable approach for energy storage systems performance evaluation based on spherical-fuzzy MCDM with considering uncertainty. Energy Rep. 2024, 11, 1319–1341. [Google Scholar] [CrossRef]
  46. Cui, H.; Dong, S.; Hu, J.; Chen, M.; Hou, B.; Zhang, J.; Zhang, B.; Xian, J.; Chen, F. A hybrid MCDM model with Monte Carlo simulation to improve decision-making stability and reliability. Inf. Sci. 2023, 647, 119439. [Google Scholar] [CrossRef]
  47. Zhang, J. Research and Application of Corporate Sustainable Value Assessment Based on Discounted Cash Flow Method and MCDM Method. In Proceedings of the 2021 2nd International Conference on Electronics, Communications and Information Technology (CECIT), Sanya, China, 27–29 December 2021; Volume 27–29, pp. 912–917. [Google Scholar]
  48. Abdelaal, M.A.; Seif, S.M.; El-Tafesh, M.M.; Bahnas, N.; Elserafy, M.M.; Bakhoum, E.S. Sustainable assessment of concrete structures using BIM–LCA–AHP integrated approach. Environ. Dev. Sustain. 2023, 1–20, 3701. [Google Scholar] [CrossRef]
Figure 1. Research method.
Figure 2. Sample diagram of MIRFS’s multi-level testability parameter correlation framework.
Figure 3. Testability index framework for the full MIRFS design process.
Figure 4. Testability indicator system for the entire lifecycle of the MIRFS.
Figure 5. MIRFS candidate parameter evaluation and testability allocation model.
Figure 6. DEMATEL-ANP method.
Figure 7. Classification diagram of testability parameters.
Table 1. Judgment matrix P between MIRFS target layer T and criterion layer G.

| Pi \ Pj | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | P12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| P1 | 1 | 5 | 3 | 9 | 5 | 7 | 1 | 7 | 7 | 7 | 5 | 5 |
| P2 | 1/5 | 1 | 1/3 | 3 | 5 | 3 | 1/5 | 3 | 3 | 3 | 3 | 3 |
| P3 | 1/3 | 3 | 1 | 7 | 5 | 5 | 1 | 5 | 7 | 5 | 5 | 3 |
| P4 | 1/9 | 1/3 | 1/7 | 1 | 1 | 3 | 1/5 | 1 | 1 | 1/3 | 1/3 | 1/3 |
| P5 | 1/5 | 1/5 | 1/5 | 1 | 1 | 1 | 1/7 | 1 | 1 | 1/3 | 1/3 | 1/3 |
| P6 | 1/7 | 1/3 | 1/5 | 1/3 | 1 | 1 | 1/7 | 1 | 1 | 1/3 | 1/3 | 1/3 |
| P7 | 1 | 5 | 1 | 5 | 7 | 7 | 1 | 1 | 1 | 3 | 3 | 5 |
| P8 | 1/7 | 1/3 | 1/5 | 1 | 1 | 1 | 1/5 | 5 | 5 | 1 | 1 | 1 |
| P9 | 1/7 | 1/3 | 1/7 | 1 | 1 | 1 | 1/5 | 1 | 1 | 1 | 1 | 1 |
| P10 | 1/7 | 1/3 | 1/5 | 3 | 3 | 3 | 1/3 | 1 | 1 | 1 | 1 | 1 |
| P11 | 1/5 | 1/3 | 1/5 | 3 | 3 | 3 | 1/3 | 1 | 1 | 1 | 1 | 1 |
| P12 | 1/5 | 1/3 | 1/3 | 3 | 3 | 1/3 | 1/5 | 1 | 1 | 1 | 1 | 1 |
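Judgment matrices such as Table 1 are converted to criterion weights via the principal eigenvector, together with a consistency check. A minimal sketch (the 3 × 3 demo matrix below is illustrative, not taken from the paper; random-index values beyond n = 10 vary by source and are omitted):

```python
import numpy as np

def ahp_weights(A):
    """Principal-eigenvector weights and consistency ratio of a judgment matrix."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)           # Perron (largest) eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                          # normalize to a weight vector
    ci = (eigvals[k].real - n) / (n - 1)  # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,
          7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}[n]  # Saaty random indices
    cr = 0.0 if ri == 0 else ci / ri      # CR < 0.1 is the usual acceptance bound
    return w, cr

# Perfectly consistent 3x3 demo matrix: weights should be 4/7, 2/7, 1/7 with CR = 0.
A = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
w, cr = ahp_weights(A)
```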
Table 2. Judgment matrix Q1 between criterion layer element G1 and scheme layer M.

| Q1i \ Q1j | Q11 | Q13 | Q14 | Q16 | Q17 | Q18 | Q19 |
|---|---|---|---|---|---|---|---|
| Q11 | 1 | 3 | 5 | 7 | 7 | 5 | 5 |
| Q13 | 1/3 | 1 | 3 | 4 | 4 | 3 | 3 |
| Q14 | 1/5 | 1/3 | 1 | 1 | 2 | 1 | 1 |
| Q16 | 1/7 | 1/4 | 1 | 1 | 1 | 1 | 1 |
| Q17 | 1/7 | 1/4 | 1/2 | 1 | 1 | 1 | 1 |
| Q18 | 1/5 | 1/3 | 1 | 1 | 1 | 1 | 1 |
| Q19 | 1/5 | 1/3 | 1 | 1 | 1 | 1 | 1 |
Table 3. Judgment matrix Q8 between criterion layer element G2 and indicator layer M.

| Q8i \ Q8j | Q82 | Q84 | Q86 | Q87 | Q88 | Q89 |
|---|---|---|---|---|---|---|
| Q82 | 1 | 1 | 3 | 5 | 3 | 3 |
| Q84 | 1 | 1 | 3 | 5 | 3 | 3 |
| Q86 | 1/3 | 1/3 | 1 | 2 | 1 | 1 |
| Q87 | 1/5 | 1/5 | 1/2 | 1 | 2 | 2 |
| Q88 | 1/3 | 1/3 | 1 | 1/2 | 1 | 1 |
| Q89 | 1/3 | 1/3 | 1 | 1/2 | 1 | 1 |
Table 4. Correlation matrix C between system requirements and alternative testability parameters.

| Parameters | Mission Profile | Mission Grade | Task Time | Task Frequency | Task Environment | Combat Readiness | Mission Success | Structural Function | Tactical Metrics |
|---|---|---|---|---|---|---|---|---|---|
| FDR | 8 | 9 | 7 | 9 | 7 | 9 | 7 | 7 | 8 |
| FIR | 8 | 7 | 9 | 9 | 7 | 7 | 9 | 7 | 6 |
| FAR | 6 | 5 | 7 | 7 | 9 | 7 | 7 | 5 | 7 |
| MFDT | 4 | 5 | 0 | 5 | 0 | 5 | 5 | 7 | 5 |
| MFIT | 4 | 5 | 0 | 5 | 0 | 5 | 5 | 7 | 6 |
| CNDR | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 |
| RTOK | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 |
| MTBEB | 2 | 0 | 6 | 0 | 3 | 5 | 0 | 0 | 0 |
| MTTRB | 0 | 3 | 0 | 0 | 0 | 0 | 7 | 0 | 5 |
| MBRT | 3 | 0 | 0 | 0 | 5 | 0 | 0 | 0 | 7 |
Table 5. Control criteria weights Ws.

| Control Criteria (Network) | Variable Name | Weight |
|---|---|---|
| P1 | W1 | 0.565670 |
| P2 | W2 | 0.111141 |
| P3 | W3 | 0.114038 |
| P4 | W4 | 0.209153 |
Table 6. Scheme layer element weights vm.

| Element Name | v1 | v2 | v3 | v4 | f(x) |
|---|---|---|---|---|---|
| E11 | 0.060235 | 0.074468 | 0.028817 | 0.058154 | 0.057799 |
| E12 | 0.059928 | 0.177432 | 0.035935 | 0.043678 | 0.066853 |
| E13 | 0.077264 | 0.206085 | 0.060872 | 0.111020 | 0.096772 |
| E21 | 0.244116 | 0.030481 | 0.152741 | 0.042701 | 0.167826 |
| E22 | 0.124315 | 0.023285 | 0.211693 | 0.025090 | 0.103438 |
| E23 | 0.217528 | 0.036111 | 0.096963 | 0.027214 | 0.143842 |
| E31 | 0.038986 | 0.163050 | 0.162800 | 0.298537 | 0.121180 |
| E32 | 0.091665 | 0.165880 | 0.154304 | 0.227109 | 0.135385 |
| E33 | 0.085910 | 0.123208 | 0.085876 | 0.166496 | 0.106906 |
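The global score column f(x) of Table 6 is consistent with a weighted sum of the four criterion-wise weight vectors using the control-criteria weights of Table 5. A quick numerical check (values transcribed from Tables 5 and 6; two Table 6 cells are garbled in the source render and are entered here with the values implied by this identity, so they are reconstructions rather than quoted digits):

```python
import numpy as np

# Control-criteria weights W_s (Table 5: P1..P4).
ws = np.array([0.565670, 0.111141, 0.114038, 0.209153])

# Element weights v1..v4 (Table 6 rows E11..E33).
v = np.array([
    [0.060235, 0.074468, 0.028817, 0.058154],
    [0.059928, 0.177432, 0.035935, 0.043678],   # v3 reconstructed from f(x)
    [0.077264, 0.206085, 0.060872, 0.111020],
    [0.244116, 0.030481, 0.152741, 0.042701],
    [0.124315, 0.023285, 0.211693, 0.025090],
    [0.217528, 0.036111, 0.096963, 0.027214],
    [0.038986, 0.163050, 0.162800, 0.298537],
    [0.091665, 0.165880, 0.154304, 0.227109],
    [0.085910, 0.123208, 0.085876, 0.166496],   # v2, v3 reconstructed from sums
])

f = v @ ws  # f(x) = sum over criteria s of W_s * v_s, per element
```

The tolerances below absorb the five-to-six-digit rounding in the printed tables.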
Table 7. Weight influence matrix A1.

| Layer C | C1 | C2 | C3 |
|---|---|---|---|
| C1 | 0.091131 | 0.234122 | 0.195028 |
| C2 | 0.245815 | 0.652143 | 0.717205 |
| C3 | 0.663054 | 0.113735 | 0.087768 |
Table 8. Unweighted supermatrix W1.

| | E11 | E12 | E13 | E21 | E22 | E23 | E31 | E32 | E33 |
|---|---|---|---|---|---|---|---|---|---|
| E11 | 0.00000 | 0.12500 | 0.14286 | 0.13516 | 0.23412 | 0.36364 | 0.59363 | 0.62501 | 0.58719 |
| E12 | 0.75000 | 0.00000 | 0.85714 | 0.36689 | 0.65214 | 0.09091 | 0.15706 | 0.13650 | 0.09587 |
| E13 | 0.25000 | 0.87500 | 0.00000 | 0.49795 | 0.11374 | 0.54546 | 0.24931 | 0.23849 | 0.31694 |
| E21 | 0.54981 | 0.14286 | 0.64833 | 0.00000 | 0.75000 | 0.75000 | 0.40000 | 0.39580 | 0.28538 |
| E22 | 0.36806 | 0.71429 | 0.12202 | 0.20000 | 0.00000 | 0.25000 | 0.06667 | 0.50582 | 0.08643 |
| E23 | 0.08213 | 0.14286 | 0.22965 | 0.80000 | 0.25000 | 0.00000 | 0.53333 | 0.09838 | 0.62820 |
| E31 | 0.11141 | 0.12854 | 0.23183 | 0.33333 | 0.21227 | 0.09823 | 0.00000 | 0.16667 | 0.20000 |
| E32 | 0.20633 | 0.21227 | 0.58417 | 0.33333 | 0.65920 | 0.70123 | 0.83333 | 0.83333 | 0.00000 |
| E33 | 0.68225 | 0.65920 | 0.18400 | 0.33333 | 0.12854 | 0.20054 | 0.16667 | 0.00000 | 0.80000 |
Table 9. Weighted supermatrix with P1 as criterion.

| | E11 | E12 | E13 | E21 | E22 | E23 | E31 | E32 | E33 |
|---|---|---|---|---|---|---|---|---|---|
| E11 | 0.00000 | 0.01139 | 0.01302 | 0.03165 | 0.05481 | 0.08514 | 0.11578 | 0.12189 | 0.11452 |
| E12 | 0.06835 | 0.00000 | 0.07811 | 0.08590 | 0.15268 | 0.02128 | 0.03063 | 0.02662 | 0.01870 |
| E13 | 0.02279 | 0.07974 | 0.00000 | 0.11658 | 0.02663 | 0.12770 | 0.04862 | 0.04651 | 0.06181 |
| E21 | 0.13515 | 0.03512 | 0.15937 | 0.00000 | 0.48911 | 0.48911 | 0.28688 | 0.28387 | 0.20467 |
| E22 | 0.09048 | 0.17558 | 0.02999 | 0.13043 | 0.00000 | 0.16304 | 0.04781 | 0.36277 | 0.06199 |
| E23 | 0.02019 | 0.03512 | 0.05645 | 0.52171 | 0.16304 | 0.00000 | 0.38251 | 0.07056 | 0.45055 |
| E31 | 0.07387 | 0.08523 | 0.15371 | 0.03791 | 0.02414 | 0.01117 | 0.00000 | 0.01463 | 0.01755 |
| E32 | 0.13681 | 0.14074 | 0.38734 | 0.03791 | 0.07497 | 0.07975 | 0.07314 | 0.07314 | 0.00000 |
| E33 | 0.45237 | 0.43708 | 0.12200 | 0.03791 | 0.01462 | 0.02281 | 0.01463 | 0.00000 | 0.07021 |
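Table 9 follows from scaling each entry of the unweighted supermatrix (Table 8) by the Table 7 cluster weight of the corresponding row/column cluster pair. A sketch that reproduces a few printed entries (values transcribed from Tables 7 and 8):

```python
import numpy as np

# Cluster weight matrix A1 (Table 7): rows/columns are clusters C1..C3.
a1 = np.array([
    [0.091131, 0.234122, 0.195028],
    [0.245815, 0.652143, 0.717205],
    [0.663054, 0.113735, 0.087768],
])

# Unweighted supermatrix W1 (Table 8), elements E11..E13, E21..E23, E31..E33.
w1 = np.array([
    [0.00000, 0.12500, 0.14286, 0.13516, 0.23412, 0.36364, 0.59363, 0.62501, 0.58719],
    [0.75000, 0.00000, 0.85714, 0.36689, 0.65214, 0.09091, 0.15706, 0.13650, 0.09587],
    [0.25000, 0.87500, 0.00000, 0.49795, 0.11374, 0.54546, 0.24931, 0.23849, 0.31694],
    [0.54981, 0.14286, 0.64833, 0.00000, 0.75000, 0.75000, 0.40000, 0.39580, 0.28538],
    [0.36806, 0.71429, 0.12202, 0.20000, 0.00000, 0.25000, 0.06667, 0.50582, 0.08643],
    [0.08213, 0.14286, 0.22965, 0.80000, 0.25000, 0.00000, 0.53333, 0.09838, 0.62820],
    [0.11141, 0.12854, 0.23183, 0.33333, 0.21227, 0.09823, 0.00000, 0.16667, 0.20000],
    [0.20633, 0.21227, 0.58417, 0.33333, 0.65920, 0.70123, 0.83333, 0.83333, 0.00000],
    [0.68225, 0.65920, 0.18400, 0.33333, 0.12854, 0.20054, 0.16667, 0.00000, 0.80000],
])

cluster = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])   # cluster index of each element
weighted = a1[np.ix_(cluster, cluster)] * w1       # entrywise cluster scaling
# weighted should match Table 9, e.g. weighted[0, 7] ~ 0.12189 for (E11, E32).
```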
Table 10. Limit supermatrix with P1 as criterion.

Every column of the limit supermatrix is identical; the common column (the global priority vector under P1) is:

| Element | Limit weight |
|---|---|
| E11 | 0.06024 |
| E12 | 0.05993 |
| E13 | 0.07726 |
| E21 | 0.24412 |
| E22 | 0.12432 |
| E23 | 0.21758 |
| E31 | 0.03899 |
| E32 | 0.08591 |
| E33 | 0.09167 |
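The limit supermatrix is obtained by raising the weighted supermatrix to successive powers until its columns stabilize. A sketch using the Table 9 matrix (only structural convergence is asserted here, since the printed matrix carries five decimals of rounding):

```python
import numpy as np

# Weighted supermatrix under P1 (Table 9), elements E11..E33.
w = np.array([
    [0.00000, 0.01139, 0.01302, 0.03165, 0.05481, 0.08514, 0.11578, 0.12189, 0.11452],
    [0.06835, 0.00000, 0.07811, 0.08590, 0.15268, 0.02128, 0.03063, 0.02662, 0.01870],
    [0.02279, 0.07974, 0.00000, 0.11658, 0.02663, 0.12770, 0.04862, 0.04651, 0.06181],
    [0.13515, 0.03512, 0.15937, 0.00000, 0.48911, 0.48911, 0.28688, 0.28387, 0.20467],
    [0.09048, 0.17558, 0.02999, 0.13043, 0.00000, 0.16304, 0.04781, 0.36277, 0.06199],
    [0.02019, 0.03512, 0.05645, 0.52171, 0.16304, 0.00000, 0.38251, 0.07056, 0.45055],
    [0.07387, 0.08523, 0.15371, 0.03791, 0.02414, 0.01117, 0.00000, 0.01463, 0.01755],
    [0.13681, 0.14074, 0.38734, 0.03791, 0.07497, 0.07975, 0.07314, 0.07314, 0.00000],
    [0.45237, 0.43708, 0.12200, 0.03791, 0.01462, 0.02281, 0.01463, 0.00000, 0.07021],
])

limit = np.linalg.matrix_power(w, 100)
limit /= limit.sum(axis=0, keepdims=True)  # renormalize away rounding drift
priorities = limit[:, 0]                   # every column is (near) identical at the limit
# priorities should be close to the Table 10 column (~0.0602, 0.0599, 0.0773, ...).
```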
Table 11. Weighted supermatrix with P2 as criterion.

| | E11 | E12 | E13 | E21 | E22 | E23 | E31 | E32 | E33 |
|---|---|---|---|---|---|---|---|---|---|
| E11 | 0.00000 | 0.06374 | 0.05099 | 0.07489 | 0.06478 | 0.13204 | 0.16355 | 0.05805 | 0.06324 |
| E12 | 0.08499 | 0.00000 | 0.20398 | 0.35627 | 0.37142 | 0.33271 | 0.04881 | 0.48637 | 0.07227 |
| E13 | 0.16998 | 0.19123 | 0.00000 | 0.13838 | 0.13334 | 0.10479 | 0.42907 | 0.09701 | 0.50592 |
| E21 | 0.00727 | 0.04162 | 0.03459 | 0.00000 | 0.01948 | 0.01947 | 0.05715 | 0.00763 | 0.02990 |
| E22 | 0.05786 | 0.00826 | 0.03459 | 0.01623 | 0.00000 | 0.07791 | 0.01248 | 0.02334 | 0.00947 |
| E23 | 0.01096 | 0.02622 | 0.00692 | 0.08116 | 0.07791 | 0.00000 | 0.03271 | 0.07136 | 0.06296 |
| E31 | 0.10059 | 0.39710 | 0.31217 | 0.12112 | 0.05878 | 0.09315 | 0.00000 | 0.04270 | 0.04270 |
| E32 | 0.48487 | 0.10506 | 0.31217 | 0.18167 | 0.23511 | 0.03118 | 0.05124 | 0.00000 | 0.21351 |
| E33 | 0.08347 | 0.16677 | 0.04460 | 0.03028 | 0.03919 | 0.20873 | 0.20497 | 0.21351 | 0.00000 |
Table 12. Weighted supermatrix with P3 as criterion.

| | E11 | E12 | E13 | E21 | E22 | E23 | E31 | E32 | E33 |
|---|---|---|---|---|---|---|---|---|---|
| E11 | 0.00000 | 0.01075 | 0.05161 | 0.04254 | 0.01355 | 0.01109 | 0.00836 | 0.07220 | 0.02568 |
| E12 | 0.00716 | 0.00000 | 0.01290 | 0.01157 | 0.03655 | 0.10368 | 0.07711 | 0.01011 | 0.01050 |
| E13 | 0.05735 | 0.05376 | 0.00000 | 0.09458 | 0.09859 | 0.03391 | 0.03219 | 0.03535 | 0.08147 |
| E21 | 0.02101 | 0.09576 | 0.01341 | 0.00000 | 0.38918 | 0.07795 | 0.08955 | 0.04445 | 0.37808 |
| E22 | 0.02101 | 0.01637 | 0.10055 | 0.35080 | 0.00000 | 0.38978 | 0.33849 | 0.36199 | 0.14016 |
| E23 | 0.12610 | 0.05600 | 0.05416 | 0.11693 | 0.07795 | 0.00000 | 0.14216 | 0.16376 | 0.05196 |
| E31 | 0.19660 | 0.19944 | 0.12302 | 0.05235 | 0.24572 | 0.24349 | 0.00000 | 0.24969 | 0.20807 |
| E32 | 0.49220 | 0.31660 | 0.53010 | 0.23997 | 0.02562 | 0.02990 | 0.26009 | 0.00000 | 0.10403 |
| E33 | 0.07853 | 0.25128 | 0.11420 | 0.09147 | 0.11221 | 0.11015 | 0.05520 | 0.06242 | 0.00000 |
Table 13. Weighted supermatrix with P4 as criterion.

| | E11 | E12 | E13 | E21 | E22 | E23 | E31 | E32 | E33 |
|---|---|---|---|---|---|---|---|---|---|
| E11 | 0.00000 | 0.19008 | 0.15206 | 0.04059 | 0.11356 | 0.11356 | 0.01779 | 0.08157 | 0.02616 |
| E12 | 0.04562 | 0.00000 | 0.07603 | 0.02428 | 0.02480 | 0.02480 | 0.07062 | 0.01618 | 0.02068 |
| E13 | 0.18248 | 0.03801 | 0.00000 | 0.20348 | 0.13000 | 0.13000 | 0.11211 | 0.10278 | 0.15824 |
| E21 | 0.02894 | 0.01622 | 0.01018 | 0.00000 | 0.05861 | 0.05861 | 0.05401 | 0.05485 | 0.03890 |
| E22 | 0.00727 | 0.04866 | 0.04663 | 0.10047 | 0.05861 | 0.05861 | 0.02359 | 0.01191 | 0.01029 |
| E23 | 0.03839 | 0.00973 | 0.01779 | 0.01674 | 0.00000 | 0.00000 | 0.02061 | 0.03139 | 0.04902 |
| E31 | 0.41393 | 0.30194 | 0.19075 | 0.28672 | 0.20120 | 0.20120 | 0.00000 | 0.56098 | 0.56098 |
| E32 | 0.10951 | 0.32526 | 0.45697 | 0.28672 | 0.15969 | 0.15969 | 0.35061 | 0.00000 | 0.14024 |
| E33 | 0.17384 | 0.07007 | 0.04955 | 0.04096 | 0.25350 | 0.25350 | 0.35061 | 0.14024 | 0.00000 |
Table 14. Direct relationship matrix D and comprehensive impact matrix T (each cell: D / T).

| | P1 | P2 | P3 |
|---|---|---|---|
| P1 | 0.0 / 1.71 | 2.0 / 1.68 | 2.2 / 1.77 |
| P2 | 2.3 / 1.93 | 0.0 / 1.56 | 2.1 / 1.81 |
| P3 | 2.5 / 1.99 | 2.2 / 1.77 | 0.0 / 1.70 |
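In DEMATEL, the direct-relation matrix D is normalized and the total (comprehensive) impact matrix is T = N(I − N)⁻¹. A sketch using the D values of Table 14; the normalization constant below (maximum row sum) is one common convention and may differ from the authors' choice, so this T need not reproduce the table's digits exactly:

```python
import numpy as np

# Direct-relation matrix D among P1..P3 (left value in each Table 14 cell).
d = np.array([
    [0.0, 2.0, 2.2],
    [2.3, 0.0, 2.1],
    [2.5, 2.2, 0.0],
])

n = d / d.sum(axis=1).max()               # normalize by the maximum row sum
t = n @ np.linalg.inv(np.eye(3) - n)      # total-relation matrix: N + N^2 + N^3 + ...
influence = t.sum(axis=1) + t.sum(axis=0) # prominence (R + C) of each criterion
```

The identity T = N + N·T (the series folded once) holds regardless of the normalization chosen, which makes it a handy self-check.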
Table 15. Failure rate of each unit.

| Component | E11 | E12 | E13 | E21 | E22 | E23 | E31 | E32 | E33 |
|---|---|---|---|---|---|---|---|---|---|
| λi (10⁻⁶) | 120 | 70 | 50 | 100 | 60 | 80 | 100 | 40 | 60 |
Table 16. Allocation and validation of indicators for each unit in MIRFS.

| Unit | FDR SWM | FDR EDM | FDR HDBA | FDR MCDM | FIR SWM | FIR EDM | FIR HDBA | FIR MCDM | FAR SWM | FAR EDM | FAR HDBA | FAR MCDM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| E11 | 0.9649 | 0.9615 | 0.9515 | 0.9698 | 0.9414 | 0.9418 | 0.9273 | 0.9512 | 0.0574 | 0.0561 | 0.0606 | 0.0511 |
| E12 | 0.9562 | 0.9615 | 0.9586 | 0.9601 | 0.9414 | 0.9418 | 0.9378 | 0.9465 | 0.0574 | 0.0561 | 0.0518 | 0.0501 |
| E13 | 0.9604 | 0.9615 | 0.9547 | 0.9779 | 0.9416 | 0.9418 | 0.9321 | 0.9405 | 0.0574 | 0.0561 | 0.0566 | 0.0543 |
| E21 | 0.9584 | 0.9615 | 0.9674 | 0.9761 | 0.9416 | 0.9418 | 0.9510 | 0.9783 | 0.0574 | 0.0561 | 0.0408 | 0.0309 |
| E22 | 0.9615 | 0.9615 | 0.9645 | 0.9729 | 0.9418 | 0.9418 | 0.9467 | 0.9544 | 0.0574 | 0.0561 | 0.0444 | 0.0397 |
| E23 | 0.9615 | 0.9615 | 0.9588 | 0.9898 | 0.9418 | 0.9418 | 0.9381 | 0.9519 | 0.0574 | 0.0561 | 0.0516 | 0.0506 |
| E31 | 0.9587 | 0.9615 | 0.9572 | 0.9618 | 0.9421 | 0.9418 | 0.9358 | 0.9477 | 0.0574 | 0.0561 | 0.0535 | 0.0459 |
| E32 | 0.9627 | 0.9615 | 0.9694 | 0.9789 | 0.9421 | 0.9418 | 0.9541 | 0.9710 | 0.0574 | 0.0561 | 0.0382 | 0.0377 |
| E33 | 0.9669 | 0.9615 | 0.9662 | 0.9711 | 0.9423 | 0.9418 | 0.9493 | 0.9411 | 0.0574 | 0.0561 | 0.0422 | 0.0414 |
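One way to read Table 16 is to roll each unit-level FDR allocation back up to the system level using the failure rates of Table 15. The failure-rate-weighted aggregation below (system FDR ≈ Σλᵢ·FDRᵢ / Σλᵢ) is the usual convention for such roll-ups, but it is an assumption of this sketch rather than a formula quoted from the paper; the method acronyms SWM, EDM, and HDBA are not expanded in this excerpt:

```python
import numpy as np

lam = np.array([120, 70, 50, 100, 60, 80, 100, 40, 60], dtype=float)  # Table 15

# FDR allocated to units E11..E33 by each method (FDR columns of Table 16).
fdr = {
    "SWM":  [0.9649, 0.9562, 0.9604, 0.9584, 0.9615, 0.9615, 0.9587, 0.9627, 0.9669],
    "EDM":  [0.9615] * 9,   # equal allocation to every unit
    "HDBA": [0.9515, 0.9586, 0.9547, 0.9674, 0.9645, 0.9588, 0.9572, 0.9694, 0.9662],
    "MCDM": [0.9698, 0.9601, 0.9779, 0.9761, 0.9729, 0.9898, 0.9618, 0.9789, 0.9711],
}

def system_fdr(alloc):
    """Failure-rate-weighted roll-up of unit-level FDRs to the system level."""
    return float(lam @ np.asarray(alloc) / lam.sum())

results = {method: system_fdr(alloc) for method, alloc in fdr.items()}
```

Under this aggregation the MCDM allocation yields the highest system-level FDR of the four columns, consistent with the paper's claim that the MCDM result outperforms the traditional methods.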

Share and Cite

Zhang, C.; Huang, Y.; Zhou, D.; Dong, Z.; He, S.; Zhou, Z. A MCDM-Based Analysis Method of Testability Allocation for Multi-Functional Integrated RF System. Electronics 2024, 13, 3618. https://doi.org/10.3390/electronics13183618
