Article

Sequential-Fault Diagnosis Strategy for High-Speed Train Traction Systems Based on Unreliable Tests

Mengwei Li, Ying Zhou, Limin Jia, Yong Qin and Zhipeng Wang
1 State Key Laboratory of Advanced Rail Autonomous Operation, Beijing Jiaotong University, Beijing 100044, China
2 Beijing Research Center of Urban Traffic Information Sensing and Service Technologies, Beijing Jiaotong University, Beijing 100044, China
3 Frontiers Science Center for Smart High-Speed Railway System, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2023, 13(14), 8226; https://doi.org/10.3390/app13148226
Submission received: 19 April 2023 / Revised: 22 May 2023 / Accepted: 9 June 2023 / Published: 15 July 2023

Abstract

A train traction system is an important part of an urban rail transit system. However, it has many components and a high risk of internal faults. How to systematically evaluate the fault coverage and diagnostic ability of testing equipment is a fundamental problem in the field of train operation. In response to this problem, this study applies testability technology to the test capability analysis of train traction systems for rail transit. In view of the uncertainty in actual tests, a method for constructing a fault diagnosis strategy for a traction system under unreliable testing is proposed. The concept of test reliability is introduced for the first time, and its quantitative evaluation is realized using a cloud model, so as to construct a new "fault–test" reliability dependency matrix. On this basis, a single-fault diagnosis strategy for the traction system is constructed and compared based on information theory. The results show that a fault diagnosis strategy constructed under unreliable testing is closer to actual maintenance work, demonstrating the practical engineering value of the diagnosis strategy constructed using this method.

1. Introduction

High-speed trains have become the mainstream form of intercity transportation because of their comfort, convenience, safety, and punctuality. With the rapid growth of rail transit passenger flow, the requirements for train safety have become more stringent, and research on train safety issues [1,2] is growing accordingly. The train information control system is the core of a high-speed train, and the traction system is the "heart" of that control system. A major traction failure can cause great social and economic losses. Therefore, real-time monitoring and testing of the traction system is an important technological means of ensuring the safe operation of high-speed trains, and evaluating the test performance of the traction system is essential.
Testability is the evaluation of testing capability; it refers to the ability of a system to accurately and quickly determine its state, for example, whether it can work or whether its performance has degraded, and to isolate faults using knowledge of its internal structure. A system with good testability can detect and locate faults efficiently and accurately. Testability is widely applied across fields, including fault diagnosis [3,4] and the analysis of power devices [5,6]. Design for testability (DFT) emerged alongside testing itself; applying DFT [7] can make a bus controller work more reliably and stably. One DFT method is scan design [8,9,10,11], which can effectively improve the reliability of testing. Testability optimization addresses shortcomings in a system's testability by taking appropriate measures to meet its testability requirements; examples include a flexible test system (FTS) with a phased mission model [12], a hybrid algorithm combining a priority strategy with a genetic algorithm [13], the Non-dominated Sorting Genetic Algorithm II (NSGA-II) [14], and a pullout test optimization device based on mechanical design principles [15].
Building an appropriate fault diagnosis strategy for a rail transit train traction system, via the testability analysis and design of trains, provides a strong basis for train maintenance. Fault diagnosis [16,17] is key to ensuring safe railway operation. Diagnosis strategy construction is a core part of testability technology; its goal is to determine the test sequence used for fault detection and fault isolation. For example, an error-voltage-based open-switch fault diagnosis strategy [18] can locate a faulty switch, and an improved slight-fault diagnosis strategy for induction motors considering even and triple harmonics [19] can enhance diagnosis accuracy under slight-fault conditions; both reflect the importance of well-constructed diagnosis strategies. Because the traction system is large and its internal electrical coupling is complex, uncertain factors inevitably arise in the test process, making test results unreliable. However, traditional testability analysis [20,21] assumes accurate results. Methods such as testability analysis based on testability modeling [22,23,24,25,26] and on multi-signal modeling [27,28,29,30,31] cannot account for these test uncertainties, and thus have clear disadvantages for constructing a fault diagnosis strategy for high-speed train traction systems. Therefore, it is necessary to study how to construct a fault diagnosis strategy for high-speed train traction systems based on unreliable tests.
In order to meet the needs of system fault diagnosis in the field of rail transit, this study applies testability technology to the testing capability analysis of rail transit train traction systems. To address uncertainty in testing, it puts forward the concept of test reliability, quantitatively evaluates the test reliability of train traction systems, constructs a fault diagnosis strategy suitable for high-speed train traction systems, and systematically and objectively evaluates the fault coverage and diagnostic ability of testing equipment. These steps are essential for improving both maintenance support efficiency and the fault diagnosis ability of the traction system. The construction process of a sequential-fault diagnosis strategy based on unreliable tests for high-speed train traction systems is shown in Figure 1.
The main innovative contributions of this paper are summarized as follows:
  • Applying testability analysis to the testing capability analysis of train traction systems provides new ideas for evaluating the fault coverage and diagnostic capability of train traction system detection equipment, and provides new methods for improving the maintenance efficiency of train traction systems and constructing reasonable diagnostic strategies.
  • In the testability modeling, the uncertainty of the test is considered, and the concept of test reliability is proposed, describing the probability that the test can accurately detect the failure of the system.
  • The single-fault diagnosis strategy of the traction system is constructed based on information theory, which can more reasonably and efficiently meet the diagnosis requirements in actual scenarios.
The remainder of this paper is organized as follows: Section 2 further describes the problem, Section 3 details the proposed method, Section 4 presents the case study, and Section 5 concludes the paper.

2. Problem Statement

2.1. Traction System

In this paper, the era electric traction system of a certain China Railway High-speed (CRH) train is adopted, and the main circuit of traction is shown in Figure 2.
The traction system is the core component of urban rail transit vehicles. It provides traction and braking force for the train according to traction and braking needs. It converts the electric energy provided by the power grid into three-phase AC energy for the motor, and then the traction motor converts the electric energy into mechanical energy to drive the train. According to the system maintenance manual and related materials, the main components included in the system are high-voltage electrical box HV011/HV02, reactor box L, Variable-Voltage–Variable-Frequency (VVVF) inverter box, etc. Via the analysis of the traction system, it can be seen that the electrical relationship of the traction system is very complex, and there are inevitably some uncertain factors in the test process.

2.2. Test Uncertainty Analysis

A traditional testability analysis is carried out based on reliable testing, but in practical engineering applications, unreliable testing of a traction system is common and greatly affects the judgment of equipment testing. Because the traction system is large, has many components, and has complex electrical coupling relationships, various factors, such as the environment and operating conditions, interfere with its tests.
$$ft = \begin{bmatrix} ft_{11} & ft_{12} & \cdots & ft_{1n} \\ ft_{21} & ft_{22} & \cdots & ft_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ ft_{m1} & ft_{m2} & \cdots & ft_{mn} \end{bmatrix}$$
where the rows are indexed by the faults $F_1, \dots, F_m$ and the columns by the tests $T_1, \dots, T_n$.
In the fault–test dependency matrix $ft$, the elements $ft_{ij}$ are Boolean. $ft_{ij} = 1$ means that the fault $f_i$ is related to test $t_j$, i.e., $t_j$ can detect and isolate $f_i$; $ft_{ij} = 0$ means that $f_i$ is not related to $t_j$, i.e., $t_j$ cannot detect or isolate $f_i$. The $i$th row vector $F_i = [ft_{i1}\ ft_{i2}\ \cdots\ ft_{in}]$ represents the output of all available tests when the fault $f_i$ exists, describing the symptoms of $f_i$. The $j$th column vector $T_j = [ft_{1j}\ ft_{2j}\ \cdots\ ft_{mj}]^T$ represents the faults that the available test $t_j$ can detect and isolate, describing the fault detection and isolation capability of $t_j$. When the test results are unreliable, they can take three forms: normal, missed detection, and false alarm. Missed detection and false alarm are essentially different manifestations of unreliable testing.
The unreliability of the traction system test results shows the uncertainty of the traction system test, which means that the test process is affected to a certain extent; thus, it is necessary to analyze which factors affect the test.
Therefore, this paper puts forward the concept of test reliability to uniformly describe the reliability of test results, avoiding the need to classify and analyze each form of result produced by unreliable traction system tests. Test reliability can be understood as the probability that a test accurately detects the occurrence or non-occurrence of a system failure.

3. Methodology

3.1. Evaluation Index of Test Reliability

The analysis of influencing factors is the basis of the quantitative evaluation of test reliability. Therefore, when constructing the quantitative evaluation model, the key is to select appropriate influencing factors as evaluation indexes. Based on a field investigation of historical faults and maintenance records of a train traction system, and after consultation with maintenance staff, we identified five main factors that influence testing, and the quantitative evaluation indexes of test reliability are constructed from these five aspects.
The evaluation index standard of test reliability is shown in Table 1. The influencing factors of the test mainly include five aspects: the influence of test error, the rationality of threshold setting, the influence of environmental noise, the influence of sensor reliability, and working condition disturbance.

3.2. Evaluation Method of Test Reliability Based on the Cloud Model

(1)
Basic theory of the cloud model
The cloud model describes the transformation between a qualitative concept and its quantitative value based on probability theory and fuzzy mathematics. A cloud is characterized by three digital features $A = (E_x, E_n, H_e)$: the expectation $E_x$, the entropy $E_n$, and the hyper-entropy $H_e$.
(2)
Determination of the evaluation set
We divide each index into five levels of linguistic evaluation, which together form an evaluation set. Assuming that the domain of an index in the evaluation set is $U = [X_{\min}, X_{\max}]$, five clouds are generated in the domain $U$ according to the linguistic scale of the index via the normal cloud model; these clouds represent the evaluation levels and serve as the cloud scale.
Considering the detection accuracy of the sensor, we set the index universe as $U = [90, 100]$ and divide it into five rating ranges using the golden section method. The hyper-entropy $H_{e0}$ of the central cloud is taken as 0.05, so the five cloud models are $C_1 = (90, 1.031, 0.13)$, $C_2 = (93.09, 0.64, 0.08)$, $C_3 = (95, 0.39, 0.05)$, $C_4 = (96.91, 0.64, 0.08)$, and $C_5 = (100, 1.031, 0.13)$. We set the number of cloud droplets $N$ to 5000, and the generated cloud map is shown in Figure 3; from left to right, the clouds are $C_1$ to $C_5$.
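For concreteness, the following minimal sketch shows how such a set of normal clouds can be produced with a forward normal cloud generator (Python; the function name and data layout are our own illustrative choices, only the cloud parameters and droplet count come from the text):

```python
import numpy as np

def normal_cloud(Ex, En, He, n=5000, rng=None):
    """Forward normal cloud generator: draw n droplets (position, certainty degree)."""
    rng = np.random.default_rng(0) if rng is None else rng
    En_prime = rng.normal(En, He, n)                    # entropy perturbed by hyper-entropy He
    x = rng.normal(Ex, np.abs(En_prime), n)             # droplet positions around expectation Ex
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))   # certainty degree of each droplet
    return x, mu

# The five evaluation clouds on the universe U = [90, 100] given in the text
clouds = [(90, 1.031, 0.13), (93.09, 0.64, 0.08), (95, 0.39, 0.05),
          (96.91, 0.64, 0.08), (100, 1.031, 0.13)]
droplets = [normal_cloud(Ex, En, He) for (Ex, En, He) in clouds]  # data behind Figure 3
```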
(3)
Determine expert weights
We determined the expert weights by judging the similarity between the evaluation information given by each maintenance technician and that given by all other maintenance technicians. Cloud similarity is used to calculate the similarity between the technicians' evaluations, and the weights are then computed. Cloud similarity can be understood as follows: assuming that two evaluation clouds are $C_1 = (E_{x1}, E_{n1}, H_{e1})$ and $C_2 = (E_{x2}, E_{n2}, H_{e2})$, their three digital features constitute the vectors $\vec{C}_1$ and $\vec{C}_2$, and the cosine of the angle between these vectors is taken as the similarity of the two clouds.
That is, for $C_1 = (E_{x1}, E_{n1}, H_{e1})$ and $C_2 = (E_{x2}, E_{n2}, H_{e2})$, the similarity is $(C_1, C_2) = \cos(\vec{C}_1, \vec{C}_2)$.
Assuming that there are $k$ maintenance technicians, the cloud evaluation matrix of each technician can be expressed as $E^k = (E_{ij}^k)_{n \times m}$ ($k = 1, 2, \dots$); the evaluation similarity of the $u$th maintenance technician is then calculated as:
$$S_u = \sum_{v \ne u} \sum_{i=1}^{n} \sum_{j=1}^{m} (E_{ij}^{u}, E_{ij}^{v}), \quad v = 1, 2, \dots, k$$
where $(E_{ij}^{u}, E_{ij}^{v})$ represents the cloud similarity of the corresponding clouds in the evaluation matrices of different maintenance technicians.
After obtaining the evaluation similarity of each maintenance technician, the expert weight can be determined:
$$P_u = S_u \Big/ \sum_{u=1}^{k} S_u$$
where k represents the number of maintenance technicians.
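A minimal sketch of this weighting step might look as follows (Python; the data layout, one array of shape (n, m, 3) per technician, and the function names are assumptions made for illustration):

```python
import numpy as np

def cloud_similarity(c1, c2):
    """Cosine of the angle between the digital-feature vectors (Ex, En, He) of two clouds."""
    v1, v2 = np.asarray(c1, float), np.asarray(c2, float)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def expert_weights(eval_matrices):
    """eval_matrices: one array of shape (n, m, 3) per technician, holding (Ex, En, He)
    for every test/index cell. Returns the normalized expert weights P_u."""
    k = len(eval_matrices)
    S = np.zeros(k)
    for u in range(k):
        for v in range(k):
            if v == u:
                continue
            # accumulate cell-by-cell cloud similarity between technician u and technician v
            n, m, _ = eval_matrices[u].shape
            for i in range(n):
                for j in range(m):
                    S[u] += cloud_similarity(eval_matrices[u][i, j], eval_matrices[v][i, j])
    return S / S.sum()
```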
(4)
Construct a floating cloud evaluation matrix
After obtaining the evaluation cloud model of each test under each index and the evaluation weight of each maintenance technician, the evaluation information of all maintenance technicians can be aggregated using the floating cloud model. Assuming that $k$ maintenance technicians participate in the evaluation and the weight of each is $P_k$, the comprehensive cloud $C = (E_x, E_n, H_e)$ is calculated as:
$$E_x = \frac{P_1 E_{x1} + P_2 E_{x2} + \cdots + P_k E_{xk}}{P_1 + P_2 + \cdots + P_k}$$
$$E_n = \frac{P_1^2}{\sum_{i=1}^{k} P_i^2} E_{n1} + \frac{P_2^2}{\sum_{i=1}^{k} P_i^2} E_{n2} + \cdots + \frac{P_k^2}{\sum_{i=1}^{k} P_i^2} E_{nk}$$
$$H_e = \frac{P_1^2}{\sum_{i=1}^{k} P_i^2} H_{e1} + \frac{P_2^2}{\sum_{i=1}^{k} P_i^2} H_{e2} + \cdots + \frac{P_k^2}{\sum_{i=1}^{k} P_i^2} H_{ek}$$
Finally, the cloud evaluation matrix is obtained:
$$C = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1m} \\ C_{21} & C_{22} & \cdots & C_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nm} \end{bmatrix}, \quad C_{ij} = (E_{x,ij}, E_{n,ij}, H_{e,ij})$$
where the rows correspond to the tests $t_1, \dots, t_n$ and the columns to the evaluation indexes $B_1, \dots, B_m$.
where $t = (t_1, t_2, \dots, t_n)$ represents the test set, and $B = (B_1, B_2, \dots, B_m)$ represents the evaluation index set.
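The aggregation of the individual expert clouds into the comprehensive (floating) cloud of one test/index cell can be sketched as follows (an illustrative Python helper implementing the three formulas above; the function name is ours):

```python
import numpy as np

def floating_cloud(expert_clouds, weights):
    """Merge the k expert clouds (Ex, En, He) of one test/index cell into the
    comprehensive cloud using the floating-cloud formulas above."""
    c = np.asarray(expert_clouds, float)   # shape (k, 3): columns Ex, En, He
    p = np.asarray(weights, float)         # expert weights P_1..P_k
    Ex = (p * c[:, 0]).sum() / p.sum()
    En = (p ** 2 * c[:, 1]).sum() / (p ** 2).sum()
    He = (p ** 2 * c[:, 2]).sum() / (p ** 2).sum()
    return Ex, En, He
```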
(5)
Determine the weight of the evaluation indicators
When determining the index weight, we used the Analytic Hierarchy Process (AHP) method. The calculation steps are as follows:
(1) Construct a judgment matrix
The nine-scale method is a commonly used method that is suitable for determining the relative importance between indicators. The judgment matrix of pairwise comparisons can be expressed as:
$$B = \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{nn} \end{bmatrix}$$
where $b_{ij}$ ($1 \le i \le n$, $1 \le j \le n$) represents the relative importance of index $B_i$ with respect to index $B_j$. The elements of the judgment matrix satisfy the following relationships, including the reciprocal relation between $b_{ij}$ and $b_{ji}$:
$$b_{ij} \cdot b_{ji} = 1, \qquad b_{ii} = 1, \qquad b_{ij} > 0$$
For example, if the decision maker considers index $B_i$ to be much more important than index $B_j$, then $b_{ij} = 7$.
(2) Weight calculation
After obtaining the relative importance judgment matrix $B$, we calculate its maximum eigenvalue $\lambda_{\max}$ and its normalized eigenvector $W = (\omega_1, \omega_2, \dots, \omega_n)^T$. Each $\omega_i$ in the vector is the subjective weight of the corresponding index.
Here, the square root method is used to calculate $\lambda_{\max}$ and $W$. First, the weight of each index is calculated; on this basis, $\lambda_{\max}$ is calculated for the consistency test. The steps are as follows:
① The product of the elements in each row of the judgment matrix $B$ can be expressed as:
$$M_i = \prod_{j=1}^{n} b_{ij} \quad (i = 1, 2, \dots, n)$$
② The $n$th root of $M_i$ can be expressed as:
$$\overline{W_i} = \sqrt[n]{M_i} \quad (i = 1, 2, \dots, n)$$
③ The vector $\overline{W} = (\overline{W_1}, \overline{W_2}, \dots, \overline{W_n})$ is normalized, which can be expressed as:
$$\omega_i = \overline{W_i} \Big/ \sum_{j=1}^{n} \overline{W_j} \quad (i = 1, 2, \dots, n)$$
The obtained $\omega_i$ are the weights of the five evaluation indexes of test reliability.
(3) Consistency test
① The maximum eigenvalue $\lambda_{\max}$ of the judgment matrix can be expressed as:
$$\lambda_{\max} = \sum_{i=1}^{n} \frac{(B\omega)_i}{n\,\omega_i}$$
② The general consistency index $CI$ of $B$ is calculated as follows; the consistency of the matrix is then tested according to the random consistency ratio $CR$:
$$CI = \frac{\lambda_{\max} - n}{n - 1}$$
③ The random consistency ratio $CR$ of the judgment matrix $B$ is:
$$CR = \frac{CI}{RI}$$
where R I represents the average random consistency index of B .
We can obtain the value of R I according to Table 2.
If $CR < 0.1$, the consistency of $B$ is considered acceptable and the weights of the evaluation indexes are reasonable; otherwise, the judgment matrix needs to be re-evaluated.
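The full AHP step, from judgment matrix to weights and consistency check, can be sketched as follows (Python; the square-root method and the RI table follow the text, while the function name is illustrative). The example matrix anticipates the one used in Section 4.2:

```python
import numpy as np

# Average random consistency index RI for 1-9 dimensional matrices (Table 2)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(B):
    """Square-root (geometric-mean) method: index weights, lambda_max, CI and CR."""
    B = np.asarray(B, float)
    n = B.shape[0]
    w = np.prod(B, axis=1) ** (1.0 / n)     # nth root of each row product M_i
    w = w / w.sum()                          # normalized weight vector omega
    lam_max = ((B @ w) / (n * w)).sum()      # estimate of the maximum eigenvalue
    CI = (lam_max - n) / (n - 1)
    CR = CI / RI[n]
    return w, lam_max, CI, CR

# Judgment matrix of the five indexes used later in Section 4.2
B = [[1, 1/3, 1/5, 1/9, 1/7],
     [3, 1,   1/3, 1/7, 1/5],
     [5, 3,   1,   1/4, 1/2],
     [9, 7,   4,   1,   3  ],
     [7, 5,   2,   1/3, 1  ]]
w, lam_max, CI, CR = ahp_weights(B)   # CR comes out below 0.1, so the matrix is consistent
```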
(6)
Quantitative test reliability
Based on the above study, we can obtain the quantitative model of each test.
$$N = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1m} \\ C_{21} & C_{22} & \cdots & C_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nm} \end{bmatrix} \begin{pmatrix} \omega_1 \\ \omega_2 \\ \vdots \\ \omega_m \end{pmatrix}, \quad C_{ij} = (E_{x,ij}, E_{n,ij}, H_{e,ij})$$
where $t = (t_1, t_2, \dots, t_n)$ represents the test set, $B = (B_1, B_2, \dots, B_m)$ represents the evaluation index set, and $W = (\omega_1, \omega_2, \dots, \omega_m)$ represents the index weights.
In the resulting cloud model of each test, the expectation is the value that best represents the qualitative concept, so we take the expected value $E_x$ as the reliability score of each test. Specifying the reliability of each test as $P = \{0 < p_j < 1,\ j = 1, 2, \dots, n\}$, with $p_j = E_x / 100$ (since $E_x$ lies in the universe $[90, 100]$), the dependency matrix can then be transformed into:
$$d_{ij} = \begin{cases} p_j, & ft_{ij} = 1 \\ 1 - p_j, & ft_{ij} = 0 \end{cases}$$
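In code, converting the Boolean dependency matrix into its reliability-weighted counterpart is essentially a one-liner (illustrative Python sketch; the function name is ours):

```python
import numpy as np

def reliability_dependency(ft, p):
    """Turn the Boolean fault-test matrix ft (faults x tests) and the test
    reliabilities p (one value per test) into the matrix d_ij defined above."""
    ft = np.asarray(ft, int)
    p = np.asarray(p, float)
    return np.where(ft == 1, p, 1.0 - p)   # d_ij = p_j if ft_ij = 1, else 1 - p_j
```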

3.3. Optimization Objectives of Fault Diagnosis Strategy

(1)
Problem components
The problem of constructing a fault diagnosis tree via system fault reasoning involves five elements: the system fault set $F$, the fault probability set, the test set $T$, the diagnosis cost set $S$, and the system fault–test dependency matrix $FT$.
(1)
$F = \{f_0, f_1, f_2, \dots, f_m\}$ is the fault set of the system, where $f_0$ indicates that the system is in a normal condition without faults.
(2)
The fault probability set represents the probability of each fault state and is calculated from the component failure rates $\lambda_k$; its elements satisfy $p(f_0) + p(f_1) + \cdots + p(f_m) = 1$, where $p(f_0)$ is the probability that no fault is present in the system (a computational sketch of this conversion follows the list below):
$$p(f_0) = \frac{\prod_{k=1}^{m} (1 - \lambda_k)}{\prod_{k=1}^{m} (1 - \lambda_k) + \sum_{k=1}^{m} \lambda_k \prod_{j=1, j \ne k}^{m} (1 - \lambda_j)} = \frac{1}{1 + \sum_{k=1}^{m} \lambda_k / (1 - \lambda_k)}$$
$$p(f_i) = \frac{\lambda_i \prod_{k=1, k \ne i}^{m} (1 - \lambda_k)}{\prod_{k=1}^{m} (1 - \lambda_k) + \sum_{i=1}^{m} \lambda_i \prod_{k=1, k \ne i}^{m} (1 - \lambda_k)} = \frac{\lambda_i / (1 - \lambda_i)}{1 + \sum_{k=1}^{m} \lambda_k / (1 - \lambda_k)}, \quad i \ne 0$$
(3)
$T = \{t_1, t_2, \dots, t_n\}$ is the available test set.
(4)
$S = \{s_1, s_2, \dots, s_n\}$ represents the cost of the tests used to isolate a fault when it occurs, i.e., the fault diagnosis cost.
(5)
$FT = [ft_{ij}]_{(m+1) \times n}$ is the system fault–test dependency matrix. Considering the reliability of the tests, it can be transformed into the reliability-based fault–test dependency matrix $[d_{ij}]_{(m+1) \times n}$.
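As referenced in element (2) of the list above, the conversion from component failure rates to the single-fault state probabilities can be sketched as follows (illustrative Python; the function name is ours):

```python
import numpy as np

def single_fault_probabilities(failure_rates):
    """Convert component failure rates lambda_k into the single-fault prior:
    p0 = probability of no fault, p[i] = probability that only fault i is present."""
    lam = np.asarray(failure_rates, float)
    odds = lam / (1.0 - lam)     # lambda_k / (1 - lambda_k)
    denom = 1.0 + odds.sum()
    p0 = 1.0 / denom
    p = odds / denom
    return p0, p                 # p0 + p.sum() == 1
```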
(2)
Optimization objectives
The construction of a fault diagnosis strategy is a typical nondeterministic polynomial (NP) problem. On the premise that only a single fault occurs, the objective function of the system is:
$$D_{opt} = \min \sum_{i=0}^{m} p(f_i) \sum_{k=0}^{|D(i)|} s_{D(i)[k]}$$
where $D_{opt}$ represents the generated diagnosis tree, $m$ is the number of system faults, $s_{D(i)[k]}$ represents the diagnostic cost of the $k$th test in the sequence $D(i)$, $D(i)$ represents the sequence of tests required to isolate fault $f_i$, and $|D(i)|$ represents the length of this test sequence, i.e., the number of tests required to isolate the fault.

3.4. Fault Diagnosis Reasoning Based on Test Reliability Dependency Matrix

(1)
Reasoning theory of fault diagnosis when testing reliability
When a test is reliable, i.e., when the dependency matrix is deterministic, there are two possible test outcomes. Assuming that the current ambiguity set of faults is $X$, the binary test outcome divides $X$ into two fault subsets, $X_p$ and $X_r$.
If the test $t_j$ passes, then:
$$X_p^j = \{ f_i \mid ft_{ij} = 0,\ f_i \in X \}$$
If the test $t_j$ fails, then:
$$X_r^j = \{ f_i \mid ft_{ij} = 1,\ f_i \in X \}$$
Each fault $f_i$ is assigned to the corresponding subset according to its value in the fault–test dependency matrix. Each $f_i$ must belong to exactly one of the two subsets; that is, $f_i$ cannot belong to both $X_p^j$ and $X_r^j$.
(2)
Reasoning theory for fault diagnosis when a test is unreliable
When a test is unreliable, a missed detection may occur if the test passes, and a false alarm may occur if the test fails. The test outcome still divides the fault set into the two subsets $X_p$ and $X_r$, but two additional cases arise, namely:
When the test $t_j$ passes, the set $X_p$ may additionally contain:
$$X_p^j = \{ f_i \mid ft_{ij} = 1,\ f_i \in X \}$$
When the test $t_j$ fails, the set $X_r$ may additionally contain:
$$X_r^j = \{ f_i \mid ft_{ij} = 0,\ f_i \in X \}$$
(3)
Construction method of fault diagnosis strategy based on test information
The heuristic search method is the most commonly used and most practical approach. It is relatively simple, reduces the number of steps compared with a traversal search, and achieves rapid fault isolation.
(1) Criteria for test selection
In the process of fault diagnosis reasoning, determining the next test is the key problem to be solved. This requires a criterion for test selection, also known as the evaluation function.
According to information theory, when the test $t_j$ is unreliable, the total information gain over the parts of the system after the test is:
$$I(F; t_j) = -\left[ \frac{p_j\, p(X_p^j)}{p(X)} \log_2 \frac{p_j\, p(X_p^j)}{p(X)} + \frac{p_j\, p(X_f^j)}{p(X)} \log_2 \frac{p_j\, p(X_f^j)}{p(X)} + \frac{(1-p_j)\, p(X_p^j)}{p(X)} \log_2 \frac{(1-p_j)\, p(X_p^j)}{p(X)} + \frac{(1-p_j)\, p(X_f^j)}{p(X)} \log_2 \frac{(1-p_j)\, p(X_f^j)}{p(X)} \right]$$
The proportion of faults that pass the test is:
$$\frac{p_j\, p(X_p^j)}{p(X)}$$
The proportion of faults that fail the test is:
$$\frac{(1-p_j)\, p(X_f^j)}{p(X)}$$
where $p_j$ represents the reliability of test $t_j$, $p(X)$ represents the sum of the detection probabilities of the fault set before the test, $p(X_p^j)$ the sum for the faults that pass the test, and $p(X_f^j)$ the sum for the faults that fail the test.
An evaluation function can now be designed. Considering the maintenance cost, we use $I(F; t_j)$ weighted by the test cost as the evaluation function of fault diagnosis reasoning: the larger $I(F; t_j)$ is, the more information the test provides. We therefore select, as the next test, the test that maximizes this function, and denote its index by $k$:
$$k = \arg\max_j \frac{I(F; t_j)}{c_j}$$
where $c_j$ denotes the cost of test $t_j$.
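A minimal sketch of this test-selection criterion is given below (Python; the data structures, a fault-indexed probability list, a Boolean matrix ft, per-test reliabilities p and costs, as well as the function names, are illustrative assumptions):

```python
import numpy as np

def information_gain(p_X, p_Xp, p_Xf, p_j):
    """Information gain I(F; t_j) of test t_j over the current ambiguity set X.
    p_X: total probability of X; p_Xp / p_Xf: probability mass of the faults
    that pass / fail the test; p_j: reliability of the test."""
    terms = [p_j * p_Xp / p_X, p_j * p_Xf / p_X,
             (1 - p_j) * p_Xp / p_X, (1 - p_j) * p_Xf / p_X]
    return -sum(t * np.log2(t) for t in terms if t > 0)

def select_test(X, probs, ft, p, cost, candidates):
    """Return the candidate test index maximizing I(F; t_j) / c_j."""
    p_X = sum(probs[i] for i in X)
    best, best_score = None, -np.inf
    for j in candidates:
        p_Xf = sum(probs[i] for i in X if ft[i][j] == 1)   # faults the test should flag
        p_Xp = p_X - p_Xf                                   # faults the test should pass
        score = information_gain(p_X, p_Xp, p_Xf, p[j]) / cost[j]
        if score > best_score:
            best, best_score = j, score
    return best
```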
(2) Diagnostic tree generation process
The basic idea of constructing the fault diagnosis tree is as follows. First, for the root node, we calculate the evaluation function value of each candidate test in the test set, compare these values, select the test that maximizes the evaluation function as the next test, and obtain two branches from that test: the faults that can pass the test are placed in one branch and those that cannot pass are placed in the other. We then repeat this step for each branch node, from left to right and from top to bottom, using the evaluation function to obtain the subsequent test sequence until only leaf nodes remain.
In this way, we obtain the fault diagnosis tree, from which faults can be isolated quickly.
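Correspondingly, the recursive construction of the diagnosis tree can be sketched as follows (an illustrative Python sketch building on select_test from the previous listing; the dictionary-based tree representation is our own choice, not the paper's):

```python
def build_diagnosis_tree(X, candidates, probs, ft, p, cost):
    """Recursively grow the single-fault diagnosis tree: each inner node stores the
    chosen test, and its 'pass'/'fail' children split the ambiguity set X."""
    if len(X) <= 1:
        return {"faults": X}                          # leaf: fault isolated
    splitting = [j for j in candidates
                 if any(ft[i][j] == 1 for i in X) and any(ft[i][j] == 0 for i in X)]
    if not splitting:
        return {"faults": X}                          # ambiguity group no remaining test resolves
    j = select_test(X, probs, ft, p, cost, splitting)
    rest = [c for c in candidates if c != j]
    return {"test": j,
            "pass": build_diagnosis_tree([i for i in X if ft[i][j] == 0], rest, probs, ft, p, cost),
            "fail": build_diagnosis_tree([i for i in X if ft[i][j] == 1], rest, probs, ft, p, cost)}
```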

4. Case Study

4.1. Experimental Setup

This experiment adopts the era electric traction system of a certain CRH train. Given the particularity and importance of the traction control unit, we divide the system according to the hierarchy system → subsystem → Line Replaceable Unit (LRU) → Shop Replaceable Unit (SRU), and then classify the fault modes according to historical fault data and historical maintenance data from the depot. The Failure Mode, Effects and Criticality Analysis (FMECA) table is shown in Table S1. After the division results are obtained, the multi-signal flow modeling information of the system is analyzed to obtain the multi-signal flow model of the traction system; the information flow diagram of the system is shown in Figure 4. Finally, the testability model of the urban rail train traction system is constructed, and the complete fault–test dependency matrix of the system is obtained. There are 28 faults and 24 alternative tests in the system, which constitute the fault set and test set of the traction system test selection problem, respectively, as shown in Table S2.

4.2. Quantitative Evaluation of Traction System Test Reliability

In order to make the quantitative results closer to reality, we invited four maintenance technicians from the Jinan Metro Bureau to form an evaluation team to participate in the evaluation of test reliability. The cloud evaluation matrix given by them is shown in Table S3. According to the evaluation results, the qualitative evaluation is transformed into a cloud model to obtain the corresponding quantitative value.
(1)
Determination of expert weight
According to the cloud evaluation matrix given by each expert, the similarity of evaluation information between experts is calculated as follows:
$S_1 = 103.703$, $S_2 = 111.453$, $S_3 = 109.862$, $S_4 = 115.231$
After obtaining the evaluation similarity of each maintenance technician, the weight of each expert is:
$p_1 = 0.236$, $p_2 = 0.253$, $p_3 = 0.249$, $p_4 = 0.262$
(2)
Comprehensive cloud assessment matrix
After obtaining the weight of each expert, the cloud evaluation matrix is obtained according to the calculation method of the cloud model, as shown in Table S4.
(3)
Determination of index weight
According to the ranking of index importance, the judgment matrix is constructed as follows:
$$B = \begin{bmatrix} 1 & 1/3 & 1/5 & 1/9 & 1/7 \\ 3 & 1 & 1/3 & 1/7 & 1/5 \\ 5 & 3 & 1 & 1/4 & 1/2 \\ 9 & 7 & 4 & 1 & 3 \\ 7 & 5 & 2 & 1/3 & 1 \end{bmatrix}$$
After the relative importance judgment matrix $B$ is obtained, the weights $\omega_i$ are calculated using the square root method, and the maximum eigenvalue is $\lambda_{\max} = 5.17$. The general consistency index is then $CI = 0.0425$ and the random consistency ratio is $CR = 0.0379$, which satisfies $CR < 0.1$ and passes the consistency test.
(4)
Quantification of test uncertainty
Using the weight of each index, the reliability of each test can be obtained, as shown in Table 3.
After obtaining the reliability of each test, according to the fault–test dependency matrix described in Section 4.1, the corresponding fault–test dependency matrix based on reliability can be obtained, as shown in Table S5.

4.3. Construction of Traction System Diagnosis Strategy under Reliable and Unreliable Tests

Assuming that the test is reliable, based on the fault–test dependency matrix, we use the method in Section 3.4 to construct a single-fault diagnosis strategy of the traction system, as shown in Figure S1.
According to the diagnosis tree, we can obtain the diagnosis sequence of the traction system under reliable tests as follows:
$t_7 \to t_{20} \to t_8 \to t_{23} \to t_{21} \to t_{22} \to t_9 \to t_{10} \to t_{11} \to t_{12} \to t_{13} \to t_{14} \to t_{15} \to t_{16} \to t_6 \to t_{13} \to t_{25} \to t_1 \to t_3 \to t_{19} \to t_{24} \to t_5 \to t_9 \to t_{17} \to t_3 \to t_{18} \to t_1 \to t_4$
In this diagnosis strategy, a total of 28 steps need to be performed to isolate all faults. The test and isolation steps required to isolate faults are shown in Table 4.
When the test is unreliable, we can obtain the fault diagnosis tree of the system, as shown in Figure S2.
According to the diagnosis tree, we can obtain the diagnosis order as follows:
$t_7 \to t_{18} \to t_{23} \to t_8 \to t_{21} \to t_{22} \to t_{14} \to t_{13} \to t_{15} \to t_{12} \to t_{16} \to t_{10} \to t_9 \to t_6 \to t_{11} \to t_{13} \to t_{20} \to t_{19} \to t_{25} \to t_1 \to t_{26} \to t_{18} \to t_9 \to t_5 \to t_{17} \to t_1 \to t_3 \to t_4$
In this diagnosis strategy, a total of 28 steps of testing are also required to isolate all faults. The test and isolation steps required to isolate faults are shown in Table 5.

4.4. Comparative Analysis

The fault diagnosis strategies of traction systems under reliable tests and unreliable tests are constructed, respectively, and the two are compared and analyzed.
In both the reliable-test and unreliable-test fault diagnosis trees of the traction system, the fault set corresponding to the root node is $F = \{f_0, f_1, f_2, \dots, f_{28}\}$. After calculating the evaluation function values, $t_7$ maximizes the evaluation function, so $t_7$ is selected as the first test under both reliable and unreliable tests. Two branches are obtained from $t_7$; in both cases, the remaining fault set when the test passes is $F = \{f_0, f_9, f_{10}, \dots, f_{28}\}$. Using the evaluation function again, the next test is $t_{20}$ under reliable tests and $t_{18}$ under unreliable tests. When only leaf nodes remain, the first isolated fault is $f_{12}$ under reliable tests and $f_{14}$ under unreliable tests. The fault isolation results obtained under reliable and unreliable tests thus differ considerably, which further reflects the role of uncertainty analysis in constructing the traction system testability model. Taking into account the uncertain factors that affect inference and diagnosis is of great practical significance for ensuring the accuracy of fault diagnosis strategies.
According to the analyses shown in Table 4 and Table 5, in terms of the number of tests used, it takes 28 steps to locate all faults under both reliable and unreliable tests. From the number of tests alone, there is therefore no difference between reliable and unreliable tests in terms of fault isolation. From the perspective of the test sequence, however, the test sequence and fault isolation sequence differ between the reliable and unreliable cases, as shown in Figure 5 and Figure 6.
From the test sequence comparison curve, it can be seen that the test sequences differ between reliable and unreliable tests. The test used in the first step is $t_7$ in both cases. When the test is reliable, the second test is $t_{20}$; when the test is unreliable, it is $t_{18}$. When the test is reliable, the test used in step 27 is $t_1$; when unreliable, it is $t_3$. When the test is reliable, the test used in step 9 is $t_{11}$; when unreliable, it is $t_{15}$. The test in the last step is $t_4$ in both cases. Apart from these common points, the testing sequences under reliable and unreliable tests are largely different, which fully demonstrates that test reliability affects the testing sequence, and the testing sequence in turn affects system fault detection and fault isolation. This also confirms why a fault diagnosis strategy for a high-speed train traction system needs to be constructed based on unreliable testing.
From the comparison curve of the fault isolation sequence, it can be seen that the order of fault isolation also differs between reliable and unreliable tests. If the test is reliable, fault $f_1$ is isolated in step 24 and fault $f_{28}$ in step 14; when the test is unreliable, $f_1$ is isolated in step 25 and $f_{28}$ in step 11. Similarly, when the test is reliable, the first fault is isolated at step 4, whereas when the test is unreliable, the first fault is isolated at step 5. As another example, when the test is reliable, faults $f_3$ and $f_4$ are isolated in step 28 and faults $f_5$ and $f_6$ in step 26; when the test is unreliable, $f_3$ and $f_4$ are isolated in step 28, $f_5$ in step 27, and $f_6$ in step 24. This fully shows that test reliability affects not only the order of fault isolation but also the step at which each fault is isolated. Therefore, it is of great significance to fully consider unreliable tests when constructing a fault diagnosis strategy.
From the above discussion and analysis, we conclude that, although the number of steps needed to isolate all faults is the same when constructing diagnosis strategies for the traction system under reliable and unreliable testing, the fault diagnosis sequence and the specific tests selected differ. This confirms that, in engineering practice, considering the unreliability of testing brings the diagnosis strategy closer to practical application, helps in judging equipment detection results, and improves the accuracy of testability modeling.

5. Conclusions

This paper puts forward a construction method for a traction system fault diagnosis strategy based on unreliable tests. First, the traction system is introduced, the uncertain factors that may arise in the test process and their consequences are analyzed, and the concept of test reliability is put forward. Second, the quantitative evaluation indexes of test reliability are constructed, a quantitative evaluation method of test reliability based on a cloud model is proposed, the basic elements and optimization objectives of the fault diagnosis strategy are analyzed, and a fault reasoning method based on the reliability dependency matrix is proposed. Then, the quantitative evaluation of the test reliability of a traction system is realized using the cloud model, and the fault–test dependency matrix of the traction system based on test reliability is constructed. Finally, under unreliable testing, considering the reliability of test results and the test cost, a single-fault diagnosis strategy construction method based on the reliability dependency matrix is established, and the resulting single-fault diagnosis strategies under reliable and unreliable tests are compared and analyzed. The results show that test reliability has a clear impact on the fault detection, isolation, and location sequences of the traction system, and that a diagnosis strategy constructed with unreliable testing in mind provides useful guidance for practical engineering applications.
However, the construction method studied in this paper only considers the construction of a fault diagnosis tree under a single fault, whereas in practical engineering applications concurrent faults are common. Future work will therefore build a multi-fault diagnosis strategy for concurrent faults.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app13148226/s1, Figure S1: Fault diagnosis tree of traction system under reliable test; Figure S2: Fault diagnosis tree of traction system under unreliable test; Table S1: The FMECA of Traction system; Table S2: Complete test set failure-test dependency matrix; Table S3: Expert cloud matrix; Table S4: Comprehensive Cloud Evaluation Matrix; Table S5: Reliability failure—test dependency matrix.

Author Contributions

Conceptualization, M.L. and Y.Z.; methodology, M.L.; validation, Y.Q. and Z.W.; investigation, Y.Z.; writing—original draft, L.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the Fundamental Research Funds for the Central Universities (Science and technology leading talent team project, Grant No. 2022JBQY007), the National Key R&D Program of China (No. 2022YFB4300601) and the National Natural Science Foundation of China (Grant 61833002).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing was not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Geng, Y.; Wang, Z.; Jia, L.; Qin, Y.; Chai, Y.; Liu, K.; Tong, L. 3DGraphSeg: A Unified Graph Representation-Based Point Cloud Segmentation Framework for Full-Range Highspeed Railway Environments. IEEE Trans. Ind. Inform. 2023, 1–13. [Google Scholar] [CrossRef]
  2. Tong, L.; Jia, L.; Geng, Y.; Liu, K.; Qin, Y.; Wang, Z. Anchor-adaptive railway track detection from unmanned aerial vehicle images. Comput.-Aided Civ. Infrastruct. Eng. 2023. [Google Scholar] [CrossRef]
  3. Aizenberg, I.; Belardi, R.; Bindi, M.; Grasso, F.; Manetti, S.; Luchetta, A.; Piccirilli, M.C. A Neural Network Classifier with Multi-Valued Neurons for Analog Circuit Fault Diagnosis. Electronics 2021, 10, 349. [Google Scholar] [CrossRef]
  4. Aizenberg, I.; Belardi, R.; Bindi, M.; Grasso, F.; Manetti, S.; Luchetta, A.; Piccirilli, M.C. Failure Prevention and Malfunction Localization in Underground Medium Voltage Cables. Energies 2021, 14, 85. [Google Scholar] [CrossRef]
  5. Farkas, G.; Sarkany, Z.; Rencz, M. Structural Analysis of Power Devices and Assemblies by Thermal Transient Measurements. Energies 2019, 12, 2696. [Google Scholar] [CrossRef] [Green Version]
  6. Farkas, G.; Schweitzer, D.; Sarkany, Z.; Rencz, M. On the Reproducibility of Thermal Measurements and of Related Thermal Metrics in Static and Transient Tests of Power Devices. Energies 2020, 13, 557. [Google Scholar] [CrossRef] [Green Version]
  7. Jiang, S.; Liu, S.; Guo, C.; Fan, X.; Ma, T.; Tiwari, P. Implementation of ARINC 659 Bus Controller for Space-Borne Computers. Electronics 2019, 8, 435. [Google Scholar] [CrossRef] [Green Version]
  8. Lee, S.; Cho, K.; Kim, J.; Park, J.; Lee, I.; Kang, S. Low-Power Scan Correlation-Aware Scan Cluster Reordering for Wireless Sensor Networks. Sensors 2021, 21, 6111. [Google Scholar] [CrossRef] [PubMed]
  9. Lim, H.; Cheong, M.; Kang, S. Scan-Chain-Fault Diagnosis Using Regressions in Cryptographic Chips for Wireless Sensor Networks. Sensors 2020, 20, 4771. [Google Scholar] [CrossRef] [PubMed]
  10. Naeini, M.M.; Dass, S.B.; Ooi, C.Y. The Design and Implementation of a Low-Power Gating Scan Element in 32/28 nm CMOS Technology. J. Low Power Electron. Appl. 2017, 7, 7. [Google Scholar] [CrossRef] [Green Version]
  11. Wang, W.; Deng, Z.; Wang, J. Enhancing Sensor Network Security with Improved Internal Hardware Design. Sensors 2019, 19, 1752. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Xiao, Y.; Wei, S.; Chai, Y.; Pan, T.; Hou, Y. Reliability optimization of flexible test system based on pyro-mechanical device products production driven. Reliab. Eng. Syst. Saf. 2023, 230, 108880. [Google Scholar] [CrossRef]
  13. Huang, X.; Xu, C.; Zhang, L.; Hu, C.; Mo, W. Parallel testing optimization method of digital microfluidic biochip. Measurement 2022, 194, 111018. [Google Scholar] [CrossRef]
  14. Gupta, N.; Sharma, A.; Pachariya, M.K. Multi-objective test suite optimization for detection and localization of software faults. J. King Saud Univ.–Comput. Inf. Sci. 2022, 34, 2897–2909. [Google Scholar] [CrossRef]
  15. Wang, Q.; Chen, B.; Xu, G.; Bao, H.; Zeng, J.; Xiang, X. Optimization of test method for bond performance between steel bar and concrete. Structures 2023, 47, 1822–1835. [Google Scholar] [CrossRef]
  16. Wang, N.; Jia, L.; Zhang, H.; Qin, Y.; Zhao, X.; Wang, Z. Manifold-Contrastive Broad Learning System for Wheelset Bearing Fault Diagnosis. IEEE Trans. Intell. Transp. Syst. 2023, 1–15. [Google Scholar] [CrossRef]
  17. Wang, Z.; Wang, N.; Zhang, H.; Jia, L.; Qin, Y.; Zuo, Y.; Zhang, Y.; Dong, H. Segmentalized mRMR Features and Cost-Sensitive ELM With Fixed Inputs for Fault Diagnosis of High-Speed Railway Turnouts. IEEE Trans. Intell. Transp. Syst. 2023, 24, 4975–4987. [Google Scholar] [CrossRef]
  18. Dan, H.; Yue, W.; Xiong, W.; Liu, Y.; Su, M.; Sun, Y. Open-Switch and Current Sensor Fault Diagnosis Strategy for Matrix Converter-Based PMSM Drive System. IEEE Trans. Transp. Electrif. 2022, 8, 875–885. [Google Scholar] [CrossRef]
  19. Xu, W.; Zhang, Y.; Liu, Y.; Islam, M.R.; Zhang, M.; Luo, D. Improved Slight Fault Diagnosis Strategy for Induction Motor Considering Even and Triple Harmonics. IEEE Trans. Ind. Appl. 2022, 58, 4436–4449. [Google Scholar] [CrossRef]
  20. Wu, L.J.; He, W.; Liu, B.J.; Han, X.Y.; Tang, L.L. Scenario-Based Software Reliability Testing and Evaluation of Complex Information Systems. In Proceedings of the 18th IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C), Lisbon, Portugal, 16–20 July 2018; pp. 73–78. [Google Scholar]
  21. Zhao, J.Y.; Chen, H.J.; Liu, G.J.; Zhang, Y.; Wu, P.H. Reliability testing and reliability analysis of the over-load protective relay. J. Zhejiang Univ.-Sci. A 2007, 8, 475–480. [Google Scholar] [CrossRef]
  22. Deng, Y.; Shi, J.Y.; Liu, K. An Extended Testability Modeling Method Based on the Enable Relationship Between Faults and Tests. In Proceedings of the Prognostics and System Health Management Conference (Phm), Beijing, China, 21–23 October 2015. [Google Scholar]
  23. Gao, J.; Shih, M.C.; Society, I.C. A component testability model for verification and measurement. In Proceedings of the 29th Annual International Computer Software and Applications Conference, Edinburgh, UK, 26–28 July 2005; pp. 211–218. [Google Scholar]
  24. Sui, J.H.; Wang, J.X.; Liu, H.N.; Liu, L.Q. A Baseline-based BIST Design Model for Software Testability. In Proceedings of the International Conference on Industrial Technology and Management Science (ITMS), Tianjin, China, 27–28 March 2015; pp. 1395–1399. [Google Scholar]
  25. Tan, X.D.; Qiu, J.; Liu, G.J.; Li, Q. Fault Evolution Testability Modeling and Analysis for Centrifugal Pumps. In Proceedings of the Prognostics and System Health Management Conference (PHM-Hunan), Lab Sci & Technol Integrated Logist Support, Zhangjiajie, China, 24–27 August 2014; pp. 469–473. [Google Scholar]
  26. Yang, C.L.; Gu, X.D.; Zhu, M.; Li, M.Q. Research on Modeling Techniques of Testability Evaluation Based on Modelica. In Proceedings of the 4th International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Harbin Inst Technol, Harbin, China, 18–20 September 2014; pp. 590–594. [Google Scholar]
  27. Lei, J.N.; Wan, F.Y.; Cui, W.M.; Li, W.H. Testability Modeling of Hydraulic System Based on Multi–Signal Flow Model. In Proceedings of the International Conference on Sensing, Diagnostics, Prognostics, and Control (SDPC), Shanghai, China, 16–18 August 2017; pp. 361–366. [Google Scholar]
  28. Liu, H.S.; Wu, J.C.; Chen, G.J. Application of Multi-signal Modeling Theory to Testability Analysis for Complex Electronic System. In Proceedings of the International Conference on Computer Science and Network Technology (ICCSNT), Harbin Normal University, Harbin, China, 24–26 December 2011; pp. 755–758. [Google Scholar]
  29. Lyu, C.; Ding, H.; Liu, S.S.; Wang, L.X.; Wang, S.J. An Initial Implementation of Testability Analysis Based on Multi-Signal Flow Graph Model. In Proceedings of the 8th IEEE Conference on Industrial Electronics and Applications (ICIEA), Swinburne University of Technology, Melbourne, Australia, 19–21 June 2013; pp. 1874–1879. [Google Scholar]
  30. Wu, Y.; Yu, J.S.; Tang, D.Y.; Tian, L.M.; Gao, Z.B.; Dai, J. A hierarchical testability analysis method for reusable liquid rocket engines based on multi-signal flow model. In Proceedings of the 15th IEEE Conference on Industrial Electronics and Applications (ICIEA), Electr Network, Kristiansand, Norway, 9–13 November 2020; pp. 1768–1772. [Google Scholar]
  31. Yan, P.; Chen, F.; Sun, S.Y.; Li, X. Testability Modeling of Guided Projectile Based on Multi-Signal Flow Graphs. In Proceedings of the IEEE 4th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 14–16 December 2018; pp. 1219–1225. [Google Scholar]
Figure 1. The construction process of a sequential-fault diagnosis strategy for high-speed train traction systems based on unreliable tests.
Figure 2. Main circuit for car traction of CRH.
Figure 3. Cloud model corresponding to each evaluation level.
Figure 4. Multi-signal model diagram.
Figure 5. Test sequence comparison curve.
Figure 6. Fault isolation sequence comparison curve.
Table 1. Evaluation criteria for test-influencing factors.

| Evaluation Level | Influence of Test Error | Threshold Setting Rationality | Environmental Noise | Sensor Reliability | Working Condition Disturbance |
|---|---|---|---|---|---|
| 1 | great | great | great | great | great |
| 2 | larger | larger | larger | larger | larger |
| 3 | common | common | common | common | common |
| 4 | less | less | less | less | less |
| 5 | very small | very small | very small | very small | very small |
Table 2. $RI$ values for 1–9 dimensional matrices.

| n | RI |
|---|---|
| 1 | 0 |
| 2 | 0 |
| 3 | 0.58 |
| 4 | 0.9 |
| 5 | 1.12 |
| 6 | 1.24 |
| 7 | 1.32 |
| 8 | 1.41 |
| 9 | 1.45 |
Table 3. Traction system test reliability.

| Test | Reliability | Test | Reliability | Test | Reliability |
|---|---|---|---|---|---|
| $t_1$ | 0.9475 | $t_{10}$ | 0.9529 | $t_{19}$ | 0.9350 |
| $t_2$ | 0.9436 | $t_{11}$ | 0.9398 | $t_{20}$ | 0.9387 |
| $t_3$ | 0.9550 | $t_{12}$ | 0.9322 | $t_{21}$ | 0.9502 |
| $t_4$ | 0.9436 | $t_{13}$ | 0.9528 | $t_{22}$ | 0.9580 |
| $t_5$ | 0.9523 | $t_{14}$ | 0.9511 | $t_{23}$ | 0.9455 |
| $t_6$ | 0.9416 | $t_{15}$ | 0.9446 | $t_{24}$ | 0.9549 |
| $t_7$ | 0.9521 | $t_{16}$ | 0.9588 | $t_{25}$ | 0.9590 |
| $t_8$ | 0.9513 | $t_{17}$ | 0.9536 | $t_{26}$ | 0.9538 |
| $t_9$ | 0.9476 | $t_{18}$ | 0.9366 | | |
Table 4. Fault diagnosis strategy of traction system under reliable test.

| Fault | Steps Required for Isolation | Fault | Steps Required for Isolation |
|---|---|---|---|
| $f_1$ | 24 | $f_{15}$ | 19 |
| $f_2$ | 27 | $f_{16}$ | 21 |
| $f_3$ | 28 | $f_{17}$ | 20 |
| $f_4$ | 28 | $f_{18}$ | 20 |
| $f_5$ | 26 | $f_{19}$ | 19 |
| $f_6$ | 26 | $f_{20}$ | 21 |
| $f_7$ | 23 | $f_{21}$ | 7 |
| $f_8$ | 23 | $f_{22}$ | 8 |
| $f_9$ | 16 | $f_{23}$ | 10 |
| $f_{10}$ | 16 | $f_{24}$ | 9 |
| $f_{11}$ | 15 | $f_{25}$ | 12 |
| $f_{12}$ | 4 | $f_{26}$ | 11 |
| $f_{13}$ | 6 | $f_{27}$ | 13 |
| $f_{14}$ | 5 | $f_{28}$ | 14 |
Table 5. Fault diagnosis strategy of traction system under unreliable test.

| Fault | Steps Required for Isolation | Fault | Steps Required for Isolation |
|---|---|---|---|
| $f_1$ | 25 | $f_{15}$ | 20 |
| $f_2$ | 26 | $f_{16}$ | 21 |
| $f_3$ | 28 | $f_{17}$ | 18 |
| $f_4$ | 28 | $f_{18}$ | 18 |
| $f_5$ | 27 | $f_{19}$ | 20 |
| $f_6$ | 24 | $f_{20}$ | 21 |
| $f_7$ | 23 | $f_{21}$ | 13 |
| $f_8$ | 24 | $f_{22}$ | 12 |
| $f_9$ | 16 | $f_{23}$ | 10 |
| $f_{10}$ | 16 | $f_{24}$ | 15 |
| $f_{11}$ | 14 | $f_{25}$ | 7 |
| $f_{12}$ | 17 | $f_{26}$ | 8 |
| $f_{13}$ | 6 | $f_{27}$ | 9 |
| $f_{14}$ | 5 | $f_{28}$ | 11 |