1. Introduction
Recently, with the rapid development of power electronics technologies, numerous studies have been conducted on multilevel inverters [1,2] used in various engineering fields [3,4], such as high-power transmission in grids [5], motor drives [6,7] and energy storage devices [3]. Multilevel inverters have become popular because they improve the output voltage quality, reduce the switching frequency and offer an extensible structure. Their main drawbacks are a higher component count than classical inverters and more complex control.
The safety of power electronic switches is one of the most important issues for the normal operation of inverters. Several researchers have studied the behavior of inverters with internal failures, focusing particularly on the open-circuit fault of a switch. Short-circuit and open-circuit faults are the most common faults in power switches. Short-circuits must be detected quickly to avoid destructive overcurrents, e.g., using a dedicated gate-drive protection hardware circuit [8]. An open-circuit fault, on the other hand, does not necessarily cause system shutdown and can remain undetected for an extended period of time. If an internal fault occurs in the inverter, serious distortion of the voltage and current can result; this may lead to secondary faults in the remaining drive components and to high repair costs.
Open-circuit fault diagnosis methods can be classified as voltage-based or current-based. The former extract diagnostic features through additional voltage sensors or electrical circuits. The diagnosis strategy presented in [9] is based on pole voltage estimation and requires knowledge of the line parameters. Voltage analysis on the inverter side also makes it possible to detect fault occurrence [8,10]: under open-circuit faults, these voltages show characteristic irregularities that enable direct localization of the faulty device. Recently, current-based fault diagnosis methods have attracted much attention [5,11,12]. The authors of [13] present an effective open-circuit fault diagnosis approach based on the characteristics of the brushless motor, observing the phase currents and other parameters. The fault diagnosis strategy proposed in [11] is based on an analysis of the current waveforms; it identifies the faulty half-leg, but cannot identify the faulty component. To improve the diagnosis speed, a current-based algorithm that evaluates the reference current errors and their average absolute values is proposed in [5,12]. Open-circuit fault detection is of critical importance for the stable operation of inverters. Several fault detection approaches developed for power inverters are discussed below.
Fault detection and location methods are either hardware-based or software-based: hardware-based methods require additional sensors, making fault detection complex and costly, whereas software-based methods do not require extra hardware.
There are many fault detection approaches for multilevel converters, including the comparison between measured and reference values, the sliding mode observer [14], Lyapunov theory [15], the Kalman filter algorithm [16,17], the Fourier transform [18], the Park transformation [19], the wavelet transform [20], machine learning [21], neural network approaches [22], and the Filippov method [23].
In [5], a diagnosis method for a two-level voltage source inverter based on the average absolute value of the current is proposed. Several kinds of faults can be detected using the ratio of the theoretical and practical voltage values on the capacitor of each inverter sub-module [24]. The authors of [14] proposed a sliding mode observer to diagnose any open-circuit fault of a switch while avoiding interference caused by sampling error and system fluctuation. The authors of [16] proposed a Kalman-filter-based approach: comparing measured voltages and currents with the estimates obtained by the Kalman filter enables detection of an open-circuit fault in a modular multilevel converter sub-module. The amplitude and argument of each phase current harmonic can also be used to detect and localize faults. An analysis of the first harmonics in [18] showed that the difference between the healthy state and the open-circuit fault case lies in the zero-order harmonic, i.e., the presence of a DC component in the signal. The argument of the zero harmonic with respect to the fundamental enables determination of the type of fault.
The authors of [19,25] proposed a Park transformation technique followed by the polarity of the trajectory slope in the complex d-q frame to identify the faulty switch. A neural network [22], a mixed kernel support tensor machine [21], an adaptive one-dimensional convolutional neural network, or an adaptive linear neuron recursive least squares algorithm have been utilized to detect and identify faulty switches in multilevel inverters. These strategies can be applied directly to the voltage and current data; however, a large amount of reliable data is required to train the network in advance. Many recent fault diagnosis methods are machine-learning based: in [26], an optimized support vector machine method is proposed, in which the fault characteristics are deduced from the average value of the three-phase currents; in [27], wavelet analysis and an improved neural network are used. The authors of [28] use the wavelet energy spectrum entropy to perform a statistical analysis of the energy distribution of the signal in each frequency band. Finally, ref. [29] employed weighted-amplitude permutation entropy, which offers better feature extraction than standard permutation entropy.
Our goal herein is to study the output current of an AC/DC/AC converter using the usual and multiscale entropies, and to evaluate their ability to differentiate a healthy state from an open-circuit faulty state. We identify the entropies that detect and locate the arm of the bridge with an open-circuit fault. Section 2 presents the usual and multiscale entropies and describes the commonly used entropy algorithms. Section 3 describes the AC/DC/AC converter under study. Section 4 presents the dataset of the three-phase output currents of the inverter under a healthy state and with one or two open-circuit faults. In Section 5, the results are detailed and discussed, including the evaluation of the entropies under variation of the data length, embedding dimension, time lag and tolerance. We end with conclusions in Section 6.
2. Entropy Methods
Serving as an algebraic quantification parameter of time series, entropy measures a system's complexity, including the degree of irregularity of a time series, by evaluating the probability of finding m-similar patterns. A number of methods to quantify the entropy of a data series have been proposed in recent decades, aiming for higher discriminating power and less dependence on noise and parameters. Since the introduction of approximate entropy (ApEn) [30], other methods have been proposed, such as sample entropy (SampEn) [30], Kolmogorov entropy (K) [31], conditional entropy (CE) [32], cosine similarity entropy (CoSiEn) [33], fuzzy entropy (FuzzyEn) [33,34,35], entropy of entropy (EoE) [36], multiscale entropy functions (MSE) [37,38,39,40,41], refined multiscale entropy (RMSE) [42,43], and composite multiscale entropy (CMSE) [44,45].
ApEn and SampEn: approximate entropy was proposed by Pincus in 1991. The objective of ApEn is to determine how often different patterns of data are found in a dataset. According to [30], the bias of ApEn means that the results suggest more regularity than there is in reality; this bias is more important for series of small length. Eliminating the bias by preventing each vector from being counted with itself would make ApEn unstable in many situations, leaving it undefined whenever a vector finds no match. To avoid these two problems, Richman [30] defined SampEn, a statistic without self-counting. For a time series $\{x_1, \ldots, x_N\}$ with a given embedding dimension m, tolerance r and time lag $\tau$, Algorithm 1 determines the ApEn of a sequence as follows:
Algorithm 1: Approximate Entropy
- (a) Construct the embedding vectors $\mathbf{x}_i^m = (x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau})$, for $i = 1, \ldots, N-(m-1)\tau$.
- (b) Compute the Chebyshev distance $d_{ij}^m = \| \mathbf{x}_i^m - \mathbf{x}_j^m \|_\infty$.
- (c) Count the number of similar patterns $n_i^m(r)$, obtained when $d_{ij}^m \le r$.
- (d) Compute the local probability of occurrences of similar patterns $C_i^m(r) = n_i^m(r) / (N-(m-1)\tau)$.
- (e) Compute the global probability of occurrences of similar patterns $\Phi^m(r) = \frac{1}{N-(m-1)\tau} \sum_i \ln C_i^m(r)$.
- (f) Compute $\Phi^{m+1}(r)$ for the embedding vectors of dimension $m+1$.
- (g) Obtain the approximate entropy $ApEn(m, r) = \Phi^m(r) - \Phi^{m+1}(r)$.
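As an illustration, a minimal Python/NumPy sketch of Algorithm 1 is given below. The function name, the vectorized pairwise distance computation and the expression of r as a fraction of the standard deviation are our own implementation choices, not taken from the paper; for long series (e.g., N = 30,000), the pairwise matrix should be computed in blocks.

```python
import numpy as np

def approximate_entropy(x, m=2, r=0.2, tau=1):
    """ApEn(m, r) following Algorithm 1 (self-matches are counted)."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)  # tolerance as a fraction of the signal's std

    def phi(dim):
        # (a) embedding vectors x_i = (x[i], x[i+tau], ..., x[i+(dim-1)*tau])
        n = len(x) - (dim - 1) * tau
        emb = np.array([x[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])
        # (b) Chebyshev distance between all pairs of embedding vectors
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # (c)-(d) local probability of similar patterns (self-match included)
        c = np.mean(dist <= tol, axis=1)
        # (e) global average of the log-probabilities
        return np.mean(np.log(c))

    # (f)-(g) ApEn = Phi^m - Phi^(m+1)
    return phi(m) - phi(m + 1)
```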
K: certain information provided by time series can only be extracted by analyzing their dynamical characteristics. The conventional entropy gives information about the number of states available to the dynamical system in phase space; it therefore does not provide any information on how the system evolves. On the contrary, the Kolmogorov entropy [31] takes into account how often these states are visited by a trajectory, i.e., it provides information about the dynamical evolution of the system.
The initial time series $\{x(t)\}$ can be divided into a finite partition $\{A_1, \ldots, A_k\}$, according to the successive values $x(\tau), x(2\tau), \ldots, x(n\tau)$, where $\tau$ is the time delay parameter. The Shannon entropy of such a partition is given as:
$$H_n = -\sum_{i_1, \ldots, i_n} p(i_1, \ldots, i_n) \, \ln p(i_1, \ldots, i_n),$$
where $p(i_1, \ldots, i_n)$ is the joint probability that the trajectory visits the cells $A_{i_1}, \ldots, A_{i_n}$. The Kolmogorov entropy [31] is then defined by
$$K = \lim_{\tau \to 0} \, \lim_{n \to \infty} \frac{1}{n\tau} H_n = \lim_{\tau \to 0} \, \lim_{n \to \infty} \frac{1}{\tau} \left( H_{n+1} - H_n \right).$$
The difference $H_{n+1} - H_n$ is the average information needed to predict which partition will be visited next.
CE: Porta [32] introduced the conditional entropy (CE) to quantify the entropy variation over short data sequences. A time series $\{x_i\}$ is reduced to a process with zero mean and unit variance by means of the normalisation $\tilde{x}_i = (x_i - \mu)/\sigma$, where $\mu$ and $\sigma$ are the mean and standard deviation of the series. From the normalised series, a reconstructed L-dimensional phase space [32] is obtained by considering $N - L + 1$ vectors $\mathbf{x}_L(i) = (\tilde{x}_i, \tilde{x}_{i-1}, \ldots, \tilde{x}_{i-L+1})$, each being a pattern of L consecutive samples. The CE can be obtained as the variation of the Shannon entropy $SE(L)$ of these patterns with the pattern length:
$$CE(L) = SE(L) - SE(L-1).$$
Small Shannon entropy values are obtained when a pattern of length L appears several times; small CE values are obtained when a pattern of length L can be predicted from the pattern of length $L-1$. CE thus quantifies the amount of new information necessary to specify a state in a phase space whose dimension is incremented by one.
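A possible sketch of this computation follows; the number of quantisation bins xi and the omission of Porta's corrective term for patterns found only once are simplifications of ours, not details given by the paper.

```python
import numpy as np

def conditional_entropy(x, L=3, xi=6):
    """CE(L) = SE(L) - SE(L-1) on a normalised, uniformly quantised series."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()                  # zero mean, unit variance
    edges = np.linspace(z.min(), z.max(), xi + 1)[1:-1]
    q = np.digitize(z, edges)                     # xi quantisation levels

    def shannon(dim):
        # Shannon entropy of the empirical distribution of dim-sample patterns
        pats = np.array([q[i:i + dim] for i in range(len(q) - dim + 1)])
        _, counts = np.unique(pats, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log(p))

    return shannon(L) - shannon(L - 1)
```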
CoSiEn: cosine similarity entropy is amplitude-independent and robust to spikes and short time series, two key problems that occur with SampEn. The CoSiEn algorithm [33] replicates the computational steps of the SampEn approach with the following modifications: the angle between two embedding vectors is evaluated instead of the Chebyshev distance, and the estimated entropy is based on the global probability of occurrences of similar patterns, obtained from the local probabilities. For a time series $\{x_1, \ldots, x_N\}$ with a given embedding dimension m, tolerance r and time lag $\tau$, Algorithm 2 determines the CoSiEn of a sequence as follows:
Algorithm 2: Cosine Similarity Entropy
- (a) Construct the embedding vectors $\mathbf{x}_i^m = (x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau})$.
- (b) Compute the angular distance for all pairwise embedding vectors as $d_{ij} = \frac{1}{\pi} \arccos\!\left( \frac{\mathbf{x}_i^m \cdot \mathbf{x}_j^m}{\|\mathbf{x}_i^m\| \, \|\mathbf{x}_j^m\|} \right)$.
- (c) The number of similar patterns $n_i^m(r)$ is obtained when $d_{ij} < r$.
- (d) The local probability of occurrences of similar patterns is computed as $P_i^m(r) = n_i^m(r) / (N - (m-1)\tau - 1)$.
- (e) The global probability of occurrences of similar patterns is computed from $P^m(r) = \frac{1}{N-(m-1)\tau} \sum_i P_i^m(r)$.
- (f) The cosine similarity entropy is estimated as $CoSiEn(m, r) = -\left[ P^m \log_2 P^m + (1 - P^m) \log_2 (1 - P^m) \right]$.
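A compact sketch of Algorithm 2 is shown below; counting all pairs at once, rather than vector by vector, yields the same global probability and is an implementation shortcut of ours.

```python
import numpy as np

def cosine_similarity_entropy(x, m=2, r=0.1, tau=1):
    """CoSiEn following Algorithm 2; r is an angular-distance threshold."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    emb = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
    # (b) angular distance between all pairwise embedding vectors
    norms = np.linalg.norm(emb, axis=1)
    cos = (emb @ emb.T) / np.outer(norms, norms)
    ang = np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi
    # (c)-(e) global probability of similar patterns, self-matches excluded
    iu = np.triu_indices(n, k=1)
    p = np.mean(ang[iu] < r)
    # (f) binary Shannon entropy of the global probability
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
```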
FuzzyEn: fuzzy entropy introduces the concept of uncertainty reasoning to overcome a drawback of sample entropy, whose hard threshold as a discriminant criterion may yield unstable results. Several fuzzy membership functions, including triangular, trapezoidal, Z-shaped, bell-shaped, Gaussian, constant-Gaussian and exponential functions, have been employed in FuzzyEn. In [34], FuzzyEn was found to have a stronger relative consistency and less dependence on data length. FuzzyEn introduces two modifications to the SampEn algorithm: (1) the embedding vectors are centred using their own means in order to become zero-mean; (2) the similarity between vectors is the fuzzy similarity obtained from a fuzzy membership function [35] of order n. For a time series $\{x_1, \ldots, x_N\}$, the steps of the FuzzyEn approach are summarized in Algorithm 3 [33] as follows:
Algorithm 3: Fuzzy Entropy
- (a) Construct the zero-mean embedding vectors as $\tilde{\mathbf{x}}_i^m = \mathbf{x}_i^m - \bar{x}_i \mathbf{1}$, where $\mathbf{x}_i^m = (x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau})$ and $\bar{x}_i$ is the mean of $\mathbf{x}_i^m$.
- (b) Compute the Chebyshev distance $d_{ij}^m = \| \tilde{\mathbf{x}}_i^m - \tilde{\mathbf{x}}_j^m \|_\infty$.
- (c) Using the Gaussian-type function, construct the fuzzy similarity $D_{ij}^m = \exp\!\left( -(d_{ij}^m / r)^n \right)$.
- (d) Compute the local probability of occurrences of similar patterns $\phi_i^m(r) = \frac{1}{N - (m-1)\tau - 1} \sum_{j \neq i} D_{ij}^m$.
- (e) Compute the global probability of occurrences of similar patterns $\phi^m(r) = \frac{1}{N-(m-1)\tau} \sum_i \phi_i^m(r)$.
- (f) Obtain $\phi^{m+1}(r)$ for the vectors of dimension $m+1$.
- (g) Obtain the fuzzy entropy $FuzzyEn(m, n, r) = \ln \phi^m(r) - \ln \phi^{m+1}(r)$.
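A sketch of Algorithm 3 with the common exponential membership function exp(-(d/r)^n) is given below; the exact membership function used in the paper may differ.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2, tau=1):
    """FuzzyEn(m, n, r) following Algorithm 3."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def phi(dim):
        N = len(x) - (dim - 1) * tau
        emb = np.array([x[i:i + (dim - 1) * tau + 1:tau] for i in range(N)])
        emb -= emb.mean(axis=1, keepdims=True)       # (a) zero-mean vectors
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)  # (b)
        sim = np.exp(-(dist / tol) ** n)             # (c) fuzzy similarity
        np.fill_diagonal(sim, 0.0)                   # exclude self-matches
        return sim.sum() / (N * (N - 1))             # (d)-(e) global probability

    # (f)-(g) FuzzyEn = ln(phi_m) - ln(phi_{m+1})
    return np.log(phi(m)) - np.log(phi(m + 1))
```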
EoE: the entropy of entropy [36] consists of two steps. First, the Shannon entropy is used to characterize the state of the system within a time window, representing the information contained in that time period; the one-dimensional discrete time series $\{x_i\}$ of length N is divided into consecutive non-overlapping windows. Second, the Shannon entropy is used again, instead of the sample entropy, to characterize the degree of change of these states. As the Shannon entropy is computed twice, the algorithm is called entropy of entropy.
MSE, RMSE, CMSE: time series are also frequently analysed at different temporal scales. The multiple time scales are constructed from the original time series by averaging the data points within non-overlapping windows of increasing length. MSE [37,38,39,40,41], which represents the system dynamics on different scales, relies on the computation of the sample entropy over a range of scales. The MSE algorithm is composed of two steps (a code sketch combining them is given after the list):
- (i) A coarse-graining procedure. To represent the time-series dynamics on different time scales, a coarse-graining procedure is used to derive a set of time series. For a discrete signal $\{x_1, \ldots, x_N\}$ of length N, the coarse-grained time series $\{y_j^{(s)}\}$ is computed as
$$y_j^{(s)} = \frac{1}{s} \sum_{i=(j-1)s+1}^{js} x_i, \qquad 1 \le j \le N/s.$$
The length of the coarse-grained time series for a scale factor s is $N/s$. Figure 1 presents the coarse-grained time series for a scale of 4.
- (ii) Sample entropy computation. Sample entropy is a conditional probability measure that a sequence of m consecutive data points that matches another sequence will still match it when one more point is added to each sequence [30]. Sample entropy is determined as
$$SampEn(m, r) = -\ln \frac{A}{B},$$
where A is the probability that two sequences match for $m+1$ points and B is the probability that two sequences match for m points (self-matches are excluded); both are computed as described in [40]. Combining both steps, the MSE at scale s can be written as
$$MSE(x, s, m, r) = -\ln \frac{A^{(s)}}{B^{(s)}},$$
where $A^{(s)}$ and $B^{(s)}$ are calculated from the coarse-grained time series at the scale factor s.
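The sketch below combines the two steps; fixing the tolerance from the original series, as in Costa's MSE, is one common convention. For short coarse-grained series the count A can vanish, in which case SampEn is undefined (the limitation discussed in the next paragraph).

```python
import numpy as np

def coarse_grain(x, s):
    """Average non-overlapping windows of length s (scale factor s)."""
    x = np.asarray(x, dtype=float)
    n = len(x) // s
    return x[:n * s].reshape(n, s).mean(axis=1)

def sample_entropy(x, m=2, r=0.2, tol=None):
    """SampEn(m, r) = -ln(A/B), self-matches excluded."""
    x = np.asarray(x, dtype=float)
    if tol is None:
        tol = r * np.std(x)

    def count(dim):
        # all available embedding vectors are used at each dimension
        emb = np.array([x[i:i + dim] for i in range(len(x) - dim + 1)])
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        np.fill_diagonal(dist, np.inf)          # exclude self-matches
        return np.sum(dist <= tol)

    B, A = count(m), count(m + 1)
    return -np.log(A / B)                       # undefined (inf) if A == 0

def multiscale_entropy(x, scales=range(1, 21), m=2, r=0.2):
    """MSE: SampEn of the coarse-grained series at each scale factor."""
    tol = r * np.std(np.asarray(x, dtype=float))  # tolerance fixed at scale 1
    return [sample_entropy(coarse_grain(x, s), m, tol=tol) for s in scales]
```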
The MSE algorithm can be used with the sample entropy or with other usual entropy functions, such as ApEn, K, CE, CoSiEn, FuzzyEn and EoE. Other algorithms have been developed to improve the MSE algorithm and to overcome some of its limitations for short time series. While the coarse-graining procedure reduces the length of the time series by the scale factor s, the sample entropy algorithm may give imprecise or undefined values for short time series. Moreover, the coarse-graining procedure (which eliminates the fast temporal scales) cannot prevent aliasing in the presence of fast oscillations and is, therefore, suboptimal. In 2009, ref. [42] proposed the refined MSE (RMSE); its algorithm removes the fast temporal scales [43] and prevents the influence of the reduced variance on the complexity evaluation. The CMSE [44,45] aims at reducing the variance of the estimated entropy values at large scales. At a scale factor s, the sample entropy values of all s coarse-grained time series are computed, and their mean defines
$$CMSE(x, s, m, r) = \frac{1}{s} \sum_{k=1}^{s} SampEn\!\left( y_k^{(s)}, m, r \right) = -\frac{1}{s} \sum_{k=1}^{s} \ln \frac{n_k^{m+1}}{n_k^{m}},$$
where $n_k^m$ represents the total number of m-dimensional matched vector pairs, computed from the kth coarse-grained time series $y_k^{(s)}$.
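Reusing coarse_grain and sample_entropy from the previous sketch, CMSE at scale s can be obtained by averaging SampEn over the s coarse-grained series with shifted starting points:

```python
import numpy as np

def composite_multiscale_entropy(x, s, m=2, r=0.2):
    """CMSE(x, s, m, r): mean SampEn over the s shifted coarse-grainings."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    # the k-th coarse-grained series starts at offset k (k = 0, ..., s-1)
    return np.mean([sample_entropy(coarse_grain(x[k:], s), m, tol=tol)
                    for k in range(s)])
```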
3. System Description
The AC/DC/AC converter is one of several configurations explored in the literature [46,47] to decouple two frequencies. The AC/DC/AC converter topology can be divided into an AC/DC rectifier stage and a DC/AC inverter stage, as shown in Figure 2. The schematic diagram of the chosen topology comprises a transformer at the input, a rectifier, a DC filter, a three-phase two-level inverter and an output filter before the load. The power supply is connected to the primary of the transformer, which steps the 25 kV primary voltage down to a 600 V, 60 Hz voltage at its secondary. This voltage is then rectified by a six-pulse diode bridge. Series snubber circuits are connected in parallel with each switching device. The rectifier is expected to accomplish two main tasks: to provide a constant link voltage and an almost unitary power factor. The filtered DC voltage (the DC filter removes the harmonic components at the switching frequency) is applied to an inverter generating 50 Hz.
Figure 3 shows the three-phase inverter modeled with IGBTs ($T_{i1}$ and $T_{i2}$, i = 1, 2, 3): each leg has two active switches with anti-parallel diodes, controlled by pulse-width modulation produced by a generator. The inverter outputs are connected to a three-phase second-order filter, and the filter output voltage is supplied to the load. A 60 Hz voltage source thus feeds a 50 Hz, 50 kW load through the AC/DC/AC converter. Table 1 summarizes the main specifications of this AC/DC/AC converter. The three-phase inverter simulation model was built using Matlab/Simulink.
One open-circuit fault may occur in any single switch: $T_{11}$ or $T_{12}$ of the first phase a, $T_{21}$ or $T_{22}$ of the second phase b, or $T_{31}$ or $T_{32}$ of phase c. Two simultaneous open-circuit faults may affect one upper and one lower arm of different phases, two upper arms ($T_{11}$ and $T_{21}$, $T_{11}$ and $T_{31}$, or $T_{21}$ and $T_{31}$), or, symmetrically, two lower arms. With no loss of generality, this work focuses on the open-circuit fault of the first switch $T_{11}$ of the first phase a. Two double open-switch faults are also considered: on the first switch of phase a together with the second switch of phase b ($T_{11}$ and $T_{22}$), and then on the first switch of phase a together with the first switch of phase b ($T_{11}$ and $T_{21}$).
4. Datasets
The growing interest in entropy approaches can be explained by their ability to analyse large sets of signals and to provide information related to their complexity. The three-phase output currents $i_a$, $i_b$ and $i_c$ collected from the inverter are measured and recorded as one-dimensional time series to create a dataset. First, we observe the current of phase a under normal conditions, when no fault occurs in any switch of the inverter. Figure 4a shows this time series, sampled with a sampling time T = 4 µs and composed of N = 30,000 samples. In normal conditions, the currents of phases b and c are similar to that of phase a. Then, an open-circuit fault occurs in phase a on switch $T_{11}$. The output currents of phases a, b and c corresponding to this open-circuit fault are shown in Figure 4a and Figure 5a,b. The DC offset of the output current of phase a can be observed, where two periods of $i_a$ are represented in Figure 4b. The output currents of phases b and c do not change, except for a very small amplitude decrease.
Next, two open-circuit faults occur on $T_{11}$ and $T_{22}$. The corresponding output currents of phases a, b and c are shown in Figure 6a,b and Figure 7a. These open-circuit faults do not cause system shutdown, but they degrade the system performance. Then, two open-circuit faults occur on $T_{11}$ and $T_{21}$. The corresponding output currents of phases a, b and c are shown in Figure 7b and Figure 8a,b, where the DC offset can again be observed.
The mean of the output currents of phases a, b and c, with no fault, one open-circuit fault and two open-circuit faults occurring on the switches, is reported in Table 2.
5. Results and Discussion
Serving as an algebraic quantification parameter, entropy is used here to characterize the complexity of different electrical signals, such as the healthy and faulty waveforms of the open-circuit cases in Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8. The block diagram of the fault detection method is presented in Figure 9. Three sensors are added to the circuit to measure the load currents ($i_a$, $i_b$ and $i_c$ of phases a, b and c), which are used to identify the faulty switch. For convenience of comparison, the entropy of phases a, b and c, when one open-circuit fault occurs on $T_{11}$, is divided by the entropy of phase a under healthy conditions. In the same way, the entropy of phases a, b and c, when two open-circuit faults occur on $T_{11}$ and $T_{22}$, or on $T_{11}$ and $T_{21}$, is divided by the entropy of phase a under healthy conditions.
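The normalisation step of the block diagram can be sketched as below. The function and variable names, and the use of a decision threshold on the normalised entropy, are assumptions of ours; the paper compares the normalised entropies graphically.

```python
def entropy_ratios(i_a, i_b, i_c, i_a_healthy, entropy_fn):
    """Entropy of each measured phase divided by the healthy phase-a entropy."""
    ref = entropy_fn(i_a_healthy)
    return {ph: entropy_fn(i) / ref for ph, i in zip("abc", (i_a, i_b, i_c))}

def flag_faulty_phases(ratios, threshold=1.5):
    """Hypothetical decision rule: flag phases whose normalised entropy
    deviates markedly from 1 (some entropies rise, others fall, on a fault)."""
    return [ph for ph, rho in ratios.items()
            if rho > threshold or rho < 1.0 / threshold]
```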
All the reported entropies are formed as averages over two, three or four implemented variants: for each family (single-scale or multiscale, built on a base entropy such as SampEn, CoSiEn or FuzzyEn), the relevant entropy values were averaged to give a single mean value, and the variants that do not distinguish faults occurring at different locations were excluded from these averages.
5.1. One Open-Circuit Fault on $T_{11}$ of Phase a
In this study, the efficiency of the seven retained entropy measures is investigated with the default parameters: data length N = 30,000 samples, embedding dimension m = 2, time delay $\tau$ = 1 and tolerance r = 0.2 (commonly set by default). The entropy values of the 30,000 samples were calculated and are shown in Figure 10. For five of the measures, the samples of phase a, where the open-circuit fault occurs, have larger entropy values (represented in red) than those of phases b and c (represented in black), and the phases are clearly separated. For the two remaining measures, the entropy of phase a is lower than the entropies of phases b and c; Figure 10 again shows the separation of the three phases: phases b and c have entropies very close to each other and clearly different from that of phase a.
Each entropy is thus able to detect the phase where one open-circuit fault occurs. As can be seen in Figure 10, one of the measures yields a markedly larger difference between the entropy of phase a and the entropies of phases b and c than the others.
5.2. Two Open-Circuit Faults on $T_{11}$ (Phase a) and $T_{22}$ (Phase b)
The embedding dimension m, data length N, time delay $\tau$ and tolerance r remain unchanged. Figure 11 illustrates the performance with two open-circuit faults: on phase a with faulty $T_{11}$ and on phase b with faulty $T_{22}$. As before, the phases with an open-circuit fault are represented in red. This time, only five of the entropy measures are able to detect the phases where the two open-circuit faults occur; the remaining two are not able to detect the faults on phases a and b. As can be seen in Figure 11, the largest difference between the entropy of the faulty phases (a or b) and the entropy of phase c is given by two of the measures.
5.3. Two Open-Circuit Faults on $T_{11}$ (Phase a) and $T_{21}$ (Phase b)
The parameters involved in the entropy analysis were set as previously. As shown in Figure 12, the performance with two open-circuit faults can be observed on phase a with faulty $T_{11}$ and on phase b with faulty $T_{21}$. Four of the measures identify the phases where the two open-circuit faults occur, and the largest difference between the entropy of the faulty phases (a and b) and the entropy of phase c is given by one of them.
The parameters discussed in the following subsections are the data length N, the embedding dimension m, the time lag $\tau$ and the tolerance r. In order to assess the influence of these parameters on the entropy, we only present the case of one open-circuit fault on $T_{11}$; a sketch of such a parameter sweep is given below.
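The sweep can be reproduced as sketched here, with fuzzy_entropy from Section 2 standing in for the measures actually plotted; ia, ib and ic are assumed to hold the recorded phase currents and are hypothetical names.

```python
defaults = dict(m=2, r=0.2, tau=1)
sweeps = {"m": range(2, 9),                     # Section 5.4
          "tau": range(2, 9),                   # Section 5.6
          "r": [0.2, 0.3, 0.4, 0.5, 0.6, 0.7]}  # Section 5.7

results = {}
for name, values in sweeps.items():
    for v in values:
        params = dict(defaults, **{name: v})    # vary one parameter at a time
        results[(name, v)] = [fuzzy_entropy(i, **params) for i in (ia, ib, ic)]

# The data-length study (Section 5.5) truncates the series instead, e.g.:
# results[("N", 6000)] = [fuzzy_entropy(i[:6000], **defaults)
#                         for i in (ia, ib, ic)]
```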
5.4. Varied Embedding Dimension (m)
Usually, for the implementation of entropy-based algorithms, the embedding dimension and the data length are interdependent and mutually coupled. In practice, recorded signals have a finite length, generally limited by processing time and memory space. High embedding dimensions with insufficient data cause unstable entropy estimation; the embedding dimension is therefore commonly set to m = 2 or m = 3 for a signal of 1000 samples. A signal with 30,000 samples allows a larger embedding dimension with a stable entropy estimation. Here, the data length, time delay and tolerance are fixed at N = 30,000 samples, $\tau$ = 1 and r = 0.2.
Figure 13a,b and Figure 14a,b show four of the entropy measures as functions of the embedding dimension m, which is varied from 2 to 8 to study its effect on these approaches. The first two measures gradually decrease when m increases. The third is unchanged as m increases, keeping a constant entropy value. In Figure 14b, the fourth measure increases for m in the range (2, 4), is nearly constant for m in the range (4, 6) and decreases for m in the range (6, 8). With regard to the entropy analysis, and to ensure a large difference between the entropies of phases a and b, it is appropriate to choose m = 2 for the first two measures, m = 4 for the fourth one, and any value of m for the constant one.
5.5. Varied Data Length (N)
The data length of the signal is another limitation, together with the embedding dimension, when implementing the entropy calculation. The algorithms require at least 1000 data points to guarantee a consistent estimation of the usual entropy functions.
Figure 15a,b and Figure 16a,b illustrate the performance of the same four measures as functions of the data length. The parameter values used to calculate the entropy are m = 2, $\tau$ = 1 and r = 0.2. The data lengths are: N = 6000 points, approximately two periods of the signal (Figure 4b); N = 10,000 samples; N = 18,000 samples, representing three periods of the signal; N = 24,000 points; and, finally, N = 30,000 samples, covering six periods of the signal (Figure 4a). The analysis of the data length, varying from 6000 to 30,000 samples, was only performed on these four measures.
Figure 15a,b show the first two measures as functions of the data length; the results of the other two are shown in Figure 16a,b. The first measure gradually increases with the data length, as shown in Figure 15a: at the end of the scale (N = 30,000 samples), it has doubled compared to its initial value at N = 6000 samples. Figure 15b shows that the second measure remains constant when the data length increases, as in the previous subsection. In Figure 16a, the third measure increases slowly in the sample ranges (6000, 10,000) and (18,000, 30,000), and is nearly constant when N is in the range (10,000, 18,000); its results therefore depend on the data length. In Figure 16b, the fourth measure increases slowly in the range (6000, 10,000), decreases in the range (10,000, 18,000) and increases again in the range (18,000, 30,000). With regard to the entropy analysis, and to ensure a large difference between the entropy of phase a and the entropies of phases b and c, it is appropriate to choose a large data length for the length-dependent measures; for the constant one, any data length is appropriate because its values are independent of the length.
5.6. Varied Time Lag ($\tau$)
We now examine some representative entropies as influenced by another parameter, the time lag $\tau$, which varies from 2 to 8 in this example. Having illustrated how the entropies behave under data length and embedding dimension variation, we study the performance of four measures with variation of the time lag $\tau$. The data length, embedding dimension and tolerance were fixed at N = 30,000, m = 2 and r = 0.2 in the following analysis.
Figure 17a shows the impact of different $\tau$ values on the first measure: a monotonic decrease for phase a and a nearly constant value for phase b can be observed when $\tau$ increases. The most important difference between the two curves occurs for small $\tau$. It can be observed in Figure 17b that the second measure first shows a very small increase as a function of $\tau$ and then, like the first one, tends to decrease as $\tau$ increases. Both measures show similar behavior for medium and large values of the time lag $\tau$. Figure 17a,b suggest that $\tau$ = 2 is suitable for the calculation of these two measures.
The third measure, for phases a and b with one open-circuit fault, is shown in Figure 18a: only lower time-lag entropies show a relevant significance. For $\tau$ equal to 1, the phase separation reported in Figure 10 is 6.5, whereas it exceeds 12.5 for $\tau$ = 3; the time-lag analysis thus reveals additional entropy information not observed at time lags 1 or 2. This measure clearly presents one peak at $\tau$ = 3, where the difference between phase a and phase b is maximal. However, as the time lag increases, the difference between the red and black curves becomes smaller; at the end of the $\tau$ interval, the two curves merge and the open-circuit fault on phase a can no longer be detected. Only a lower time lag ($\tau$ = 3) therefore has relevant significance, as in Figure 18a.
For the fourth measure, the difference between phases a and b is nearly constant in the middle of the $\tau$ interval, suggesting homogeneity between them, as plotted in Figure 18a; significant differences between phases a and b were mostly observed at the extremities of the interval.
5.7. Varied Tolerance (r)
The tolerance factor r varies among the experiments and multiplies the standard deviation of the current of phase a, b or c. In this example, the tolerance varies from 0.2 to 0.7. The data length, time lag and embedding dimension were fixed at N = 30,000, $\tau$ = 2 and m = 2.
Figure 19a,b present the impact of several r values on two of the measures. An increase in r results in a monotonic increase of the first measure for phase a, where the fault occurs (Figure 19a), while its value for the healthy phase b is nearly constant, with a very small decrease; the largest difference between the two curves is obtained for large r. The difference between the second measure of phases a and b is nearly constant, as plotted in Figure 19b. These figures suggest that r = 0.7 is suitable for the calculation of both measures.
5.8. New Parameter Settings
The parameters of the entropy analysis are now set to N = 30,000, m = 2, $\tau$ = 2 and r = 0.7 for the first retained measure, and to N = 30,000, m = 2, $\tau$ = 3 and r = 0.2 for the second. Figure 20 shows both measures for three cases: (1) one open-circuit fault on $T_{11}$; (2) two open-circuit faults on $T_{11}$ and $T_{22}$; and (3) two open-circuit faults on $T_{11}$ and $T_{21}$.
First measure: for case (1), Figure 20 presents a larger difference between the entropies of phases a and b than with the initial set of parameters (Figure 10). For cases (2) and (3), the new set of parameters (Figure 20) does not lead to an improvement (cf. Figure 11 and Figure 12).
Second measure: for case (1), the distance between the entropy of the faulty phase a and that of the non-faulty phase b is approximately 12 with the new parameter settings (Figure 20), while it is near 4 with the initial set of parameters (Figure 10). Furthermore, for case (2), this distance is nearly 23 (Figure 20) versus 8 (Figure 11). Finally, for case (3), the distance between the entropies of the faulty phases a and b and that of the healthy phase is 26 (Figure 20) versus 8 with the initial set of parameters (Figure 12). The new parameter settings therefore enable better detection of one or several open-circuit faults with this measure.
6. Conclusions
Most entropy algorithms are limited to single-scale analysis, ignoring the information carried by the other scales. Unfortunately, single-scale entropy cannot fully describe the complexity of the signal and does not always distinguish faults in different phases. Multiscale analysis captures the microstructural complexity and amplitude information of the signal, making it more suitable for various time-series analyses. This is why several usual entropies and multiscale entropies have been proposed here to quantify the complexity of the converter output currents. For suitable values of the tolerance r and time lag $\tau$, the two retained mean entropies are capable of exhibiting the complex features of the system. This paper shows their great ability to distinguish between a healthy state and an open-circuit faulty state. Moreover, the simulation results show that these entropies are able to detect and locate the arms of the bridge with one or even two open-circuit faults. This paper only studies single and double open-circuit faults in a three-phase inverter and checks the effectiveness of the proposed method.
In the future, we will increase the number of levels of the multilevel converter and introduce more fault types for comprehensive fault diagnosis. It will also be possible to extend the diagnosis method of the inverter where the open-circuit fault occurs, and to add a fault-tolerant strategy to ensure that the inverter can continue to operate normally.