Article

Open-Circuit Fault Detection and Location in AC-DC-AC Converters Based on Entropy Analysis

Ecole Supérieure des Techniques Aéronautiques et de Construction Automobile, ESTACA’Lab Paris-Saclay, 12 Avenue Paul Delouvrier–RD10, 78180 Montigny-le-Bretonneux, France
*
Author to whom correspondence should be addressed.
Energies 2023, 16(4), 1959; https://doi.org/10.3390/en16041959
Submission received: 24 November 2022 / Revised: 27 January 2023 / Accepted: 14 February 2023 / Published: 16 February 2023

Abstract

Inverters and converters contain an increasing number of power electronics switches, which can affect their reliability. Fault detection and location are therefore essential to improve reliability and to ensure continuous operation. In this paper, an AC-DC-AC converter with a three-phase inverter is investigated under permanent, single and multiple open-circuit fault scenarios. Several entropies and multiscale entropies are proposed to evaluate the complexity of the output currents by quantifying their entropy over a range of temporal scales. Among this multitude of entropies, only some are able to differentiate healthy from open-circuit faulty conditions. Moreover, the simulation results show that these entropies are able to detect and locate the arms of the bridge with open-circuit faults.

1. Introduction

Recently, with the rapid development of power electronics technologies, numerous studies have been conducted on multilevel inverters [1,2] used in various engineering fields [3,4], such as high-power transmission in grids [5], motor drives [6,7] and energy storage devices [3]. Multilevel inverters have become popular because of the improvement in output voltage quality, reduction in switching frequency and the extensibility of the structure. However, the main drawbacks are the higher number of components compared to classical inverters and more complex control.
The safety of power electronics switches is one of the most important issues for the normal operation of inverters. Several researchers have studied the behavior of inverters with internal failures, focusing particularly on the open-circuit fault of a switch. Short-circuits and open-circuits are the most common faults in power switches. Short-circuits must be detected quickly to avoid destructive overcurrents, e.g., using a dedicated gate-drive protection hardware circuit [8]. An open-circuit, by contrast, does not necessarily cause system shutdown and can remain undetected for an extended period of time. If an internal fault occurs in the inverter, the voltage and current can become seriously distorted, which may lead to secondary faults in the remaining drive components and to high repair costs.
Open-circuit fault diagnosis methods can be classified as voltage-based or current-based. The former extracts diagnostic features by additional voltage sensors or electrical circuits. The diagnosis strategy presented in [9] is based on pole voltage estimation, requiring knowledge of the line parameters. Through voltage analysis, on the inverter side, it is also possible to detect fault occurrence [8,10]. Under open-circuit faults, these voltages show some characteristic irregularities that can be detected, which enable direct localization of the faulty device. Recently, many current-based, fault diagnosis methods have attracted much attention [5,11,12]. The authors of [13] present an effective open-circuit fault diagnosis approach based on the brushless motor characteristics, observing phase currents and other parameters. The fault diagnosis strategy proposed in [11] is based on an analysis of current waveforms—it identifies the faulty half-leg, but cannot identify the faulty component. In order to improve the diagnosis speed, in [5,12] a current-based diagnosis algorithm is proposed that identifies the reference current errors and their average absolute values. Open-circuit fault detection is of critical importance for the stable operation of inverters. Recently, several fault detection approaches have been developed for power inverters and are discussed below.
Fault detection and location methods are either hardware-based or software-based: hardware-based methods require additional sensors, which makes fault detection complex and costly, whereas software-based methods do not require extra hardware.
There are many fault detection approaches for multi-level converters, including comparison between measured and reference values, the sliding mode observer [14], Lyapunov theory [15], the Kalman filter algorithm [16,17], Fourier transform [18], Park transformation [19], wavelet transform [20], machine learning [21], neural network approaches [22], and the Filippov method [23].
In [5], a diagnosis method for a two-level voltage source inverter based on the average absolute value of current is proposed. The detection of several kinds of faults can be completed using the ratio of the theoretical and the practical voltage values on the capacitor of each inverter’s sub-module [24]. The authors of [14] proposed a sliding mode observer to diagnose any open-circuit fault of the switch and to avoid interference caused by sampling error and system fluctuation. The authors of [16] proposed a Kalman-filter-based approach: the comparison of measured and estimated voltages and currents obtained by the Kalman filter enables detection of the open-circuit fault of a modular multilevel converter sub-module. The amplitude and argument of each phase current harmonic can be used to detect and localize the faults.
An analysis of the first harmonics in [18] showed that the difference between the healthy state and the open-circuit fault case lay in the zero-order harmonic, i.e., the presence of a DC component in the signal. The argument of the zero-order harmonic with respect to the fundamental enables determination of the type of fault.
The authors of [19,25] proposed a Park technique, followed by the polarity of the trajectory slope in the complex α-β frame, to identify the faulty switch. A neural network [22], a mixed kernel support tensor machine [21], an adaptive one-convolutional neural network, or an adaptive linear neural-recursive least squares algorithm have been utilized to detect and identify faulty switches in multilevel inverters. These strategies can be applied directly to the voltage and current data; however, a large amount of reliable data is required to train the neural network in advance. Fault diagnosis methods are mainly machine-learning based. In [26], an "optimized support vector machine method" is proposed: the fault characteristics are deduced from the average value of the three-phase currents. In [27], wavelet analysis and an improved neural network are used. The authors of [28] use the wavelet energy spectrum entropy to perform a statistical analysis of the energy distribution of the signal in each frequency band. Finally, ref. [29] employed "weighted-amplitude permutation" entropy, with better feature extraction than standard permutation entropy.
Our goal herein is to study the output currents of an AC-DC-AC converter using the usual and multiscale entropies, and to evaluate their ability to differentiate a healthy state from an open-circuit faulty state. We identify the entropies that can detect and locate the arm of the bridge with an open-circuit fault. Section 2 presents the usual and multiscale entropies and describes the commonly used entropy algorithms, and Section 3 describes the AC-DC-AC converter under study. Section 4 presents the dataset used for the three-phase output currents of the inverter under a healthy state and with one and two open-circuit faults. Then, in Section 5, the results are detailed and discussed, including an evaluation of the entropy with variations in data length, embedding dimension, time lag and tolerance. We end with conclusions in Section 6.

2. Entropy Methods

Entropy serves as a quantitative measure of the complexity of a time series: it captures the degree of irregularity of the series by evaluating the probability of finding similar patterns of length m. A number of methods to quantify the entropy of a data series have been proposed in recent decades, aiming for higher discriminating power and lower sensitivity to noise and parameter choices. Since the introduction of approximate entropy (ApEntropy) [30], other methods have been proposed, such as SampEntropy (sample entropy) [30], K2Entropy (Kolmogorov entropy) [31], CondEntropy (conditional entropy) [32], CoSiEntropy (cosine similarity entropy) [33], FuzzEntropy (fuzzy entropy) [33,34,35], EnofEntropy (entropy of entropy) [36], MSEntropy (multiscale entropy) [37,38,39,40,41], rMSEntropy (refined multiscale entropy) [42,43] and cMSEntropy (composite multiscale entropy) [44,45].
  • ApEntropy and SampEntropy: approximate entropy was proposed by Pincus in 1991. The objective of ApEntropy is to determine how often different patterns of data are found in the dataset. According to [30], ApEntropy is biased: its results suggest more regularity than is actually present, and this bias is more pronounced for short series. Eliminating the bias by preventing each vector from being counted with itself would make ApEntropy unstable in many situations, leaving it undefined whenever a vector finds no match. To avoid these two problems, Richman [30] defined SampEn, a statistic without self-counting. For a time series $\{x_i\}_{i=1}^{N}$ with a given embedding dimension m, tolerance r and time lag τ, Algorithm 1 for computing ApEntropy is as follows (an illustrative implementation is sketched after the listing):
Algorithm 1: Approximate Entropy
(a) Construct the embedding vectors $x_i^m = [x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau}]$.
(b) Compute the Chebyshev distance
$ChebDist_{i,j}^m = \max_{k=1,\ldots,m} \{ |x_i^m[k] - x_j^m[k]| \}$.
(c) Count the number of similar patterns $P_i^m(r)$, i.e., the pairs with $ChebDist_{i,j}^m \le r$.
(d) Compute the local probability of occurrences of similar patterns
$B_i^m(r) = \frac{1}{N-m+1} P_i^m(r)$.
(e) Compute the global probability of occurrences of similar patterns
$B^m(r) = \frac{1}{N-m+1} \sum_{i=1}^{N-m+1} B_i^m(r)$.
(f) Compute $B^{m+1}(r) = \frac{1}{N-m} \sum_{i=1}^{N-m} B_i^{m+1}(r)$ for the embedding dimension $m+1$.
(g) Obtain the approximate entropy $ApEntropy(m, \tau, r, N) = \ln \frac{B^m(r)}{B^{m+1}(r)}$.
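A minimal Python/NumPy sketch of Algorithm 1, together with the self-match-free SampEntropy variant, is given below. The function names, the scaling of the tolerance r by the standard deviation of the series and the vectorised pairwise-distance computation are our own implementation choices rather than details taken from the paper, and for series as long as 30,000 samples the full distance matrix would have to be computed in chunks.

```python
# Illustrative sketch of Algorithm 1 (ApEntropy) and of SampEntropy (no self-counting).
import numpy as np

def embed(x, m, tau=1):
    """Stack the embedding vectors x_i^m = [x_i, x_{i+tau}, ..., x_{i+(m-1)tau}]."""
    n = len(x) - (m - 1) * tau
    return np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])

def chebyshev(vecs):
    """Pairwise Chebyshev distances between all embedding vectors."""
    return np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)

def ap_entropy(x, m=2, tau=1, r=0.2):
    """ApEntropy as in the listing: ln(B^m / B^{m+1}), self-matches included.
    (Pincus's original formulation averages the logarithms of the B_i instead.)"""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)                      # assumed: r is relative to the series' std

    def global_prob(mm):
        d = chebyshev(embed(x, mm, tau))
        return np.mean(np.mean(d <= tol, axis=1))   # mean of the local probabilities B_i^mm

    return np.log(global_prob(m) / global_prob(m + 1))

def samp_entropy(x, m=2, tau=1, r=0.2):
    """SampEntropy: same construction, but self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def matches(mm):
        d = chebyshev(embed(x, mm, tau))
        np.fill_diagonal(d, np.inf)          # no self-counting
        return np.sum(d <= tol)

    return -np.log(matches(m + 1) / matches(m))
```

For example, samp_entropy(i_a, m=2, tau=1, r=0.2) would give the single-scale sample entropy of a recorded phase-a current, assuming i_a is a NumPy array of samples.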
  • K2Entropy: certain information in a time series can only be extracted by analyzing its dynamical characteristics. The conventional entropy gives information about the number of states available to the dynamical system in phase space and therefore says nothing about how the system evolves. In contrast, K2Entropy [31] takes into account how often these states are visited by a trajectory, i.e., it provides information about the dynamical evolution of the system.
    The initial time series $\{x_i\}_{i=1}^{N}$ can be divided into a finite partition $\alpha = \{C_1, C_2, \ldots, C_k\}$, with $C_k = [x(i\tau), x((i+1)\tau), \ldots, x((i+k-1)\tau)]$, where $\tau$ is the time delay parameter. The Shannon entropy of such a partition is given as
    $K(\tau, k) = -\sum_{C \in \alpha} p(C) \cdot \log p(C)$.
    The Kolmogorov entropy [31] is then defined by
    $K2Entropy = \sup_{\alpha\ \text{finite partition}} \lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} \left( K_{n+1}(\tau, k) - K_n(\tau, k) \right)$.
    The difference $K_{n+1}(\tau, k) - K_n(\tau, k)$ is the average information needed to predict which partition element will be visited next.
  • CondEntropy: Porta [32] introduced the conditional entropy (CondEntropy) to quantify entropy variations over short data sequences. A time series $\{X_i\}_{i=1}^{N}$ is first reduced to a process with zero mean and unit variance by the normalisation
    $x(i) = \frac{X(i) - av[X]}{std[X]}$,
    where $av[X]$ and $std[X]$ are the mean and standard deviation of the series. From the normalised series, a reconstructed L-dimensional phase space [32] is obtained by considering the $N-L+1$ vectors $x_L(i) = [x(i), x(i-1), \ldots, x(i-L+1)]$, each a pattern of L consecutive samples. The CondEntropy is obtained as the variation of the Shannon entropy of $x_L(i)$:
    $CondEntropy(L) = -\sum_{L} p_L \cdot \log p_L + \sum_{L-1} p_{L-1} \cdot \log p_{L-1}$.
    Small Shannon entropy values are obtained when a pattern of length L appears many times; small CondEntropy values are obtained when a pattern of length L can be predicted from a pattern of length L-1. CondEntropy thus quantifies the information needed to specify a new state when the phase-space dimension is incremented by one (an illustrative estimator is sketched below).
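As an illustration, the following sketch estimates CondEntropy under the additional assumption, made explicit here, that the pattern probabilities p_L are obtained by quantising the normalised series into a small number of amplitude bins (the discretisation step of Porta's estimator; the bin count is our choice, not a value from the paper).

```python
# Hedged sketch of a CondEntropy estimator: Shannon entropy of L-sample patterns
# minus the Shannon entropy of (L-1)-sample patterns.
import numpy as np
from collections import Counter

def cond_entropy(x, L=3, n_bins=6):
    x = np.asarray(x, dtype=float)
    x = (x - x.mean()) / x.std()                     # zero mean, unit variance
    edges = np.linspace(x.min(), x.max(), n_bins + 1)[1:-1]
    q = np.digitize(x, edges)                        # quantised symbols 0..n_bins-1

    def shannon(length):
        """Shannon entropy of the distribution of `length`-sample patterns."""
        patterns = Counter(tuple(q[i:i + length]) for i in range(len(q) - length + 1))
        p = np.array(list(patterns.values()), dtype=float)
        p /= p.sum()
        return -np.sum(p * np.log(p))

    return shannon(L) - shannon(L - 1)
```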
  • CoSiEntropy: cosine similarity entropy is amplitude-independent and robust to spikes and short time series, two key problems with SampEntropy. The CoSiEntropy algorithm [33] replicates the computational steps of SampEntropy with the following modifications: the angle between two embedding vectors is evaluated instead of the Chebyshev distance, and the estimated entropy is based on the global probability of occurrences of similar patterns $B^m(r)$, obtained from the local probabilities $B_i^m(r)$. For a time series $\{x_i\}_{i=1}^{N}$ with a given embedding dimension m, tolerance r and time lag τ, Algorithm 2 for computing CoSiEntropy is as follows (an illustrative implementation follows the listing):
Algorithm 2: Cosine Similarity Entropy
(a) Construct the embedding vectors $x_i^m = [x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau}]$.
(b) Compute the angular distance for all pairwise embedding vectors as
$AngDist_{i,j}^m = \frac{1}{\pi} \cos^{-1}\!\left( \frac{x_i^m \cdot x_j^m}{|x_i^m|\,|x_j^m|} \right), \quad i \ne j$.
(c) The number of similar patterns $P_i^m(r)$ is obtained from the pairs with $AngDist_{i,j}^m \le r$.
(d) The local probability of occurrences of similar patterns is computed as
$B_i^m(r) = \frac{1}{N-m-1} P_i^m(r)$.
(e) The global probability of occurrences of similar patterns is computed from
$B^m(r) = \frac{1}{N-m} \sum_{i=1}^{N-m} B_i^m(r)$.
(f) The cosine similarity entropy is estimated as
$CoSiEntropy(m, \tau, r, N) = -B^m(r) \cdot \log_2 B^m(r) - (1 - B^m(r)) \cdot \log_2 (1 - B^m(r))$.
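A compact sketch of Algorithm 2 follows. The normalisation of the local probabilities by the number of other vectors, the clipping of the cosine to [-1, 1] and the default angular tolerance are implementation choices of ours, not values prescribed by the paper.

```python
# Illustrative sketch of Algorithm 2 (CoSiEntropy).
import numpy as np

def cosi_entropy(x, m=2, tau=1, r=0.1):
    x = np.asarray(x, dtype=float)
    vecs = np.array([x[i:i + (m - 1) * tau + 1:tau]
                     for i in range(len(x) - (m - 1) * tau)])
    norms = np.maximum(np.linalg.norm(vecs, axis=1), 1e-12)   # guard against zero vectors
    cos = np.clip((vecs @ vecs.T) / np.outer(norms, norms), -1.0, 1.0)
    ang = np.arccos(cos) / np.pi                # angular distance in [0, 1]
    np.fill_diagonal(ang, np.inf)               # i != j: ignore self-pairs
    b_i = np.sum(ang <= r, axis=1) / (len(vecs) - 1)   # local probabilities B_i^m(r)
    b = np.mean(b_i)                            # global probability B^m(r)
    if b <= 0.0 or b >= 1.0:                    # binary entropy is 0 at the degenerate ends
        return 0.0
    return -b * np.log2(b) - (1 - b) * np.log2(1 - b)
```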
  • FuzzEntropy: fuzzy entropy introduces the concept of uncertainty reasoning to overcome a drawback of sample entropy (SampEntropy), whose hard threshold as a discriminant criterion may produce unstable results. Several fuzzy membership functions, including triangular, trapezoidal, Z-shaped, bell-shaped, Gaussian, constant-Gaussian and exponential functions, have been employed in FuzzEntropy. In [34], it was found that FuzzEntropy has a stronger relative consistency and less dependence on data length.
    FuzzEntropy introduces two modifications to the SampEntropy algorithm: (1) the embedding vectors of SampEntropy are centred by subtracting their own means so that they become zero-mean; (2) FuzzEntropy computes the fuzzy similarity Sim(r, η) from a fuzzy membership function [35], where η is the order of the Gaussian function.
    For a time series $\{x_i\}_{i=1}^{N}$, the steps of the FuzzEntropy approach are summarized in Algorithm 3 [33] (an illustrative implementation follows the listing):
Algorithm 3: Fuzzy Entropy
(a) Construct the zero-mean embedding vectors $q_i^m = x_i^m - \mu_i^m$, where $x_i^m = [x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau}]$ and $\mu_i^m = \frac{1}{m} \sum_{k=1}^{m} x_i^m[k]$.
(b) Compute the Chebyshev distance
$ChebDist_{i,j}^m = \max_{k=1,\ldots,m} \{ |q_i^m[k] - q_j^m[k]| \}, \quad i \ne j$.
(c) Using the Gaussian function, construct the fuzzy similarity
$Sim(r, \eta) = e^{-(ChebDist_{i,j}^m)^{\eta}/r}$.
(d) Compute the local probability of occurrences of similar patterns
$B_i^m(r) = \frac{1}{N-m-1} \sum_{j \ne i} Sim(r, \eta)$.
(e) Compute the global probability of occurrences of similar patterns
$B^m(r) = \frac{1}{N-m+1} \sum_{i=1}^{N-m+1} B_i^m(r)$.
(f) Obtain $B^{m+1}(r) = \frac{1}{N-m} \sum_{i=1}^{N-m} B_i^{m+1}(r)$ for the embedding dimension $m+1$.
(g) Obtain the fuzzy entropy $FuzzEntropy(m, \tau, r, N) = \ln \frac{B^m(r)}{B^{m+1}(r)}$.
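A sketch of Algorithm 3 with a Gaussian membership function is given below; as before, scaling the tolerance r by the standard deviation of the series and the default η = 2 are our assumptions, not values fixed by the paper.

```python
# Illustrative sketch of Algorithm 3 (FuzzEntropy) with a Gaussian membership function.
import numpy as np

def fuzz_entropy(x, m=2, tau=1, r=0.2, eta=2):
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)                          # assumed: r is relative to the series' std

    def global_prob(mm):
        vecs = np.array([x[i:i + (mm - 1) * tau + 1:tau]
                         for i in range(len(x) - (mm - 1) * tau)])
        vecs = vecs - vecs.mean(axis=1, keepdims=True)        # q_i^m: remove each vector's own mean
        d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
        sim = np.exp(-(d ** eta) / tol)          # Gaussian fuzzy similarity Sim(r, eta)
        np.fill_diagonal(sim, 0.0)               # exclude self-similarity
        b_i = np.sum(sim, axis=1) / (len(vecs) - 1)           # local probabilities B_i^mm(r)
        return np.mean(b_i)                      # global probability B^mm(r)

    return np.log(global_prob(m) / global_prob(m + 1))
```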
  • EnofEntropy [36] consists of two steps. First, the Shannon entropy is used to characterize the state of the system within each time window, representing the information contained in that time period; the one-dimensional discrete time series $\{x_1, x_2, \ldots, x_N\}$ of length N is divided into consecutive non-overlapping windows. Second, the Shannon entropy is used again, in place of the sample entropy, to characterize the degree of change of these states. As the Shannon entropy is computed twice, the algorithm is called entropy of entropy.
  • MSEntropy, rMSEntropy, cMSEntropy: time series are also frequently analysed at different temporal scales. The multiple time scales are constructed from the original time series by averaging the data points within non-overlapping windows of increasing length. MSEntropy [37,38,39,40,41], which represents the system dynamics on different scales, relies on the computation of the sample entropy over a range of scales. The MSEntropy algorithm is composed of two steps:
    (i) A coarse-graining procedure. To represent the time-series dynamics on different time scales, a coarse-graining procedure is used to derive a set of time series. For a discrete signal $\{x_1, x_2, \ldots, x_N\}$ of length N, the coarse-grained time series $\{y^{(s)}\}$ is computed as
    $y_j^{(s)} = \frac{1}{s} \sum_{i=(j-1)s+1}^{js} x_i, \quad 1 \le j \le \lfloor N/s \rfloor$.
    The length of the coarse-grained time series $\{y^{(s)}\}$ for a scale factor s is $N/s$. Figure 1 presents the coarse-grained time series $\{y^{(s)}\}$ for a scale of 4.
    (ii) Sample entropy computation. Sample entropy is the negative natural logarithm of the conditional probability that two sequences matching for m consecutive data points will still match when one more point is added to each sequence [30]. Sample entropy is determined as
    $SampEntropy(m, r, N) = -\ln \frac{A^m(r)}{B^m(r)}$,
    where $A^m(r)$ is the probability that two sequences match for $m+1$ points and $B^m(r)$ is the probability that two sequences match for m points (self-matches are excluded); both are computed as described in [40]. From Equation (6), MSEntropy can be written as
    $MSEntropy(m, r, s) = -\ln \frac{A_s^m(r)}{B_s^m(r)}$,
    where $A_s^m(r)$ and $B_s^m(r)$ are calculated from the coarse-grained time series at the scale factor s.
The MSEntropy algorithm can be used with the sample entropy or with the other usual entropy functions, such as K2Entropy, CondEntropy, CoSiEntropy, ApEntropy, FuzzEntropy and EnofEntropy. Other algorithms have been developed to improve the MSEntropy algorithm and to overcome some of its limitations when dealing with short time series. Since the coarse-graining procedure reduces the length of the time series by the scale factor s, the sample entropy algorithm may give imprecise or undefined values for short time series. Moreover, the coarse-graining procedure (which eliminates the fast temporal scales) cannot prevent aliasing in the presence of fast oscillations and is, therefore, suboptimal. In 2009, ref. [42] proposed the refined MSEntropy (rMSEntropy); its algorithm removes the fast temporal scales [43] and prevents the reduced variance from biasing the complexity evaluation. The cMSEntropy [44,45] aims at reducing the variance of the estimated entropy values at large scales: at a scale factor s, the sample entropy values of all s coarse-grained time series are computed, and their mean defines cMSEntropy:
$cMSEntropy(m, r, s) = -\frac{1}{s} \sum_{k=1}^{s} \ln \frac{n_{k,s}^{m+1}}{n_{k,s}^{m}}$,
where $n_{k,s}^m$ represents the total number of m-dimensional matched vector pairs, computed from the kth coarse-grained time series. An illustrative implementation of the coarse-graining, MSEntropy and cMSEntropy steps is sketched below.
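To show how the coarse-graining and the multiscale variants fit together, here is a sketch that reuses the samp_entropy function from the earlier sketch as the single-scale estimator; the maximum scale factor and the choice of estimator are assumptions, not values fixed by the paper.

```python
# Illustrative sketch of MSEntropy and cMSEntropy built on the coarse-graining procedure.
import numpy as np

def coarse_grain(x, s):
    """y_j^(s): averages of consecutive non-overlapping windows of length s."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // s) * s
    return x[:n].reshape(-1, s).mean(axis=1)

def ms_entropy(x, s_max, entropy_fn, **kw):
    """MSEntropy: the single-scale entropy evaluated on the coarse-grained series
    for scale factors s = 1..s_max."""
    return np.array([entropy_fn(coarse_grain(x, s), **kw) for s in range(1, s_max + 1)])

def cms_entropy(x, s_max, entropy_fn, **kw):
    """cMSEntropy: at each scale s, average the entropy over the s coarse-graining
    offsets, which reduces the variance of the estimate at large scales."""
    x = np.asarray(x, dtype=float)
    return np.array([np.mean([entropy_fn(coarse_grain(x[k:], s), **kw) for k in range(s)])
                     for s in range(1, s_max + 1)])

# Example (assuming samp_entropy from the sketch after Algorithm 1 is in scope):
# ms_curve  = ms_entropy(i_a, s_max=10, entropy_fn=samp_entropy, m=2, r=0.2)
# cms_curve = cms_entropy(i_a, s_max=10, entropy_fn=samp_entropy, m=2, r=0.2)
```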

3. System Description

The AC-DC-AC converter is one of several configurations explored in the literature [46,47] to decouple two frequencies. The AC-DC-AC topology can be divided into an AC-DC rectifier stage and a DC-AC inverter stage, as shown in Figure 2. The schematic diagram of the chosen topology employs a transformer at the input, a rectifier, a DC filter, a three-phase two-level inverter and an output filter before the load. The power supply is connected to the primary of the transformer, which steps the 25 kV primary voltage down to a 600 V, 60 Hz voltage at the secondary. This voltage is then rectified by a six-pulse diode bridge, and series snubber circuits are connected in parallel with each switch device. The rectifier is expected to accomplish two main tasks: to provide a constant link voltage and to provide an almost unitary power factor. The DC filter removes the harmonic components at the switching frequency, and the filtered DC voltage is applied to an inverter generating a 50 Hz output. Figure 3 shows the three-phase inverter modeled with IGBTs (Top_i and Bot_i, i = 1, 2, 3); each leg has two active switches with anti-parallel diodes, controlled by pulse-width modulation produced by a generator. The inverter outputs are connected to a three-phase second-order filter, and the filter output voltage is supplied to the load. A 60 Hz voltage source thus feeds a 50 Hz, 50 kW load through the AC-DC-AC converter. Table 1 summarizes the main specifications of this AC-DC-AC converter. The three-phase inverter simulation model was built using Matlab/Simulink.
One open-circuit fault may occur in a switch: Top1 or Bot1 of the first phase a, Top2 or Bot2 of the second phase b, or Top3 or Bot3 of phase c. The cases of two open-circuit faults on an upper and a lower arm are Top1 and Bot2, Top1 and Bot3, Top2 and Bot1, Top2 and Bot3, Top3 and Bot1, and Top3 and Bot2. If only the upper arms are affected, the two open-circuit faults can be Top1 and Top2, Top1 and Top3, or Top2 and Top3; the combinations Bot1 and Bot2, Bot1 and Bot3, and Bot2 and Bot3 are the symmetrical faults of the lower arms. Without loss of generality, this work focuses on the open-circuit fault of the first switch Top1 of phase a. Two open-switch faults are also considered: on switch Top1 of the first phase and switch Bot2 of the second phase; then on switch Top1 of the first phase and switch Top2 of the second phase.

4. Datasets

The growing interest in entropy approaches can be explained by their ability to analyse large sets of signals and to provide information related to their complexity. The three-phase output currents i_a, i_b and i_c collected from the inverter are measured and recorded as one-dimensional time series to create a dataset. First, we observe the current of phase a under normal conditions, when no fault occurs in any switch of the inverter. Figure 4a shows this time series, sampled with a sampling time T = 4 μs and composed of N = 30,000 samples. In normal conditions, the currents of phases b and c are similar to that of phase a. Then, an open-circuit fault occurs in phase a on the Top1 switch. The output currents of phases a, b and c corresponding to this open-circuit fault are shown in Figure 4b and Figure 5a,b. The DC offset of the output current of phase a can be observed in Figure 4b, where two periods of i_a are represented. The output currents of phases b and c do not change, except for a very small amplitude decrease.
Next, two open-circuit faults occur on Top1 and Bot2. The corresponding output currents of phases a, b and c are shown in Figure 6a,b and Figure 7a. The open-circuit faults do not cause system shutdown but degrade the system performance. Then, two open-circuit faults occur on Top1 and Top2. The corresponding output currents of phases a, b and c are shown in Figure 7b and Figure 8a,b, where the DC offset can again be observed.
The mean of the output currents of phases a, b and c, with no fault, one open-circuit fault and two open-circuit faults occurring on the switches, is given in Table 2.
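The DC offset visible in the faulty-phase current is exactly what the means in Table 2 capture. The following minimal sketch computes that indicator; the variable names i_a, i_b and i_c for the recorded 30,000-sample current arrays are assumptions for illustration.

```python
# Minimal sketch of the Table 2 indicator: the mean (DC component) of each phase current.
import numpy as np

def phase_means(i_a, i_b, i_c):
    """Return the mean of each recorded phase current; an open-circuit fault shows up
    as a clear DC offset in the affected phase, as tabulated in Table 2."""
    return {phase: float(np.mean(current))
            for phase, current in zip("abc", (i_a, i_b, i_c))}
```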

5. Results and Discussion

Serving as a quantitative measure of complexity, entropy is used here to characterize different electrical signals, namely the healthy and faulty waveforms in the open-circuit case, as in Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8. The block diagram of the fault detection method is presented in Figure 9. Three sensors are added to the circuit to measure the load currents (i_a, i_b and i_c of phases a, b and c), which are used to identify the faulty switch. For ease of comparison, the entropy of phases a, b and c when one open-circuit fault occurs on Top1 is divided by the entropy of phase a under healthy conditions. In the same way, the entropy of phases a, b and c when two open-circuit faults occur on Top1 and Top2 (or Bot2) is divided by the entropy of phase a under healthy conditions.
All the entropy indicators are formed as averages over the two, three or four multiscale implementations retained. The relevant values of MSEntropy and cMSEntropy obtained with SampEntropy were averaged to give a mean value named SampEn; SampEntropy itself and rMSEntropy obtained with SampEntropy do not distinguish the faults occurring at different locations. Similarly, ApEn denotes the mean of MSEntropy and cMSEntropy computed with ApEntropy. K2En is the mean of K2Entropy, MSEntropy, cMSEntropy and rMSEntropy computed with K2Entropy. The mean of CoSiEntropy, MSEntropy, cMSEntropy and rMSEntropy (three multiscale entropies using CoSiEntropy) gives CoSiEn. EnofEn was computed as the mean of the usual entropy function (EnofEntropy) and of MSEntropy, cMSEntropy and rMSEntropy, where the multiscale entropy was deduced with EnofEntropy. In the same way, CondEn is the mean of the pertinent values of MSEntropy, cMSEntropy and rMSEntropy using CondEntropy. Finally, FuzzEn is the mean of MSEntropy, cMSEntropy and rMSEntropy using FuzzEntropy.
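As a rough illustration of how such aggregated indicators can be assembled, the sketch below averages the multiscale curves over scales and variants and normalises by the same quantity computed on the healthy phase-a current. It reuses the ms_entropy and cms_entropy sketches from Section 2; the scale range and the exact set of variants averaged for each entropy are assumptions, since the paper does not list them explicitly.

```python
# Hedged sketch of a normalised, averaged entropy indicator for one phase current.
import numpy as np

def indicator(current, healthy_a, entropy_fn, s_max=10, **kw):
    """Average the MSEntropy and cMSEntropy curves (from the earlier sketches) over scales,
    then divide by the same average computed on the healthy phase-a current."""
    def averaged(sig):
        ms = ms_entropy(sig, s_max, entropy_fn, **kw)
        cms = cms_entropy(sig, s_max, entropy_fn, **kw)
        return np.mean(np.concatenate([ms, cms]))
    return averaged(current) / averaged(healthy_a)
```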

5.1. One Open-Circuit Fault on Top1 of Phase a

In this study, the efficiency of SampEn, K2En, CondEn, CoSiEn, ApEn, FuzzEn and EnofEn is investigated with the following parameters: data length N = 30,000 samples, embedding dimension m = 2, time delay τ = 1 and tolerance r = 0.2 (commonly set by default). The entropy values of the 30,000 samples were calculated and are shown in Figure 10. Phase a, where the open-circuit fault occurs, has larger entropy values with SampEn, CondEn, ApEn, FuzzEn and EnofEn (represented in red) than phases b and c (represented in black), and the two groups are clearly separated. Although the entropy of phase a is lower than that of phases b and c for K2En and CoSiEn, Figure 10 still shows the separation of the three phases: phases b and c have entropies very close to each other and distinct from that of phase a.
Each entropy is thus able to detect the phase where one open-circuit fault occurs. As can be seen in Figure 10, the largest difference between the entropy of phase a and the entropy of phases b and c is given by ApEn.

5.2. Two Open-Circuit Faults on Top1 (Phase a) and Bot2 (Phase b)

The embedding dimension m, data length N, time delay τ and tolerance r remain unchanged. Figure 11 illustrates the performance with two open-circuit faults: on phase a with faulty Top1 and on phase b with faulty Bot2. In the figure, the phases with an open-circuit fault are represented in red. This time, only the SampEn, CondEn, ApEn, FuzzEn and EnofEn entropies are able to detect the phases where the two open-circuit faults occur; K2En and CoSiEn are not able to detect the two open-circuit faults on phases a and b. As can be seen in Figure 11, the largest difference between the entropy of the faulty phases (a or b) and the entropy of phase c is given by ApEn and FuzzEn.

5.3. Two Open-Circuit Faults on Top1 (Phase a) and Top2 (Phase b)

The parameters involved in the entropy analysis were set as previously. As shown in Figure 12, the performance with two open-circuit faults can be observed on phase a with faulty Top1 and on phase b with faulty Top2. The phases where the two open-circuit faults occur are identified by SampEn, CondEn, FuzzEn and EnofEn. The largest difference between the entropy of the faulty phases (a and b) and the entropy of phase c is given by FuzzEn.
The parameters discussed in the following subsections are the data length N, embedding dimension m, time lag τ and tolerance r. In order to assess the influence of these parameter variations on the entropy, we only present the case of one open-circuit fault on Top1.

5.4. Varied Embedding Dimension (m)

Usually, for the implementation of entropy-based algorithms, the embedding dimension and the data length are interdependent and mutually coupled. In practice, recorded signals have a finite length and are generally limited by processing time and memory space. Higher embedding dimensions combined with too few data points lead to unstable entropy estimates. Therefore, the embedding dimension is commonly set to m = 2 or m = 3 for a signal with 1000 samples; a signal with 30,000 samples allows a larger embedding dimension while keeping the entropy estimation stable. Here, the data length, time delay and tolerance are fixed as N = 30,000 samples, τ = 1 and r = 0.2.
Figure 13a,b show SampEn and CondEn as functions of the embedding dimension m; the results for CoSiEn and ApEn are shown in Figure 14a,b. In order to study the effect of m on these approaches (SampEn, CondEn, CoSiEn and ApEn), we change m from 2 to 8. SampEn and CondEn decrease gradually as the embedding dimension m increases, while CoSiEn is unchanged, keeping a constant entropy value. In Figure 14b, the relationship between ApEn and the embedding dimension shows that ApEn increases for m in the range (2, 4), is nearly constant for m in the range (4, 6) and decreases for (6, 8). With regard to the entropy analysis, and to ensure a large difference between the entropies of phases a and b, it is appropriate to choose m = 2 for SampEn and CondEn, m = 4 for ApEn and any value of m for CoSiEn.
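The parameter sweep of this subsection can be reproduced along the following lines, assuming the samp_entropy and indicator sketches above are in scope and that i_a_fault, i_b_fault and i_a_healthy denote recorded faulty and healthy phase currents (hypothetical names used only for illustration).

```python
# Sketch of the embedding-dimension sweep: track the gap between the indicator of the
# faulty phase a and that of the healthy phase b as m goes from 2 to 8.
for m in range(2, 9):
    gap = (indicator(i_a_fault, i_a_healthy, samp_entropy, m=m)
           - indicator(i_b_fault, i_a_healthy, samp_entropy, m=m))
    print(f"m = {m}: faulty-vs-healthy entropy gap = {gap:.3f}")
```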

5.5. Varied Data Length (N)

The data length of the signal is another limitation, together with the embedding dimension, when implementing the entropy calculation. The algorithms require at least 1000 data points to guarantee a consistent estimation, both for the usual entropy functions and for MSEntropy.
Figure 15 and Figure 16 illustrate the performance of SampEn, CoSiEn, ApEn and FuzzEn. The parameter values used to calculate the entropy are m = 2, τ = 1 and r = 0.2. The data lengths are: L1 = 6000 points, approximately two periods of the signal (Figure 4b); L2 = 10,000 samples; L3 = 18,000 samples, representing three periods of the signal; L4 = 24,000 points; and, finally, L5 = 30,000 samples, covering six periods of the signal (Figure 4a).
The analysis of the data length, changing from 6000 to 30,000 samples, was only performed on SampEn, CoSiEn, ApEn and FuzzEn. Figure 15a,b show SampEn and CoSiEn as functions of the data length; the results for ApEn and FuzzEn are shown in Figure 16a,b. SampEn gradually increases with the data length, as shown in Figure 15a: at the end of the scale (L5 = 30,000 samples), SampEn has doubled compared to its initial value at L1 = 6000 samples. Figure 15b shows that CoSiEn remains constant when the data length increases, as in the previous subsection. In Figure 16a, ApEn increases slowly for L1 to L2 in the sample range (6000, 10,000) and for L3 to L5 in the range (18,000, 30,000), and is nearly constant when N is in the range (10,000, 18,000). The ApEn and SampEn results therefore depend on the data length. In Figure 16b, FuzzEn increases slowly for L1 to L2 in the range (6000, 10,000), decreases for L2 to L3 in the range (10,000, 18,000), and increases again for L3 to L5 in the range (18,000, 30,000). With regard to the entropy analysis, and to ensure a large difference between the entropy of phase a and the entropy of phases b and c, it is appropriate to choose a large data length for SampEn, ApEn and FuzzEn. For CoSiEn, it is appropriate to choose L1, because the entropy values are independent of the length.

5.6. Varied Time Lag (τ)

We now examine how some representative entropies are influenced by the time lag τ, which varies here from 2 to 8. Having illustrated how the entropy behaves with variations in data length and embedding dimension, we examine the performance of SampEn, ApEn, FuzzEn and EnofEn when the time lag τ varies. The data length, embedding dimension and tolerance were fixed at N = 30,000, m = 2 and r = 0.2 in the following analysis.
Figure 17a shows the impact of different τ values on SampEn: a monotonic decrease of SampEn for phase a and a nearly constant value for phase b can be observed as τ increases. The largest difference between the two curves occurs for small τ. It can be observed in Figure 17b that ApEn first shows a very small increase as a function of τ; then, as for SampEn, its values tend to decrease as τ increases. Both ApEn and SampEn show similar behavior for medium and large values of the time lag. Figure 17a,b suggest that τ = 2 is suitable for the calculation of the ApEn and SampEn values. The FuzzEn entropy of phases a and b with one open-circuit fault is shown in Figure 18a: only the lower time lags yield a relevant separation. For τ = 1, FuzzEn in Figure 10 is 6.5, while it exceeds 12.5 for τ = 2. Furthermore, the time-lag analysis reveals additional entropy information not observed at time lags 1 or 2: FuzzEn clearly presents a peak at τ = 3, where the difference between FuzzEn of phase a and phase b is maximal. However, as the time lag increases further, the difference between the red and black curves becomes smaller; at the end of the τ interval the two curves merge and the open-circuit fault on phase a can no longer be detected. Only a low time lag (τ = 3) therefore gives a relevant separation, as in Figure 18a.
The difference between EnofEn of phases a and b is nearly constant in the middle of the τ interval, suggesting homogeneity between them, as plotted in Figure 18b; significant differences between EnofEn of phases a and b are mostly observed at the extremities of the τ interval.

5.7. Varied Tolerance (r)

The tolerance factor r varies among the experiments and is multiplied by the standard deviation of the phase current (a, b or c). In this example, the tolerance varies from 0.2 to 0.7. The data length, time lag and embedding dimension were fixed at N = 30,000, τ = 2 and m = 2.
Figure 19a,b present the impact of several r values on SampEn and ApEn, respectively. Increasing r results in a monotonic increase of SampEn for phase a, where the fault occurs (Figure 19a), whereas SampEn of the no-fault phase b is nearly constant, showing a very small decrease; the largest difference between the two curves is obtained for large r. The difference between ApEn of phases a and b is nearly constant, as plotted in Figure 19b. These figures suggest that r = 0.7 is suitable for the calculation of the ApEn and SampEn values.

5.8. New Parameter Settings

The parameters in the entropy analysis are now set to N = 30,000, m = 2, τ = 2 and r = 0.7 for SampEn, and to N = 30,000, m = 2, τ = 3 and r = 0.2 for FuzzEn. Figure 20 shows SampEn and FuzzEn using MSEn, cMSEn, rMSEn and the usual entropy functions SampEntropy and FuzzEntropy, respectively, for three cases: (1) one open-circuit fault on Top1, (2) two open-circuit faults on Top1 and Bot2, and (3) two open-circuit faults on Top1 and Top2.
Entropy function SampEn: for case (1), Figure 20 presents a larger difference between SampEn of phases a and b than with the initial set of parameters (Figure 10). For cases (2) and (3), the new set of parameters (Figure 20) does not lead to an improvement (cf. Figure 11 and Figure 12).
Entropy function FuzzEn: for case (1), the distance between FuzzEn of the faulty phase a and the non-faulty phase b with the new parameter settings is approximately 12 (Figure 20), while it is about 4 with the initial set of parameters (Figure 10). For case (2), this distance is nearly 23 (Figure 20) versus 8 (Figure 11). For the last case (3), the distance between FuzzEn of the faulty phases a and b is 26 (Figure 20) versus 8 with the initial set of parameters (Figure 12). The new parameter settings therefore enable better detection of one or several open-circuit faults with FuzzEn.

6. Conclusions

Most entropy algorithms are limited to single-scale analysis, ignoring the information carried by other scales. Single-scale entropy alone cannot fully describe the complexity of the signal and does not distinguish the faults of different phases. Multiscale analysis describes the microstructural complexity and amplitude information of the signal more completely, making it more suitable for various time-series analyses. This is why several usual entropies and multiscale entropies have been proposed to quantify the complexity of the AC-DC-AC converter output currents. For some values of the tolerance r and time lag τ, SampEn and FuzzEn (using MSEntropy) are capable of exhibiting the complex features of the system. This paper shows the strong ability of SampEn and FuzzEn to distinguish between a healthy state and an open-circuit faulty state. Moreover, the simulation results show that these entropies are able to detect and locate the arms of the bridge with one or even two open-circuit faults. Finally, this paper only studies single and double open-circuit faults in a three-phase inverter and verifies the effectiveness of the proposed method.
In the future, we will increase the number of multilevel converter levels and introduce more fault types for comprehensive fault diagnosis. Moreover, it will be possible to study the diagnosis method of the inverter, where the open-circuit fault occurs, and to add a fault-tolerant strategy to ensure that the inverter can work normally.

Author Contributions

Formulation and problem-solving were undertaken by C.M. C.M. contributed to the numerical computations, the results and discussion, and to writing the manuscript. Revisions were undertaken by C.M. and A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hu, P.; Jiang, D.; Zhou, Y.; Liang, Y.; Guo, J.; Lin, Z. Energy-balancing control strategy for modular multilevel converters under SM fault conditions. IEEE Trans. Power Electron. 2014, 29, 5021–5030.
  2. Yang, S.; Tang, Y.; Wang, P. Open-circuit fault diagnosis of switching devices in a modular multilevel converter with distributed control. In Proceedings of the IEEE Energy Conversion Congress and Exposition (ECCE), Cincinnati, OH, USA, 1–5 November 2017; pp. 4208–4214.
  3. Perez, M.A.; Bernet, S.; Rodriguez, J.; Kouro, S.; Lizana, R. Circuit topologies, modeling, control schemes, and applications of modular multilevel converters. IEEE Trans. Power Electron. 2015, 30, 4–17.
  4. Rong, F.; Gong, X.; Huang, S. A Novel Grid-Connected PV System Based on MMC to Get the Maximum Power Under Partial Shading Conditions. IEEE Trans. Power Electron. 2015, 32, 4320–4333.
  5. Estima, J.; Cardoso, A.J.M. A new approach for real-time multiple open-circuit fault diagnosis in voltage-source inverters. IEEE Trans. Ind. Appl. 2011, 47, 2487–2494.
  6. Li, B.; Zhou, S.; Xu, D.; Yang, R.; Xu, D.; Buccella, C.; Cecati, C. An improved circulating current injection method for modular multilevel converters in variable-speed drives. IEEE Trans. Ind. Electron. 2016, 63, 7215–7225.
  7. Huo, Z.; Martínez-García, M.; Zhang, Y.; Yan, R.; Shu, L. Entropy Measures in Machine Fault Diagnosis: Insights and Applications. IEEE Trans. Instrum. Meas. 2020, 69, 2607–2620.
  8. Ahmadi, S.; Poure, P.; Saadate, S.; Khaburi, D.A. A Real-Time Fault Diagnosis for Neutral-Point-Clamped Inverters Based on Failure-Mode Algorithm. IEEE Trans. Ind. Inform. 2021, 17, 1100–1110.
  9. Caseiro, L.M.A.; Mendes, A.M.S. Real-time IGBT open-circuit fault diagnosis in three-level neutral-point-clamped voltage-source rectifiers based on instant voltage error. IEEE Trans. Ind. Electron. 2015, 62, 1669–1678.
  10. Nsaif, Y.; Lipu, M.S.H.; Hussain, A.; Ayob, A.; Yusof, Y.; Zainuri, M.A. A New Voltage Based Fault Detection Technique for Distribution Network Connected to Photovoltaic Sources Using Variational Mode Decomposition Integrated Ensemble Bagged Trees Approach. Energies 2022, 15, 7762.
  11. Lee, J.S.; Lee, K.B.; Blaabjerg, F. Open-switch fault detection method of a back-to-back converter using NPC topology for wind turbine systems. IEEE Trans. Ind. Appl. 2014, 51, 325–335.
  12. Yu, L.; Zhang, Y.; Huang, W.; Teffah, K. A fast-acting diagnostic algorithm of insulated gate bipolar transistor open circuit faults for power inverters in electric vehicles. Energies 2017, 10, 552.
  13. Park, B.G.; Lee, K.J.; Kim, R.Y.; Kim, T.S.; Ryu, J.S.; Hyun, D.S. Simple fault diagnosis based on operating characteristic of brushless direct-current motor drives. IEEE Trans. Ind. Electron. 2011, 58, 1586–1593.
  14. Faraz, G.; Majid, A.; Khan, B.; Saleem, J.; Rehman, N. An Integral Sliding Mode Observer Based Fault Diagnosis Approach for Modular Multilevel Converter. In Proceedings of the 2019 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), Swat, Pakistan, 24–25 July 2019; pp. 1–6.
  15. Song, B.; Qi, G.; Xu, L. A new approach to open-circuit fault diagnosis of MMC sub-module. Syst. Sci. Control Eng. 2020, 8, 119–127.
  16. Deng, F.; Chen, Z.; Khan, M.; Zhu, R. Fault Detection and Localization Method for Modular Multilevel Converters. IEEE Trans. Power Electron. 2015, 30, 2721–2732.
  17. Zhang, Y.; Hu, H.; Liu, Z. Concurrent fault diagnosis of a modular multi-level converter with Kalman filter and optimized support vector machine. Syst. Sci. Control Eng. 2019, 7, 43–53.
  18. Volonescu, C. Fault Detection and Diagnosis; IntechOpen: London, UK, 2018; ISBN 978-1789844368.
  19. Estima, J.O.; Freire, N.M.A.; Cardoso, A.J.M. Recent advances in fault diagnosis by Park's vector approach. IEEE Workshop Electr. Mach. Des. Control Diagn. 2013, 2, 279–288.
  20. Wang, C.; Lizana, F.; Li, Z. Submodule short-circuit fault diagnosis based on wavelet transform and support vector machines for a modular multi-level converter with series and parallel connectivity. In Proceedings of the IECON 2017—43rd Annual Conference of the IEEE Industrial Electronics Society, Beijing, China, 29 October–1 November 2017; pp. 3239–3244.
  21. Xing, W. An open-circuit fault detection and location strategy for MMC with feature extraction and random forest. In Proceedings of the 2021 IEEE Applied Power Electronics Conference and Exposition (APEC), Phoenix, AZ, USA, 14–17 June 2021; pp. 1111–1116.
  22. Wang, Q.; Yu, Y.; Hoa, A. Fault detection and classification in MMC-HVDC systems using learning methods. Sensors 2020, 20, 4438.
  23. Morel, C.; Akrad, A.; Sehab, R.; Azib, T.; Larouci, C. IGBT Open-Circuit Fault-Tolerant Strategy for Interleaved Boost Converters via Filippov Method. Energies 2022, 15, 352.
  24. Li, T.; Zhao, C.; Li, L.; Zhang, F.; Zhai, X. Sub-module fault diagnosis and local protection scheme for MMC-HVDC system. Proc. CSEE 2014, 34, 1641–1649.
  25. Mendes, A.; Cardoso, A.; Saraiva, E.S. Voltage source inverter fault diagnosis in variable speed AC drives, by the average current Park's vector approach. In Proceedings of the IEEE International Electric Machines and Drives Conference, Seattle, WA, USA, 9–12 May 1999; pp. 704–706.
  26. Ke, L.; Liu, Z.; Zhang, Y. Fault Diagnosis of Modular Multilevel Converter Based on Optimized Support Vector Machine. In Proceedings of the 2020 39th Chinese Control Conference, Shenyang, China, 27–29 July 2020; pp. 4204–4209.
  27. Geng, Z.; Wang, Q.; Han, Y.; Chen, K.; Xie, F.; Wang, Y. Fault Diagnosis of Modular Multilevel Converter Based on RNN and Wavelet Analysis. In Proceedings of the 2020 Chinese Automation Congress, Shanghai, China, 6–8 November 2020; pp. 1097–1101.
  28. Wang, S.; Bi, T.; Jia, K. Wavelet entropy based single pole grounding fault detection approach for MMC-HVDC overhead lines. Power Syst. Technol. 2016, 40, 2179–2185.
  29. Shen, Y.; Wang, T.; Amirat, Y.; Chen, G. IGBT Open-Circuit Fault Diagnosis for MMC Submodules Based on Weighted-Amplitude Permutation Entropy and DS Evidence Fusion Theory. Machines 2021, 9, 317.
  30. Richman, J.; Randall Moorman, J. Physiological time-series analysis using approximate entropy and sample entropy. Am. J. Physiol. Heart Circ. Physiol. 2000, 278, 2039–2049.
  31. Unakafova, V.; Unakafov, A.; Keller, K. An approach to comparing Kolmogorov-Sinai and permutation entropy. Eur. Phys. J. 2013, 222, 353–361.
  32. Porta, A.; Baselli, G.; Liberati, D.; Montano, N.; Cogliati, C.; Gnecchi-Ruscone, T.; Malliani, A.; Cerutti, S. Measuring regularity by means of a corrected conditional entropy in sympathetic outflow. Biol. Cybern. 1998, 78, 71–78.
  33. Theerasak, C.; Mandic, D. Cosine Similarity Entropy: Self-Correlation-Based Complexity Analysis of Dynamical Systems. Entropy 2017, 19, 652.
  34. Azami, H.; Li, P.; Arnold, S.; Escuder, J.; Humeau-Heurtier, A. Fuzzy Entropy Metrics for the Analysis of Biomedical Signals: Assessment and Comparison. IEEE Access 2016, 4, 1–16.
  35. Chen, W.; Wang, Z.; Xie, H.; Yu, W. Characterization of surface EMG signal based on fuzzy entropy. IEEE Trans. Neural Syst. Rehabil. Eng. 2007, 15, 266–272.
  36. Chang, F.; Sung-Yang, W.; Huang, H.P.; Hsu, L.; Chi, S.; Peng, C.K. Entropy of Entropy: Measurement of Dynamical Complexity for Biological Systems. Entropy 2017, 19, 550.
  37. Costa, M.; Goldberger, A.; Peng, C. Multiscale entropy analysis of complex physiologic time series. Phys. Rev. Lett. 2002, 89, 068102.
  38. Costa, M.; Goldberger, A.; Peng, C. Multiscale entropy analysis of biological signals. Phys. Rev. E 2005, 71, 021906.
  39. Liu, T.; Cui, L.; Zhang, J.; Zhang, C. Research on fault diagnosis of planetary gearbox based on Variable Multi-Scale Morphological Filtering and improved Symbol Dynamic Entropy. Int. J. Adv. Manuf. Technol. 2023, 124, 3947–3961.
  40. Silva, L.; Duque, J.; Felipe, J.; Murta, L.; Humeau-Heurtier, A. Two-dimensional multiscale entropy analysis: Applications to image texture evaluation. Signal Process. 2018, 147, 224–232.
  41. Morel, C.; Humeau-Heurtier, A. Multiscale permutation entropy for two-dimensional patterns. Pattern Recognit. Lett. 2021, 150, 139–146.
  42. Valencia, J.; Porta, A.; Vallverdu, M.; Claria, F.; Baranovski, R.; Orlowska-Baranovska, E.; Caminal, P. Refined multiscale entropy: Application to 24-h Holter records of heart period variability in healthy and aortic stenosis subjects. IEEE Trans. Biomed. Eng. 2009, 56, 2202–2213.
  43. Wu, S.; Wu, C.; Lin, S.; Lee, K.; Peng, C. Analysis of complex time series using refined composite multiscale entropy. Phys. Lett. A 2014, 378, 1369–1374.
  44. Wu, S.; Wu, C.; Lin, S.; Wang, C.; Lee, K. Time series analysis using composite multiscale entropy. Entropy 2013, 15, 1069–1084.
  45. Humeau-Heurtier, A. The Multiscale Entropy Algorithm and Its Variants: A Review. Entropy 2015, 17, 3110–3123.
  46. Dai, M.; Marwali, M.N.; Jung, J.-W.; Keyhani, A. A Three-Phase Four-Wire Inverter Control Technique for a Single Distributed Generation Unit in Island Mode. IEEE Trans. Power Electron. 2008, 23, 322–331.
  47. Raja, N.; Mathewb, J.; Ga, J.; George, S. Open-Transistor Fault Detection and Diagnosis Based on Current Trajectory in a Two-level Voltage Source Inverter. Procedia Technol. 2016, 25, 669–675.
Figure 1. MSEntropy.
Figure 2. Power circuit structure of an AC-DC-AC converter.
Figure 3. Schematic representation of a three-phase, two-level inverter.
Figure 4. The output current of phase a: (a) current i_a in the no-fault case; (b) current i_a with an open-circuit fault on Top1.
Figure 5. The output currents: (a) current i_b with an open-circuit fault on Top1; (b) current i_c with an open-circuit fault on Top1.
Figure 6. The output currents: (a) current i_a with open-circuit faults on Top1 and Bot2; (b) current i_b with open-circuit faults on Top1 and Bot2.
Figure 7. The output currents: (a) current i_c with open-circuit faults on Top1 and Bot2; (b) current i_a with open-circuit faults on Top1 and Top2.
Figure 8. The output currents: (a) current i_b with open-circuit faults on Top1 and Top2; (b) current i_c with open-circuit faults on Top1 and Top2.
Figure 9. Block diagram representation of the fault detection method.
Figure 10. Entropy evaluation using MSEn, cMSEn, rMSEn and the usual entropy functions for one open-circuit fault on Top1: entropy of phase a with the open-circuit fault represented in red and entropy of phases b and c in black.
Figure 11. Entropy evaluation using MSEn, cMSEn, rMSEn and the usual entropy functions for two open-circuit faults on Top1 and Bot2: entropy of phases a and b with open-circuit faults represented in red and entropy of phase c in black.
Figure 12. Entropy evaluation using MSEn, cMSEn, rMSEn and the usual entropy functions for two open-circuit faults on Top1 and Top2: entropy of phases a and b with open-circuit faults represented in red and entropy of phase c in black.
Figure 13. (a) Mean of MSEn and cMSEn computed with SampEntropy and (b) mean of MSEn, cMSEn and rMSEn computed with CondEntropy for the embedding dimension m: entropy of phase a with the open-circuit fault represented in red and entropy of phase b in black.
Figure 14. (a) Mean of MSEn, cMSEn and rMSEn computed with CoSiEntropy and (b) mean of MSEn and cMSEn computed with ApEntropy for the embedding dimension m: entropy of phase a with the open-circuit fault represented in red and entropy of phase b in black.
Figure 15. (a) Mean of MSEn and cMSEn computed with SampEntropy and (b) mean of MSEn, cMSEn and rMSEn computed with CoSiEntropy for the data length N: entropy of phase a with the open-circuit fault represented in red and entropy of phases b and c in black.
Figure 16. (a) Mean of MSEn and cMSEn computed with ApEntropy and (b) mean of MSEn, cMSEn and rMSEn computed with FuzzEntropy for the data length N: entropy of phase a with the open-circuit fault represented in red and entropy of phases b and c in black.
Figure 17. (a) Mean of MSEn and cMSEn computed with SampEntropy and (b) mean of MSEn and cMSEn computed with ApEntropy for the time lag τ: entropy of phase a with the open-circuit fault represented in red and entropy of phase b in black.
Figure 18. (a) Mean of MSEn, cMSEn and rMSEn computed with FuzzEntropy and (b) mean of MSEn, cMSEn and rMSEn computed with EnofEntropy for the time lag τ: entropy of phase a with the open-circuit fault represented in red and entropy of phase b in black.
Figure 19. (a) Mean of MSEn and cMSEn computed with SampEntropy and (b) mean of MSEn and cMSEn computed with ApEntropy for the tolerance r: entropy of phase a with the open-circuit fault represented in red and entropy of phase b in black.
Figure 20. Entropy evaluation using MSEn, cMSEn, rMSEn and the usual entropy functions: for one open-circuit fault on Top1 (phase a in red, phases b and c in black); for two open-circuit faults on Top1 (phase a in red) and Bot2 (phase b in red), phase c in black; and for two open-circuit faults on Top1 (phase a in red) and Top2 (phase b in red), phase c in black.
Table 1. Main specifications of the AC-DC-AC converter.
Specification | Parameter | Value
Input voltage | Input phase-to-phase voltage | 25 kV
 | Input power | 10 MVA
 | Grid frequency | 60 Hz
Three-phase transformer, winding 1 | Phase-to-phase voltage | 25 kV
 | Resistance | 0.004 Ω
 | Inductance | 0.02 H
Three-phase transformer, winding 2 | Phase-to-phase voltage | 600 V
 | Resistance | 0.004 Ω
 | Inductance | 0.02 H
Rectifier | Snubber resistance | 100 Ω
 | Snubber capacitance | 0.1 μF
 | Ron | 0.001 Ω
 | Forward voltage | 0.8 V
DC filter | Inductance | 200 μH
 | Capacitance | 5 mF
Inverter | Switching frequency | 50 Hz
Output filter | Inductance | 2 mH
 | Capacitive reactive power | 3 kVAR
Load | Phase-to-phase voltage | 380 V
 | Output power | 50 kW
 | Frequency | 50 Hz
Table 2. Mean of the output currents of phases a, b and c with no fault, one open-circuit fault and two open-circuit faults on the switches.
Mean | i_a | i_b | i_c
No fault | −0.0034 | −0.0034 | −0.0034
Open circuit on T1 | −9.2848 | 4.7236 | 4.5612
Open circuit on T1 and B2 | −6.8013 | 6.9350 | −0.1337
Open circuit on T1 and T2 | −6.2230 | −6.2388 | 12.4618