
Real-Time Detection and Classification of Power Quality Disturbances

Electrical Engineering Department, University of South Florida, Tampa, FL 33620, USA
* Author to whom correspondence should be addressed.
Sensors 2022, 22(20), 7958; https://doi.org/10.3390/s22207958
Submission received: 16 September 2022 / Revised: 13 October 2022 / Accepted: 14 October 2022 / Published: 19 October 2022

Abstract

This paper considers the problem of real-time detection and classification of power quality disturbances in power delivery systems. We propose a sequential and multivariate disturbance detection method that aims for quick and accurate detection. The proposed detector follows a non-parametric and supervised approach, i.e., it learns nominal and anomalous patterns from training data involving clean and disturbance signals. The multivariate nature of the method enables the joint processing of data from multiple meters, facilitating quicker detection through cooperative analysis. We further extend the supervised sequential detection method to a multi-hypothesis setting, which aims to classify disturbance events as quickly and accurately as possible in real time. The multi-hypothesis method requires a training dataset per hypothesis, i.e., for each disturbance type as well as for the 'no disturbance' case. The proposed method is demonstrated to quickly and accurately detect and classify power disturbances.

1. Introduction

Power quality (PQ) has become a major concern in power grids. The increasing penetration of renewable energy sources, growing energy consumption, and the proliferation of modern electrical equipment are some of the sources of power quality disturbances (PQDs), which can damage sensitive equipment and disrupt power system operations, in the worst case causing blackouts. Given the catastrophic consequences of power losses for safety, the economy, and society, it is important to improve the grid's reliability, security, and stability. To that end, monitoring of the power system is crucial for assessing the PQ and overcoming PQ problems in the system [1].
A PQD, referring to voltage/current quality, is a deviation of the voltage/current waveform from its ideal form. In this paper, without loss of generality, we only consider voltage quality disturbances. Voltage quality monitoring deals with analyzing the voltage waveform over time in order to detect and mitigate voltage issues. Power quality monitoring provides better insight into the disturbances in the system, which in turn can help prevent potential damage, identify sources of disturbances, and take appropriate mitigating/preventive countermeasures. Therefore, it is highly important to detect and identify PQDs as quickly and accurately as possible, so that countermeasures can be taken in time. Fortunately, new technologies employed in the smart grid, such as high computational power and devices for real-time monitoring, communications, and automation, can facilitate the real-time detection and identification of disturbances.
Although power quality monitoring has been studied for decades, new approaches are needed due to the emerging technological capabilities of smart grids and the integration of power grids with renewable energy resources and modern electrical equipment, such as electric vehicles and Internet-of-Things (IoT) devices.

2. Related Work

Many existing PQD detection and classification methods rely on frequency-domain or time-frequency domain analyses of the signals to extract informative features for further analysis to identify the type of disturbances, e.g., wavelet transform [2], Fourier transform, short-time Fourier transform (STFT), S-transform [3], etc. These methods are usually assisted with machine learning (ML)-based classification methods, including decision tree (DT) [3,4], support vector machine (SVM) [5,6], k-nearest-neighbor-based methods (kNN) [7,8,9], and neural networks [10,11,12,13,14,15,16,17,18].
In [16], the combination of S-transform-based feature extraction and a probabilistic neural network is used for the classification of eleven power quality disturbances. The S-transform has an advantage over the wavelet transform in detecting disturbances under noisy conditions. Reference [3] proposes extracting five features from the S-transform of the voltage waveform. These methods are effective in accurately classifying the disturbances; however, their high computational complexity prevents them from being applied in real time. While the detection and classification of PQDs has been studied extensively, there is limited research on real-time approaches that focus on quick and accurate detection and classification. A real-time S-transform-based method was proposed in [19], where the authors use dynamics to reduce the run time of the transform and feature extraction. Despite its lower computational burden, this method cannot react quickly to disturbances due to the relatively large windows it requires. Although the proper window size is typically not discussed in the relevant literature, the presented simulations suggest that usually 10 or 12 cycles of the waveform are used to extract features in 50 and 60 Hz systems, respectively.
While the majority of existing works consider the concurrent detection and classification of PQDs, several other methods focus only on detection, aiming to detect the disturbances as quickly as possible. The methods in [20,21] aim to detect PQD occurrences as quickly and accurately as possible after they happen. These methods model the nominal and disturbance signals and employ techniques to deal with the unknown disturbance probability distributions. They are effective in detecting PQDs very quickly and accurately; however, they do not provide any information regarding the type of the detected disturbance, so conventional classification methods must be employed afterwards to identify the PQDs. Motivated by this gap in accurate and timely joint detection and classification of PQDs, in this paper we propose a method that is simple enough to be applied in real time and is able to quickly and accurately detect and classify the disturbances.

Contributions

In summary, our contributions in this paper are as follows:
  • For the quick and accurate detection of PQDs in real time, we propose a novel sequential, non-parametric, and supervised disturbance detector. Thanks to its multivariate nature, the proposed detector facilitates cooperative detection by multiple meters to cope with noisy measurements.
  • The proposed detection method is proven to be asymptotically (as the training sets grow) optimal in the minimax sense in terms of minimizing the expected detection delay while satisfying a desired false alarm constraint.
  • Extending the proposed detection method, a novel PQD detection and classification method is proposed, which is empirically shown to outperform the state-of-the-art techniques in terms of quickness and accuracy.
The remainder of the paper is organized as follows. Section 3 presents the system model for PQD detection and classification. Section 4.1 focuses on the derivation and analysis of the proposed sequential PQD detection method. Section 5.3 introduces the proposed joint detection and classification method for PQD. Finally, Section 6 concludes the paper with general remarks and future work directions.

3. System Model

The voltage waveform in its ideal form is a sinusoid with constant frequency and magnitude, i.e.,
$$s(t) = a \sin(2\pi f t + \phi), \quad t \in \mathbb{R}, \qquad (1)$$
where $a$, $f$, and $\phi$ are the nominal magnitude, frequency, and phase angle, respectively. In practice, even in the nominal case without disturbance, the observed voltage values $z(t) = s(t) + v(t)$ are distorted by the measurement noise $v(t)$. After a disturbance occurs in the system, the voltage measurements become further distorted by an additional disturbance waveform $\delta(t)$, i.e., $z(t) = s(t) + v(t) + \delta(t)$. Therefore, we can view voltage disturbance detection as detecting a change in the distribution of the observed waveform. Let us define $y(t)$ as the distortion signal added to the ideal waveform $s(t)$. Before and after the occurrence of a disturbance, $y(t)$ consists of the noise $v(t)$ and the noisy disturbance waveform measurements $v(t) + \delta(t)$, respectively. Since the ideal waveform parameters are deterministic and fixed, $y(t)$ is easily obtained by subtracting the deterministic waveform $s(t)$ from the voltage measurement $z(t)$, i.e., $y(t) = z(t) - s(t)$.
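As a brief illustration of this distortion-isolation step, the following Python sketch (with illustrative parameter values and a hypothetical sag-like disturbance, not the paper's simulation setup) subtracts the known ideal sinusoid from noisy measurements to obtain $y(t)$:

```python
import numpy as np

# Illustrative parameters (not the paper's experimental setup)
f, a, phi = 60.0, 1.0, 0.0            # nominal frequency (Hz), magnitude, phase
fs = 60 * 64                          # 64 samples per cycle
t = np.arange(0.0, 0.1, 1.0 / fs)     # 0.1 s of observations

s = a * np.sin(2 * np.pi * f * t + phi)            # ideal waveform s(t)
v = np.random.normal(0.0, np.sqrt(0.1), t.size)    # measurement noise v(t)
delta = np.where(t >= 0.05, -0.3 * s, 0.0)         # hypothetical sag-like disturbance after 0.05 s

z = s + v + delta   # observed voltage z(t)
y = z - s           # isolated distortion y(t) = z(t) - s(t):
                    # pure noise before 0.05 s, noise + disturbance afterwards
```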
Assume that the voltage measurements are nominal initially, and an unknown disturbance occurs at an unknown time τ . The occurrence of a disturbance in the voltage waveform can be considered as a change in the distribution of the sampled observations:
$$y_n = v_n \sim P_0, \;\; nS < \tau; \qquad y_n = \delta_n + v_n \sim P_1, \;\; nS \ge \tau, \qquad (2)$$
where $y_n$ is the sampled observation at time $n \in \mathbb{Z}$, $S$ is the sampling period, $P_0$ is the probability distribution of the pre-change observations, typically $\mathcal{N}(0, \sigma^2)$, $\delta_n$ is the disturbance at time $n$, and $P_1$ is the post-change probability distribution, which is unknown because it depends on the type of disturbance occurring in the system. The objective is to detect a PQD as soon as possible and identify its type among a given list of known classes.
Sequential change detection (or change-point detection) methods are a class of statistical methods that have been extensively and successfully applied to many real-time applications (e.g., [21,22,23,24]) with the aim of detecting a change in the statistical distribution of the observations as quickly and accurately as possible after the occurrence of change in the observation [25]. In this paper, we aimed for the quick detection and classification of PQDs, and employed a sequential change detection approach for real-time detection and classification of PQDs.

4. Sequential Detection of Power Quality Disturbances

CUSUM is a well-known sequential change detection method that is applied in many application domains to detect changes in the statistical distribution of data [26]. CUSUM is optimal in the minimax sense [27] in terms of minimizing the detection delay (the time elapsed from the change time τ until the detection time T) while controlling the false alarm rate:
$$\inf_T \; \sup_{\tau} \; \operatorname*{ess\,sup}_{X_\tau} \; E_\tau\!\left[(T-\tau)^+ \mid X_\tau\right] \quad \text{s.t.} \quad E_\infty[H] \ge \beta. \qquad (3)$$
In (3), $E_\tau$ denotes the expectation given that the change occurs at time $\tau$, $(\cdot)^+ = \max(\cdot, 0)$, and $E_\infty$ denotes the expectation given that the change never occurs, so that $E_\infty[H]$ is the expected false alarm period. The "ess sup" denotes the essential supremum, which in practice is equivalent to the supremum. Put simply, the minimax performance criterion minimizes the average detection delay for the least favorable change point $\tau$ and the least favorable history of measurements $X_\tau$ up to the change point, while the average false alarm period is constrained to be at least $\beta$.
Despite being minimax optimal in minimizing the detection delay for a given false alarm constraint, CUSUM has the drawback of being parametric, i.e., it requires the perfect knowledge of the pre-change and post-change probability distributions and their parameters. Even if the correct probability distributions are known, the minimax optimality only holds asymptotically (as the available data size grows) when the parameters are estimated from data. The parametric nature of CUSUM limits its applicability in applications such as power quality monitoring in which the post-change parameters are typically unknown.
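For reference, the classical CUSUM statistic with known pre- and post-change densities accumulates log-likelihood ratios and stops at the first threshold crossing; this is the structure that the non-parametric detector proposed in Section 4.1 approximates without requiring knowledge of $f_0$ and $f_1$:
$$\Delta_n = \max\left\{\Delta_{n-1} + \log\frac{f_1(x_n)}{f_0(x_n)},\, 0\right\}, \qquad \Delta_0 = 0, \qquad T = \min\{n : \Delta_n \ge h\}.$$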
Non-parametric and data-driven methods, on the other hand, are suitable for dealing with unknown probability distributions. A recent non-parametric and sequential anomaly detection method, called the online discrepancy test (ODIT), was proposed in [28]. It has been proven effective for quick and accurate anomaly detection in real-world scenarios with many unknowns in the system model. However, ODIT is a semi-supervised method that only trains on nominal data. Even though this semi-supervised nature allows ODIT to be generic and not restricted to a certain list of anomaly types, it also prevents it from improving its performance on known anomaly types by training on available data. Specifically, in PQD detection, a detector can be trained on sample data from the anomaly types of interest, as opposed to other real-world problems in which obtaining anomalous training data is not tractable or desirable. Hence, in this section, exploiting the sequential and data-driven properties of ODIT, we propose a novel supervised PQD detection method. In the next section, we further propose a multi-class extension for joint detection and classification.

4.1. Proposed Supervised Detection Method

Given the observed waveforms $z(t)$ and $y(t)$, a $d$-dimensional feature vector $x_n \in \mathbb{R}^d$ is extracted using a time-domain or frequency-domain analysis during the time window $[(n-1)S, nS]$. Consider the nominal training set $\mathcal{X}_N = \{x_1, x_2, \ldots, x_N\}$ consisting of $N$ nominal data points, as well as an anomaly training set $\mathcal{X}_M = \{x'_1, x'_2, \ldots, x'_M\}$ containing $M$ disturbance data points. Let us define $g_i(x_n)$ as the Euclidean distance between the observation $x_n$ and its $i$th nearest neighbor in $\mathcal{X}_N$. Moreover, define $L_n$ as the sum of the $k$ nearest neighbor (kNN) distances of observation $x_n$ with respect to the set $\mathcal{X}_N$:
$$L_n = \sum_{i=k-s+1}^{k} g_i(x_n), \qquad (4)$$
where $s \in \{1, \ldots, k\}$ is a fixed number introduced for convenience. Similarly, $L'_n$ denotes the total kNN distance of $x_n$ with respect to the anomaly training set $\mathcal{X}_M$.
In the testing phase, our method computes the evidence for an anomaly in each observation $x_n$ by comparing $L_n$ and $L'_n$. This is in contrast with ODIT, which compares $L_n$ with a baseline statistic computed from the nominal training data, since it does not utilize any anomalous training data. Assuming sufficiently large nominal and anomaly sets, $x_n$ is more likely to be nominal if $L_n < L'_n$, i.e., the observation is closer to the nominal dataset than to the anomalous one. Conversely, if $L_n > L'_n$, the observation is more likely to be anomalous. In the proposed supervised detector, the anomaly evidence for each observation is computed by
$$D_n = d\left(\log L_n - \log L'_n\right) + \log(N/M), \qquad (5)$$
where $d$ is the dimensionality of the data, and $N$ and $M$ are the sizes of the nominal and anomaly datasets, respectively. In practice, due to the inherent difficulty of acquiring anomalous observations, there is typically an imbalance between the nominal and anomaly datasets. The kNN distances in a dense nominal dataset are expected to be smaller than those in a sparse anomaly dataset. Hence, $\log(N/M)$ serves as a correction factor introduced to treat the imbalance between the two datasets. In particular, $\log(N/M) > 0$ compensates for $L_n$ being unfairly smaller than $L'_n$. $D_n$ represents positive/negative evidence for an anomaly: negative $D_n$ suggests that the observation is more similar to the nominal dataset, while positive $D_n$ means the observation is more similar to the anomalous dataset. The update and stopping rules of the proposed method, given by
$$\Delta_n = \max\{\Delta_{n-1} + D_n, 0\}, \quad \Delta_0 = 0, \qquad T = \min\{n : \Delta_n \ge h\}, \qquad (6)$$
are similar to those of ODIT and CUSUM. That is, the method recursively updates a detection statistic $\Delta_n$ by accumulating the anomaly evidence over time and raises an alarm as soon as $\Delta_n$ exceeds a predefined threshold $h$, which is selected to strike a balance between the detection delay and the false alarm rate.
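A minimal Python sketch of the detector in Equations (4)–(6) is given below; the function and variable names are our own, and the values of `k`, `s`, and the threshold `h` are illustrative rather than prescribed by the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def total_knn_distance(x, tree, k, s=1):
    """Sum of the (k-s+1)-th through k-th nearest-neighbor distances of x
    with respect to the training set indexed by `tree` (Equation (4))."""
    dists, _ = tree.query(x, k=k)
    dists = np.atleast_1d(dists)
    return float(np.sum(dists[k - s:k]))

def supervised_odit(stream, X_nominal, X_anomaly, k=5, s=1, h=10.0):
    """Sequential supervised detection: accumulate the anomaly evidence D_n of
    Equation (5) via the CUSUM-like recursion of Equation (6).
    Returns the alarm time (1-indexed) or None if the threshold is never crossed."""
    d = X_nominal.shape[1]                       # feature dimensionality
    N, M = len(X_nominal), len(X_anomaly)
    tree_nom, tree_anom = cKDTree(X_nominal), cKDTree(X_anomaly)

    delta = 0.0
    for n, x in enumerate(stream, start=1):
        L_nom = total_knn_distance(x, tree_nom, k, s)    # L_n: distance to nominal set
        L_anom = total_knn_distance(x, tree_anom, k, s)  # L'_n: distance to anomaly set
        D = d * (np.log(L_nom) - np.log(L_anom)) + np.log(N / M)
        delta = max(delta + D, 0.0)                      # Delta_n
        if delta >= h:
            return n                                     # stopping time T
    return None
```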
As the training datasets grow, the detector proposed in Equations (4)–(6) achieves asymptotic optimality in the minimax sense, as shown in the following theorem.
Theorem 1.
When the nominal distribution $f_0(x_n)$ and anomalous distribution $f_1(x_n)$ are finite and continuous, as the training sets grow, the statistic $D_n$ given by (5) converges in probability to the log-likelihood ratio,
$$D_n \xrightarrow{\;p\;} \log \frac{f_1(x_n)}{f_0(x_n)} \quad \text{as} \quad M, N \to \infty, \qquad (7)$$
i.e., the method converges to CUSUM, which is minimax optimum in minimizing the expected detection delay while satisfying a false alarm constraint.
Proof. 
Consider a hypersphere $\mathcal{S}_t \subset \mathbb{R}^d$ centered at $x_n$ with radius $g_k(x_n)$, the kNN distance of $x_n$ with respect to the nominal set $\mathcal{X}_N$. The maximum likelihood estimate for the probability of a point being inside $\mathcal{S}_t$ under $f_0$ is given by $k/N$. It is known that, as the total number of points grows, this binomial probability estimate converges to the true probability mass in $\mathcal{S}_t$ in the mean square sense [29], i.e., $k/N \xrightarrow{L^2} \int_{\mathcal{S}_t} f_0(x)\,dx$ as $N \to \infty$. Hence, the probability density estimate $\hat{f}_0(x_n) = \frac{k/N}{V_d\, g_k(x_n)^d}$, where $V_d\, g_k(x_n)^d$ is the volume of $\mathcal{S}_t$ with the appropriate constant $V_d$, converges to the actual probability density function, $\hat{f}_0(x_n) \xrightarrow{p} f_0(x_n)$ as $N \to \infty$, since $\mathcal{S}_t$ shrinks and $g_k(x_n) \to 0$. Similarly, we can show that $\frac{k/M}{V_d\, g'_k(x_n)^d} \xrightarrow{p} f_1(x_n)$ as $M \to \infty$, where $g'_k(x_n)$ is the kNN distance of $x_n$ with respect to the anomalous training set $\mathcal{X}_M$. Hence, we conclude that $\log \frac{(k/M)/(V_d\, g'_k(x_n)^d)}{(k/N)/(V_d\, g_k(x_n)^d)} = d\left(\log g_k(x_n) - \log g'_k(x_n)\right) + \log(N/M) \xrightarrow{p} \log \frac{f_1(x_n)}{f_0(x_n)}$ as $M, N \to \infty$, where $L_n = g_k(x_n)$ and $L'_n = g'_k(x_n)$ for $s = 1$.    □
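As a small numerical illustration of Theorem 1 (our own check, with one-dimensional Gaussian densities chosen purely for convenience), the statistic approaches the true log-likelihood ratio as the training sets grow:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import norm

rng = np.random.default_rng(0)
d, k = 1, 20
N = M = 50_000                                   # large training sets (the result is asymptotic)

X0 = rng.normal(0.0, 1.0, size=(N, d))           # samples from f0 = N(0,1)
X1 = rng.normal(1.0, 1.0, size=(M, d))           # samples from f1 = N(1,1)
tree0, tree1 = cKDTree(X0), cKDTree(X1)

x = np.array([0.7])                              # test point
g0 = tree0.query(x, k=k)[0][-1]                  # kNN distance to the nominal set
g1 = tree1.query(x, k=k)[0][-1]                  # kNN distance to the anomalous set

D = d * (np.log(g0) - np.log(g1)) + np.log(N / M)
llr = norm.logpdf(x[0], loc=1.0) - norm.logpdf(x[0], loc=0.0)
print(D, llr)                                    # the two values should be close
```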
Remark 1.
In practice, the nominal and anomalous datasets may overlap. While the extent of overlap depends on the application, this may happen due to either the non-ideality of the feature space in terms of differentiating nominal and anomalous data or the difficulty and inaccuracy inherent in anomalous data acquisition, e.g., some data points labeled as anomalous may in fact be nominal. For this reason, the proposed detector may require a pre-processing step in which the anomalous dataset is cleaned of any data point that is very similar to the nominal dataset. Specifically, given a statistical significance level $\alpha$ (e.g., $0.05$), we eliminate from the anomalous training set any $x_m \in \mathcal{X}_M$ whose total kNN distance is smaller than the $\lfloor N\alpha \rfloor$th largest total kNN distance in the nominal training set with respect to itself, i.e.,
$$\mathcal{X}_M^{\mathrm{clean}} = \mathcal{X}_M \setminus \left\{ x_m \in \mathcal{X}_M : L_{x_m} \le L_{(\lfloor N\alpha \rfloor)} \right\}, \qquad (8)$$
where $\lfloor \cdot \rfloor$ is the floor operator. Following the pre-processing step, in Equation (5), $L'_n$ is calculated with respect to $\mathcal{X}_M^{\mathrm{clean}}$, and $M$ is the size of $\mathcal{X}_M^{\mathrm{clean}}$.
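A Python sketch of the pre-processing step in Equation (8) is given below; it is our own reading of the rule, in which the distances of anomalous points are measured with respect to the nominal training set and the nominal self-distances are computed in a leave-one-out fashion:

```python
import numpy as np
from scipy.spatial import cKDTree

def clean_anomaly_set(X_nominal, X_anomaly, k=5, s=1, alpha=0.05):
    """Drop anomalous training points that look too nominal (Equation (8))."""
    tree_nom = cKDTree(X_nominal)
    N = len(X_nominal)

    # Leave-one-out total kNN distances of the nominal points with respect to
    # the nominal set itself (query k+1 neighbors, discard the zero self-distance).
    d_self, _ = tree_nom.query(X_nominal, k=k + 1)
    L_self = d_self[:, 1:][:, k - s:k].sum(axis=1)

    # The floor(N*alpha)-th largest nominal distance serves as the similarity threshold
    threshold = np.sort(L_self)[::-1][int(np.floor(N * alpha)) - 1]

    # Total kNN distance of every anomalous point to the nominal set
    d_anom = tree_nom.query(X_anomaly, k=k)[0].reshape(len(X_anomaly), k)
    L_anom = d_anom[:, k - s:k].sum(axis=1)

    # Keep only the points that are sufficiently far from the nominal data
    return X_anomaly[L_anom > threshold]
```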

4.2. Simulation Results

In the simulations, we generate the disturbance signals using the Matlab/Simulink SimPowerSystems toolbox. Following [21], the voltage sag, swell, and oscillatory transient disturbances are induced by a distribution line fault, a sudden reduction in load, and capacitor bank switching, respectively, simulated by the circuits shown in Figure 1, Figure 2 and Figure 3. For example, in Figure 2, the switch connecting Load 1 to the system is initially closed; at approximately time 0.02 s, the switch opens and the load of the system suddenly decreases. The voltage in the system is monitored through the three meters shown in the figure. In the experiments, the nominal waveform frequency is set to 60 Hz, normalized to unit magnitude. The signal sampling frequency (at the meters) is set to 64 samples per cycle. The measurement noise variance is set to $\sigma^2 = 0.1$.
In this section, we apply the proposed detector to the detection of the common voltage disturbances: sag, swell, and oscillatory transients. We evaluate our proposed detector in terms of the average detection delay versus the false alarm rate and compare it with the semi-supervised ODIT [28] and the GLLR method proposed for sequential PQD detection in [21].
For evaluating the methods, we generated 2000 voltage waveforms for each disturbance type, where the disturbance occurs at sample 101 in the observations, e.g., Figure 4. After isolating the disturbance signal by subtracting the deterministic sine wave from the test waveform, we compute simple statistical features including average, standard deviation, RMS value, and auto-correlation within a moving window of size 5, shifted by 1 instance in time.
Figure 5a demonstrates the performance of the three methods, averaged over all three disturbance types, in terms of the average detection delay versus the probability of false alarm. We should note that all three methods detect the disturbances 100% of the time. The decision statistics of the methods (e.g., for voltage sag, as depicted in Figure 5b) show a steady increase for all methods at the disturbance onset, whereas the average performance demonstrates that the proposed supervised ODIT outperforms GLLR and the semi-supervised ODIT. Comparing the semi-supervised and supervised ODITs, we see that utilizing additional disturbance data improves the performance. Figure 6 depicts the average performance of the methods for each disturbance type individually. This figure confirms that the supervised ODIT achieves the lowest detection delay for all disturbance types. While all three detectors are able to detect the sag and transient disturbances within a few samples for practical false alarm rates, they need many more samples to detect the swell disturbance at the same false alarm rate. Due to this inherent difficulty in detecting the swell disturbance, the performance improvement of the supervised ODIT over the competing methods appears small on the linear scale; the improvement is seen more clearly in the sag and transient cases. Since even very small thresholds for the supervised ODIT yield false alarm probabilities smaller than $10^{-1.5}$ (around 0.03) in these simulations, its delay performance for larger false alarm probabilities is not shown. Nevertheless, false alarm rates greater than 3% are usually not of interest in many applications.
Figure 7 demonstrates the performance improvement for the detection of sag, swell, and transient disturbances by the three methods as the number of meters employed in the system increases. It is seen that the performance of the proposed supervised ODIT detector improves faster and achieves a much smaller delay than the other two methods. The figures are obtained for a fixed false alarm rate of 0.01.

5. Classification of Power Quality Disturbances

Power quality disturbances, if not handled and mitigated properly, may cause serious damage to the grid. Proper and quick mitigation of a disturbance requires identifying the type of the event; early identification allows proper countermeasures to be taken in time. Thus, not only accurate classification of the events but also quick classification of the disturbances is desirable. To that end, in this section, we consider the online classification of power quality disturbances as a sequential joint detection and classification problem, in which the goal is to detect a disturbance event in the observed system and to classify it as quickly and accurately as possible.
In the context of change detection, we can view online classification as a multi-hypothesis change detection problem in which there are several post-change hypotheses. The goal is thus to detect the change as quickly as possible and identify the post-change hypothesis correctly. Next, in Section 5.1, we formulate the problem of disturbance classification as a multi-hypothesis change detection problem, and in Section 5.2, Section 5.3 and Section 5.4 we present and evaluate our multi-hypothesis change detection method.

5.1. Problem Formulation

Consider a disturbance of type $q \in \mathcal{Q}$ that occurs at time $\tau$ and changes the probability distribution $f$ of the observed feature vector $x_n$. We formulate the problem as a multi-hypothesis change detection problem:
$$f = f_0, \;\; t < \tau; \qquad f = f_q \; (\ne f_0), \;\; t \ge \tau, \;\; q \in \mathcal{Q} = \{1, \ldots, Q\}, \qquad (9)$$
where $f$ is the true probability distribution of the observations, $f_0$ is the nominal probability distribution, and $f_q$, $q \in \mathcal{Q}$, is the post-change probability distribution for disturbance type $q$. The objective of this problem is to find the decision time $T$ which minimizes the average detection delay while satisfying constraints on the false alarm and the false identification, the latter being equivalent to a classification error for the disturbance type:
$$\inf_T \; \sup_{q \in \mathcal{Q}} \; \sup_{\tau} \; \operatorname*{ess\,sup}_{X_\tau} \; E_\tau^q\!\left[(T-\tau)^+ \mid X_\tau\right] \quad \text{s.t.} \quad E_{q=0}[H] \ge \beta, \quad \inf_{q \in \mathcal{Q}} \inf_{\tau} \inf_{\hat{q} \in \mathcal{Q}\setminus q} E_\tau^q\!\left[T_{\hat{q}} - \tau\right] \ge \alpha, \qquad (10)$$
where $E_\tau^q$ is the expectation given that the change occurs at $\tau$ and the post-change disturbance type is $q$, $E_{q=0}$ is the expectation given that no change occurs, and $T_{\hat{q}}$ is the time of false identification as type $\hat{q} \in \mathcal{Q} \setminus q$. Put simply, this criterion aims to minimize the average detection delay for the least favorable change point, post-change hypothesis, and history of observations, while the average false alarm period is bounded below by $\beta$ and the average worst-case false identification period is bounded below by $\alpha$.

5.2. Feature Extraction

Feature extraction is an important step toward the successful detection and classification of PQDs. It aims to characterize the observed signal with lower-dimensional data, i.e., to extract useful information from sequential batches of the observed signal. For lightweight methods that can be deployed in real time, it is important to compute simple features over rather small batches (i.e., time windows shorter than one cycle of the sinusoidal signal). In this work, we employ statistical features that can be computed with a small computational overhead while providing useful information to effectively distinguish between nominal and disturbance waveforms.
Given the observed voltage samples $z_n$ and isolated distortion samples $y_n = z_n - s_n$, where $s_n$ is the deterministic ideal waveform sample, the feature vector $x_n = [x_n^1, \ldots, x_n^d]$ is computed within a sliding window of size $w_i$ for each feature $i = 1, \ldots, d$. Specifically, at time instance $n$, the $i$th feature $x_n^i$ is computed using either $\{z_{n-w_i+1}, \ldots, z_n\}$ or $\{y_{n-w_i+1}, \ldots, y_n\}$. Note that, unlike the existing methods in the literature, we calculate some features using the original voltage readings and the rest using the voltage distortion measurements. The features and their corresponding window sizes are given in Table 1. The mean value, root mean square, standard deviation, autocorrelation, and entropy are commonly used statistical features for PQD classification [30]. Waveform length is another time-domain feature, used mostly in electromyographic (EMG) pattern recognition [31,32,33]. Zero crossing is a time-domain measure of the frequency of the signal, counting the number of times the voltage amplitude crosses zero. Waveform length measures the complexity of the signal within the window frame. We also introduce average fluctuation (AF), which measures the average absolute fluctuation between consecutive points at which the slope of the signal changes. To calculate AF, as given in (11), first the set $\mathcal{I}$ of samples within the window frame at which the slope of the signal changes is found. Next, AF is calculated as the average absolute change between samples at consecutive indexes $m_k$ and $m_{k+1}$, where $k$ refers to the index of elements in $\mathcal{I}$ and $m_k$ denotes the corresponding time index.
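The following Python sketch illustrates one way to compute the Table 1 features for a single window; it is our own illustrative implementation, and details such as the entropy convention and the handling of slope-change points in AF follow our reading of the table:

```python
import numpy as np

def extract_features(y_win, z_win):
    """Compute the Table 1 statistical features from one window of isolated
    distortion samples y_win and raw voltage samples z_win (illustrative)."""
    rms = np.sqrt(np.mean(y_win ** 2))                        # root mean square
    std = np.std(y_win, ddof=1)                               # standard deviation
    z_c = z_win - z_win.mean()
    autocorr = np.sum(z_c[:-1] * z_c[1:]) / np.sum(z_c ** 2)  # lag-1 autocorrelation
    entropy = np.sum(np.log(y_win ** 2 + 1e-12))              # log-energy entropy (eps for stability)
    wl = np.sum(np.abs(np.diff(z_win)))                       # waveform length
    zc = np.sum(z_win[:-1] * z_win[1:] < 0)                   # zero crossings

    # Average fluctuation: mean absolute change of y between consecutive
    # slope-change (turning) points within the window
    dy = np.diff(y_win)
    turning = np.where(dy[:-1] * dy[1:] < 0)[0] + 1
    af = np.mean(np.abs(np.diff(y_win[turning]))) if turning.size > 1 else 0.0

    return np.array([y_win[-1], rms, std, autocorr, entropy, wl, zc, af])

# Example usage with a 64-sample sliding window shifted by one sample:
# x_n = extract_features(y[n - 63:n + 1], z[n - 63:n + 1])
```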

5.3. Proposed Disturbance Classification Method: Vector-ODIT

A matrix-CUSUM method was proposed in [34] for online user activity detection. It performs multi-alternative change detection using a CUSUM-based method. Similar to CUSUM, matrix-CUSUM requires the probability distributions for all of the post-change disturbance types, which limits its applicability in PQD classification as the post-change disturbance parameters are typically unknown. Motivated by matrix-CUSUM, we here propose vector-ODIT based on the supervised ODIT detector introduced in Section 4.1. Vector-ODIT not only detects the onset of disturbance but also identifies the type of disturbance in a sequential and data-driven manner.
Assume $\mathcal{Q} = \{1, 2, \ldots, Q\}$ is the set of post-change disturbance types, and we have $Q+1$ training datasets $\mathcal{X}^q_{N_q}$, $q \in \{0\} \cup \mathcal{Q}$, where $\mathcal{X}^0_{N_0}$ is the nominal dataset of size $N_0$, and the rest are datasets of size $N_q$ containing observations of disturbances of type $q \in \mathcal{Q}$. For each $q$, we define the complement set $\tilde{q} = \mathcal{Q} \setminus q$ and subsequently define the dataset $\mathcal{X}^{\tilde{q}}_{N_{\tilde{q}}} = \bigcup_{j \in \tilde{q}} \mathcal{X}^j_{N_j}$. For each observation at time $n$, the anomaly evidence $D_n^q$ is computed for each $q \in \mathcal{Q}$ as
$$D_n^q = d\left(\log L_n^{\tilde{q}} - \log L_n^q\right) + \log\!\left(N_{\tilde{q}}/N_q\right), \qquad (12)$$
where $L_n^q$ and $L_n^{\tilde{q}}$ are the total kNN distances of the feature vector $x_n$ with respect to the datasets $\mathcal{X}^q_{N_q}$ and $\mathcal{X}^{\tilde{q}}_{N_{\tilde{q}}}$, respectively (see Equation (4)). According to Theorem 1, $D_n^q$ approximates the log-likelihood ratio $\log \frac{f_q(x_n)}{f_{\tilde{q}}(x_n)}$. Each element of the decision statistic vector $\Delta_n = [\Delta_n^1, \ldots, \Delta_n^Q]$ is recursively updated as
$$\Delta_n^q = \max\{\Delta_{n-1}^q + D_n^q, 0\}, \qquad \Delta_0^q = 0. \qquad (13)$$
The procedure stops at the first time any of the statistics crosses its threshold,
$$T = \min\{n : \Delta_n^q \ge h_q, \; q = 1, \ldots, Q\}, \qquad (14)$$
and identifies the disturbance type as the index $q$ that causes the alarm. The vector-ODIT algorithm is summarized in Algorithm 1 (a code sketch follows the listing below).
Algorithm 1 The proposed vector-ODIT procedure for PQD classification
1: Input: $k$, $s$, $\alpha$, $\{\mathcal{X}^0_{N_0}, \ldots, \mathcal{X}^Q_{N_Q}\}$, $\{h_1, \ldots, h_Q\}$
2: Initialize: $\Delta \leftarrow \mathbf{0}_{Q \times 1}$, $n \leftarrow 0$
3: Training phase:
4:   Clean the datasets according to (8).
5: Test phase:
6: while $\Delta_n^q < h_q$, $\forall q \in \mathcal{Q}$ do
7:   $n \leftarrow n + 1$
8:   Obtain the new voltage observation $z_n$, compute the distortion value $y_n$, and compute the features $x_n$ of Table 1.
9:   For each $q \in \mathcal{Q}$, compute $D_n^q$ and $\Delta_n^q$ as in Equations (12) and (13).
10: Declare a PQD at time $n$ and identify its type as the $q$ for which $\Delta_n^q \ge h_q$.
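Below is a compact Python sketch of Algorithm 1 (our own illustrative implementation). Here the complement dataset for each disturbance class pools the nominal set together with the other disturbance classes, which is one possible reading of the complement-set definition; the cleaning step of (8) is assumed to have been applied already, and the helper functions and parameter values are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def vector_odit(stream, datasets, thresholds, k=5, s=1):
    """Sequential joint detection and classification (Algorithm 1, sketch).

    datasets:   [X_0, X_1, ..., X_Q] with X_0 the nominal set, X_q the q-th disturbance class
    thresholds: [h_1, ..., h_Q]
    Returns (alarm_time, disturbance_type) or (None, None)."""
    Q = len(datasets) - 1
    d = datasets[0].shape[1]
    trees = [cKDTree(X) for X in datasets]
    sizes = [len(X) for X in datasets]

    # Complement dataset for each disturbance class q: all other classes pooled together
    comp = [np.vstack([datasets[j] for j in range(Q + 1) if j != q]) for q in range(1, Q + 1)]
    comp_trees = [cKDTree(X) for X in comp]
    comp_sizes = [len(X) for X in comp]

    def total_knn(x, tree):
        dists = np.atleast_1d(tree.query(x, k=k)[0])
        return float(np.sum(dists[k - s:k]))

    delta = np.zeros(Q)                                   # decision statistic vector
    for n, x in enumerate(stream, start=1):
        for q in range(1, Q + 1):
            L_q = total_knn(x, trees[q])                  # distance to class-q data
            L_c = total_knn(x, comp_trees[q - 1])         # distance to the complement data
            D = d * (np.log(L_c) - np.log(L_q)) + np.log(comp_sizes[q - 1] / sizes[q])
            delta[q - 1] = max(delta[q - 1] + D, 0.0)     # Equation (13)
        crossed = delta >= np.asarray(thresholds)
        if crossed.any():                                 # Equation (14)
            return n, int(np.argmax(crossed)) + 1         # (alarm time, declared type q)
    return None, None
```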

5.4. Simulation Results

In this section, we evaluate our PQD classification method, using MATLAB, in terms of classifying the disturbances into four classes: voltage sag, swell, oscillatory transient, and harmonics. Following the common practice in the literature, signals are generated synthetically using the following equation [35]:
$$z(t) = \delta_1(t)\sin(2\pi f t) + \delta_2(t). \qquad (15)$$
For voltage sag and swell, $\delta_1(t) \ne 0$ and $\delta_2(t) = 0$. Specifically,
$$\delta_1(t) = 1 - a\,[u(t - t_1) - u(t - t_2)] \qquad (16)$$
for sag, and
$$\delta_1(t) = 1 + a\,[u(t - t_1) - u(t - t_2)] \qquad (17)$$
for swell, where $u(t)$ denotes the unit step function, $a \in [0.1, 0.8]$ is randomly selected from a uniform distribution, and the starting and ending times are also randomly chosen such that $t_2 - t_1 \in [T, 9T]$ ($T = 1/f$). For each PQD class, as well as the nominal class ($\delta_1(t) = 1$, $\delta_2(t) = 0$), we generate signals of length 10 cycles with a fundamental frequency of $f = 50$ Hz (i.e., $T = 0.02$ s) and a sampling frequency of $50 \times 64$ Hz. For transient and harmonics disturbances, $\delta_1(t) = 1$ and $\delta_2(t) \ne 0$. Specifically,
$$\delta_2(t) = \sum_{i \in \{3,5,7\}} k_i \sin(i\, 2\pi f t) \qquad (18)$$
for harmonics, and
$$\delta_2(t) = a\, e^{-(t-t_1)/\tau}\, [u(t - t_1) - u(t - t_2)]\, \sin\!\big(j\, 2\pi f (t - t_1)\big) \qquad (19)$$
for transient, where all parameters are drawn uniformly at random with $k_i \in [0.05, 0.3]$, $\tau \in [3\text{ ms}, 50\text{ ms}]$, $j \in \{6, \ldots, 18\}$, $a \in [0.3, 0.5]$, and $t_2 - t_1 \in [0.5T, 3T]$. We populate the per-class training datasets by performing feature extraction according to Section 5.2 within moving window blocks of the specified sizes, shifted by one point at a time. The proposed classification method does not need a training process, but the training datasets need to be cleaned according to (8) in order to remove overlapping data instances.
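A minimal Python sketch of the synthetic signal generation in Equations (15)–(19) follows; the random-parameter ranges are taken from the text, while the exact handling of $t_1$, $t_2$ and the value of $\delta_1(t)$ outside the sag/swell interval are our own simplifications:

```python
import numpy as np

rng = np.random.default_rng()
f, fs = 50.0, 50 * 64                      # fundamental frequency and sampling frequency (Hz)
T = 1.0 / f                                # one cycle (0.02 s)
t = np.arange(0.0, 10 * T, 1.0 / fs)       # 10 cycles of samples
u = lambda x: (x >= 0).astype(float)       # unit step function

def make_signal(kind):
    d1, d2 = np.ones_like(t), np.zeros_like(t)
    if kind in ("sag", "swell"):
        a = rng.uniform(0.1, 0.8)
        t1 = rng.uniform(0.0, 8 * T)
        t2 = t1 + rng.uniform(T, 9 * T - t1)                 # duration in [T, 9T]
        box = u(t - t1) - u(t - t2)
        d1 = 1 - a * box if kind == "sag" else 1 + a * box   # Equations (16)/(17)
    elif kind == "harmonics":                                # Equation (18)
        for i in (3, 5, 7):
            d2 = d2 + rng.uniform(0.05, 0.3) * np.sin(i * 2 * np.pi * f * t)
    elif kind == "transient":                                # Equation (19)
        a, tau = rng.uniform(0.3, 0.5), rng.uniform(3e-3, 50e-3)
        j = rng.integers(6, 19)
        t1 = rng.uniform(0.0, 6 * T)
        t2 = t1 + rng.uniform(0.5 * T, 3 * T)
        d2 = a * np.exp(-(t - t1) / tau) * (u(t - t1) - u(t - t2)) * np.sin(j * 2 * np.pi * f * (t - t1))
    return d1 * np.sin(2 * np.pi * f * t) + d2               # Equation (15); nominal class: d1 = 1, d2 = 0
```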
During the test phase, 200 signals of duration 0.2 s per disturbance type are generated randomly, i.e., the signal parameters, such as the disturbance starting and ending times, magnitude, and phase, are selected uniformly at random within the allowed ranges. Figure 8 shows four sample paths of the decision statistic vector $\Delta_n$ over time. The onset and end of each disturbance are shown with vertical gray dashed lines in the figures. As the figures suggest, after the occurrence of a disturbance, the decision statistic corresponding to the correct disturbance type starts to increase persistently, leading to detection and classification at the corresponding threshold, whereas the other three decision statistics (representing the cumulative evidence for the other disturbance types) remain at zero or fluctuate slightly above zero. The selection of proper thresholds is crucial to strike the desired balance between the false alarm rate, classification accuracy, and classification delay. We empirically set the thresholds (given in Equation (14)) to maximize the classification accuracy while also keeping the delay suitable for real-time decision-making. Typically, smaller thresholds result in smaller detection/classification delays but larger false alarm rates and lower classification accuracy, and vice versa for larger thresholds. In the simulations, setting the four thresholds to proper values, we achieve a 0.0038 false alarm rate and 98.38% classification accuracy with an average detection/classification delay of 39.46 data samples, as shown in Table 2. The misclassifications are mainly due to failures to detect the oscillatory transient disturbance signals or misclassifying them as harmonics. Note that with vector-ODIT, detection and classification happen at the same time. The additional classification capability comes with somewhat larger delays compared to the detection-only results reported in Section 4.2.
In Table 3, the average performance of the proposed method in terms of classification accuracy for each disturbance type is compared with several state-of-the-art methods from the literature. The accuracy of each method is reported for noisy conditions with a signal-to-noise ratio (SNR) of 20 dB (or higher, as reported in the corresponding paper). To evaluate the real-time detection and classification capability of the methods, we also present the average delay performance in terms of waveform cycles in Table 3. The proposed method achieves the presented accuracy in less than one cycle for each disturbance type; the overall average delay of 39.46 samples, shown in Table 2, corresponds to 0.61 cycles. In contrast, the existing methods in the literature, except [30], require multiple waveform cycles, typically 10–12, to extract features from frequency-domain analyses such as the Fourier, wavelet, and S transforms. Furthermore, the existing works do not discuss how to run their methods sequentially; hence, we consider moving their feature extraction windows by the window length after analyzing and classifying each batch. This makes these methods considerably (around 10 times) slower than the proposed method in detecting and classifying PQDs. To calculate the exact average delay values for these methods, we would need to know how many disturbance samples are required in the feature extraction window for successful detection and classification. Since such information is not reported in [6,14,16,19,35,36,37,38,39], we assume that at least one cycle of the disturbance is required to be in the feature extraction window. Therefore, we approximate the average delay as 5.5 cycles, with 10 cycles in the worst case and 1 cycle in the best case.
The FFT & ANN method [30], as opposed to the other existing methods, uses 16 time-domain and frequency-domain features computed in windows of size 1 cycle (or 128 samples), shifted by one time unit at each step. Although it achieves above 90% classification accuracy, we should note that it considers relatively low noise levels, with the SNR ranging between 35 and 40 dB. Our proposed vector-ODIT method, on the other hand, achieves above 98% classification accuracy at a higher noise level of 20 dB. We also tested vector-ODIT under 30 dB; at this lower noise level, it achieves 100% accuracy and a 0% false alarm rate. Moreover, the feature extraction proposed in FFT & ANN [30] relies on calculating the total harmonic distortion of the signals, up to the 25th harmonic, which is much more computationally expensive than the features vector-ODIT uses.

6. Conclusions

Detecting and classifying power quality disturbances (PQDs) in a timely and accurate manner was considered. A novel data-driven sequential detector was proposed, and its asymptotic optimality in the minimax sense, in terms of minimizing the average detection delay, was proven. Through voltage disturbance simulations, we showed that the proposed method outperforms the existing sequential detectors, ODIT and GLLR, in terms of quick detection while satisfying the same false alarm rate. We also proposed a novel sequential classifier by extending the proposed detector to the multi-hypothesis testing setup. The performance of the proposed classifier was evaluated on four voltage disturbance types (sag, swell, oscillatory transient, and harmonics) by comparing it with a number of existing methods. For all disturbance types, it achieved accurate classification (98.38% accuracy with a 0.38% false alarm rate under 20 dB SNR, and 100% accuracy with a 0% false alarm rate under 30 dB SNR) within less than one waveform cycle (on average 0.61 cycles, which corresponds to 39.46 samples or 0.0123 s). Thanks to its sequential design, it is much quicker than the existing methods, which typically take more than 5 cycles to achieve the same accuracy levels.

Author Contributions

Conceptualization, M.M., K.D. and Y.Y.; methodology, Y.Y.; software, M.M.; validation, M.M., K.D. and Y.Y.; formal analysis, Y.Y.; investigation, M.M., K.D. and Y.Y.; resources, M.M., K.D. and Y.Y.; data curation, M.M.; writing—original draft preparation, M.M.; writing—review and editing, Y.Y.; visualization, M.M.; supervision, Y.Y.; project administration, Y.Y.; funding acquisition, Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science Foundation (NSF), grant number 2040572.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Kaselimi, M.; Protopapadakis, E.; Voulodimos, A.; Doulamis, N.; Doulamis, A. Towards Trustworthy Energy Disaggregation: A Review of Challenges, Methods, and Perspectives for Non-Intrusive Load Monitoring. Sensors 2022, 22, 5872.
  2. Karimi, M.; Mokhtari, H.; Iravani, M.R. Wavelet based on-line disturbance detection for power quality applications. IEEE Trans. Power Deliv. 2000, 15, 1212–1220.
  3. Lee, I.W.; Dash, P.K. S-transform-based intelligent system for classification of power quality disturbance signals. IEEE Trans. Ind. Electron. 2003, 50, 800–805.
  4. Yılmaz, A.; Küçüker, A.; Bayrak, G. Automated classification of power quality disturbances in a SOFC&PV-based distributed generator using a hybrid machine learning method with high noise immunity. Int. J. Hydrogen Energy 2022, 47, 19797–19809.
  5. Lin, W.M.; Wu, C.H.; Lin, C.H.; Cheng, F.S. Detection and classification of multiple power-quality disturbances with wavelet multiclass SVM. IEEE Trans. Power Deliv. 2008, 23, 2575–2582.
  6. Thirumala, K.; Prasad, M.S.; Jain, T.; Umarikar, A.C. Tunable-Q wavelet transform and dual multiclass SVM for online automatic detection of power quality disturbances. IEEE Trans. Smart Grid 2016, 9, 3018–3028.
  7. Cecílio, I.M.; Ottewill, J.R.; Fretheim, H.; Thornhill, N.F. Multivariate detection of transient disturbances for uni- and multirate systems. IEEE Trans. Control Syst. Technol. 2014, 23, 1477–1493.
  8. Cecílio, I.M.; Ottewill, J.R.; Pretlove, J.; Thornhill, N.F. Nearest neighbors method for detecting transient disturbances in process and electromechanical systems. J. Process Control 2014, 24, 1382–1393.
  9. Liu, Y.; Jin, T.; Mohamed, M.A.; Wang, Q. A novel three-step classification approach based on time-dependent spectral features for complex power quality disturbances. IEEE Trans. Instrum. Meas. 2021, 70, 1–14.
  10. Suganthi, S.; Vinayagam, A.; Veerasamy, V.; Deepa, A.; Abouhawwash, M.; Thirumeni, M. Detection and classification of multiple power quality disturbances in Microgrid network using probabilistic based intelligent classifier. Sustain. Energy Technol. Assessments 2021, 47, 101470.
  11. Wang, J.; Zhang, D.; Zhou, Y. Ensemble deep learning for automated classification of power quality disturbances signals. Electr. Power Syst. Res. 2022, 213, 108695.
  12. Gonzalez-Abreu, A.D.; Delgado-Prieto, M.; Osornio-Rios, R.A.; Saucedo-Dorantes, J.J.; Romero-Troncoso, R.d.J. A novel deep learning-based diagnosis method applied to power quality disturbances. Energies 2021, 14, 2839.
  13. Mian Qaisar, S. Signal-piloted processing and machine learning based efficient power quality disturbances recognition. PLoS ONE 2021, 16, e0252104.
  14. Monedero, I.; Leon, C.; Ropero, J.; Garcia, A.; Elena, J.M.; Montano, J.C. Classification of electrical disturbances in real time using neural networks. IEEE Trans. Power Deliv. 2007, 22, 1288–1296.
  15. Valtierra-Rodriguez, M.; de Jesus Romero-Troncoso, R.; Osornio-Rios, R.A.; Garcia-Perez, A. Detection and classification of single and combined power quality disturbances using neural networks. IEEE Trans. Ind. Electron. 2013, 61, 2473–2482.
  16. Mishra, S.; Bhende, C.; Panigrahi, B. Detection and classification of power quality disturbances using S-transform and probabilistic neural network. IEEE Trans. Power Deliv. 2007, 23, 280–287.
  17. Salles, R.S.; Ribeiro, P.F. The use of deep learning and 2-D wavelet scalograms for power quality disturbances classification. Electr. Power Syst. Res. 2023, 214, 108834.
  18. Chen, Y.C.; Berutu, S.S.; Hung, L.C.; Syamsudin, M. A New Approach for Power Signal Disturbances Classification Using Deep Convolutional Neural Networks. Int. J. Netw. Secur. 2022, 24, 765–775.
  19. He, S.; Li, K.; Zhang, M. A real-time power quality disturbances classification using hybrid method based on S-transform and dynamics. IEEE Trans. Instrum. Meas. 2013, 62, 2465–2475.
  20. He, X.; Pun, M.O.; Kuo, C.C.J.; Zhao, Y. A change-point detection approach to power quality monitoring in smart grids. In Proceedings of the 2010 IEEE International Conference on Communications Workshops, Cape Town, South Africa, 23–27 May 2010; pp. 1–5.
  21. Li, S.; Wang, X. Cooperative change detection for voltage quality monitoring in smart grids. IEEE Trans. Inf. Forensics Secur. 2015, 11, 86–99.
  22. Doshi, K.; Mozaffari, M.; Yilmaz, Y. RAPID: Real-time Anomaly-based Preventive Intrusion Detection. In Proceedings of the ACM Workshop on Wireless Security and Machine Learning, Miami, FL, USA, 14 May 2019; pp. 49–54.
  23. Lai, L.; Fan, Y.; Poor, H.V. Quickest detection in cognitive radio: A sequential change detection framework. In Proceedings of the IEEE GLOBECOM 2008 Global Telecommunications Conference, New Orleans, LA, USA, 30 November–4 December 2008; pp. 1–5.
  24. Yilmaz, Y.; Uludag, S. Mitigating IoT-based cyberattacks on the smart grid. In Proceedings of the 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), Cancun, Mexico, 18–21 December 2017; pp. 517–522.
  25. Poor, H.V.; Hadjiliadis, O. Quickest Detection; Cambridge University Press: Cambridge, UK, 2008.
  26. Basseville, M.; Nikiforov, I.V. Detection of Abrupt Changes: Theory and Application; Prentice Hall: Englewood Cliffs, NJ, USA, 1993; Volume 104.
  27. Lorden, G. Procedures for reacting to a change in distribution. Ann. Math. Stat. 1971, 42, 1897–1908.
  28. Yilmaz, Y. Online nonparametric anomaly detection based on geometric entropy minimization. In Proceedings of the IEEE International Symposium on Information Theory, Aachen, Germany, 25–30 June 2017; pp. 3010–3014.
  29. Agresti, A. An Introduction to Categorical Data Analysis; Wiley: Hoboken, NJ, USA, 2018.
  30. Borges, F.A.; Fernandes, R.A.; Silva, I.N.; Silva, C.B. Feature extraction and power quality disturbances classification using smart meters signals. IEEE Trans. Ind. Inform. 2015, 12, 824–833.
  31. Altın, C.; Er, O. Comparison of different time and frequency domain feature extraction methods on elbow gesture's EMG. Eur. J. Interdiscip. Stud. 2016, 2, 35–44.
  32. Bhattacharya, A.; Sarkar, A.; Basak, P. Time domain multi-feature extraction and classification of human hand movements using surface EMG. In Proceedings of the 2017 4th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 6–7 January 2017; pp. 1–5.
  33. Tkach, D.; Huang, H.; Kuiken, T.A. Study of stability of time-domain features for electromyographic pattern recognition. J. Neuroeng. Rehabil. 2010, 7, 21.
  34. Oskiper, T.; Poor, H.V. Online activity detection in a multiuser environment using the matrix CUSUM algorithm. IEEE Trans. Inf. Theory 2002, 48, 477–493.
  35. Li, J.; Teng, Z.; Tang, Q.; Song, J. Detection and classification of power quality disturbances using double resolution S-transform and DAG-SVMs. IEEE Trans. Instrum. Meas. 2016, 65, 2302–2312.
  36. Borrás, M.D.; Bravo, J.C.; Montaño, J.C. Disturbance ratio for optimal multi-event classification in power distribution networks. IEEE Trans. Ind. Electron. 2016, 63, 3117–3124.
  37. Manikandan, M.S.; Samantaray, S.; Kamwa, I. Detection and classification of power quality disturbances using sparse signal decomposition on hybrid dictionaries. IEEE Trans. Instrum. Meas. 2014, 64, 27–38.
  38. Wang, S.; Chen, H. A novel deep learning method for the classification of power quality disturbances using deep convolutional neural network. Appl. Energy 2019, 235, 1126–1140.
  39. Achlerkar, P.D.; Samantaray, S.R.; Manikandan, M.S. Variational mode decomposition and decision tree based detection and classification of power quality disturbances in grid-connected distributed generation system. IEEE Trans. Smart Grid 2016, 9, 3122–3132.
Figure 1. Simulink system for generating voltage sag disturbance induced by line fault.
Figure 2. Simulink system for generating voltage swell disturbance induced by sudden load decrease.
Figure 3. Simulink system for generating voltage oscillatory transient induced by capacitor switching.
Figure 4. Voltage waveforms obtained from the circuits shown in Figure 1, Figure 2 and Figure 3. Disturbances start at sample 101.
Figure 5. Comparison between the proposed supervised ODIT detector and competing methods GLLR [21] and semi-supervised ODIT [28]. (a) Performance comparison averaged over three voltage disturbance types, sag, swell, and oscillatory transients. (b) Sample decision statistic for the sag disturbance.
Figure 6. Performance comparison between the methods for detection of sag, transient, and swell disturbances with variance $\sigma^2 = 0.1$.
Figure 7. Average detection delay vs. number of meters for sag, swell, and transient disturbances. Detection delays are calculated for the fixed false alarm rate of 0.01.
Figure 8. Decision statistics of vector-ODIT for four voltage disturbance types: sag, swell, oscillatory transient, and harmonics. The disturbance onset and ending times are shown with vertical dashed gray lines. When the disturbance starts, the corresponding decision statistic successfully increases steadily, while the other decision statistics remain around zero.
Table 1. Description of the extracted features. Features defined over $y_n$ values use the isolated distortion observations, while those defined over $z_n$ values use the original voltage meter readings. $\mathbb{1}\{\cdot\}$ denotes the indicator function, which takes the value 1 when the inner argument is true and 0 otherwise. In (11), $k$ is the index for the set $\mathcal{I}$, $m_k$ is the time index of the $k$th element in $\mathcal{I}$, and $|\mathcal{I}|$ denotes the number of elements in $\mathcal{I}$.

| Feature | Equation | Window Size |
| 1. Distortion at time $n$ | $y_n$ | $w_1 = 1$ |
| 2. Root mean square (RMS) | $RMS = \sqrt{\frac{1}{w_2}\sum_{m=0}^{w_2-1} y_{n-m}^2}$ | $w_2 = 64$ |
| 3. Standard deviation | $\sigma = \sqrt{\frac{\sum_{m=0}^{w_3-1}\left(y_{n-m}-\bar{y}\right)^2}{w_3-1}}$ | $w_3 = 64$ |
| 4. Autocorrelation | $R = \frac{\sum_{m=1}^{w_4}\left(z_{n-m}-\bar{z}\right)\left(z_{n-m+1}-\bar{z}\right)}{\sum_{m=0}^{w_4-1}\left(z_{n-m}-\bar{z}\right)^2}$ | $w_4 = 64$ |
| 5. Entropy | $E = \sum_{m=0}^{w_5-1}\log y_{n-m}^2$ | $w_5 = 64$ |
| 6. Waveform length | $WL = \sum_{m=1}^{w_6}\lvert z_{n-m+1} - z_{n-m}\rvert$ | $w_6 = 64$ |
| 7. Zero crossing | $ZC = \sum_{m=1}^{w_7}\mathbb{1}\{(z_{n-m}\, z_{n-m+1}) < 0\}$ | $w_7 = 64$ |
| 8. Average fluctuation | $AF = \frac{1}{\lvert\mathcal{I}\rvert}\sum_{k=1}^{\lvert\mathcal{I}\rvert}\lvert y_{m_{k+1}} - y_{m_k}\rvert, \;\; \mathcal{I} = \{m : 1 \le m \le w_8,\; (y_{n-m+1}-y_{n-m})(y_{n-m}-y_{n-m-1}) < 0\}$ (11) | $w_8 = 64$ |
Table 2. The performance of vector-ODIT in terms of classification delay. The thresholds are set so as to achieve the maximum classification accuracy and the minimum false alarm probability.

| Disturbance Type | Classification Delay in Samples (and in Seconds) |
| Sag | 26.68 (0.0083 s) |
| Swell | 34.62 (0.0108 s) |
| Oscillatory transient | 45.59 (0.0142 s) |
| Harmonics | 51.23 (0.0160 s) |
| Overall average | 39.46 (0.0123 s) |
Table 3. Performance comparison for the classification of sag, swell, oscillatory transient, and harmonics. The performances are presented in terms of classification accuracy, the average delay of correct classification, and false alarm (false classification of the normal signal as a disturbance). Entries are classification accuracy (%) / average delay (cycles).

| Classification Method | Sag | Swell | Transient | Harmonics | Average | Normal | SNR (dB) |
| ADALINE & FFNN [15] | 98/5.5 | 99/5.5 | 86/5.5 | 90/5.5 | 94.3/5.5 | - | 20 |
| ST & PNN [16] | 98/5.5 | 92/5.5 | 86/5.5 | 95/5.5 | 92/5.5 | 100 | 20 |
| FFT & ANN [30] | 91.64/0.84 | 96.46/0.84 | 92.37/0.84 | 97.74/0.84 | 93.49/0.84 | - | 35 |
| DRST & DAG-SVM [35] | 99/5.5 | 98.5/5.5 | 97.5/5.5 | 99.5/5.5 | 98.33/5.5 | 100 | 20 |
| Dynamics & ST [19] | 95/5.5 | 97/5.5 | 97/5.5 | 97/5.5 | 96.33/5.5 | 96 | 20 |
| TQWT & MSVM [6] | 98/5.5 | 100/5.5 | 94/5.5 | 100/5.5 | 97.33/5.5 | - | 20–50 |
| WT & SVM [36] | 89/5.5 | 89/5.5 | 98/5.5 | 97/5.5 | 92/5.5 | - | 20 |
| SSD Hybrid Dict. [37] | 100/5.5 | 100/5.5 | 100/5.5 | 100/5.5 | 100/5.5 | 100 | 30 |
| Deep CNN [38] | 99.20/5 | 100/5 | 99.50/5 | 100/5.5 | 99.56/5.5 | 97.70 | 20 |
| VMD & DT [39] | 98.2/5.5 | 97.6/5.5 | 98.2/5.5 | 98.5/5.5 | 98/5.5 | 100 | 30 |
| Proposed | 99/0.41 | 100/0.54 | 95.5/0.71 | 99/0.80 | 98.38/0.61 | 99.62 | 20 |
| Proposed | 100/0.41 | 100/0.54 | 100/0.71 | 100/0.80 | 100/0.61 | 100 | 30 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
