Article

Disconnector Fault Diagnosis Based on Multi-Granularity Contrast Learning

1
School of Electrical Engineering, Xi’an University of Technology, Xi’an 710048, China
2
Institute of Water Resources and Hydro-Electric Engineering, Xi’an University of Technology, Xi’an 710048, China
*
Author to whom correspondence should be addressed.
Processes 2023, 11(10), 2981; https://doi.org/10.3390/pr11102981
Submission received: 6 September 2023 / Revised: 26 September 2023 / Accepted: 4 October 2023 / Published: 14 October 2023

Abstract

Most disconnector fault diagnosis methods achieve high accuracy during model training. However, maintaining high accuracy, fast diagnosis, and low computational cost in practical situations is a challenging task. In this paper, we propose a multi-granularity contrastive learning (MG-CL) framework. First, the original disconnector current data are transformed into two different but related classes, strongly enhanced and weakly enhanced data, by using the strong and weak enhancement modules. Second, we propose a coarse-grained contrastive learning module that preliminarily judges the possibility of a fault by learning the features of the strongly/weakly enhanced data. Finally, in order to further identify the fault cause, we propose a fine-grained contrastive learning module that determines the final fault type by comparing differences in the data. Our proposed MG-CL framework shows higher accuracy and speed than previous models.

Graphical Abstract

1. Introduction

Disconnectors play a vital role in the power field. Compared with other power equipment, disconnectors rank first among high-voltage power transmission and transformation equipment in terms of both application scale and number of failures. The outdoor disconnector experiences a high failure rate due to varying weather conditions in an open environment and the long-term deposition of dust and other particulate pollutants in the air [1]. According to survey statistics, the common faults of the high-voltage disconnector can be summarized as follows: jammed mechanism, spring failure, control failure, poor porcelain conductive contact, loose conductive circuit, abnormal heating, inadequate connection, porcelain conductive cracking, etc., among which the mechanical defects mainly include a jammed mechanism, spring failure, and inadequate connection. State Grid statistics on HV disconnector faults show that mechanism-related faults account for 73% of all faults. The high incidence of faults in high-voltage disconnectors seriously affects the safety of the power supply [2]. Therefore, it is necessary to research fault diagnosis of the mechanical state of the high-voltage disconnector, to detect mechanical faults in time, and to prevent accidents from deteriorating and causing greater losses.
At present, commonly used fault diagnosis methods mainly include the expert scoring system [3] and the data-driven method [4], among others. The expert scoring system is mainly based on planned maintenance, and its judgment depends, to a large extent, on the experience of operation and maintenance personnel. The vague criteria and inability of this method to diagnose multiple faults lead to its poor operability. The data-driven method has become mainstream in recent years, but it faces a shortage of labeled data. Unlike images, time series data generally do not contain human-recognizable patterns that distinguish different classes, so it is not easy to assign class labels to the data. Therefore, different time series applications require well-trained experts to perform challenging data-labeling tasks. As a result, very few time series data are labeled in practical applications compared with the volume of collected data [5]. The essence of fault diagnosis is a pattern recognition process, including signal preprocessing, feature extraction, and fault classification [6]. Feature extraction is a key step in fault diagnosis because its accuracy directly affects the recognition results. Therefore, it is very important to study how to achieve a clear and effective feature representation with a small amount of labeled data [7].
In recent years, with the advancement of technology and the development of big data, data-driven fault diagnosis methods have become a popular research direction [8]. At the same time, current acquisition from the drive motor of the disconnector is portable, and the type of fault in the rotating machinery can be identified by analyzing the motor current signals in different states. However, because the amount of experimental data is small, the labeled data are not representative, which greatly increases the difficulty of feature extraction. To address the diagnostic problem with few fault data, the authors of [9] proposed the multilabel cycle translating adversarial network (MCTAN), an architecture that translates healthy vibration data into auxiliary fault data. However, it needs to adjust the network architecture to the attributes of the vibration data, which is troublesome in practical use. Typical oversampling approaches are widely used when training machine learning models on small datasets; random oversampling (RO-Sampling) [10] and the synthetic minority oversampling technique (SMOTE) [11] are two popular approaches. However, the auxiliary data acquired from this type of oversampling are generated by copying or linearly combining the original data; they can hardly grasp the temporal correlation in a vibration signal and carry a risk of overfitting. If we want to obtain more data, one solution is to transform the signal first, which is known as data augmentation. It is worth noting that data augmentation is a general, model-independent, data-side solution. Data augmentation attempts to improve the generalization ability of a trained model by reducing overfitting and expanding the decision boundary of the model [12]. The need for generalization is especially important for real-world data, where it can help networks overcome small [13] or category-imbalanced datasets [14,15].
Most time series data augmentation techniques are based on random transformations of training data, such as adding random noise [16], slicing or cropping [17], scaling [18], random warping in the time dimension [13,15], frequency warping [19], etc.
Nowadays, machine learning methods have been well applied in the feature extraction of disconnector faults. The authors of [20] judged the state of the disconnector by the similarity between the vibration signal and the standard signal of each state. However, the actually collected vibration signal contains noise, which affects the accuracy of the model. The authors of [21] used the ReliefF algorithm and a BP neural network to identify the fault type and location. However, the ReliefF algorithm assigns higher weights to features with high category correlation, so its limitation is that it cannot effectively remove redundant features. The authors of [22] used the synchrosqueezed wavelet transform to perform time–frequency analysis and signal reconstruction and established a fault diagnosis system with a support vector machine (SVM) as the classifier. However, SVM does not have a universal solution for nonlinear problems, a good kernel is sometimes hard to find, and SVM is difficult to apply to multi-classification problems. The authors of [23] proposed a novel fault diagnosis model by combining binarized DNNs (BDNNs) with improved random forests (RFs). However, it cannot resolve minor faults in time, which can result in multiple or combined system failures. Although fault diagnosis methods based on machine learning have achieved certain results, they still require further improvement due to their weak interpretability [24], the large uncertainty in the model's diagnostic predictions, and their difficulty in universal application. However, in the field of bearing fault diagnosis, some researchers have obtained good results with deep learning methods. Tang et al. [25] proposed a bidirectional deep belief network (Bi-DBN) fault diagnosis method, which constrains the quality of the training data, realizing high-precision diagnosis.
However, this method is not particularly satisfactory when dealing with vibration signals with heavy background noise, and its computing time exceeds that of other deep learning models. The transformer-based attention module proposed by Li et al. [26] can extract representative data features under imbalanced data conditions. However, the diagnosis accuracy depends on the data size, which should be further studied in future work.
Contrastive learning has recently shown advantages in the field of fault diagnosis due to its ability to learn invariant representations from augmented data [27,28,29]. However, these image-based contrastive learning methods do not work well with time series data, for the following reasons. Firstly, they may not address the temporal dependence of the data, which is a key characteristic of time series. Secondly, some enhancement techniques used for images, such as color distortion, usually do not adapt well to time series data. So far, few works on contrastive learning for time series data have been proposed. For example, the authors of [30,31] developed contrastive learning methods for biological signals, and the authors of [32] developed a semi-supervised contrastive learning framework that targets different types of fault data. However, these methods were proposed for specific applications.
Based on the above analysis, this paper proposes a multi-granularity contrastive learning (MG-CL) model to realize fault feature extraction and diagnosis. Using data enhancement methods, first, we create two different but related views of the input disconnector current data. Next, we propose a coarse-grained contrast module. It learns fault features through contrastive learning to cope with the perturbation of the enhanced data. In addition, we propose a fine-grained contrast module in MG-CL to complete the judgment of faults. In doing so, we aim to learn more fault features.
In summary, the main contributions of this work are as follows:
  • This paper proposes a novel contrastive learning framework based on time series representation learning.
  • This paper provides a simple yet effective augmentation for time series data in a contrastive learning framework.
  • This paper proposes two contrastive modules with different granularities.

2. MG-CL Formulation

This paper proposes a disconnector fault diagnosis method based on multi-granularity contrastive learning. As shown in Figure 1, the method is divided into three modules: the data enhancement module, the coarse-grained contrast module, and the fine-grained contrast module. We first preprocess the fault sample data to generate two data classes: strongly and weakly enhanced data. Then, the coarse-grained contrast module compares and learns the time series features extracted by two encoders; this module learns different types of fault features and coarsely classifies the faults. Finally, the fine-grained contrast module performs the fault diagnosis.

2.1. Data Enhancement Module

In reality, the amount of data obtained from experiments simulating different fault conditions of disconnectors is far less than that of real faults, so the existing datasets often cannot represent the characteristics of various working conditions. In order to improve the accuracy and generalization ability of the diagnostic module for fault recognition, this paper introduces the time series data enhancement technology, which can reasonably increase the amount of data and improve the performance of the diagnostic module.
In the field of time series recognition, many datasets are inadequate, and one solution is data enhancement. Data enhancement is a general, model-independent, data-side technique based on random transformations of the training data, e.g., adding random noise, slicing, dithering, cropping, scaling, random warping in the time dimension, and frequency warping. Data enhancement attempts to improve the generalization ability of the trained module by reducing overfitting and enlarging the decision boundary of the module. Generalization is especially important for real-world data, helping the module diagnose and analyze small datasets or datasets with imbalanced classes.
First, the disconnector’s current data are enhanced. We argue that using different enhancements can improve the robustness of the learned representation. Therefore, we propose to use two separate enhancements, such that one enhancement is weak and the other is strong. In this paper, weak enhancement is a dithering and scaling strategy. Specifically, we added random variation to the signal and amplified its magnitude. For strong enhancement, we applied a time-warping and dithering strategy, where time warping performs perturbations using a smooth warping path or through randomly positioned fixed windows. Next, we added random jitter to the time-warped signal [33].
  1. Jittering
One of the simplest yet effective methods of transform-based data enhancement is dithering or adding noise to the time series. Jitter can be defined as:
x′ = {x_1 + α_1, …, x_t + α_t, …, x_T + α_T}
where α_t is typically Gaussian noise added to each time step t, with α_t ~ N(0, σ²). The standard deviation, σ, of the added noise is a hyperparameter that needs to be predetermined. In addition, jittering has been shown to help mitigate time series drift for various neural network modules. Time series drift refers to changes in the data distribution due to the introduction of new data.
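As a minimal NumPy sketch (the function name and default σ are illustrative, not taken from the paper), jittering can be implemented as:

```python
import numpy as np

def jitter(x, sigma=0.03):
    """Weak augmentation: add zero-mean Gaussian noise to every time step."""
    return x + np.random.normal(loc=0.0, scale=sigma, size=x.shape)
```

The output has the same shape as the input; only the noise standard deviation needs to be tuned.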
  2. Scaling
Scaling changes the global magnitude, or intensity, of a time series by a random scalar value. With scaling parameter β, scaling is a multiplication of β and the entire time series, or:
x′ = {βx_1, …, βx_t, …, βx_T}
The scaling parameter β can be drawn from a Gaussian distribution whose parameters are hyperparameters, or chosen at random from a pre-defined set. It should be noted that “scaling” for time series differs from scaling in the image domain: it only increases the magnitude of the elements and does not lengthen the time series.
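A corresponding sketch of scaling (again with illustrative names and defaults), where a single random scalar β ~ N(1, σ²) multiplies the entire series:

```python
import numpy as np

def scale(x, sigma=0.1):
    """Multiply the whole series by one random scalar beta ~ N(1, sigma^2)."""
    beta = np.random.normal(loc=1.0, scale=sigma)
    return beta * x
```

Because β is sampled once per series, every element is rescaled by the same factor.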
  3. Time Warping
Time warping is the act of perturbing a pattern in the temporal dimension. This can be performed using a smooth warping path or through a randomly located fixed window [34]. When using time warping with a smooth warping path, the augmented time series becomes:
x′ = {x_γ(1), …, x_γ(t), …, x_γ(T)}
where γ(·) is a warping function that warps the time steps based on a smooth curve. The smooth curve is defined by a cubic spline, S(U), with knots U = {u_1, …, u_i, …, u_L}. The heights of the knots u_i are taken from u_i ~ N(0, σ²). In this way, the time steps of the series have a smooth transition between stretches and contractions.
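A dependency-free sketch of time warping. Note one simplification over the text: the smooth speed curve through the random knots is interpolated linearly here instead of with a cubic spline, which gives a similar stretching/contracting effect; all names and defaults are illustrative.

```python
import numpy as np

def time_warp(x, sigma=0.2, n_knots=4):
    """Strong augmentation: warp the time axis with a smooth random speed curve."""
    T = len(x)
    knot_pos = np.linspace(0, T - 1, n_knots + 2)
    # Knot heights act as local playback speeds around 1.0; clipping keeps them positive.
    knot_val = np.clip(np.random.normal(1.0, sigma, n_knots + 2), 0.1, None)
    speed = np.interp(np.arange(T), knot_pos, knot_val)
    # The cumulative speed defines warped time stamps, rescaled back to [0, T-1].
    warped_t = np.cumsum(speed)
    warped_t = (warped_t - warped_t[0]) / (warped_t[-1] - warped_t[0]) * (T - 1)
    # Resample the original series at the warped time stamps.
    return np.interp(np.arange(T), warped_t, x)
```

The endpoints of the series are preserved, while the interior is locally stretched or compressed.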

2.2. Coarse-Grained Contrast Module

In this module, the disconnector current data enter two fault feature extraction encoders. Then, the continuous wavelet transform converts the one-dimensional current data into two-dimensional image data, which locates the features more clearly. Finally, time series contrastive learning is carried out to learn the fault features. A diagram of the module is shown in Figure 1.

2.2.1. Coarse-Grained Fault Feature Extraction

In order to identify the state of the fault condition, a deep learning method is introduced to extract the fault features of the original signal. Commonly used deep fault extraction methods include convolutional neural networks, recurrent neural networks, multilayer perceptron, and so on. In order to extract fault features in-depth, we adopted a momentum encoder and an encoder for extraction.
The extraction encoder is similar to the momentum encoder in that they are both composed of residual networks [35]. However, the momentum encoder adds the feature library dictionary to build a dynamic dictionary with a queue and moving average encoder from the perspective of dictionary lookup. This enables the immediate construction of a large, consistent dictionary, and thus enhances the learning efficiency.
In fault feature learning, the momentum encoder establishes a fault feature library in the diagnosis module; that is, the features learned under different working conditions are placed in the library, and the features are compared when a fault occurs. Here, the library is called the dictionary, and the fault signal to be identified is called the query. When a fault type is identified by comparison with the fault signals in the dictionary, the matched feature is treated as a key and entered into the dictionary.
The residual network extends the network to very deep structures by adding shortcut connections in each residual block to lift the data features to high dimensions and directly through the bottom layer via gradient flow. The residual network in this paper is composed of three residual blocks, each of which is composed of a convolutional layer, a normalization layer, and a ReLU activation layer, and its basic block is:
y = WX + b,  s = BN(y),  h = ReLU(s)
Each residual block is constructed from the basic block above, denoted Block_k, and the residual block is formalized as:
h_1 = Block_k1(x),  h_2 = Block_k2(h_1),  h_3 = Block_k3(h_2),  y = h_3 + x,  ĥ = ReLU(y)
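The block structure above can be sketched in NumPy as a forward pass. This is a simplified illustration, not the paper's implementation: the convolutional layer is replaced by a dense multiplication, and batch normalization is reduced to per-feature standardization.

```python
import numpy as np

def basic_block(x, W, b, eps=1e-5):
    """y = Wx + b, s = BN(y), h = ReLU(s); BN simplified to per-feature standardization."""
    y = x @ W + b
    s = (y - y.mean(axis=0)) / np.sqrt(y.var(axis=0) + eps)
    return np.maximum(s, 0.0)                     # ReLU

def residual_block(x, params):
    """Three stacked basic blocks plus a shortcut connection, then a final ReLU."""
    h = x
    for W, b in params:                           # params: three (W, b) pairs
        h = basic_block(h, W, b)
    return np.maximum(h + x, 0.0)                 # y = h3 + x, then ReLU
```

The skip connection requires the block to preserve the feature dimension, which is why each W here is square.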

2.2.2. Continuous Wavelet Transform

In the present work, we applied the continuous wavelet transform [36], not only because it is an excellent tool for feature extraction, but also because it converts a one-dimensional signal into an image. This step is necessary because the fine-grained contrast module we used yields better results on 2D or 3D objects.
Continuous wavelet transform (CWT) is a mathematical operation that represents a real-value function, S(t), as the following integral:
W_s(a, b) = (1/√a) ∫ S(t) ψ((t − b)/a) dt,
depending on a scale a > 0, a ∈ ℝ⁺, and a translation value b ∈ ℝ. The discrete wavelet transform (DWT) carries a similar idea, with the difference that the parameters a and b are discrete:
a = a_0ⁿ,  b = k·b_0
Though CWT and DWT have much in common, they are usually used for different purposes. While DWT is a perfect instrument for such coding problems as image compression, CWT is mostly applied for signal analysis tasks. Thus, CWT is the method we implemented in the current work.
As has already been mentioned, CWT provides an excellent opportunity to extract and investigate complicated spectral features of a signal. The function ψ is a continuous function of time and frequency and is called the mother wavelet. This mother wavelet is used to obtain a daughter wavelet for each possible pair (a, b):
ψ_{a,b}(t) = (1/√a) ψ((t − b)/a)
Then, the CWT is applied:
W_s(a, b) = (1/√a) ∫ z(t) ψ((t − b)/a) dt = ∫ z(t) ψ_{a,b}(t) dt = ⟨z(t), ψ_{a,b}(t)⟩
This formula measures the similarity between the signal in question and each of the daughter wavelets. The results can be represented as an image with the b values set along the x-axis and the a values along the y-axis, where the intensity of each pixel is given by the corresponding coefficient W_s(a, b).
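A naive NumPy sketch of the CWT with a complex Morlet mother wavelet. The ω₀ default and normalization details are illustrative (the admissibility correction term is omitted); production code would typically use a library such as PyWavelets.

```python
import numpy as np

def morlet(t, w0=5.0):
    """Complex Morlet mother wavelet (simplified: admissibility correction omitted)."""
    return np.pi ** -0.25 * np.exp(1j * w0 * t - t ** 2 / 2.0)

def cwt(signal, scales, dt=1.0):
    """CWT: correlate the signal with each scaled, conjugated daughter wavelet."""
    T = len(signal)
    t = (np.arange(T) - T // 2) * dt              # wavelet support centered at zero
    coeffs = np.empty((len(scales), T), dtype=complex)
    for i, a in enumerate(scales):
        daughter = np.conj(morlet(t / a)) / np.sqrt(a)
        coeffs[i] = np.convolve(signal, daughter, mode="same")
    return coeffs
```

Plotting `np.abs(coeffs)` with b (time) on the x-axis and a (scale) on the y-axis gives exactly the time–frequency image described above.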

2.2.3. Time Series Comparative Learning

Time series comparative learning is used to compare and learn the disconnector current data transformed by CWT. It can learn different types of fault features and perform rough classification of fault conditions at the same time. The time contrast module uses the contrast loss function of InfoNCE, as shown in the following equation:
L_q = −log [ exp(q·k₊/τ) / Σ_{i=0}^{K} exp(q·k_i/τ) ]
where τ is the temperature hyperparameter taken from the literature, q is the query feature learned by the encoder, and {k_0, k_1, k_2, …} are the key features learned by the momentum encoder. When q matches the positive-sample fault feature k₊, the value of the loss function is expected to be low; when q is dissimilar to the other keys k_i, the value of the loss function should also be small.
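A minimal sketch of the InfoNCE loss for a single query (function name, default τ, and the explicit positive/negative split are illustrative):

```python
import numpy as np

def info_nce(q, k_pos, k_negs, tau=0.07):
    """InfoNCE loss for one query q, its positive key, and K negative keys (rows of k_negs)."""
    logits = np.concatenate(([q @ k_pos], k_negs @ q)) / tau
    logits -= logits.max()                         # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

The loss is small when q is aligned with k₊ and nearly orthogonal to every negative key, which is exactly the behavior described above.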
For the contrastive loss in our method, the basic view of the original data, z_t^b, is regarded as the same sample as the strongly enhanced view z_t^s and the weakly enhanced view z_t^w, jointly denoted z_t^{ws}, while all other data are negative samples, denoted z_t^i. The contrastive loss on the same sample aims to maximize the dot product between representations of the same sample while minimizing the dot product with the negative samples, N_{t,k}, within the minibatch. Accordingly, we compute the loss L_TC as follows:
L_TC = −log [ exp(z_t^b · z_t^{ws}/τ) / Σ_{i∈N_{t,k}} exp(z_t^b · z_t^i/τ) ]
In this paper, the time series to be tested was compared with the fault conditions in the feature database to calculate the loss, and the time series to be tested was roughly classified into several similar fault conditions.
If the temporal contrast module backpropagated gradients to both the encoder and the momentum encoder, the two encoders would become inconsistent: for the same fault data, the features learned by the two encoders would differ greatly, and effective features could not be learned. Therefore, the parameters of the encoder are transmitted to the momentum encoder. Formally, with the parameters of the momentum encoder denoted θ_k and the parameters of the encoder denoted θ_q, θ_k is updated as follows:
θ_k ← m·θ_k + (1 − m)·θ_q
Here, m ∈ [0, 1) is the momentum coefficient. Only the parameter θ_q is updated by backpropagation. The momentum update above makes the evolution of θ_k smoother than that of θ_q. Although the two encoders still differ, the gap is narrowed.
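The momentum update is a one-line exponential moving average over the parameter tensors; a sketch (function name illustrative):

```python
import numpy as np

def momentum_update(theta_k, theta_q, m=0.999):
    """EMA update: theta_k <- m * theta_k + (1 - m) * theta_q, per parameter tensor."""
    return [m * k + (1.0 - m) * q for k, q in zip(theta_k, theta_q)]
```

With m close to 1, the momentum encoder drifts slowly toward the encoder, keeping the key representations in the dictionary consistent across iterations.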

2.3. Fine-Grained Contrast Module

Finally, in order to accurately determine the fault of the disconnector, this paper uses the fine-grained contrast module. This module aims to learn deeper fault features and improve diagnostic accuracy.
The structure of the fine-grained contrast module is shown in Figure 1. It is mainly composed of multi-head attention (MHA), multi-layer perceptron (MLP), and context comparative learning.

2.3.1. Multi-Head Attention

The main idea of multi-head attention (MHA) is to use multiple identical attention layers to calculate self-attention within a local window.
As shown in Figure 2, these windows are arranged to uniformly segment the high-dimensional features in a non-overlapping manner, through which the output MultiHead(Q, K, V) is produced. The diagram of its network architecture is shown in Figure 3.
The input X is multiplied by different weight matrices, W^Q, W^K, and W^V, to obtain Q, K, and V, where Q = XW^Q, K = XW^K, and V = XW^V. Then, Q, K, and V are each split into h matrices, namely Q_1, Q_2, …, Q_h; K_1, K_2, …, K_h; and V_1, V_2, …, V_h, with corresponding dimensions d_q, d_k, and d_v. In each group, Q_i, K_i, and V_i feed one attention layer, and the result of the attention processing is a head, whose expression is:
head_i = Attention(QW_i^Q, KW_i^K, VW_i^V) = softmax(QW_i^Q(KW_i^K)ᵀ/√d_k)·VW_i^V = softmax(XW^QW_i^Q(XW^KW_i^K)ᵀ/√d_k)·XW^VW_i^V
The weight matrices are W^Q ∈ ℝ^{d_model×d_model}, W^K ∈ ℝ^{d_model×d_model}, W^V ∈ ℝ^{d_model×d_model}, W_i^Q ∈ ℝ^{d_model×d_q}, W_i^K ∈ ℝ^{d_model×d_k}, and W_i^V ∈ ℝ^{d_model×d_v}; √d_k is the scaling factor.
The results processed by the attention mechanism are concatenated together and then linearly transformed to obtain the output M u l t i H e a d Q , K , V . The mathematical expression of the multi-head attention is:
MultiHead(Q, K, V) = Concat(head_1, …, head_h)·W^O
where Concat(·) is the concatenation of the head matrices along the feature dimension, and the weight matrix is W^O ∈ ℝ^{hd_v×d_model}.
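The split–attend–concatenate–project pipeline above can be sketched in NumPy as follows. This is an illustrative single-matrix variant in which each head takes a contiguous slice of Q, K, and V (equivalent to block-diagonal per-head projections), with d_q = d_k = d_v = d_model/h:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(X, WQ, WK, WV, WO, h):
    """Project X to Q, K, V; run h scaled dot-product heads; concat; project with WO."""
    d_model = X.shape[1]
    d_k = d_model // h
    Q, K, V = X @ WQ, X @ WK, X @ WV
    heads = []
    for i in range(h):
        s = slice(i * d_k, (i + 1) * d_k)
        scores = softmax(Q[:, s] @ K[:, s].T / np.sqrt(d_k))
        heads.append(scores @ V[:, s])
    return np.concatenate(heads, axis=1) @ WO      # Concat(head_1..head_h) W^O
```

Restricting attention to local windows, as in the text, would simply apply this function to each window's rows independently.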

2.3.2. Multi-Layer Perceptron

The main role of the multi-layer perceptron (MLP) is to convert the high-dimensional features of global data learned by MHA into a low-dimensional space, as shown in Figure 4.
The MLP consists of two fully connected layers with a nonlinear ReLU function and dropout in the middle. Its structural diagram is shown in Figure 5.
The input to hidden-layer neuron j is the weighted sum of all outputs of the previous layer:
h_j = Σ_{i=0}^{M} W_ij·x_i
where W_ij represents the weight from neuron i in the previous layer to the current neuron j.
The neuron output value is expressed as:
a_j = g(h_j) = g(Σ_{i=0}^{M} W_ij·x_i)
where a_j is the output value of the hidden-layer neuron, which also serves as the input value to the next layer's neurons, g(·) is the activation function, and the i = 0 term corresponds to the bias node.
The output value of the output layer is y. The formula is:
y = a_k = g(h_k) = g(Σ_{j=0}^{M} W_jk·x_j)
Here, y represents the value of the output layer, which is the final result, and h_k represents the weighted sum of the inputs of neuron k in the output layer.
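The two-layer MLP with ReLU and dropout described above can be sketched as a forward pass (names, the inverted-dropout formulation, and defaults are illustrative; dropout is disabled at inference):

```python
import numpy as np

def mlp(x, W1, b1, W2, b2, drop=0.0):
    """Two fully connected layers with a ReLU in between and optional dropout."""
    h = np.maximum(x @ W1 + b1, 0.0)               # hidden layer + ReLU
    if drop > 0.0:                                 # inverted dropout (training only)
        mask = (np.random.rand(*h.shape) >= drop) / (1.0 - drop)
        h = h * mask
    return h @ W2 + b2                             # output layer
```

With W1 wider than W2's output, this maps the high-dimensional MHA features down to a low-dimensional space, as the text describes.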

2.3.3. Context Comparative Learning Module

We further propose a context comparative learning module that aims to learn more discriminative representations.
Considering a batch of N input samples, the two enhanced views provide two comparison references for each sample, so there are 2N comparison samples in total. For the context z̃_l^i, we denote z̃_l^{i+} as its positive sample, taken from the other augmented view of the same input; thus, (z̃_l^i, z̃_l^{i+}) is considered a positive pair. At the same time, the contexts of the other inputs in the same batch serve as negative samples of z̃_l^i, that is, z̃_l^i forms (2N − 2) negative pairs with its negative samples. Therefore, we can derive a context contrastive loss that maximizes the similarity between positive pairs and minimizes the similarity between negative pairs, so that the final representation is discriminative.
The contrastive loss function L_SCC is defined as follows:
L_SCC = Σ_{i∈I} (−1/|P(i)|) Σ_{p∈P(i)} ℓ(i, p)
where P(i) is the index set of all samples of the same type as sample x̂_i, so for any p ∈ P(i), (i, p) is a positive pair, and |P(i)| is the cardinality of P(i).
For a sample z̃_l^i, we divide its similarity to its positive sample z̃_l^{i+} by its similarity to all other (2N − 1) samples, including the positive pair and the (2N − 2) negative pairs, to normalize the loss:
ℓ(i, i⁺) = −Σ_{i=1}^{N} log [ exp(sim(z_t^i, z_t^{i⁺})/τ) / Σ_{m=1}^{2N} 1_[m≠i] exp(sim(z_t^i, z_t^m)/τ) ]
where sim(u, v) = uᵀv/(‖u‖‖v‖) denotes the dot product between ℓ2-normalized u and v (i.e., cosine similarity), 1_[m≠i] ∈ {0, 1} is an indicator function equal to 1 if and only if m ≠ i, and τ is the temperature parameter.
The overall self-supervised loss combines the temporal contrastive loss and the contextual contrastive loss, as follows:
L_semi = λ_3·L_TC + λ_4·L_SCC
where λ_3 and λ_4 are fixed scalar hyperparameters representing the relative weight of each loss.
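The context contrastive loss can be sketched as an NT-Xent-style computation over a batch of two views (names, default τ, and the vectorized masking are illustrative; this sketch treats only the view-pair positives, not same-class positives):

```python
import numpy as np

def context_contrastive_loss(z1, z2, tau=0.2):
    """NT-Xent-style loss over 2N views: row i of z1 and row i of z2 are a positive pair."""
    N = len(z1)
    z = np.concatenate([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity via dot product
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                     # drop self-similarity (m != i)
    pos = np.concatenate([np.arange(N, 2 * N), np.arange(0, N)])
    log_den = np.log(np.exp(sim).sum(axis=1))          # normalizer over 2N - 1 samples
    return float(np.mean(log_den - sim[np.arange(2 * N), pos]))
```

The loss shrinks when the two views of each sample agree and the remaining (2N − 2) pairs stay dissimilar, which is exactly the objective stated above.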

2.4. The Diagnosis Process Based on Contrastive Learning

In this study, the fault diagnosis steps were carried out sequentially according to the flow chart shown in Figure 6.
Algorithm 1 summarizes the computational steps of the fault diagnosis model. More useful feature information is learned by minimizing the L s e m i .
Algorithm 1: Calculating the context comparative learning loss.
1: procedure Loss ( z t i ,   z t i + )
2:  d = 1;
3:  while time length (z) > 1 do
4:   //The MHA, MLP operates along the time axis.
5:    z t i ← MHA, MLP;
6:    z t i + ← MHA, MLP;
7:    L s e m i = λ 3 L T C + λ 4 L S C C ;
8:   d ← d + 1;
9:    end while
10:   L s e m i L s e m i / d ;
11:  return L s e m i
12: end procedure

3. Experiments

3.1. Example Overview

A condition assessment experiment was carried out on a high-voltage disconnector from a domestic switchgear plant as the diagnostic object, collecting the drive motor current of the disconnector. In this experiment, the current clamp of the current sensor was installed on one of the three phases of the drive motor power line, and the output electrical signal was used as the input signal for the data acquisition card. Using software programming, the motor current signal waveform could be collected and displayed in real time, and the waveform data could be stored.
The test conditions simulated five types of operating conditions with high failure rates, including normal operating conditions, slight jamming, severe jamming, balance spring failure, and inappropriate closing, with a total of 100 sets of samples and 260 sampling points for various operating conditions, to obtain typical failure data for offline modeling.

3.2. Data Processing Part

In an ideal situation, we would obtain motor current data for a variety of typical operating conditions from electric disconnectors operating in the field. However, due to the restrictions of the equipment safety management system, we could only obtain limited operating-condition data. To ensure enough data input for the algorithm model, we analyzed the main faults that can occur in the disconnector based on manual experience. Then, we designed and simulated the corresponding disconnector fault tests according to the causes of the faults and obtained more data through the data acquisition device set up in the field. In addition, where conditions allowed, in order to understand the development trend of different types of faults, we conducted multiple tests on the same fault types to truly reflect the degradation trend of equipment faults in the disconnector operating mechanism.
The analysis and comparison were carried out based on the data collected by the field equipment and the data obtained from the simulated fault tests. We continuously adjusted the equipment fault data acquisition scheme to obtain fault data that truly reflect the operational status of the equipment. To facilitate the early implementation and optimization stage, we drew up a preliminary scheme for simulating the main faults of outdoor electric disconnectors (refer to Table 1 for details). This preliminary scheme will be continuously optimized and improved as problems arise in the simulation process, through multiple rounds of adjustment.
The training dataset was first both strongly and weakly enhanced, and the resulting signals are shown in Figure 7.
It can be seen from Figure 7 that when the disconnector closes normally, the stator's rotating magnetic field cuts the static rotor at high speed, so the stator current reaches a peak of 2.92 A, and there are three wave peaks. When a jam fault occurs during the closing process, the stator current is as shown in the second and third panels of Figure 7. When the mechanical parts of the disconnector are jammed, the closing torque increases, and the drive motor current also increases. The current peak under slight jamming was 3.47 A, larger than the peak current of normal operation, and the current peak under severe jamming was 3.89 A, much larger than the peak current of normal operation; this is due to the overloaded operation of the motor during closing. In addition, the first current peak lags when a jam occurs, most obviously under severe jamming. Therefore, it can be judged that the jam fault appears in the early closing stage: the current peak characterizes the jam fault, and the occurrence time of the first current peak characterizes the degree of the jam fault.
Some folding-arm-type high-voltage disconnectors rely mainly on a balance spring to generate the torque that offsets the gravitational potential energy of the conductive rod. During closing, gravity is balanced and the required operating torque is small, so the operation is smooth; during opening, elastic potential energy is stored in preparation for the next closing. When the balance spring fails, this elastic energy cannot be released during closing, and the drive motor must perform extra work. In the later stage of closing, the motor's extra work peaks, the disconnector overcomes the jam and releases the elastic potential energy, and the pole and moving contact finally extend upward. The motor current in this scenario is shown in Figure 7d: the current peak is delayed, and its value of 2.97 A is much larger than the rated current.
When the transmission system of the disconnector malfunctions, the switch can easily fail to close fully. Failure to close in place distorts the motor current: as Figure 7e shows, the waveform contains many small spikes caused by this distortion, and the lagged current peak was 2.36 A.
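The two jam-fault indicators identified above, the peak current value and the timing of the first peak, can be extracted with a standard peak detector. The sampling rate, threshold, and synthetic waveform below are illustrative, not the paper's acquisition settings:

```python
import numpy as np
from scipy.signal import find_peaks

def jam_features(current, fs, height=1.0):
    """Extract the two jam-fault indicators described above: the peak
    current (fault severity) and the time of the first peak (fault
    stage). `fs` is the sampling rate in Hz; `height` is a minimum
    peak threshold in amperes."""
    peaks, props = find_peaks(current, height=height)
    if len(peaks) == 0:
        return None, None
    peak_current = props["peak_heights"].max()
    first_peak_time = peaks[0] / fs
    return peak_current, first_peak_time

# Synthetic record with a single 2.92 A peak at t = 0.2 s,
# mimicking the main peak of a normal closing current.
fs = 1000
t = np.arange(0, 1, 1 / fs)
current = 2.92 * np.exp(-((t - 0.2) ** 2) / 0.001)
peak, t_first = jam_features(current, fs)
```

On real records, comparing `peak` against the normal-closing peak and checking whether `t_first` lags distinguishes slight from severe jamming, as described above.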
Applying the CWT to the disconnector current is equivalent to projecting the original one-dimensional signal onto a two-dimensional time-scale plane. Through the correspondence between scale and frequency, the CWT time-frequency plot reflects how the frequency components of the current signal evolve over time. Because different disconnector faults produce currents with different frequency distributions, their CWT time-frequency maps also differ, so disconnector fault identification is transformed into the problem of identifying different time-frequency images. When the wavelet transform is applied to signal processing, the same signal can yield different results under different wavelet bases, so the selection of the wavelet basis is a particularly important parameter of the transform. The disconnector current signal studied in this paper is a transient signal. For power transient detection and characteristic parameter extraction, a wavelet with a suitable vanishing moment should be considered; to extract transients over a wide frequency range while suppressing the mixing of low-frequency carriers, a wavelet basis with a high center frequency should be selected. Given its moderate center frequency and window width, the Morlet wavelet basis is used for the time-frequency analysis of the disconnector current signals.
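As a sketch of this step, a minimal numpy-only Morlet CWT that turns a current record into a time-frequency power map. The center frequency w0 = 6 is a common default, not a value reported by the paper:

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Continuous wavelet transform with a complex Morlet mother wavelet,
    written directly from its definition: at each scale, correlate the
    signal with a scaled, L2-normalized wavelet."""
    n = len(signal)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Discretize the wavelet over +/- 10 standard deviations.
        m = min(10 * int(s), n)
        t = np.arange(-m, m + 1)
        psi = (np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
               / np.sqrt(s))
        # For the Morlet wavelet, conj(psi(-t)) == psi(t), so this
        # convolution computes the correlation the CWT requires.
        out[i] = np.convolve(signal, np.conj(psi[::-1]), mode="same")
    return out

fs = 1000
t = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * 50 * t)          # a single 50 Hz component
coeffs = morlet_cwt(sig, scales=np.arange(1, 32))
power = np.abs(coeffs) ** 2               # the time-frequency "image"
```

The scale of maximum power relates to frequency as f = w0 * fs / (2 * pi * s), which is how the scale axis of plots like those in Table 2 is read as a frequency axis.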
The wavelet-transformed signals under these five working conditions are shown in Table 2. After the wavelet transform, the current time series is converted into graphical information in which the characteristics of each fault can be distinguished more clearly, providing more effective and accurate input for the final fault diagnosis. This improves diagnostic accuracy and reduces the false-positive rate.

4. Performance Testing of the Contrastive Learning Framework Based on Time Series Representation Learning

4.1. Implementation Details

We split the data into 60%, 20%, and 20% for training, validation, and testing. After 50 rounds of training, performance no longer improved with further training. We used a batch size of 128 and the Adam optimizer with a learning rate of 8 × 10^-3, a weight decay of 8 × 10^-3, α1 = 0.85, and α2 = 0.96. For the strong augmentation, we set Estr = 10, EEp = 12, and Eweak = 20, while for the weak augmentation, we set the scaling ratio to 2.5 for all datasets. In the multi-head attention, we set L = 4 and the number of heads to 6. In context comparative learning, we set τ = 0.3. Lastly, we built our model in PyTorch 1.4 and trained it on an NVIDIA GeForce RTX 3050 Ti GPU.
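A minimal sketch of the 60/20/20 split protocol, with the hyperparameters above collected into a config for reference. The learning rate and weight decay assume the printed "8 × 103" means 8 × 10^-3; the seed and sample count are illustrative:

```python
import numpy as np

def split_indices(n, train=0.6, val=0.2, seed=0):
    """Shuffle and split n samples into 60/20/20 train/validation/test
    index sets, matching the protocol described above."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train, n_val = int(n * train), int(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Hyperparameters as reported in Section 4.1 (lr and weight decay
# are assumed reconstructions of the garbled "8 x 103").
config = {"epochs": 50, "batch_size": 128, "lr": 8e-3,
          "weight_decay": 8e-3, "alpha1": 0.85, "alpha2": 0.96,
          "tau": 0.3, "attention_heads": 6}

train_idx, val_idx, test_idx = split_indices(250)
```

The three index sets are disjoint and jointly cover every sample, so no record leaks between training and evaluation.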

4.2. Analysis of Model Robustness

To accurately monitor faults in disconnectors, the fault diagnosis method used must be sufficiently accurate. In this paper, the proposed algorithm was evaluated experimentally using accuracy-robustness (AR) plots. To better contextualize its performance, the BP, LSTM, K-means, SVM, FS + Random Forest, and DTW + KNN algorithms were included in the experiments for comparison. The experiments used the drive-motor current data of a 252 kV high-voltage disconnector from a domestic switchyard. The AR plots obtained from the experiments are shown in Figure 8.
As seen in Figure 8a, the method proposed in this article achieved the best robustness and accuracy; the closer an algorithm lies to the upper right, the higher its accuracy and stability. Figure 8b confirms that the MG-CL framework outperformed all the benchmark algorithms in both accuracy and robustness.
We also verified the variation in the accuracy of different models as the noise increased. Figure 9a shows the variation in the accuracy of various models when different Gaussian noise was added. Figure 9b shows the variation in the accuracy of various models when different salt and pepper noise was added.
The figures show that the accuracy of MG-CL, SVM, and LSTM decreased as the noise increased; however, MG-CL remained the most accurate, staying above 80%. This demonstrates that the proposed model maintains good accuracy and robustness under different types of noise.

4.3. Ablation Study

To verify the effectiveness of the proposed components of MG-CL, Table 3 compares the full MG-CL with three ablated variants in our experiment. The results show that all of the above components of MG-CL are indispensable.

4.4. Comparison of Accuracy of Different Models

The test and comparison results are shown in Table 4, which includes five baseline algorithms alongside the method used in this paper. Among them, SVM, FS + Random Forest, and DTW + KNN are machine learning methods, while the BP network and LSTM are deep learning methods. The comparison shows that the contrastive learning framework based on time-series representation learning achieved the best diagnostic performance of the six methods and can effectively identify the operating state of the disconnector.
The two methods with the highest fault accuracy were selected to draw their confusion matrices, with 200 samples for each fault type; the results are shown in Figure 10.
The elements on the main diagonal give the accuracy for each fault type, and the remaining elements in each row give its misjudgment rates. MG-CL outperformed LSTM for every fault type. The figure also shows that the Normal condition, Slight jamming, and Severe jamming classes were the most prone to confusion with one another, probably because their current waveforms are similar.
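The per-class accuracies and misjudgment rates read off Figure 10 are simply row-normalized confusion-matrix entries. The matrix below is a hypothetical three-class excerpt for illustration, not the paper's data:

```python
import numpy as np

def per_class_rates(cm):
    """Given a confusion matrix with true classes on rows (200 samples
    per fault type in the paper), return each class's accuracy (the
    main diagonal after row normalization) and the full rate matrix,
    whose off-diagonal entries are the misjudgment rates."""
    cm = np.asarray(cm, dtype=float)
    rates = cm / cm.sum(axis=1, keepdims=True)
    return np.diag(rates), rates

# Hypothetical excerpt: normal, slight jamming, severe jamming.
cm = [[190, 7, 3],
      [6, 188, 6],
      [2, 8, 190]]
acc, rates = per_class_rates(cm)
```

Each row of `rates` sums to 1, so the diagonal and off-diagonal entries partition the 200 samples of that fault type into correct and misjudged fractions.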

4.5. Model Complexity Evaluation

The complexity of the model includes time complexity and space complexity: time complexity refers to the time taken to execute the algorithm, and space complexity to the memory required to execute it.

4.5.1. Time Complexity

We used the model running time as the measure of time complexity. The proposed model and the comparison models were each used for fault diagnosis, and the time required for fault identification was recorded; the results are shown in Figure 11.
Figure 11 shows that the proposed method required 78.5 ms for fault identification, the least of all the methods, confirming that the proposed model has the lowest time complexity.
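Wall-clock identification times of the kind reported in Figure 11 can be measured with a monotonic clock. This harness is a generic sketch; the 78.5 ms figure is the paper's measurement, not reproduced here:

```python
import time

def median_latency_ms(fn, x, n_runs=30):
    """Run `fn(x)` repeatedly and report the median latency in
    milliseconds. The median is less sensitive to scheduler jitter
    than the mean, and perf_counter is monotonic."""
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        fn(x)
        times.append((time.perf_counter() - t0) * 1e3)
    times.sort()
    return times[len(times) // 2]

# Stand-in workload in place of a trained diagnosis model.
latency = median_latency_ms(lambda v: sum(i * i for i in v), list(range(1000)))
```

Running the same harness over each trained model, with identical inputs and a warm-up pass, gives directly comparable bars like those in Figure 11.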

4.5.2. Space Complexity

Here, we used the computational load of the model as the measure of space complexity. Taking the proposed model as the baseline (1.00), the computational volumes of the different models are shown in Table 5.
As the table shows, the computational load of the proposed method is minimal, which means the model has fewer parameters to compute; the model therefore has low space complexity.

5. Conclusions

To address the problem that most existing machine learning methods rely on a large number of labeled training samples and that the amount of training data is often insufficient, a fault diagnosis model was designed for different operating conditions of disconnectors. The data were preprocessed to generate one strongly and one weakly augmented group. Faults were first coarsely classified by the coarse-grained contrast module and then passed to the fine-grained contrast module to identify the exact fault.
Compared with other models, the fault diagnosis accuracy of the proposed model reached 94%, and its time and space complexity were better than those of previous models. However, this article covered only four failure conditions of disconnectors; in practice there are many other fault types, and only current signals were used here for diagnosis. In future work, other features, such as vibration signals, could be added for feature-fusion fault diagnosis, which may be more effective.

Author Contributions

Conceptualization, Q.X. and H.L.; methodology, Q.X., H.T. and B.L.; software, Q.X., H.T. and B.L.; validation, Z.W. and J.D.; formal analysis, Q.X., H.T. and B.L.; investigation, Q.X. and J.D.; data curation, Q.X., H.T. and B.L.; writing—original draft preparation, Q.X., H.T. and B.L.; writing—review and editing, Q.X., H.T. and H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Nature Science Foundation of China (grant number 52009106).

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. An overview of MG-CL.
Figure 2. Schematic diagram of the multi-head attention window.
Figure 3. Structure diagram of multi-head attention.
Figure 4. Multi-layer perceptron high-dimensional learning graph.
Figure 5. Multi-layer perceptron structure diagram.
Figure 6. Flowchart of disconnector fault diagnosis.
Figure 7. Original signal and its enhanced signal. (a) Normal condition, (b) slight jamming, (c) severe jamming, (d) balance spring failure, and (e) the closing is not in place.
Figure 8. (a) Scatterplot of accuracy and robustness of different algorithms. (b) Accuracy–robustness line chart for different algorithms.
Figure 9. Accuracy of the model under different noises. (a) Gaussian noise and (b) salt and pepper noise.
Figure 10. Fault diagnosis confusion matrices: (a) MG-CL and (b) LSTM.
Figure 11. Comparison of fault identification times of different methods.
Table 1. Disconnector fault simulation preliminary scheme table.

Serial Number | Type of Fault | The Main Cause of Failure | Fault Simulation Method
1 | Slight jamming | Slight corrosion of the connecting rod of the mechanism. Slight bending deformation of the connecting rod. | To simulate faults 1 and 2 while protecting the motor from overload and avoiding mechanism wear on in-service equipment, different amounts of elastic tape are used to bind the moving contact to the pillar porcelain bottle.
2 | Severe jamming | Serious corrosion of the connecting rod of the mechanism. Obvious bending deformation of the connecting rod. | (Same simulation method as fault 1, with a larger amount of tape.)
3 | Balance spring failure | Workpiece corrosion. Influence of material processing technology. | A plastic strapping belt is knotted around the contraction part at the end of the spring to increase the required motor output torque, simulating a reduced stiffness coefficient of the balance spring.
4 | The closing is not in place | Foreign body obstruction. Poor gear engagement. Inappropriate adjustment of the auxiliary and limit switches. | Adjust the mechanical travel switch so that its break point lags behind the set value; that is, the moving contact is not fully engaged after adjustment, leaving 20% of the contact area untouched.
Table 2. The original signal and its enhanced signal after wavelet transform.

Fault Type | Original Signal | Strong Enhancement Signal | Weak Enhancement Signal
Normal condition | [image] | [image] | [image]
Slight jamming | [image] | [image] | [image]
Severe jamming | [image] | [image] | [image]
Balance spring failure | [image] | [image] | [image]
The closing is not in place | [image] | [image] | [image]
Table 3. Ablation results.

Variant | Accuracy
MG-CL | 94%
Without time series contrast learning | 87%
Without context comparative learning | 88%
Without CWT | 90%
Table 4. Comparison of training results.

Algorithm | Number of Test Samples | Correctly Identified Samples | Identification Accuracy
BP | 50 | 41 | 82.00%
SVM | 50 | 43 | 86.00%
LSTM | 50 | 44 | 88.00%
FS + Random Forest | 50 | 37 | 74.00%
DTW + KNN | 50 | 35 | 70.00%
MG-CL | 50 | 47 | 94.00%
Table 5. Comparison between the computational loads of different models.

Model | Amount of Computation
MG-CL | 1.00
LSTM | 3.53
FS + Random Forest | 19.65
DTW + KNN | 6.71
BP | 26.95
SVM | 3.15

Xie, Q.; Tang, H.; Liu, B.; Li, H.; Wang, Z.; Dang, J. Disconnector Fault Diagnosis Based on Multi-Granularity Contrast Learning. Processes 2023, 11, 2981. https://doi.org/10.3390/pr11102981