Article

Enhancing Fault Diagnosis in Industrial Processes through Adversarial Task Augmented Sequential Meta-Learning

1 College of Marine Electrical Engineering, Dalian Maritime University, Dalian 116026, China
2 Key Laboratory of Chemical Lasers, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023, China
3 Key Laboratory of Technology and System for Intelligent Ships of Liaoning Province, Dalian 116026, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(11), 4433; https://doi.org/10.3390/app14114433
Submission received: 17 April 2024 / Revised: 17 May 2024 / Accepted: 22 May 2024 / Published: 23 May 2024
(This article belongs to the Section Applied Industrial Technologies)

Abstract:
This study introduces the Adversarial Task Augmented Sequential Meta-Learning (ATASML) framework, designed to enhance fault diagnosis in industrial processes. ATASML integrates adversarial learning with sequential task learning to improve the model’s adaptability and robustness, facilitating precise fault identification under varied conditions. Key to ATASML’s approach is its novel use of adversarial examples and data-augmentation techniques, including noise injection and temporal warping, which extend the model’s exposure to diverse operational scenarios and fault manifestations. This enriched training environment significantly boosts the model’s ability to generalize from limited data, a critical advantage in industrial applications where anomaly patterns frequently vary. The framework’s performance was rigorously evaluated on two benchmark datasets: the Tennessee Eastman Process (TEP) and the Skoltech Anomaly Benchmark (SKAB), which are representative of complex industrial systems. The results indicate that ATASML outperforms conventional meta-learning models, particularly in scenarios characterized by few-shot learning requirements. Notably, ATASML demonstrated superior accuracy and F1 scores, validating its effectiveness in enhancing fault-diagnosis capabilities. Furthermore, ATASML’s strategic incorporation of task sequencing and adversarial tasks optimizes the training process, which not only refines learning outcomes but also improves computational efficiency. This study confirms the utility of the ATASML framework in significantly enhancing the accuracy and reliability of fault-diagnosis systems under diverse and challenging conditions prevalent in industrial processes.

1. Introduction

The rapid progression of industrial automation and informatization has significantly elevated the importance of fault-diagnosis technologies in maintaining the reliability and safety of industrial systems [1]. In complex settings such as the chemical, manufacturing and energy sectors, the capacity for precise and timely fault detection is crucial for optimizing production efficiency and averting potential accidents [2,3,4]. Recent advancements in big data and the Internet of Things (IoT) have led to widespread sensor integration across these systems, generating substantial data volumes that enhance the monitoring and diagnostic processes [5,6]. This influx of operational data, while beneficial, often overshadows the sparse fault data, creating a significant imbalance that challenges traditional data-intensive machine learning and deep learning methodologies [7,8]. This disparity underscores the need for robust few-shot learning techniques that can effectively function in environments with limited fault data, marking a critical area of research in industrial intelligence [9,10].
Despite the significant achievements of Few-Shot Learning (FSL) in fields such as natural language processing and image classification [8,11,12,13], its application in the complex domain of fault diagnosis, particularly when handling intricate industrial data, continues to face numerous challenges. Research has shown that existing FSL approaches can be broadly categorized into generative and discriminative model methods, each with its strengths and limitations. For instance, generative models, as described by Lu et al. [11], often enhance model performance in data-scarce environments by generating new samples, whereas discriminative models focus on learning distinguishing features from limited data. Zhang et al. [8] have classified fault-diagnosis methods for small and imbalanced datasets into data augmentation, feature extraction and classifier design. Despite these strengths, such methods commonly exhibit weaknesses including poor generalization, sensitivity to adversarial examples and high algorithmic complexity, which can lead to suboptimal performance in industrial applications that demand rapid and accurate diagnostics. Recent studies have shifted focus to addressing FSL fault-diagnosis challenges through meta-learning [14,15,16], with a significant proportion of recent research in time-series signal fault diagnosis concentrating on this approach [17]. However, the practical utility of meta-learning is limited by its high dependency on data quality and distribution, which can degrade performance in industrial settings affected by data noise and outliers [18]. Meta-learning algorithms like MAML, although quick to adapt to new tasks, require multiple gradient updates, which increases the computational load and can lead to overfitting on sparse data, thus diminishing generalization capabilities [19].
Task sequencing meta-learning introduces a novel method for optimizing the selection and order of learning tasks, but significant differences between tasks can lead to unstable knowledge transfer, affecting the learning outcomes [20].
In recent years, GAN-based data-augmentation algorithms have garnered considerable attention for addressing small-sample problems in fault diagnosis. These algorithms typically follow a sequence of steps: initially collecting data under various fault conditions, then training a GAN model using real signals and finally training classifiers with both real and synthetic signals [21]. This approach enables GAN-based data augmentation to generate a diverse set of samples from limited fault data, thereby enhancing the performance of diagnostic models. For instance, Zhang et al. developed a novel multi-module gradient penalty GAN specifically for generating samples for mechanical fault diagnosis [22]. Furthermore, research by Li et al. utilized a WGAN-GP-based auxiliary classifier to generate high-quality spectra to overcome small-sample challenges [23]. Although methods for generating one-dimensional spectra or two-dimensional images have been extensively explored, Pan et al. proposed a feature generation network that creates one-dimensional feature sets rich in fault information, suitable for further fault identification [24]. Examples of GAN-based data augmentation for different signal types are shown in Table 1 below. Building on this foundation, our research has further evolved to incorporate adversarial sample augmentation techniques. This strategy, by introducing targeted adversarial perturbations directly into the data, creates adversarial samples for training. Compared to traditional GAN generation methods, adversarial sample augmentation not only enhances the model’s sensitivity and detection capabilities for complex fault conditions but also improves its robustness against unknown or varying fault patterns. By simulating potential fault variations, our model is better equipped to adapt to the complex and dynamic conditions encountered in actual operations, thereby increasing the accuracy and reliability of fault diagnosis.
Building on our previous discussion regarding the challenges and limitations of meta-learning in industrial fault diagnosis, the research framework provided by Wang et al. [12] offers a systematic approach to thinking about few-shot learning. They have meticulously categorized few-shot learning from three perspectives: data, models and algorithms. Following this framework, our study introduces a comprehensive strategy combining data augmentation, model optimization and algorithmic improvements to overcome the existing limitations of meta-learning techniques, particularly when dealing with complex industrial fault-diagnosis data.
To address these challenges, our research integrates adversarial sample generation with serialized task learning to enhance the model’s robustness and adaptability. This method leverages modular design and advanced data-augmentation strategies to manage diverse fault-diagnosis scenarios effectively. Generating adversarial samples not only increases the dataset’s diversity but also sharpens the model’s capability to recognize and respond to anomalies that are not typical of the training data, substantially improving its diagnostic accuracy under operational conditions. The model adopts a modular design that allows for greater structural flexibility, facilitating adjustments and optimizations tailored to specific fault-diagnosis tasks; this adaptability enhances the model’s efficiency and accuracy in identifying faults, making it particularly suited to dynamic industrial environments. Serialized task learning refines the learning process by ordering tasks from simple to complex, ensuring efficient use of limited sample data during training, speeding the model’s adaptation to new fault types and maintaining stable performance across diverse industrial settings. Together, these elements provide a robust framework that significantly improves adaptability and accuracy compared to traditional methods; the integration of adversarial learning within the serialized task framework is particularly effective in preparing the model for the unpredictability of real-world operational conditions.
The Adversarial Task Augmented Sequential Meta-Learning (ATASML) framework introduced in this study enhances diagnostic models by integrating adversarial examples within a meta-learning architecture, significantly improving generalization. By embedding adversarial tasks during the training phase, ATASML prepares the model to handle unexpected or novel fault scenarios, robustly boosting adaptability and accuracy, which makes it well suited to industrial settings with variable and complex sensor data.
The principal contributions of this work are encapsulated as follows:
  • Development of the ATASML framework that integrates adversarial tasks during training, enhancing the model’s adaptability and robustness. This dynamic integration is crucial for improving diagnostic accuracy and reliability under diverse industrial fault conditions.
  • Implementation of a comprehensive data-augmentation strategy in ATASML, incorporating Gaussian noise, temporal warping and adversarial example generation. These techniques broaden training data coverage, significantly improving the model’s anomaly detection and diagnostic capabilities in variable environments.
  • Application of a sequential task learning approach where tasks are prioritized based on complexity and informational value. This optimizes the learning process, improving training efficiency and diagnostic precision, and making the process computationally economical.
  • Validation of ATASML on industrially relevant datasets, including the Tennessee Eastman Process (TEP) and the Skoltech Anomaly Benchmark (SKAB), shows its superior performance over other well-established models, particularly in few-shot learning scenarios. The outcomes underscore ATASML’s improved accuracy, F1 scores and robust generalization across a range of complex fault conditions.
The remainder of this paper is organized as follows: Section 2 describes the proposed method in detail. Section 3 presents a case study that applies the method to specific datasets. Section 4 discusses the implications of the experimental results, and Section 5 concludes the paper with a summary of the findings and directions for future research.

2. Proposed Method

This section delineates our proposed approach, the Adversarial Task Augmented Sequential Meta-Learning (ATASML) framework, designed to enhance the efficacy of fault diagnosis in complex systems. Initially, we explore the foundational concepts of meta-learning, highlighting its significance in enabling models to generalize from limited data. Subsequently, the discussion transitions to task-oriented meta-learning algorithms, acknowledging their efforts to refine the adaptability and computational efficiency of meta-learning models. Building on these insights, we introduce the ATASML framework, focusing on its innovative strategies for data augmentation and the generation of adversarial samples. This structured presentation aims to illustrate the logical progression from meta-learning fundamentals to the mechanisms underlying the ATASML framework.

2.1. Fundamentals of Meta-Learning

Meta-learning, or learning to learn, is a significant paradigm in machine learning aimed at enabling models to generalize from limited experiences to perform well on new tasks [18]. This approach leverages accumulated knowledge to quickly adapt to novel learning challenges with minimal data. Meta-learning operates at two levels: the meta-level, focusing on the learning strategy across tasks, and the base-level, where task-specific learning occurs.
The core objective in meta-learning is to fine-tune the model parameters θ to optimize performance not just on a single task’s training data but across a diverse set of tasks drawn from the distribution p(T). This goal is pursued through episodic training, which involves sampling tasks T ∼ p(T) and dividing each into a support set S for learning and a query set Q for evaluation. The model’s parameters are updated based on the performance on Q, which measures how well the model has assimilated information from S.
Formally, the meta-learning objective for a given task T, with support set S and query set Q, is to minimize the expected loss L_Q on Q after learning from S, as defined by:
min_θ E_{T∼p(T)} [ L_Q(f_{θ′}(x_Q), y_Q) ]
where θ′ denotes the parameters updated by learning on S, and (x_Q, y_Q) denote the samples and labels in Q. The ultimate aim is to identify an initialization θ that facilitates rapid adaptation to new tasks drawn from p(T).
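As a concrete illustration of the episodic setup, the following Python sketch builds one task with a support/query split. The sine-wave task generator and the set sizes are illustrative stand-ins, not the paper’s actual task construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task(k_support=5, k_query=15):
    """Sample one task T ~ p(T). Here a sine wave with random amplitude
    and phase stands in for one operating/fault condition (an assumption,
    not the paper's task generator)."""
    amp, phase = rng.uniform(0.1, 5.0), rng.uniform(0, np.pi)
    x = rng.uniform(-5, 5, size=k_support + k_query)
    y = amp * np.sin(x + phase)
    # Episodic split: support set S for adaptation, query set Q for evaluation.
    S = (x[:k_support], y[:k_support])
    Q = (x[k_support:], y[k_support:])
    return S, Q

S, Q = sample_task()
```

The meta-objective is then the expected query loss after adapting on each sampled support set, averaged over many such episodes.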
Various meta-learning models exist, including model-based, optimization-based and metric-based approaches. Each type aims to enhance the model’s ability to learn efficiently from minimal data, thereby broadening the scope of tasks it can handle—from few-shot learning challenges to rapid adaptation in reinforcement learning contexts.
Model-Agnostic Meta-Learning (MAML) [36,37] is a notable strategy that exemplifies meta-learning’s essence by preparing the model to improve significantly on new tasks with only a few gradient updates. MAML seeks an initial parameter set θ that is optimal for quick learning across tasks. After initial adaptation through gradient updates on S, yielding task-specific parameters θ_T′, the model’s performance is evaluated on Q:
θ_T′ = θ − α ∇_θ L_T(f_θ; S)
This approach’s generality allows its application across various models and tasks, highlighting the versatility and potential of meta-learning.
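The inner/outer structure of the MAML update above can be sketched as follows. The linear model, the squared-error loss and the first-order treatment of the meta-gradient are simplifying assumptions; full MAML differentiates through the inner update, which requires second derivatives.

```python
import numpy as np

def loss_grad(theta, X, y):
    """Squared-error loss and its gradient for a linear model f_theta(x) = X @ theta."""
    err = X @ theta - y
    return float(err @ err) / len(y), 2.0 * X.T @ err / len(y)

def maml_meta_gradient(theta, tasks, alpha=0.1):
    """One MAML-style meta-gradient: adapt theta on each support set S
    (theta_T' = theta - alpha * grad L_T(f_theta; S)), then take the query-set
    gradient at the adapted parameters. This is the first-order approximation
    (FOMAML) -- an assumption made here for brevity."""
    meta_grad = np.zeros_like(theta)
    for (Xs, ys), (Xq, yq) in tasks:
        _, g_s = loss_grad(theta, Xs, ys)
        theta_t = theta - alpha * g_s          # inner-loop adaptation on S
        _, g_q = loss_grad(theta_t, Xq, yq)    # evaluation on Q
        meta_grad += g_q
    return meta_grad / len(tasks)
```

The outer loop then applies θ ← θ − β · meta_grad with the meta step size β.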
Despite meta-learning’s and MAML’s advancements, challenges such as high computational demands, assumptions of task homogeneity and sensitivity to hyperparameters remain areas for future research. Addressing these limitations is crucial for advancing meta-learning towards more practical and scalable applications.

2.2. Task-Oriented Meta-Learning Algorithms

Recent developments in task-oriented meta-learning have demonstrated significant advancements in optimizing learning processes through past experiences to enhance performance across a variety of tasks. Notably, task sequencing meta-learning techniques have been introduced to improve adaptability and performance in few-shot learning scenarios by optimizing the order of task presentation.
  • Task-Sequencing Meta-Learning (TSML): Introduced by Hu et al. [38], TSML organizes meta-training tasks from simple to complex, enhancing a model’s adaptability in few-shot fault-diagnosis scenarios.
  • Task-Specific Pseudo Labelling: Developed by Lee et al. [39], this method employs pseudo labelling to enhance transductive meta-learning by generating synthetic labels for unannotated query sets, significantly improving model performance.
  • Task Weighting with Trajectory Optimization: Proposed by Do & Carneiro [40], this approach uses trajectory optimization to automate task weighting, showing improved performance over traditional methods in few-shot learning benchmarks.
Despite these innovative approaches, challenges remain, particularly the variability between tasks, which can lead to unstable knowledge transfer and reduced learning efficacy. This issue is exacerbated in real-world applications where task heterogeneity is prevalent and tasks often deviate significantly from the training distribution. The assumption of task uniformity made by these algorithms often does not hold in complex scenarios, leading to a gap between theoretical efficiency and practical applicability. The fixed nature of task prioritization may also fail to accommodate the dynamic variability of real-world data, posing further challenges to model adaptability.
The exploration of these meta-learning algorithms underscores the necessity for a robust framework that intelligently sequences tasks and is resilient to the unpredictable nature of practical applications. The upcoming section introduces the Adversarial Task Augmented Sequential Meta-Learning (ATASML) Framework, which proposes innovative solutions to address these challenges. ATASML integrates data augmentation and adversarial examples within its learning process, significantly enhancing the model’s resilience and generalization capabilities. These features enable ATASML to perform effectively even under the challenging conditions presented by complex fault scenarios, thereby providing a substantial improvement over existing meta-learning frameworks.

2.3. Adversarial Task Augmented Sequential Meta-Learning (ATASML) Framework

The ATASML framework introduces a novel approach to enhance model adaptability and robustness, particularly in fault-diagnosis applications. By integrating data augmentation and adversarial training, ATASML aims to prepare models for a wide array of operational scenarios. This section elaborates on the framework’s methodology, emphasizing data augmentation, adversarial sample generation and the overall algorithmic procedure.

2.3.1. Data Augmentation in the ATASML Framework

Data augmentation is critical in the ATASML framework, ensuring that models are well-equipped for diverse operational scenarios. Techniques such as Gaussian noise addition and temporal warping are employed to enhance data variability and complexity. These techniques are depicted in Figure 1, where the transformation from the original dataset X_orig to the augmented datasets X_gauss and X_warp is illustrated.
The original dataset X_orig consists of samples x_1, x_2, …, x_n that capture the true operational dynamics. To enrich the dataset, a window of size W is used to segment the continuous time-series data into discrete, overlapping segments. This sliding-window technique allows comprehensive capture of the temporal patterns within each segment. The augmentation methods applied to each segment are as follows:
  • Noise Addition (NoiseAdd): Gaussian noise ε is added to the data within each window, creating a set of noise-augmented samples X_gauss:
    X_gauss = X_orig + ε, ε ∼ N(0, σ²)
    where σ² denotes the variance of the noise, modeling sensor noise and other environmental variations.
  • Temporal Warping (TimeWarp): The temporal spacing of data points within each segment is modified, resulting in a set of time-warped samples X_warp:
    X_warp(t) = X_orig(a · t), a ∼ U(a_min, a_max)
    where a is a scaling factor that simulates the acceleration or deceleration of process dynamics.
Following augmentation, a selection process ensures that the resulting samples contribute positively to model training. This selection process, including quality assessment and anomaly detection, is critical to maintaining a high-quality augmented dataset X_aug. The entire augmentation process, including windowing and the augmentation techniques, is schematically depicted in Figure 1.
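The NoiseAdd and TimeWarp operations can be sketched on a one-dimensional signal as follows; the window size, step, noise level and warp range are illustrative choices, not the paper’s tuned values.

```python
import numpy as np

def augment_windows(x, W=40, step=20, sigma=0.05, a_range=(0.8, 1.2), seed=0):
    """Sliding-window segmentation followed by the two augmentations."""
    rng = np.random.default_rng(seed)
    windows = [x[i:i + W] for i in range(0, len(x) - W + 1, step)]
    X_orig = np.stack(windows)
    # NoiseAdd: X_gauss = X_orig + eps, eps ~ N(0, sigma^2)
    X_gauss = X_orig + rng.normal(0.0, sigma, X_orig.shape)
    # TimeWarp: X_warp(t) = X_orig(a * t), a ~ U(a_min, a_max), realized by
    # resampling each window on a rescaled (clipped) time grid.
    t = np.arange(W)
    a = rng.uniform(*a_range, size=len(windows))
    X_warp = np.stack([np.interp(np.clip(a_i * t, 0, W - 1), t, w)
                       for a_i, w in zip(a, X_orig)])
    return X_orig, X_gauss, X_warp
```

In the full framework a selection step would then filter these augmented windows before they enter X_aug.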

2.3.2. Design of Adversarial Samples

Adversarial samples within the ATASML framework are specifically designed to evaluate and enhance the model’s resilience against operational conditions that mimic real-world disturbances. These samples are derived from both the augmented dataset X_aug, which comprises samples modified by standard data-augmentation techniques, and the original dataset X_orig.
The generation of adversarial samples, depicted in Figure 2, involves the following steps:
  • Dataset Composition: The dataset for generating adversarial samples, X_adv, is built from:
    • X_gauss: samples augmented with Gaussian noise;
    • X_warp: samples modified through temporal warping.
    These augmented samples, along with unaltered original samples from X_orig, are used to construct X_adv.
  • Strategic Sample Selection (S): A subset S = {x_i} drawn from both X_aug (comprising X_gauss and X_warp) and X_orig is selected to create instances that exhibit potential vulnerabilities under varied operational conditions. This targeted selection is designed to challenge the model realistically and robustly by:
    • focusing on samples that represent critical transitional states or dynamic conditions;
    • selecting samples that simulate rare operational disruptions or extreme conditions;
    • choosing instances that might indicate potential system failures or significant performance anomalies.
  • Adversarial Optimization Problem: The adversarial samples X_adv are generated by solving an optimization problem formulated to maximize the predictive error, thereby testing the model’s robustness:
    X_adv = { x_adv^i | x_adv^i = argmax_{x̃_i} L(f(x̃_i), y_i), s.t. ‖x̃_i − x_i‖_p ≤ ε }
    where x̃_i = x_i + δ with δ = ε · sign(∇_{x_i} L(f(x_i), y_i)) is the perturbation designed to maximally disrupt the model’s predictions, and y_i is the label of the strategically selected sample, denoting its operational state as normal or anomalous.
  • Gradient Sign Method: This method uses the gradient of the loss function with respect to the inputs in S to determine the most effective direction of perturbation, ensuring the adversarial samples are optimally challenging:
    sign(∇_{x_i} L) = sign(∇_{x_i} L(f(x_i), y_i))
This systematic approach of employing both X_aug (including X_gauss and X_warp) and X_orig to generate X_adv ensures the model is comprehensively evaluated against synthetic distortions as well as baseline conditions. Such rigorous testing is essential for verifying the model’s operational reliability across diverse and potentially disruptive conditions.
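A minimal sketch of the gradient-sign perturbation follows. A logistic model stands in for the diagnosis network, whose loss gradient the framework would actually use; the weights and ε below are arbitrary.

```python
import numpy as np

def fgsm(x, y, w, b, eps=0.1):
    """FGSM-style perturbation x_tilde = x + eps * sign(grad_x L) for a
    logistic model p = sigmoid(w @ x + b) with cross-entropy loss. For this
    model the input gradient has the closed form (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w          # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)
```

Moving each input component one ε step along the sign of the loss gradient increases the loss, which is exactly the property the adversarial tasks exploit.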

2.4. ATASML Algorithmic Procedure

The ATASML framework is designed to enhance fault diagnosis through a systematic approach incorporating meta-learning and adversarial training, which improve model robustness and adaptability.
Refer to Figure 3 for a visual representation of the ATASML algorithm’s architecture, which illustrates the critical components and their interactions within the framework.
  • Parameter Initialization: Begin with a random initialization of the model parameters θ_0, which helps avoid local minima and promotes better convergence towards optimal solutions.
  • Task Sampling: Sample a batch of tasks {T_i}_{i=1}^B from the task distribution p(T). This step ensures that the model is trained across a diverse set of scenarios, enhancing its generalization capabilities.
  • Data Augmentation: Use the pre-augmented dataset X_aug to select augmented tasks T_i,aug. This dataset already includes transformations applied via Gaussian noise and temporal warping, directly enhancing the data’s variability and complexity:
    T_i,aug = SelectFrom(X_aug, T_i)
  • Adversarial Task Generation: Generate adversarial tasks T_i,adv using the dataset X_adv, which has been specifically prepared to challenge the model’s robustness under simulated adversarial conditions:
    T_i,adv = GenerateAdversarial(X_adv, T_i)
  • Task Combination and Difficulty Assessment: Combine the original tasks T_i from p(T), the augmented tasks T_i,aug and the adversarial tasks T_i,adv into composite tasks T_i,comp:
    T_i,comp = Combine(T_i, T_i,aug, T_i,adv)
    Assess the difficulty of these composite tasks to ensure the model can handle varying levels of challenge and complexity. This assessment aids in tuning the model’s sensitivity to subtle and extreme variations alike:
    D_{T_i,comp} = AssessDifficulty(T_i,comp)
  • Wasserstein Distance Calculation: Compute the Wasserstein distance between each composite task T_i,comp and the original task distribution, which helps maintain the integrity of the model’s training process:
    W_{T_i,comp} = ComputeWassersteinDistance(T_i,comp, p(T))
  • Task Ranking: Rank the tasks based on the assessed difficulty and Wasserstein distance, prioritizing the tasks that will most effectively advance the model’s learning:
    {T_j} = RankTasks({T_i,comp}, D_{T_i,comp}, W_{T_i,comp})
  • Support and Query Set Sampling: From each ranked task T_j, extract the support set S_j and the query set Q_j. These sets are crucial for tuning the model parameters to each task:
    S_j, Q_j = SampleSets(T_j)
  • Model Optimization: Update the model parameters θ_i using the support set S_j and evaluate performance on the query set Q_j. This step is vital for iterative learning and adaptation of the model:
    θ_i′ = θ_i − α ∇_{θ_i} L_{S_j}(f_{θ_i})
    L_{Q_j}(f_{θ_i′}) = EvaluateLoss(Q_j, f_{θ_i′})
  • Global Parameter Update: Perform a global update of the model parameters θ using the aggregated query losses across all tasks, leaving the model refined and ready for deployment:
    θ ← θ − β ∇_θ Σ_j L_{Q_j}(f_{θ_i′})
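The difficulty assessment, Wasserstein distance and ranking steps can be sketched as follows. The variance-based difficulty proxy and the additive score are illustrative assumptions; the empirical 1-D Wasserstein-1 distance is computed via the sorted-sample (quantile) coupling, which is valid for equally sized samples.

```python
import numpy as np

def w1_distance(a, b):
    """Empirical Wasserstein-1 distance between two equally sized 1-D
    samples: mean absolute difference of the sorted values."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

def rank_tasks(tasks, ref):
    """Score each composite task by a difficulty proxy (here, sample
    variance) plus its Wasserstein distance to reference samples from the
    original distribution, then order simple-to-complex. Both the proxy
    and the additive combination are assumptions for illustration."""
    scored = [(np.var(s) + w1_distance(s, ref), name) for name, s in tasks]
    return [name for _, name in sorted(scored)]
```

Tasks that are both easy and close to the original distribution are presented first, matching the simple-to-complex ordering used in sequential task learning.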
The systematic methodology employed by the ATASML framework ensures improved diagnostic capabilities across a variety of operational conditions, establishing a robust model well-suited for dynamic environments.
Refer to Algorithm 1 for a complete overview of the procedural steps involved in the ATASML framework. This algorithm highlights the integrated approach to utilizing adversarial and meta-learning techniques to enhance the adaptability and accuracy of fault-diagnosis systems.
Algorithm 1 Adversarial Task Augmented Sequential Meta-Learning (ATASML)
Require: p(T): distribution over tasks.
Require: α, β: step-size hyperparameters for inner- and outer-loop optimization.
 1: Initialize model parameters θ_0 randomly.
 2: while not converged do
 3:     Sample a batch of tasks {T_i}_{i=1}^B from p(T).
 4:     for each T_i do
 5:         Augment T_i to T_i,aug using samples from X_aug.
 6:         Generate adversarial tasks T_i,adv using samples from X_adv.
 7:         Combine T_i, T_i,aug and T_i,adv into T_i,comp.
 8:         Assess difficulty D_{T_i,comp} and compute Wasserstein distance W_{T_i,comp} to the original task distribution.
 9:     end for
10:     Rank all tasks in {T_i,comp} based on D_{T_i,comp} and W_{T_i,comp}.
11:     for each ranked task T_j do
12:         Sample support set S_j and query set Q_j from T_j.
13:         Optimize θ_i on S_j using gradient descent: θ_i′ = θ_i − α ∇_{θ_i} L_{S_j}(f_{θ_i})
14:         Calculate the loss on Q_j with the updated parameters: L_{Q_j}(f_{θ_i′}) = EvaluateLoss(Q_j, f_{θ_i′})
15:     end for
16:     Update the global parameters: θ ← θ − β ∇_θ Σ_j L_{Q_j}(f_{θ_i′})
17: end while
Ensure: Optimized model parameters θ for generalized fault diagnosis.
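For orientation, the ranking-and-update portion of the procedure can be sketched on a toy linear model; the variance-based difficulty proxy, the omission of the Wasserstein term and the first-order meta-gradient are simplifying assumptions, and task construction is assumed to have happened upstream.

```python
import numpy as np

def loss_grad(theta, X, y):
    """Squared-error loss and gradient for a linear model X @ theta."""
    err = X @ theta - y
    return float(err @ err) / len(y), 2.0 * X.T @ err / len(y)

def atasml_epoch(theta, tasks, alpha=0.05, beta=0.05):
    """One outer iteration over pre-built composite tasks: rank them
    (here by support-label variance as an illustrative difficulty proxy),
    adapt on each support set, accumulate first-order query gradients,
    then apply the global update."""
    ranked = sorted(tasks, key=lambda t: np.var(t[0][1]))  # simple-to-complex
    outer_grad = np.zeros_like(theta)
    for (Xs, ys), (Xq, yq) in ranked:
        _, g_s = loss_grad(theta, Xs, ys)
        theta_i = theta - alpha * g_s        # inner update on the support set
        _, g_q = loss_grad(theta_i, Xq, yq)  # query loss gradient
        outer_grad += g_q
    return theta - beta * outer_grad / len(ranked)  # global update
```

Repeating `atasml_epoch` until convergence plays the role of the outer while-loop of Algorithm 1.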

3. Case Study

This section details the empirical evaluation of the Adversarial Task Augmented Sequential Meta-Learning (ATASML) framework, utilizing two benchmark datasets in the field of fault diagnosis: the Tennessee Eastman Process (TEP) and the Skoltech Anomaly Benchmark (SKAB) datasets. Chosen for their comprehensive representation of real-world industrial challenges and diverse fault scenarios, these datasets offer a substantive basis to assess the efficacy of ATASML in diagnosing system faults under varying conditions. The forthcoming experiments aim to elucidate the framework’s ability to leverage limited samples for effective fault diagnosis and its adaptability across distinct operational settings, adhering to the established principles and methodologies in the domain.

3.1. TEP Dataset Analysis

3.1.1. Dataset Overview

The Tennessee Eastman Process (TEP), as described in the seminal works of Downs and Vogel [41,42], serves as a fundamental resource for the study of chemical process fault diagnosis. This dataset models a chemical production process and is instrumental in the development and benchmarking of diagnostic algorithms. Illustrated in Figure 4, TEP is structured around five critical subsystems: a reactor, a condenser, a vapor–liquid separator, a recycle compressor and a product stripper. It offers a rich array of data, with each sample comprising 52 features derived from 41 process measurements and 11 manipulated variables, captured at three-minute intervals.
The dataset is segmented into sets representing normal operation and various fault conditions, designed to challenge and evaluate fault-diagnosis methodologies. Specifically, the normal-operation dataset comprises 500 samples, while the fault data cover 21 distinct fault conditions, each represented by 480 samples in the training set. For testing and validation, an additional subset is provided, featuring 960 samples per fault condition, in which fault signatures are introduced after an 8 h operational baseline. This structure allows comprehensive training and rigorous testing of diagnostic models like ATASML, underlining the dataset’s significance in advancing fault-diagnosis research.
Given the limited documentation for the final six fault types within the TEP dataset, this study focuses on the analysis and diagnosis of the initial 15 fault categories. Table 2 provides detailed insights into these fault categories, highlighting their distinctive characteristics. These selected faults encompass a broad range of operational discrepancies, each presenting unique diagnostic challenges.
In particular, Figure 5 compares selected process variables under normal operating conditions and under fault IDV2 (disturbance variable 2) in the training subset. This visualization aids in understanding the fault’s impact on process dynamics and the difficulty of distinguishing fault conditions from process data alone.
Traditional fault-diagnosis techniques often falter when little or no training data are available for specific faults, underscoring the need for approaches capable of learning from sparse datasets. In this context, the Adversarial Task Augmented Sequential Meta-Learning (ATASML) framework represents a significant advancement. ATASML’s methodology, designed to maintain diagnostic accuracy in data-constrained situations, proves especially valuable here. By leveraging a meta-learning paradigm and incorporating adversarial task augmentation, ATASML demonstrates enhanced adaptability and efficacy across a wide spectrum of fault conditions, as evidenced by its performance on the first 15 faults of the TEP dataset.
Given the Adversarial Task Augmented Sequential Meta-Learning (ATASML) framework’s unique approach to fault diagnosis, the preprocessing of the TEP dataset was tailored to complement its learning strategy. To facilitate this, a sliding window technique was employed, carefully segmenting the dataset to capture the intricate operational dynamics over time. This preprocessing step involved creating windows comprising 40 consecutive data points, corresponding to a 2 h operational window, with a step size of 20 points to ensure continuity and temporal relevance in the captured data.
This data segmentation method is crucial for ATASML’s ability to discern and learn from the nuanced variations and patterns within the process data. It allows the algorithm to effectively utilize each data window, ensuring that the full spectrum of operational dynamics is considered during the learning phase. The structured data thus prepared serves as the foundation for ATASML to apply its meta-learning and adversarial augmentation techniques, aiming to enhance the model’s diagnostic performance.
For the purposes of this study, each of the 15 fault categories within the TEP dataset was dissected into 23 distinct windows, culminating in a total of 345 unique data intervals for in-depth analysis. This segmentation provides a comprehensive dataset for ATASML to train and validate its fault-diagnosis capabilities. Moreover, to concentrate the analysis on the most pertinent data for fault detection, the initial 8 h segment of normal operation data was excluded from the testing subset. This decision ensures a focused evaluation on the segments where fault conditions are present, enabling a more accurate assessment of ATASML’s effectiveness in identifying and diagnosing process anomalies.
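The window arithmetic above can be verified with a minimal sketch: a 480-sample fault run, a 40-point window and a 20-point step yield (480 − 40)/20 + 1 = 23 windows, and 23 windows over 15 fault classes give the stated 345 intervals. The random data below is purely illustrative.

```python
import numpy as np

def sliding_windows(series: np.ndarray, width: int = 40, step: int = 20) -> np.ndarray:
    """Segment a (T, F) time series into overlapping (width, F) windows."""
    starts = range(0, len(series) - width + 1, step)
    return np.stack([series[s:s + width] for s in starts])

# One TEP fault run: 480 samples x 52 features (illustrative random data).
run = np.random.default_rng(0).normal(size=(480, 52))
windows = sliding_windows(run)   # 23 windows of shape (40, 52)
n_intervals = windows.shape[0] * 15   # 23 windows x 15 fault classes = 345
```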

3.1.2. Experimental Setup

The ATASML framework leverages a structured experimental protocol to evaluate its performance under varied fault-diagnosis scenarios presented by the TEP dataset. This process is detailed below, from the initial data manipulation to the configuration of the base network designed to process time-series fault data effectively.
Each data sample in X_orig undergoes a predefined augmentation process to simulate realistic operational noise and temporal variations. The data are segmented using a sliding window of 30 min (30 data points at 1 min sampling intervals) with a step size of 10 min (10 data points), enriching the dataset with overlapping segments for comprehensive coverage. Gaussian noise is added to each segment to represent typical sensor inaccuracies and environmental fluctuations, drawn from a zero-mean Gaussian distribution with a standard deviation of 5% to 10% of each variable’s range, in line with common process-sensor noise levels. Samples are then temporally warped by rescaling the time indices with a uniformly distributed scaling factor between 0.8 and 1.2, modeling varying process speeds and operational dynamics.
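The two augmentations can be sketched as follows. This is a minimal illustration, not the paper's implementation: the noise fraction is fixed at 5% (the low end of the stated 5–10% band), and the time warp is realized by simple linear resampling of a rescaled time axis.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_gaussian_noise(x: np.ndarray, frac: float = 0.05) -> np.ndarray:
    """Zero-mean Gaussian noise with sigma set to a fraction of each feature's range."""
    sigma = frac * (x.max(axis=0) - x.min(axis=0))
    return x + rng.normal(0.0, 1.0, size=x.shape) * sigma

def time_warp(x: np.ndarray, low: float = 0.8, high: float = 1.2) -> np.ndarray:
    """Rescale the time axis by a uniform factor and resample to the original length."""
    factor = rng.uniform(low, high)
    t_orig = np.arange(len(x))
    t_warp = np.clip(t_orig * factor, 0, len(x) - 1)
    # Linear interpolation of each feature channel onto the warped time grid.
    return np.stack([np.interp(t_warp, t_orig, x[:, f]) for f in range(x.shape[1])], axis=1)

x = rng.normal(size=(30, 52))            # one 30-point, 52-variable window
x_gauss, x_warp = add_gaussian_noise(x), time_warp(x)
```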
Adversarial samples are strategically engineered to enhance the model’s resilience by challenging its predictive capabilities. These samples are derived by introducing calculated perturbations into samples from X_gauss, X_warp and X_orig, specifically designed to maximize prediction errors and test the model’s robustness under adversarial conditions. Samples are perturbed within an ϵ-ball centered around the original data points, with ϵ set to 10% of the range of each feature. The perturbation is directed by the gradient of the loss function (computed via backpropagation), identifying the input changes to which the model’s prediction error is most sensitive.
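The gradient-directed, ϵ-bounded perturbation can be illustrated on a toy model. In the paper the gradient comes from backpropagation through the full network; here a one-output linear model with a squared-error loss stands in, so the gradient with respect to the input is available in closed form. All weights and values below are hypothetical.

```python
import numpy as np

def fgsm_perturb(x, w, y, feature_range, eps_frac=0.10):
    """Shift x by eps * sign(grad), with eps set to 10% of each feature's range."""
    eps = eps_frac * feature_range
    grad = 2.0 * (x @ w - y) * w          # gradient of (w.x - y)^2 w.r.t. the input x
    return x + eps * np.sign(grad)

x = np.array([1.0, 2.0])                  # one input sample (toy values)
w = np.array([0.5, -0.3])                 # toy linear-model weights
y = 0.0                                   # target
feature_range = np.array([10.0, 10.0])    # per-feature range from the training data
x_adv = fgsm_perturb(x, w, y, feature_range)

loss = lambda v: (v @ w - y) ** 2         # the perturbation increases this loss
```

The step along the gradient sign is the largest move the model allows inside the ϵ-ball per feature, which is why it tends to maximize the prediction error.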
The framework adopts an N-way K-shot methodology for task generation, crucial for evaluating its few-shot learning capabilities. Each task is composed by randomly selecting N fault types; for each fault type, K examples are chosen to form the support set and a further Q examples to construct the query set. This setup facilitates evaluation of the model across different operational settings and fault conditions. Approximately 10,000 training tasks and 1000 testing tasks are generated to ensure comprehensive model training and robust validation against the diversified conditions depicted in the TEP dataset.
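The task-sampling procedure can be sketched as below. The per-class pools and the integer window IDs are placeholders for the actual windowed data arrays.

```python
import random

def sample_task(data_by_class, n_way=3, k_shot=6, q_query=5, seed=None):
    """Draw one N-way K-shot task: a support set of N*K samples and a query set of N*Q."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        picks = rng.sample(data_by_class[cls], k_shot + q_query)
        support += [(x, label) for x in picks[:k_shot]]
        query += [(x, label) for x in picks[k_shot:]]
    return support, query

# Hypothetical pools: 15 fault classes with 23 windows each (IDs stand in for arrays).
pools = {f"fault_{i}": list(range(23)) for i in range(15)}
support, query = sample_task(pools, n_way=3, k_shot=6, q_query=5, seed=0)
```

Repeating this draw ~10,000 times with fresh seeds yields the training-task pool; a disjoint set of draws forms the testing tasks.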
The base network architecture for processing the TEP dataset within the ATASML framework includes layers specifically designed to capture and analyze temporal dependencies and features relevant for time-series fault diagnosis. Initial layers use 1D convolutions with 64 filters of kernel size 3 to extract short-term temporal features, followed by ReLU activation for non-linearity. Batch normalization is applied post-convolution to stabilize and accelerate the learning process, followed by max pooling to reduce feature dimensionality, enhancing model generalizability and computational efficiency. Long Short-Term Memory (LSTM) units (100 units) are employed to capture long-term dependencies across the input features, which is critical for accurately identifying fault patterns over time. The network culminates in a dense layer with softmax activation that classifies the input data into predefined fault categories, based on the learned features and temporal patterns.
The specific parameters for each network module used in the TEP dataset analysis are detailed in Table 3 and Table 4, which outline the configurations of convolutional, LSTM and dense layers to optimize fault classification performance.
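The described stack can be sketched in PyTorch. The layer sizes (64 conv filters of kernel 3, 100 LSTM units, softmax head) and the 40-step, 52-variable input follow the text; details such as the pooling width are assumptions pending the exact configurations in Tables 3 and 4.

```python
import torch
import torch.nn as nn

class TEPNet(nn.Module):
    """Conv1d -> BatchNorm -> ReLU -> MaxPool -> LSTM -> softmax classifier."""
    def __init__(self, n_features=52, n_classes=15):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 64, kernel_size=3)   # short-term temporal features
        self.bn = nn.BatchNorm1d(64)                           # stabilizes training
        self.pool = nn.MaxPool1d(2)                            # reduces feature dimensionality
        self.lstm = nn.LSTM(64, 100, batch_first=True)         # long-term dependencies
        self.head = nn.Linear(100, n_classes)

    def forward(self, x):                        # x: (batch, 40 time steps, 52 features)
        z = self.pool(torch.relu(self.bn(self.conv(x.transpose(1, 2)))))
        _, (h, _) = self.lstm(z.transpose(1, 2))  # final hidden state: (1, batch, 100)
        return torch.softmax(self.head(h[-1]), dim=-1)

net = TEPNet()
probs = net(torch.randn(4, 40, 52))              # class probabilities per sample
```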

3.1.3. Comparative Study on Few-Shot Fault Classification

To underscore the efficacy of the ATASML framework, this study compares it with renowned deep learning models such as VGG-11 [43] and ResNet-18 [44], which are benchmarked for their ability to generalize from extensively pretrained features to new, unseen fault conditions. These models are thoroughly adapted to specific fault-diagnosis tasks by fine-tuning on targeted datasets. Our experiments concentrate on demonstrating that the ATASML framework, when subjected to comprehensive network fine-tuning, achieves superior accuracy compared to merely fine-tuning the classifiers, particularly under few-shot learning conditions. Moreover, to ensure a fair comparison across all evaluated methods, an N-way K-shot setup was employed. This setup not only maintains the integrity of the experimental comparisons but also aligns with the research paradigms of few-shot learning, allowing for a precise assessment of the ATASML framework’s performance in complex fault-diagnosis tasks.
As illustrated in Figure 6, a performance comparison between ATASML and conventional deep learning architectures, VGG-11 [43] and ResNet-18 [44], was conducted on the augmented TEP dataset. The results underscore that ATASML significantly outperforms both VGG-11 and ResNet-18 across all few-shot scenarios, particularly excelling in the 3-way 6-shot setting with a diagnostic accuracy of 98.47%. This marked improvement can be attributed to ATASML’s integration of meta-learning and adversarial training strategies, enhancing the model’s generalization capabilities to new, unseen fault conditions. In the more challenging 8-way 5-shot and 8-way 6-shot configurations, although a performance decline is observed for all models, ATASML maintains superior stability and higher diagnostic accuracy. This validates ATASML’s efficacy in managing high-dimensional fault classification tasks, demonstrating robust performance even as the number of categories increases.

3.1.4. Efficacy of Task Sequencing in ATASML

This investigation assesses three meta-learning frameworks: MAML, GOPML and ATASML, each distinct in its approach to task sequencing. MAML serves as a foundational framework that does not prioritize task sequencing. GOPML, conversely, uses gradient information to enhance task selection and ordering, thereby improving specificity and efficiency in the learning process. The nuances and relative advantages of GOPML are detailed in our prior publication [45].
Empirical evaluations were conducted using the TEP dataset to measure the performance of the algorithms across varied fault classification tasks. The focus was on key metrics such as Accuracy, F1 Score and Precision, which together assess the efficacy of each model in fault classification. These metrics are crucial for evaluating the models’ ability to accurately identify faults and minimize false positives. The results are comprehensively presented in Table 5.
The data distinctly demonstrate ATASML’s superior performance in utilizing task sequencing to optimize the training process. Notably, ATASML achieves an accuracy of 98.47% in the 3-way 6-shot setting and maintains a high accuracy of 90.13% in the more complex 8-way 6-shot scenario. These findings underscore ATASML’s enhanced efficiency and accuracy in handling complex fault-diagnosis situations compared to MAML and GOPML. For instance, in the 8-way 5-shot configuration, ATASML registers an F1 Score of 82.88%, significantly surpassing GOPML’s 75.92% and MAML’s 64.90%. The precision metrics across all configurations also indicate ATASML’s superior performance, highlighting its effectiveness in minimizing false positives.

3.1.5. Training Process and Performance Stability

To evaluate the training efficiency and performance stability of the ATASML model, we conducted detailed experiments with specific settings. The training process was configured with a total of 20,000 epochs and a batch size of 64, using the Adam optimizer with a learning rate of 5 × 10⁻⁴. The 3-way 6-shot task was selected to represent a complex fault-diagnosis scenario.
The experiment results demonstrate that the model approaches a stable performance around 15,000 epochs, with diagnostic accuracy peaking at 98.47%. Figure 7 illustrates the training dynamics of the model over the epochs, detailing both the accuracy and loss metrics. The accuracy graph (Figure 7a) reveals a consistent improvement, reaching near-maximal levels, while the loss graph (Figure 7b) showcases a significant reduction in classification loss, stabilizing at a lower rate, which is indicative of the model’s ability to effectively learn and generalize from the training data.

3.1.6. Parameter Sensitivity Analysis for ATASML

This section investigates the sensitivity of the ATASML model to variations in batch size and learning rate, specifically focusing on the most complex 8-way 5-shot scenario. The choice of this scenario aims to rigorously test the model’s performance under demanding conditions, where the complexity of fault classification is significantly higher.
The experiments demonstrate that ATASML achieves its best performance with a batch size of 64, particularly as the model approaches 15,000 epochs, as illustrated in Figure 8a. Increasing the batch size has also been shown to place a greater demand on GPU resources, reflecting a trade-off between computational efficiency and model performance. Additionally, the learning rate was methodically varied across the range from 1 × 10⁻⁵ to 1 × 10⁻³, with ten iterations conducted at each learning rate to ensure statistical reliability and robustness of the findings. The results, displayed in Figure 8b, indicate that accuracy stabilizes and peaks at a learning rate of 5 × 10⁻⁴, achieving an optimal diagnostic accuracy of 84.57%.
Notably, when the learning rate was increased to 1 × 10⁻³, a subsequent decrease in diagnostic accuracy was observed. This reduction can typically be attributed to the optimizer overshooting the minima during training. At higher learning rates, the gradient descent steps become too large, potentially bypassing the optimal solution and leading to less stable convergence.
The data from this analysis is crucial for fine-tuning the ATASML model’s parameters to ensure maximal efficiency and accuracy in real-world applications. It also offers a framework for future research to explore the impact of other parameter variations on the performance of meta-learning models in fault diagnosis.

3.2. SKAB Dataset

3.2.1. Datasets Description

The Skoltech Anomaly Benchmark (SKAB) [46] serves as a pivotal resource for anomaly detection within simulated industrial environments, reflecting real-world operational challenges. This dataset encompasses a range of subsystems monitored by various sensors, capturing data on parameters such as temperature, pressure and flow rates. SKAB consists of 34 time-series datasets, each representing a different operational scenario within an industrial process. One dataset is designated as anomaly-free to establish a baseline for normal operations, while the remaining 33 datasets exhibit transitions from normal to anomalous states, simulating faults within a pumping system due to induced anomalies (Figure 9). Anomalies are integrated into the datasets to mimic faults from diverse sources, including valves and pumps. These faults specifically affect the fluid dynamics, manifesting as subtle yet detectable anomalies aimed at testing the limits of diagnostic algorithms. The anomalies are present in both training and testing subsets, ensuring comprehensive exposure to varied conditions. The SKAB dataset provides a comprehensive set of fault scenarios, as visualized in Figure 10. These scenarios simulate fluid leaks and additions, highlighting the complexity and variability of anomaly patterns within the dataset, which challenge the detection algorithms.
The ATASML algorithm requires a methodically preprocessed dataset to perform efficiently. For the SKAB dataset, preprocessing involved the application of a sliding window technique, segmenting the dataset into windows of 30 consecutive data points each. This window size was strategically chosen to capture significant operational dynamics, with a step size of 15 points to maintain overlap and continuity. Each window represents an individual learning task, forming a structured dataset D = {(x^(j), y^(j))}, where x^(j) denotes the input features and y^(j) the corresponding labels, which classify the operational state as normal or anomalous. This task-specific dataset formulation is critical for training the ATASML algorithm, enhancing its capability to recognize and adapt to new and complex anomaly patterns.
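The windowing-with-labels step can be sketched as follows. SKAB provides per-point anomaly labels; the single change-point and the "window is anomalous if it reaches past the change-point" rule used here are simplifying assumptions for illustration, as are the run length and sensor count.

```python
import numpy as np

def label_windows(series, change_point, width=30, step=15):
    """Cut a (T, F) run into (window, label) pairs: 1 = anomalous, 0 = normal."""
    tasks = []
    for s in range(0, len(series) - width + 1, step):
        x = series[s:s + width]
        y = int(s + width > change_point)   # window extends into the anomalous regime
        tasks.append((x, y))
    return tasks

# Hypothetical SKAB-like run: 300 samples from 8 sensors, anomaly starting at t = 150.
run = np.random.default_rng(1).normal(size=(300, 8))
tasks = label_windows(run, change_point=150)
```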
ATASML’s application capitalizes on advanced meta-learning strategies that utilize both inherent data features and artificially induced anomalies to train robust anomaly detection models. By incorporating adversarial samples during the training phase, ATASML enhances the model’s adaptability and sensitivity to subtle operational changes, which are often indicative of potential faults. The algorithm’s structured learning approach, enriched with sequential and adversarial training components, provides significant advantages over conventional models. Specifically, by training on a blend of normal and adversarially augmented data, ATASML fosters a refined understanding of typical and atypical patterns, thereby enhancing the accuracy of anomaly detection.
The core of ATASML’s algorithm is its meta-learning component, designed for rapid adaptation to new tasks. This capability enables the algorithm to effectively adjust to previously unseen anomalies, substantially reducing the occurrence of false negatives. Furthermore, the strategic implementation of adversarial tasks not only sharpens the algorithm’s diagnostic precision but also expedites the learning process. These attributes render ATASML exceptionally suitable for real-time anomaly detection scenarios in industrial settings, where the dynamics of operational conditions are constantly changing and the costs associated with diagnostic errors are critically high.

3.2.2. Experimental Setup

The experimental setup for the ATASML framework on the SKAB dataset adheres to a structured protocol designed to rigorously assess its performance under varied industrial fault scenarios. This setup is crafted to highlight the framework’s robustness and adaptability through comprehensive experimental procedures involving both traditional and adversarial learning techniques.
Data preprocessing for the SKAB dataset involves a sliding window technique, where data is segmented into windows of 30 consecutive data points each. This segmentation captures essential temporal dynamics and a step size of 15 points ensures adequate overlap and continuity between samples. Such preprocessing is crucial to ensure the data’s suitability for complex anomaly detection tasks facilitated by the ATASML framework.
This experimental setup on the SKAB dataset emphasizes a comprehensive assessment of the ATASML framework’s performance under diverse industrial fault scenarios. Rather than focusing on few-shot fault classification, which is less representative of the SKAB dataset’s unique characteristics and anomaly patterns, the approach here leverages a broader learning context. This involves employing extensive training datasets to thoroughly evaluate the framework’s capabilities. Such an experimental design aims to robustly validate the generalization, accuracy and diagnostic precision of ATASML, particularly by showcasing detailed performance metrics throughout the training process, including accuracy improvements and loss reductions.
In line with few-shot learning methodologies, the N-way K-shot approach is employed for constructing learning tasks. Here, ‘N’ distinct fault types are randomly selected from the dataset, and ‘K’ examples from each type are used to form the support set. Additional samples are used to construct the query set, culminating in approximately 10,000 training tasks and 1000 testing tasks. This sampling strategy ensures a comprehensive evaluation over a wide array of fault scenarios, thus providing a robust measure of the model’s diagnostic capabilities.
Adversarial training is integral to the ATASML approach, enhancing the model’s ability to generalize from limited data and to recognize subtle anomalies indicative of faults. Strategic perturbations are introduced to the training data, forming adversarial samples that challenge the model’s predictive accuracy. These perturbations are bounded by an ϵ-ball centered around the original data points, with ϵ set to 10% of the range of each feature, aimed at maximizing the prediction error and thereby simulating realistic fault conditions more effectively.
The network architecture tailored for the SKAB dataset is designed to effectively harness both the feature-specific and temporal dynamics essential for accurate fault diagnosis. The architecture incorporates convolutional layers for initial feature extraction followed by LSTM units to capture long-term dependencies.
The learning rates for the task-level (inner loop) updates are set at α = 0.01, and for the meta-level (outer loop) updates at β = 0.001. The training involves preliminary adaptation through five inner update steps, followed by ten fine-tuning steps to refine the model parameters based on the insights gained from the adversarial samples, optimizing the model for high precision and reliability in fault detection.
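The inner/outer update schedule can be illustrated on a deliberately tiny problem. This is a first-order sketch with the stated α, β and five inner steps applied to a scalar parameter and quadratic per-task losses L_t(θ) = 0.5 (θ − c_t)²; it is not the paper's network, only the two-loop structure.

```python
def meta_train(tasks, theta=0.0, alpha=0.01, beta=0.001, inner_steps=5, meta_iters=2000):
    """First-order meta-updates on toy 1-D tasks with loss L_t(theta) = 0.5*(theta - c_t)^2."""
    for _ in range(meta_iters):
        meta_grad = 0.0
        for c in tasks:
            phi = theta
            for _ in range(inner_steps):        # task-level (inner) updates, lr = alpha
                phi -= alpha * (phi - c)
            meta_grad += (phi - c)              # first-order outer gradient at phi
        theta -= beta * meta_grad / len(tasks)  # meta-level (outer) update, lr = beta
    return theta

theta = meta_train([1.0, 3.0])   # drifts toward the task mean of 2.0
```

Because β is much smaller than α, each meta-iteration moves θ only slightly, while the five inner steps adapt quickly to each task, mirroring the fast-adaptation / slow-consolidation split of the two loops.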
The configuration detailed in Table 6 and Table 7 outlines the modular structure and specific parameters of the network, demonstrating the sequential application of convolutional and LSTM layers tailored to the unique requirements of the SKAB dataset. This design efficiently processes complex time-series data, effectively balancing feature extraction and temporal dependency modeling, which is critical for the continuous monitoring and anomaly detection tasks prevalent in industrial settings.

3.2.3. Efficacy of Task Sequencing in ATASML

This study evaluates the efficacy of task sequencing within the ATASML framework using the SKAB dataset. Unlike traditional setups, ATASML integrates advanced data augmentation and adversarial training directly into its learning process. This approach is designed to enhance the model’s adaptability and robustness by utilizing sophisticated simulations of faults within the original dataset, thereby providing a rigorous testing ground for assessing the framework’s generalization capabilities against other models such as MAML and GOPML.
The classification performance of ATASML, MAML and GOPML on the SKAB dataset is summarized in Table 8. These results highlight the models’ capabilities across various levels of task complexity, from 3-way 5-shot to 8-way 6-shot settings, emphasizing the benefits of ATASML’s methodological innovations.
ATASML outperforms both MAML and GOPML significantly, demonstrating its superior adaptability and efficiency in learning from limited samples. For instance, ATASML achieves an accuracy of 94.25% in the 3-way 5-shot setting, considerably higher than GOPML’s 84.18% and MAML’s 71.44%. This superior performance is attributed to ATASML’s use of task sequencing and its ability to leverage data augmentation and adversarial samples effectively.

3.2.4. Advanced Training Dynamics and Stability Analysis

To further substantiate the efficacy of the ATASML framework, a detailed analysis of the training process on the SKAB dataset is presented, specifically focusing on the 3-way 6-shot configuration. This configuration was chosen due to its balanced challenge in fault classification, providing insights into the model’s performance under moderately complex scenarios.
The experiments were configured with a batch size of 64 and an extended training duration of up to 20,000 epochs to ensure robust model convergence. The Adam optimizer was employed with a learning rate of 5 × 10⁻⁴, slightly higher than typical settings to accommodate the complex nature of the SKAB dataset. This setup aims to rigorously test the adaptability and learning efficiency of the ATASML framework under varied fault conditions.
Two key performance metrics were tracked throughout the training process: Diagnostic Accuracy (%) and Classification Loss (plotted on a 10⁻² scale). The training dynamics are illustrated in two parallel graphs, aligning with the methodology applied for the TEP dataset and enabling a direct comparison between the datasets.
  • Diagnostic Accuracy (%): As depicted in Figure 11, the accuracy improves steadily with epochs and approaches a stability phase between 17,000 and 18,000 epochs, eventually reaching an optimal accuracy of 94.79%. This later stabilization than in the TEP experiments reflects the SKAB dataset’s greater complexity and diversity of fault scenarios.
  • Classification Loss: The loss reduction trajectory, shown in Figure 11, indicates that the model efficiently learns to minimize error over extended epochs, substantiating the algorithm’s capability to handle complex anomaly detection tasks effectively.
These findings are pivotal when juxtaposed with the TEP dataset outcomes, where the model achieved stability more quickly. The extended stabilization period on the SKAB dataset underscores the intricate fault patterns and diverse operational conditions simulated within it. Such comparisons demonstrate not only ATASML’s robust performance but also its versatility and scalability across different industrial datasets.
These results validate the efficacy of incorporating task sequencing and adversarial examples in the training process, as ATASML not only improves on basic accuracy metrics but also enhances precision and F1 scores significantly. This performance uplift is crucial in scenarios where even minor faults need to be detected reliably to prevent potential operational failures.

4. Discussion

The Adversarial Task Augmented Sequential Meta-Learning (ATASML) framework has been rigorously tested across complex industrial datasets, such as the Tennessee Eastman Process (TEP) and Skoltech Anomaly Benchmark (SKAB), demonstrating substantial improvements in fault-diagnosis capabilities. ATASML is particularly effective in few-shot learning scenarios that often challenge traditional models due to limited data availability.
A key innovation of ATASML is its use of adversarial examples within a meta-learning structure, which significantly enhances the model’s ability to generalize from limited samples. Traditional approaches such as MAML and GOPML, while robust, do not specialize in rapid adaptation to new and complex tasks. ATASML’s strategic task sequencing further refines this capability by arranging learning tasks to optimize both the accuracy and the speed of learning, and by using adversarially augmented data to challenge the model under diverse conditions.
Against fine-tuned deep learning baselines such as VGG-11 and ResNet-18, ATASML with comprehensive network fine-tuning achieved higher accuracy than classifier-only fine-tuning under a common N-way K-shot protocol, confirming its advantage in few-shot fault-diagnosis tasks (Section 3.1.3).
However, the framework’s computational intensity, particularly when processing large, complex datasets, highlights an area for potential improvement. Future work will focus on optimizing the computational efficiency of ATASML to enhance its scalability and practical applicability in industrial settings. This will involve refining the adversarial training components and exploring more efficient ways to implement task sequencing that reduces computational demands while maintaining high diagnostic accuracy.

5. Conclusions

The deployment of the Adversarial Task Augmented Sequential Meta-Learning (ATASML) framework marks a significant advancement in the field of intelligent fault-diagnosis systems. Through comprehensive testing across the Tennessee Eastman Process (TEP) and Skoltech Anomaly Benchmark (SKAB) datasets, ATASML has demonstrated notable improvements in diagnostic accuracy and model robustness, particularly under few-shot learning conditions.
  • Superior Generalization: ATASML achieves a diagnostic accuracy up to 98.47% in 3-way 6-shot scenarios on the TEP dataset and maintains strong performance in more complex 8-way settings with an accuracy of up to 90.13%. These results are significantly higher than those achieved by traditional models such as MAML and GOPML, illustrating ATASML’s robust adaptability to varied fault conditions.
  • Enhanced Diagnostic Precision: In SKAB dataset evaluations, ATASML consistently outperformed comparisons, achieving as high as 94.79% accuracy in 3-way 6-shot settings. This precision underlines the framework’s capability to effectively handle even the subtlest anomalies in industrial environments.
  • Efficient Learning Process: The strategic integration of adversarial learning and task sequencing in ATASML not only accelerates the learning process but also enhances the precision and reliability of fault diagnostics across diverse operational scenarios.
In conclusion, ATASML sets a new benchmark in fault diagnosis by effectively learning from limited data and adapting swiftly to new, intricate operational scenarios. These capabilities are critical in reducing operational downtime and maintenance costs, thereby enhancing safety by mitigating risks associated with delayed or missed fault detections.
Moving forward, the focus will be on further enhancing the computational efficiency of the ATASML framework to facilitate its scalability and broader industrial application. Additional research will explore the integration of more advanced adversarial techniques and the expansion of the framework to accommodate a broader spectrum of industrial conditions, potentially including real-time learning scenarios and the development of more generalized models that can perform across various industries with minimal adjustments.

Author Contributions

Conceptualization, D.S. and Y.F.; methodology, D.S. and G.W.; software, D.S. and Y.F.; validation, D.S. and Y.F.; formal analysis, D.S. and G.W.; investigation, D.S.; resources, D.S.; data curation, D.S.; writing—original draft preparation, D.S. and Y.F.; writing—review and editing, D.S. and G.W.; visualization, D.S. and Y.F.; supervision, Y.F.; project administration, Y.F.; funding acquisition, Y.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by National Key Research and Development Program of China (Grant number 2022YFB4301401), National Natural Science Foundation of China (Grant number 52301360), Pilot Base Construction and Pilot Verification Plan Program of Liaoning Province of China (Grant number 2022JH24/10200029), China Postdoctoral Science Foundation (Grant number 2022M710569), Liaoning Province Doctor Startup Fund (Grant number 2022-BS-094).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. The first dataset (TEP) can be found here: https://github.com/YKatser/CPDE/tree/master/TEP_data (accessed on 10 February 2024). The second dataset (SKAB) is openly available at https://github.com/YKatser/CPDE/tree/master/SKAB_data (accessed on 10 February 2024).

Acknowledgments

The authors would like to express their sincere thanks to the editor and anonymous referees for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Angelopoulos, A.; Michailidis, E.T.; Nomikos, N.; Trakadas, P.; Hatziefremidis, A.; Voliotis, S.; Zahariadis, T. Tackling faults in the industry 4.0 era—A survey of machine-learning solutions and key aspects. Sensors 2019, 20, 109. [Google Scholar] [CrossRef] [PubMed]
  2. Bécue, A.; Praça, I.; Gama, J. Artificial intelligence, cyber-threats and Industry 4.0: Challenges and opportunities. Artif. Intell. Rev. 2021, 54, 3849–3886. [Google Scholar] [CrossRef]
  3. Baalisampang, T.; Abbassi, R.; Garaniya, V.; Khan, F.; Dadashzadeh, M. Review and analysis of fire and explosion accidents in maritime transportation. Ocean Eng. 2018, 158, 350–366. [Google Scholar] [CrossRef]
  4. Bi, X.; Qin, R.; Wu, D.; Zheng, S.; Zhao, J. One step forward for smart chemical process fault detection and diagnosis. Comput. Chem. Eng. 2022, 164, 107884. [Google Scholar] [CrossRef]
  5. Ahmed, S.F.; Alam, M.S.B.; Hoque, M.; Lameesa, A.; Afrin, S.; Farah, T.; Kabir, M.; Shafiullah, G.; Muyeen, S. Industrial Internet of Things enabled technologies, challenges, and future directions. Comput. Electr. Eng. 2023, 110, 108847. [Google Scholar] [CrossRef]
  6. ur Rehman, M.H.; Yaqoob, I.; Salah, K.; Imran, M.; Jayaraman, P.P.; Perera, C. The role of big data analytics in industrial Internet of Things. Future Gener. Comput. Syst. 2019, 99, 247–259. [Google Scholar] [CrossRef]
  7. Ren, Z.; Zhu, Y.; Liu, Z.; Feng, K. Few-shot GAN: Improving the performance of intelligent fault diagnosis in severe data imbalance. IEEE Trans. Instrum. Meas. 2023, 72, 1–14. [Google Scholar] [CrossRef]
  8. Zhang, T.; Chen, J.; Li, F.; Zhang, K.; Lv, H.; He, S.; Xu, E. Intelligent fault diagnosis of machines with small & imbalanced data: A state-of-the-art review and possible extensions. ISA Trans. 2022, 119, 152–171. [Google Scholar] [PubMed]
  9. Bansal, M.A.; Sharma, D.R.; Kathuria, D.M. A systematic review on data scarcity problem in deep learning: Solution and applications. ACM Comput. Surv. (CSUR) 2022, 54, 1–29. [Google Scholar] [CrossRef]
  10. Alzubaidi, L.; Bai, J.; Al-Sabaawi, A.; Santamaría, J.; Albahri, A.; Al-dabbagh, B.S.N.; Fadhel, M.A.; Manoufali, M.; Zhang, J.; Al-Timemy, A.H.; et al. A survey on deep learning tools dealing with data scarcity: Definitions, challenges, solutions, tips, and applications. J. Big Data 2023, 10, 46. [Google Scholar] [CrossRef]
  11. Lu, J.; Gong, P.; Ye, J.; Zhang, C. Learning from very few samples: A survey. arXiv 2020, arXiv:2009.02653. [Google Scholar]
  12. Wang, Y.; Yao, Q.; Kwok, J.T.; Ni, L.M. Generalizing from a few examples: A survey on few-shot learning. ACM Comput. Surv. (CSUR) 2020, 53, 1–34. [Google Scholar] [CrossRef]
  13. Bhuiyan, M.R.; Uddin, J. Deep transfer learning models for industrial fault diagnosis using vibration and acoustic sensors data: A review. Vibration 2023, 6, 218–238. [Google Scholar] [CrossRef]
  14. Wang, P.; Li, J.; Wang, S.; Zhang, F.; Shi, J.; Shen, C. A new meta-transfer learning method with freezing operation for few-shot bearing fault diagnosis. Meas. Sci. Technol. 2023, 34, 074005. [Google Scholar] [CrossRef]
  15. Wu, K.; Nie, Y.; Wu, J.; Wang, Y. Prior knowledge-based self-supervised learning for intelligent bearing fault diagnosis with few fault samples. Meas. Sci. Technol. 2023, 34, 105104. [Google Scholar] [CrossRef]
  16. Zhang, Y.; Han, D.; Tian, J.; Shi, P. Domain adaptation meta-learning network with discard-supplement module for few-shot cross-domain rotating machinery fault diagnosis. Knowl.-Based Syst. 2023, 268, 110484. [Google Scholar] [CrossRef]
  17. Liang, X.; Zhang, M.; Feng, G.; Wang, D.; Xu, Y.; Gu, F. Few-Shot Learning Approaches for Fault Diagnosis Using Vibration Data: A Comprehensive Review. Sustainability 2023, 15, 14975. [Google Scholar] [CrossRef]
  18. Hospedales, T.; Antoniou, A.; Micaelli, P.; Storkey, A. Meta-learning in neural networks: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 5149–5169. [Google Scholar] [CrossRef] [PubMed]
  19. Finn, C.; Abbeel, P.; Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; Volume 70, pp. 1126–1135. [Google Scholar]
  20. Ravi, S.; Larochelle, H. Optimization as a model for few-shot learning. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017. [Google Scholar]
  21. Pan, T.; Chen, J.; Zhang, T.; Liu, S.; He, S.; Lv, H. Generative adversarial network in mechanical fault diagnosis under small sample: A systematic review on applications and future perspectives. ISA Trans. 2022, 128, 1–10. [Google Scholar] [CrossRef]
  22. Zhang, T.; Chen, J.; Li, F.; Pan, T.; He, S. A small sample focused intelligent fault diagnosis scheme of machines via multimodules learning with gradient penalized generative adversarial networks. IEEE Trans. Ind. Electron. 2020, 68, 10130–10141. [Google Scholar] [CrossRef]
  23. Li, Z.; Zheng, T.; Wang, Y.; Cao, Z.; Guo, Z.; Fu, H. A novel method for imbalanced fault diagnosis of rotating machinery based on generative adversarial networks. IEEE Trans. Instrum. Meas. 2020, 70, 1–17. [Google Scholar] [CrossRef]
  24. Pan, T.; Chen, J.; Xie, J.; Zhou, Z.; He, S. Deep feature generating network: A new method for intelligent fault detection of mechanical systems under class imbalance. IEEE Trans. Ind. Inform. 2020, 17, 6282–6293. [Google Scholar] [CrossRef]
  25. Liu, Q.; Ma, G.; Cheng, C. Data fusion generative adversarial network for multi-class imbalanced fault diagnosis of rotating machinery. IEEE Access 2020, 8, 70111–70124. [Google Scholar] [CrossRef]
  26. Zhang, T.; Chen, J.; Xie, J.; Pan, T. SASLN: Signals augmented self-taught learning networks for mechanical fault diagnosis under small sample condition. IEEE Trans. Instrum. Meas. 2020, 70, 1–11. [Google Scholar] [CrossRef]
  27. Liu, S.; Jiang, H.; Wu, Z.; Li, X. Rolling bearing fault diagnosis using variational autoencoding generative adversarial networks with deep regret analysis. Measurement 2021, 168, 108371. [Google Scholar] [CrossRef]
  28. Wang, R.; Zhang, S.; Chen, Z.; Li, W. Enhanced generative adversarial network for extremely imbalanced fault diagnosis of rotating machine. Measurement 2021, 180, 109467. [Google Scholar] [CrossRef]
  29. Lv, H.; Chen, J.; Pan, T.; Zhou, Z. Hybrid attribute conditional adversarial denoising autoencoder for zero-shot classification of mechanical intelligent fault diagnosis. Appl. Soft Comput. 2020, 95, 106577. [Google Scholar] [CrossRef]
  30. Wang, Y.R.; Sun, G.D.; Jin, Q. Imbalanced sample fault diagnosis of rotating machinery using conditional variational auto-encoder generative adversarial network. Appl. Soft Comput. 2020, 92, 106333. [Google Scholar] [CrossRef]
  31. Zheng, T.; Song, L.; Wang, J.; Teng, W.; Xu, X.; Ma, C. Data synthesis using dual discriminator conditional generative adversarial networks for imbalanced fault diagnosis of rolling bearings. Measurement 2020, 158, 107741. [Google Scholar] [CrossRef]
  32. Wang, Z.; Wang, J.; Wang, Y. An intelligent diagnosis scheme based on generative adversarial learning deep neural networks and its application to planetary gearbox fault pattern recognition. Neurocomputing 2018, 310, 213–222. [Google Scholar] [CrossRef]
  33. Huang, N.; Chen, Q.; Cai, G.; Xu, D.; Zhang, L.; Zhao, W. Fault diagnosis of bearing in wind turbine gearbox under actual operating conditions driven by limited data with noise labels. IEEE Trans. Instrum. Meas. 2020, 70, 1–10. [Google Scholar] [CrossRef]
  34. Shi, Z.; Chen, J.; Zi, Y.; Zhou, Z. A novel multitask adversarial network via redundant lifting for multicomponent intelligent fault detection under sharp speed variation. IEEE Trans. Instrum. Meas. 2021, 70, 1–10. [Google Scholar] [CrossRef]
  35. Zhou, F.; Yang, S.; Fujita, H.; Chen, D.; Wen, C. Deep learning fault diagnosis method based on global optimization GAN for unbalanced data. Knowl.-Based Syst. 2020, 187, 104837. [Google Scholar] [CrossRef]
  36. Vilalta, R.; Drissi, Y. A perspective view and survey of meta-learning. Artif. Intell. Rev. 2002, 18, 77–95. [Google Scholar] [CrossRef]
  37. Rajendran, J.; Irpan, A.; Jang, E. Meta-learning requires meta-augmentation. Adv. Neural Inf. Process. Syst. 2020, 33, 5705–5715. [Google Scholar]
  38. Hu, Y.; Liu, R.; Li, X.; Chen, D.; Hu, Q. Task-sequencing meta learning for intelligent few-shot fault diagnosis with limited data. IEEE Trans. Ind. Inform. 2021, 18, 3894–3904. [Google Scholar] [CrossRef]
  39. Lee, S.; Lee, S.; Song, B.C. Efficient Meta-Learning through Task-Specific Pseudo Labelling. Electronics 2023, 12, 2757. [Google Scholar] [CrossRef]
  40. Nguyen, C.; Do, T.T.; Carneiro, G. Task Weighting in Meta-learning with Trajectory Optimisation. arXiv 2023, arXiv:2301.01400. [Google Scholar]
  41. Downs, J.; Vogel, E. A plant-wide industrial process control problem. Comput. Chem. Eng. 1993, 17, 245–255. [Google Scholar] [CrossRef]
  42. Bathelt, A.; Ricker, N.L.; Jelali, M. Revision of the Tennessee Eastman Process Model. IFAC-PapersOnLine 2015, 48, 309–314. [Google Scholar] [CrossRef]
  43. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  44. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  45. Sun, D.; Fan, Y.; Wang, G. Gradient-Oriented Prioritization in Meta-Learning for Enhanced Few-Shot Fault Diagnosis in Industrial Systems. Appl. Sci. 2023, 14, 181. [Google Scholar] [CrossRef]
  46. Katser, I.D.; Kozitsin, V.O. Skoltech Anomaly Benchmark (SKAB). Kaggle. 2020. Available online: https://github.com/waico/SKAB (accessed on 10 February 2024).
Figure 1. Illustration of the data-augmentation process within the ATASML framework. The original dataset X_orig consists of samples x_1, x_2, …, x_n, which are segmented using a window of size W. This segmentation allows the application of two augmentation techniques: noise addition, producing X_gauss, and temporal warping, producing X_warp. The process culminates in a quality-assessed and filtered augmented dataset X_aug for model training.
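The pipeline in Figure 1 can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the window stride, noise scale `sigma`, and warp `factor` are assumed parameters.

```python
import numpy as np

def segment(x, W, stride=1):
    """Segment a (T, F) series into overlapping (W, F) windows."""
    return np.stack([x[i:i + W] for i in range(0, x.shape[0] - W + 1, stride)])

def add_gaussian_noise(x, sigma=0.01, rng=None):
    """X_gauss: the window x with zero-mean Gaussian noise of scale sigma."""
    rng = np.random.default_rng() if rng is None else rng
    return x + rng.normal(0.0, sigma, size=x.shape)

def temporal_warp(x, factor=1.2):
    """X_warp: x resampled along time by `factor` via linear interpolation,
    then cropped or edge-padded back to the original window length."""
    T = x.shape[0]
    src = np.arange(T)
    dst = np.linspace(0.0, T - 1, int(round(T * factor)))
    warped = np.stack([np.interp(dst, src, x[:, f]) for f in range(x.shape[1])],
                      axis=1)
    if warped.shape[0] < T:  # speed-up warp: pad with the final frame
        pad = np.repeat(warped[-1:], T - warped.shape[0], axis=0)
        warped = np.concatenate([warped, pad], axis=0)
    return warped[:T]
```

X_aug would then be the concatenation of the original windows with their noisy and warped variants, subject to whatever quality assessment and filtering the framework applies.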
Figure 2. Schematic representation of adversarial sample generation in the ATASML framework. The process uses the augmented samples (X_gauss and X_warp) together with the original samples (X_orig) to create a comprehensive set of adversarial samples (X_adv).
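Figure 2 leaves the exact generation rule unspecified; one widely used choice for crafting such perturbed samples is the fast gradient sign method (FGSM), sketched below for a plain linear softmax classifier. The linear model and the perturbation budget `eps` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fgsm_sample(x, y, W, b, eps=0.05):
    """Perturb a flattened input x against a linear softmax classifier
    (logits = W @ x + b) in the direction that increases the cross-entropy
    loss for true class y: x_adv = x + eps * sign(grad_x loss)."""
    logits = W @ x + b
    p = np.exp(logits - logits.max())   # numerically stable softmax
    p /= p.sum()
    p[y] -= 1.0                         # dL/dlogits for softmax cross-entropy
    grad_x = W.T @ p                    # back-propagate through the linear layer
    return x + eps * np.sign(grad_x)
```

Applied to X_orig, X_gauss, and X_warp in turn, a rule of this kind would yield the combined adversarial set X_adv described in the caption.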
Figure 3. Architecture of the ATASML framework showing the interaction between the datasets and task processes.
Figure 4. Flowsheet of the Tennessee Eastman process.
Figure 5. Visualization of selected variables from normal samples and IDV2 samples in the training subset.
Figure 6. Performance comparison demonstrating the superior adaptability and accuracy of the ATASML framework against traditional models like VGG-11 [43] and ResNet-18 [44] under few-shot fault-diagnosis scenarios.
Figure 7. Training dynamics for the 3-way 6-shot task on the TEP dataset showing Diagnostic Accuracy and Classification Loss.
Figure 8. Comparison of diagnostic accuracy under different batch sizes and learning rates for the 8-way 5-shot configuration. (a) Impact of batch size on diagnostic accuracy. (b) Impact of learning rate on diagnostic accuracy. The circles in (b) represent the results from ten iterations conducted at each learning rate, illustrating the variability and statistical reliability of the measurements.
Figure 9. Front panel and composition of the water circulation, control, and monitoring systems in the SKAB dataset: 1—water tank; 2—water pump; 3—electric motor; 4—solenoid valves; 5—mechanical lever for shaft misalignment; 6—emergency stop button. Sensors: 7—flow meter (NI 9401, 8-channel); 8—water pressure meter (NI 9203, 8-channel); 9, 10—vibration sensors (NI 9232, 3-channel); 11, 12—thermocouples (NI 9213 spring-terminal, 16-channel) [46].
Figure 10. Visualization of typical fault scenarios in SKAB, simulating fluid leaks and additions. This highlights the complexity and variability of anomaly patterns within the dataset, challenging the detection algorithms.
Figure 11. Training process for the SKAB dataset: (Left) Diagnostic Accuracy over epochs, (Right) Classification Loss over epochs.
Table 1. GAN-based data augmentation according to different signal types [21].
| No. | Signal Type | References |
| --- | --- | --- |
| 1 | One-dimensional time domain signals | Liu et al. [25], Zhang et al. [26], Liu et al. [27], Wang et al. [28], Lv et al. [29] |
| 2 | One-dimensional frequency domain signals | Li et al. [23], Wang et al. [30], Zheng et al. [31], Wang et al. [32] |
| 3 | Two-dimensional images | Huang et al. [33], Shi et al. [34] |
| 4 | One-dimensional feature sets | Pan et al. [24], Zhou et al. [35] |
Table 2. TEP Fault Categories Applied in ATASML Fault Diagnosis.
| Fault ID | Fault Description | Nature of Disturbance |
| --- | --- | --- |
| IDV1 | A/C feed ratio, B composition constant | Step change |
| IDV2 | B composition, A/C ratio constant | Step change |
| IDV3 | D feed temperature | Step change |
| IDV4 | Reactor cooling water inlet temperature | Step change |
| IDV5 | Condenser cooling water temperature | Step change |
| IDV6 | A feed loss | Step change |
| IDV7 | C header pressure loss | Step change |
| IDV8 | A, B, C feed composition | Random variation |
| IDV9 | D feed temperature | Random variation |
| IDV10 | C feed temperature | Random variation |
| IDV11 | Reactor cooling water inlet temperature | Random variation |
| IDV12 | Condenser cooling water valve operation | Random variation |
| IDV13 | Reaction kinetics alteration | Slow drift |
| IDV14 | Reactor cooling water valve operation | Sticking |
| IDV15 | Condenser cooling water valve operation | Sticking |
Table 3. Configuration of the Convolutional and LSTM Layers for the TEP Dataset (TimeSeriesConvLSTM_Module_TEP).
| Layer (Type) | Configuration | Output Shape |
| --- | --- | --- |
| Input | - | (None, T, F) |
| Conv1D (TimeConv) | 64 filters, kernel size 3 | (None, T-2, 64) |
| BatchNorm | - | (None, T-2, 64) |
| MaxPooling1D | Pool size 2 | (None, (T-2)/2, 64) |
| LSTM | 100 units | (None, 100) |
| Dense (Output) | Softmax activation | (None, num_classes) |
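The output shapes in Table 3 (and in the analogous Table 6 for SKAB) follow from standard "valid" convolution and pooling arithmetic. The small helper below is an illustrative sketch, not code from the paper, that traces the per-layer shapes for arbitrary settings.

```python
def conv_lstm_shapes(T, F, filters, kernel, pool=2, lstm_units=100,
                     num_classes=8):
    """Per-layer output shapes of a TimeSeriesConvLSTM module:
    valid Conv1D -> BatchNorm -> MaxPooling1D -> LSTM -> Dense."""
    shapes = {"Input": (T, F)}
    t = T - kernel + 1                        # 'valid' Conv1D keeps T - k + 1 steps
    shapes["Conv1D"] = (t, filters)
    shapes["BatchNorm"] = (t, filters)        # normalisation preserves the shape
    shapes["MaxPooling1D"] = (t // pool, filters)
    shapes["LSTM"] = (lstm_units,)            # only the final hidden state is kept
    shapes["Dense"] = (num_classes,)
    return shapes
```

With kernel size 3 and 64 filters this reproduces the (T-2, 64) rows of Table 3; with kernel size 5, 32 filters, and 50 LSTM units it reproduces the (T-4, 32) rows of Table 6.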
Table 4. Structure of the Base Learner for the TEP Dataset Using TimeSeriesConvLSTM_Module_TEP.
| Module Number | Configuration |
| --- | --- |
| 1 | TimeSeriesConvLSTM_Module_TEP |
| 2 | Repeats Module 1 |
| 3 | Repeats Module 1 |
| 4 | Repeats Module 1 |
| 5 | Dense Layer (Output) |
Table 5. Comparative results of fault classification effectiveness on the TEP dataset.
| Methods | 3-way 5-shot | 3-way 6-shot | 8-way 5-shot | 8-way 6-shot |
| --- | --- | --- | --- | --- |
| MAML Accuracy | 72.26 ± 2.3 | 73.68 ± 2.4 | 64.60 ± 2.1 | 65.54 ± 2.1 |
| GOPML Accuracy | 85.13 ± 1.4 | 89.68 ± 1.4 | 77.29 ± 1.2 | 79.66 ± 1.3 |
| ATASML Accuracy | 95.02 ± 1.0 | 98.47 ± 1.0 | 84.57 ± 0.8 | 90.13 ± 0.9 |
| MAML F1 | 72.98 ± 2.9 | 74.16 ± 3.0 | 64.90 ± 2.6 | 66.08 ± 2.6 |
| GOPML F1 | 86.09 ± 1.7 | 89.23 ± 1.8 | 75.92 ± 1.5 | 79.06 ± 1.6 |
| ATASML F1 | 93.12 ± 1.1 | 96.50 ± 1.2 | 82.88 ± 1.0 | 88.33 ± 1.1 |
| MAML Precision | 72.66 ± 3.6 | 73.54 ± 3.6 | 62.72 ± 3.1 | 65.22 ± 3.2 |
| GOPML Precision | 84.65 ± 2.5 | 88.47 ± 2.7 | 74.66 ± 2.2 | 78.49 ± 2.4 |
| ATASML Precision | 92.17 ± 1.7 | 95.52 ± 1.7 | 82.03 ± 1.5 | 87.43 ± 1.6 |
Table 6. Configuration of the Convolutional and LSTM Layers for the SKAB Dataset (TimeSeriesConvLSTM_Module_SKAB).
| Layer (Type) | Configuration | Output Shape |
| --- | --- | --- |
| Input | - | (None, T, F) |
| Conv1D (TimeConv) | 32 filters, kernel size 5 | (None, T-4, 32) |
| BatchNorm | - | (None, T-4, 32) |
| MaxPooling1D | Pool size 2 | (None, (T-4)/2, 32) |
| LSTM | 50 units | (None, 50) |
| Dense (Output) | Softmax activation | (None, num_classes) |
Table 7. Structure of the Base Learner for the SKAB Dataset Using TimeSeriesConvLSTM_Module_SKAB.
| Module Number | Configuration |
| --- | --- |
| 1 | TimeSeriesConvLSTM_Module_SKAB |
| 2 | Repeats Module 1 |
| 3 | Repeats Module 1 |
| 4 | Repeats Module 1 |
| 5 | Dense Layer (Output) |
Table 8. Fault classification results with fluctuation on the SKAB dataset.
| Methods | 3-way 5-shot | 3-way 6-shot | 8-way 5-shot | 8-way 6-shot |
| --- | --- | --- | --- | --- |
| MAML Accuracy | 71.44 ± 2.3 | 72.33 ± 2.3 | 63.16 ± 2.0 | 64.06 ± 2.0 |
| GOPML Accuracy | 84.18 ± 1.3 | 84.67 ± 1.4 | 74.32 ± 1.2 | 74.80 ± 1.2 |
| ATASML Accuracy | 94.25 ± 0.9 | 94.79 ± 0.9 | 84.01 ± 0.8 | 84.55 ± 0.8 |
| MAML F1 | 68.50 ± 2.7 | 69.48 ± 2.8 | 61.60 ± 2.5 | 62.58 ± 2.5 |
| GOPML F1 | 80.03 ± 1.6 | 80.74 ± 1.6 | 71.71 ± 1.4 | 72.43 ± 1.4 |
| ATASML F1 | 92.37 ± 1.1 | 92.89 ± 1.1 | 82.33 ± 1.0 | 82.86 ± 1.0 |
| MAML Precision | 73.09 ± 3.6 | 73.17 ± 3.6 | 64.71 ± 3.2 | 64.78 ± 3.2 |
| GOPML Precision | 86.02 ± 2.6 | 85.62 ± 2.6 | 75.51 ± 2.3 | 75.11 ± 2.3 |
| ATASML Precision | 91.42 ± 1.6 | 91.95 ± 1.7 | 81.49 ± 1.5 | 82.01 ± 1.5 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Sun, D.; Fan, Y.; Wang, G. Enhancing Fault Diagnosis in Industrial Processes through Adversarial Task Augmented Sequential Meta-Learning. Appl. Sci. 2024, 14, 4433. https://doi.org/10.3390/app14114433

