Article

FR-PatchCore: An Industrial Anomaly Detection Method for Improving Generalization

School of Physical Science and Technology, Southwest Jiaotong University, Chengdu 611756, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(5), 1368; https://doi.org/10.3390/s24051368
Submission received: 28 December 2023 / Revised: 18 February 2024 / Accepted: 19 February 2024 / Published: 20 February 2024
(This article belongs to the Special Issue Sensing and Imaging for Defect Detection)

Abstract
In recent years, a multitude of self-supervised anomaly detection algorithms have been proposed. Among them, PatchCore has emerged as one of the state-of-the-art methods on the widely used MVTec AD benchmark due to its efficient detection capabilities and low labeled-data cost. However, we have identified that the similarity principle on which PatchCore relies faces significant limitations in accurately locating anomalies when positional relationships exist between similar samples, such as rotation, flipping, or pixel misalignment. In real-world industrial scenarios, it is common for samples of the same class to appear in different positions. To address this challenge comprehensively, we introduce Feature-Level Registration PatchCore (FR-PatchCore), an extension of the PatchCore method. FR-PatchCore constructs a feature matrix that is stored in the memory bank and continually updated by optimizing a negative cosine similarity loss. Extensive evaluations on the MVTec AD benchmark demonstrate that FR-PatchCore achieves an impressive image-level anomaly detection AUROC score of up to 98.81%. Additionally, we propose a novel method for computing the mask threshold that enables the model to determine the optimal threshold in a principled way and accurately segment anomalous masks. Our results highlight both the high generalizability of FR-PatchCore and its substantial potential for industrial anomaly detection.

1. Introduction

The issue of data imbalance is widely observed in industrial anomaly detection and is exacerbated by the scarcity of valuable anomalous data available for this task. Anomalies can arise from various unknown external influences [1], and they can also originate from the objects themselves; it is impractical to account for every possible anomaly. Unsupervised approaches address this challenge by leveraging unlabeled samples to enable the model to learn the distribution properties and features of both normal and abnormal samples. This enables effective anomaly detection while avoiding the high cost associated with labelled data. In the field of industrial anomaly detection, training autoencoders [2,3] and methods based on generative adversarial networks (GANs) [4,5,6,7] are commonly employed. Lyu et al. [8] fused a GAN with a deep convolutional neural network (DCNN) to build an anomaly localization model that can find the mapping relationship between images and high-dimensional features. Son et al. [9] computed anomaly scores by training encoder–decoder-based long short-term memory (LSTM) networks to evaluate data based on the time at which the anomaly existed. Mitra et al. [10] used generative adversarial networks to identify a manifold of normal samples and, at the same time, identify abnormal patterns that fall outside the model. Xiao et al. [11] explore the sample distribution by utilizing the fast-labelling characteristics of a graph structure and use an autoencoder to blur low-level features while retaining local features, achieving better anomaly detection performance. However, these methods often rely on the availability of a substantial amount of data. When the number of anomalous samples is limited, the model may struggle to distinguish between normal and anomalous images accurately.
Consequently, self-supervised approaches are highly favored in the field. Several state-of-the-art self-supervised methods, such as PatchCore [12], have achieved remarkable detection results on the widely recognized MVTec AD benchmark [13]. However, we have observed that the PatchCore method imposes stringent requirements on image alignment. It is worth noting that in the MVTec dataset, most categories satisfy a condition of “similarity”: samples within the same category share common characteristics and can be approximately aligned at identical positions. These samples do not exhibit significant spatial differences, such as rotation or flipping; hence, they yield excellent test results, and we refer to them as “similar categories”. In contrast, categories that fail to meet these similarity conditions are termed “spatial transformation categories”. The effectiveness of anomaly detection on them may be compromised, as we have observed that existing methods are not proficient in handling this type of category.
To establish that this is not a random occurrence, we also conducted experiments on the MPDD dataset [14], whose six categories also exhibit “spatial location relationships”. These experiments confirmed that the PatchCore method is more sensitive to “similar pictures” and pays little attention to the spatial position relationships between samples. In industrial detection, the images to be analyzed are not limited to “similar categories”: they can be captured by sensors from various angles and locations. Moreover, when training data are scarce, employing data augmentation techniques to expand the dataset (e.g., rotation, mirror flipping, and left–right flipping) can introduce “spatial transformation categories”, which may degrade anomaly detection and increase the rates of missed detections and false positives.
To enhance the generalization performance of the method and enable it to detect effectively even in the presence of “spatial transformation categories”, we introduce registration as a pretext task. We extract fused features from the feature extraction layers and store them in a database through a series of dimensionality reduction operations. By utilizing a negative cosine similarity loss, we update the features in the memory bank, eliminating overlapping features while incorporating previously neglected ones during training. Because testing adopts a pixel-by-pixel comparison that disregards pixel relationships, we construct a module similar to pyramid pooling to enforce connectivity between patches. Additionally, we devise an approach that enables the model to search for the optimal threshold, resulting in accurate anomaly mask segmentation. Our proposed method achieves an impressive AUROC performance of 98.81% on the MVTec dataset, significantly enhancing generalization and reducing false positive rates.

2. Related Works

Recently, supervised methods have exerted a significant impact on the field of anomaly detection, resulting in the assimilation of concepts from these approaches into self-supervised methods. Self-supervised anomaly detection methods can be broadly classified into reconstruction-based techniques and feature embedding-based techniques.

2.1. Reconstruction-Based Approaches

Reconstruction-based anomaly detection relies on the assumption that the model can reconstruct normal samples well but struggles with anomalous regions. Early attempts involved using generative adversarial networks (GANs) [15,16]. However, due to the strong generalization of neural networks, abnormal samples could also achieve good reconstruction, rendering the comparison unreliable. More recently, You et al. [17] introduced a new paradigm for reconstruction networks, addressing the issue where popular reconstruction networks may fall into a “same shortcut” scenario, where both normal and abnormal samples can be accurately recovered, leading to difficulties in detecting outliers. This work tackles the practical challenge of detecting anomalies across different object classes using a unified framework.

2.2. Feature Embedding-Based Approaches

The feature embedding-based approach involves feeding images into the model, extracting features, and constructing scoring rules in the feature space. Unlike the reconstruction method, which operates in the RGB image space, feature embedding performs anomaly detection in a high-dimensional feature space. Notably, Bergman et al. [18] advocated the use of models pre-trained on large-scale datasets for anomaly detection, a recommendation that has strongly encouraged this line of methods.
SPADE [19] initially extracts the features of all normal samples (the training set) using a network pre-trained on ImageNet. During testing, the K-nearest neighbors (KNN) [20] metric is applied to each pixel-wise feature of the test sample to calculate an anomaly score for each pixel. However, the more normal images used in training, the more features are stored, resulting in higher KNN complexity during testing. Building upon SPADE, PaDiM [21] eliminates both the feature bank construction and the KNN search for anomaly detection. However, the pixels at each position are not strictly aligned, so modelling each pixel position alone may lead to inaccuracies. Zheng et al. [22] proposed FYD, which performs fine-tuning based on the anomaly localization task. To address the feature misalignment issue in PaDiM, they introduce the spatial transformer network (STN) [23] module for coarse alignment and use a self-supervised learning paradigm inspired by the SimSiam network (which requires no negative samples). However, certain images may still not be aligned. Furthermore, from a visualization perspective, the detection performance is not optimal, primarily because the evaluation metric (the pixel-wise AUROC score [24]) tolerates anomalies in small areas.
Recently, Huang et al. [25] built upon PaDiM and FYD and proposed RegAD, which employs registration as a proxy task to train an anomaly detection model for unknown categories. PatchCore addresses the slow testing speed of SPADE: KNN and greedy coreset subsampling are applied to select the most representative feature points for measuring anomaly scores during testing, which reduces the size of the feature bank and ultimately achieves excellent results.
Our method, FR-PatchCore (Feature-Level Registration PatchCore), preserves the high efficiency of PatchCore. Feature-level registration is employed as a pretext task, and the information stored in the feature database is updated using a negative cosine similarity loss. To enforce pixel relationships, a module similar to pyramid pooling is constructed. During testing, the Euclidean distance from each pixel to its corresponding feature in the database is used for scoring.

3. Methodology

In our methodology, we introduce the concept of feature-level registration and utilize it as a lightweight pretext task during training. The negative cosine similarity loss is employed as the training loss and is optimized to update the memory bank. The framework of our approach is depicted in Figure 1.
The upper section represents the registration module, while the lower section represents the memory bank module; both operate concurrently. Two images from samples belonging to the same category are randomly selected, and their features are extracted using the convolutional neural network and spatial transformer network (CNN + STN) for registration. The feature registration process is supervised by maximizing the absolute value of the negative cosine similarity loss. After registration, the extracted fusion features are stored in a dedicated memory bank and continuously updated according to the training loss after each registration iteration.

3.1. Registration Module

Neural network training necessitates task-driven learning. Therefore, the essence of self-supervised learning lies in the thoughtful design of tasks that facilitate effective model learning. Inspired by the work of [25], which obtained a Gaussian distribution model of normal data through feature-level registration training, we leverage the registration task as a pretext task to enhance the model’s understanding of features and emphasize spatial and positional differences. Accordingly, we construct the registration module, consisting of a feature extractor, feature encoder, and predictor, as illustrated in Figure 2.
In feature registration, since spatial transformations can be represented as matrix operations, it is advantageous to let the network learn to generate the matrix parameters, thereby acquiring spatial transformation capabilities. Common network frameworks in deep learning include the CNN and the transformer. Additionally, the spatial transformer network, which plays a pivotal role in our research, can seamlessly integrate into any component of a CNN architecture. To ensure a fair comparison with state-of-the-art methods, we selected the wide_resnet_50_2 [26] network as the backbone for our experiments from among the CNNs commonly employed in anomaly detection tasks. This network has demonstrated exceptional performance on the ImageNet dataset, achieving a Top-1 accuracy of 78.51% and a Top-5 accuracy of 94.09%. For the specific task addressed in this paper, we conducted an ablation experiment (Section 4.5.1) comparing the feature extraction capabilities of the ResNet and VIT models, in which wide_resnet_50_2 exhibited superior performance.
We incorporated the STN module into the wide_resnet_50_2 architecture. The overall structure of the STN module is illustrated in Figure 2 and consists of a localization network, a grid generator, and a sampler. In the first component, a feature map is given as Formula (1):
$$U \in \mathbb{R}^{H \times W \times C} \tag{1}$$
After several convolutional or fully connected layers, a regression layer outputs the transformation parameters $\theta$. The dimension of $\theta$ depends on the specific transformation type chosen by the network.
In the second component, the grid generator uses $\theta$ and the transformation mode specified by the localization network output to perform further spatial transformations of the feature map, determining the mapping $T(\theta)$ between the output and input features. It employs the predicted transformation parameters to create a sampling grid, a set of points at which the input map should be sampled to produce the transformed output. The sampler then uses this sampling grid to determine which points in the input feature map will be used for the transformation; it samples the input feature map against the sampling grid to obtain the final output.
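As a concrete illustration, the sketch below shows how the three STN components map onto standard PyTorch operations; the localization network layout (channel counts, pooling size) is our own illustrative assumption, while `F.affine_grid` and `F.grid_sample` directly realize the grid generator and sampler described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STN(nn.Module):
    """Minimal spatial transformer: localization net -> grid generator -> sampler."""

    def __init__(self, in_channels: int):
        super().__init__()
        # Localization network: regresses the 2x3 affine parameters theta.
        # Its layout here is an illustrative assumption, not the paper's exact design.
        self.localization = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 6),
        )
        # Initialize to the identity transform so training starts stably.
        self.localization[-1].weight.data.zero_()
        self.localization[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        theta = self.localization(x).view(-1, 2, 3)                  # (B, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)   # grid generator
        return F.grid_sample(x, grid, align_corners=False)           # sampler
```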
In Step 1, assuming that the input RGB image has a resolution of (224,224), Formula (2) is applied.
$$U \in \mathbb{R}^{224 \times 224 \times 3} \tag{2}$$
The fourth layer of the wide_resnet_50_2 network is excluded to preserve more comprehensive spatial information, and the spatial transformer network (STN) is integrated after the initial three layers of the network. The input feature $f_i(x_i^s, y_i^s)$ undergoes a transformation function $\tau_\theta$. The mapping relationship between the input and output feature maps is defined as Formula (3).
$$\begin{pmatrix} x_i^s \\ y_i^s \end{pmatrix} = \tau_\theta(G_i) = A_\theta \begin{pmatrix} x_i^t \\ y_i^t \\ 1 \end{pmatrix} = \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix} \begin{pmatrix} x_i^t \\ y_i^t \\ 1 \end{pmatrix} \tag{3}$$
In this case, the form of the eigenvector is shown in Formula (4):
$$U_1 \in \mathbb{R}^{56 \times 56 \times 64}, \quad U_2 \in \mathbb{R}^{28 \times 28 \times 128}, \quad U_3 \in \mathbb{R}^{14 \times 14 \times 256} \tag{4}$$
When no key points are labeled, the STN allows the neural network to actively transform the feature map based on the input features and to learn the spatial transformation parameters without requiring additional training supervision or modifications to the optimization process. As illustrated in Figure 2, the STN can effectively align input images or learned features during training, thereby mitigating the impact of spatial geometric transformations such as rotation, translation, scaling, and distortion on tasks like classification and localization. The STN facilitates the spatial transformation of the input data, thereby enhancing feature classification and enabling the network to achieve rotational invariance dynamically. Moreover, it intelligently selects the most salient region of the image and transforms it into a suitable orientation. Figure 2 also depicts an inverted screw image fed into the STN module; through a series of transformations, the input is rectified to face forward.
The approach in Figure 2 employs a Siamese network for feature encoding, to which a negative cosine similarity loss is applied as per Formulas (5) and (6).
$$D(p_a, z_b) = -\frac{p_a}{\left\| p_a \right\|_2} \cdot \frac{z_b}{\left\| z_b \right\|_2} \tag{5}$$

$$D(p_b, z_a) = -\frac{p_b}{\left\| p_b \right\|_2} \cdot \frac{z_a}{\left\| z_a \right\|_2} \tag{6}$$
The negative cosine similarity loss is an appropriate metric for quantifying the similarity between two vectors. Furthermore, it possesses the capability to map similar vectors to adjacent points and dissimilar vectors to distant points. This characteristic facilitates feature clustering, as described in Section 3.2.3.
The objective is to maximize the similarity between p a and z b , as well as between p b and z a . To prevent the input data from converging to a constant value after convolution activation, resulting in identical outputs regardless of the input image, we adopt the approach described in [27] by halting the gradient operation on one of the branches to avoid model collapse. Finally, we define Formula (7) as the registration loss for symmetric features.
$$L = \frac{1}{2}\left[ D(p_a, z_b) + D(p_b, z_a) \right] \tag{7}$$
The STN is employed in this stage to perform feature rotation and inversion, facilitating the model’s determination of image similarity. Following each training iteration, a negative cosine similarity loss is obtained.
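A minimal sketch of this symmetric loss follows, assuming SimSiam-style projector outputs $z$ and predictor outputs $p$; `detach()` plays the role of the stop-gradient described above.

```python
import torch
import torch.nn.functional as F

def D(p: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Negative cosine similarity of Formulas (5) and (6).

    z.detach() implements the stop-gradient of [27], preventing the two
    Siamese branches from collapsing to a constant output.
    """
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

def registration_loss(p_a, z_a, p_b, z_b) -> torch.Tensor:
    """Symmetric registration loss of Formula (7)."""
    return 0.5 * (D(p_a, z_b) + D(p_b, z_a))
```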

3.2. Memory Bank Acquisition

3.2.1. Feature Extraction

Let $\varphi(h, w, c)$ represent the feature map of the second layer, where $h$ denotes the height, $w$ the width, and $c$ the number of channels. As in PatchCore, the patch-level features aggregated over local neighborhoods can be represented as Formula (8):

$$P = f_{\mathrm{agg}}(\varphi) \tag{8}$$
Here, $f_{\mathrm{agg}}$ represents the aggregation function within the neighborhood. As shown in Figure 1, we use a combination of the first three layers of wide_resnet_50_2 and the STN to extract features and build the memory bank, with each layer followed by an STN. Inspired by the PatchCore method, we likewise do not adopt the last layer of the ResNet network, since it loses much of the features' spatial information. As shown in Figure 3, the features of the second and third layers followed by the STN retain global information while containing more local feature information. However, if the features of the first three or more layers are all fused, the features entering the memory bank will not contain enough information for accurate detection.
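The sketch below illustrates one plausible way to obtain such fused features with torchvision's wide_resnet50_2, using forward hooks on layers 2 and 3; the STN insertions are omitted for brevity, and the weight identifier and aggregation kernel size are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import wide_resnet50_2

backbone = wide_resnet50_2(weights="IMAGENET1K_V1").eval()
features = {}

def save_to(name):
    def hook(module, inputs, output):
        features[name] = output
    return hook

# Memory-bank features come from layers 2 and 3 (the STN after each layer
# is omitted in this sketch).
backbone.layer2.register_forward_hook(save_to("layer2"))
backbone.layer3.register_forward_hook(save_to("layer3"))

with torch.no_grad():
    _ = backbone(torch.randn(1, 3, 224, 224))

f2, f3 = features["layer2"], features["layer3"]   # e.g., (1,512,28,28), (1,1024,14,14)
f3_up = F.interpolate(f3, size=f2.shape[-2:], mode="bilinear", align_corners=False)
fused = torch.cat([f2, f3_up], dim=1)             # fused layer-2/3 features
# Local neighborhood aggregation f_agg of Formula (8): stride-1 average pooling.
patches = F.avg_pool2d(fused, kernel_size=3, stride=1, padding=1)
```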

3.2.2. Similar to Pyramid Pooling Module (SPPM)

After feature extraction, a three-dimensional tensor of shape $(C, H, W)$ is obtained, where $C$ is the sum of the dimensions of the second and third layers. We then flatten this tensor along all dimensions except the channels, yielding a two-dimensional tensor of shape $(H \times W, C)$, to which random projection is applied. Thus, the original feature tensor is flattened into a feature matrix whose rows correspond to spatial positions. However, as mentioned earlier, the PatchCore method employs a pixel-by-pixel search, conducting nearest neighbor searches on each pixel and disregarding pixel relationships. We recognize that 2D average pooling can increase the receptive field, which is crucial for anomaly detection tasks. Hence, we employ the pooling approach illustrated in Figure 4.
Assuming an input feature size of 64 × 64, we perform three pooling operations with pooling kernel sizes of 3 × 3, 4 × 4, and 5 × 5, obtaining tensors of sizes 64 × 64, 32 × 32, and 16 × 16, respectively. The three pooled tensors are then upsampled to the same size of 64 × 64, and their channel dimensions are concatenated. Because the pooling regions overlap, earlier pooling regions are included within subsequent ones; this enlarges the receptive field, gives more attention to edge information, and establishes closer relationships between the pixels.
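A minimal sketch of this pooling scheme follows; the strides and paddings are our assumptions, chosen so that a 64 × 64 input reproduces the 64/32/16 output sizes stated above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPM(nn.Module):
    """Similar-to-pyramid-pooling module: three overlapping average poolings,
    upsampled back to the input size and concatenated along the channels."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        size = x.shape[-2:]
        p1 = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)  # 64 -> 64
        p2 = F.avg_pool2d(x, kernel_size=4, stride=2, padding=1)  # 64 -> 32
        p3 = F.avg_pool2d(x, kernel_size=5, stride=4, padding=2)  # 64 -> 16
        up = lambda t: F.interpolate(t, size=size, mode="bilinear",
                                     align_corners=False)
        return torch.cat([up(p1), up(p2), up(p3)], dim=1)

feat = torch.randn(1, 256, 64, 64)
out = SPPM()(feat)   # (1, 768, 64, 64): overlapping receptive fields per pixel
```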
After the pooling step, the neighborhood of the feature at position $(h, w)$ of $\varphi$ is defined as in Formula (9):

$$\mathcal{N}_p^{(h,w)} = \left\{ (a, b) \;\middle|\; a \in \left[ h - \left\lfloor \tfrac{p}{2} \right\rfloor,\; h + \left\lfloor \tfrac{p}{2} \right\rfloor \right],\; b \in \left[ w - \left\lfloor \tfrac{p}{2} \right\rfloor,\; w + \left\lfloor \tfrac{p}{2} \right\rfloor \right] \right\} \tag{9}$$
The feature aggregation operation is then used to obtain the locally aware patch feature set $P$ of the feature map tensor $\varphi_{i,j}$, realizing the patch collection $P_{s,p}(\varphi(h,w))$ of the feature tensor, as shown in Formula (10).

$$P_{s,p}(\varphi(h,w)) = \left\{ \varphi\!\left(\mathcal{N}_p^{(h,w)}\right) \;\middle|\; h, w \bmod s = 0,\; h < h^*,\; w < w^*,\; h, w \in \mathbb{N} \right\} \tag{10}$$
In this case, the feature repository $M_1$ can be described as Formula (11):

$$M_1 = \bigcup_{x_i \in \mathcal{X}_N} P_{s,p}\left(\varphi(x_i)\right) \tag{11}$$

3.2.3. Anomaly Detection

The samples used in self-supervised learning are exclusively normal. The training process aims to identify representative features of the “normal category” and utilize them as a reference for judging normality and abnormality, ultimately achieving anomaly detection. During testing, we index the memory bank that stores the characteristic information of positive samples and calculate the Euclidean distance between sample patches to obtain an anomaly score. In $n$-dimensional space, for two points $a(x_{11}, x_{12}, \ldots, x_{1n})$ and $b(x_{21}, x_{22}, \ldots, x_{2n})$, the Euclidean distance $d$ is defined as Formula (12).
$$d = \sqrt{\sum_{k=1}^{n} \left( x_{1k} - x_{2k} \right)^2} \tag{12}$$
After completing the feature aggregation and pooling module in Step 2, we proceed to the feature clustering operation. The use of the negative cosine similarity loss during training greatly facilitates feature clustering by ensuring that similar vectors are consistently mapped to proximate locations with each iteration, while dissimilar vectors are assigned to distant points. This effectively reduces the clustering time and enhances the operational efficiency of the model. To optimize the memory bank size and improve testing efficiency, we follow the PatchCore method and employ greedy coreset subsampling to reduce and optimize the memory bank according to Formula (13).
$$M_2^* = \underset{M_2 \subset M_1}{\arg\min}\; \max_{m \in M_1}\; \min_{n \in M_2} \left\| m - n \right\|_2 \tag{13}$$
Our objective is to streamline the testing process by conducting a nearest neighbor search for each test feature only within $M_2$, identifying its closest neighboring feature and subsequently calculating the maximum Euclidean distance from this feature to its clustering center. This yields an anomaly score, facilitating effective anomaly detection.
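A sketch of greedy (farthest-point) coreset selection approximating Formula (13) is given below; the sampling ratio is an illustrative assumption.

```python
import torch

def greedy_coreset(m1: torch.Tensor, ratio: float = 0.1) -> torch.Tensor:
    """Greedy farthest-point subsampling approximating Formula (13).

    m1: (N, C) patch features of the full bank M1; returns the reduced bank M2.
    """
    n_select = max(1, int(len(m1) * ratio))
    selected = [int(torch.randint(len(m1), (1,)))]        # random seed point
    # Distance of every feature to its nearest already-selected feature.
    min_dist = torch.cdist(m1, m1[selected]).squeeze(1)
    for _ in range(n_select - 1):
        idx = int(torch.argmax(min_dist))                 # farthest point
        selected.append(idx)
        new_dist = torch.cdist(m1, m1[idx:idx + 1]).squeeze(1)
        min_dist = torch.minimum(min_dist, new_dist)
    return m1[selected]
```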
For each patch-level feature $m_{\mathrm{test}}$ of the test data, we select its nearest neighbor $m^*$ from the patch-level features $m \in M_2$ stored in the memory bank. The patch-level anomaly score of the test image $X_{\mathrm{test}}$ is estimated from the distance between $m_{\mathrm{test}}$ and $m^*$, as shown in Formulas (14) and (15).
$$m^* = \underset{m \in M_2}{\arg\min} \left\| m_{\mathrm{test}} - m \right\|_2 \tag{14}$$

$$s^* = \left\| m_{\mathrm{test}} - m^* \right\|_2 \tag{15}$$
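A minimal sketch of this nearest neighbor scoring step:

```python
import torch

def anomaly_scores(m_test: torch.Tensor, m2: torch.Tensor) -> torch.Tensor:
    """Formulas (14)-(15): Euclidean distance from each test patch feature
    to its nearest neighbor m* in the memory bank M2.

    m_test: (H*W, C) test patch features; m2: (K, C) memory bank.
    Returns the per-patch scores s*, reshapeable into an anomaly map.
    """
    dist = torch.cdist(m_test, m2)    # (H*W, K) pairwise Euclidean distances
    return dist.min(dim=1).values     # s* for every patch
```

The image-level score can then be taken, as in PatchCore, from the maximum patch score.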

3.2.4. Post-Processing Method

In anomaly detection, the ground truth serves as a baseline measurement obtained from a reliable method; it is used to calibrate and improve the accuracy of new measurement methods. Many current anomaly detection methods rely on the calculated AUROC curve to determine a threshold for the entire dataset, setting it according to the data distribution and other characteristics. Regions above the threshold are considered abnormal, while those below are considered normal. However, if the overall response of an abnormal image is lower than the dataset threshold, the anomaly cannot be detected; conversely, false detections may occur. To address this, we propose a threshold determination method based on the image itself, which identifies the critical point between normal and anomalous regions, enabling the accurate labelling and localization of anomalies.
For each test sample (Figure 5a), we artificially construct five rectangular sampling frames in its upper-left, lower-left, upper-right, lower-right, and right-center regions (Figure 5b).
The constructed image is also fed into the network for feature extraction; we then search for distances in the feature database, compute the scores, and obtain the score matrix, whose heat map visualization is shown in Figure 5c. As artificially constructed anomalies, these five areas have significantly higher anomaly scores than the rest of the picture. We set the pixel values of these five areas to 0 and the other parts to 1, thus obtaining the ground truth depicted in Figure 5d. We then traverse the five regions, find the maximum and minimum score values ($thres_{\max}$ and $thres_{\min}$), and define the interval as outlined in Formula (16), setting iter_times to 10 to ensure the code runs efficiently:
$$\mathrm{interval} = \left( thres_{\max} - thres_{\min} \right) / \mathrm{iter\_times} \tag{16}$$
For the $k$-th iteration, we set the threshold as outlined in Formula (17):

$$th = thres_{\max} - k \cdot \mathrm{interval}, \quad k = 0, 1, \ldots, \mathrm{iter\_times} \tag{17}$$
The iteration is carried out over the range between the minimum and maximum thresholds, and each th correspondingly outputs an anomaly mask in which white areas are abnormal and black areas are normal, as shown in Figure 6a–e, which correspond to the thresholds 9.0, 8.3, 7.6, 6.4, and 6.1, respectively. As can be seen from the five masks in Figure 6, the area of the abnormal region in the mask changes markedly as the threshold varies. Finally, we calculate the intersection over union (IoU) between the prediction box and the ground truth box. The threshold is updated according to the highest IoU, and the best threshold (best_th) is obtained after 10 iterations. The mask graphs corresponding to different response levels are presented in Figure 6a–e to illustrate the principle of this method. Note that these graphs do not represent the final segmentation results; they are temporary variables within the code and are deleted after calculation to optimize memory usage.
After determining the optimal threshold value (best_th), based on industrial detection expertise, we establish a minimum abnormal-region area (S = 25 pixels) and devise the following rule: for each connected pixel region exceeding best_th, if its area is smaller than S, the pixel values in this region are set to 0 (normal); otherwise, they are set to 1 (abnormal). This setting effectively eliminates false positives in small areas, reducing the overall false alarm rate observed in the test results.
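The sketch below summarizes the whole post-processing procedure under the assumptions above: `score_map` is the per-pixel anomaly score matrix, `gt` is the constructed ground truth (here using the convention that 1 marks the five sampled regions), and connected regions are identified with scipy.

```python
import numpy as np
from scipy import ndimage

def best_threshold(score_map: np.ndarray, gt: np.ndarray,
                   iter_times: int = 10) -> float:
    """Iterate thresholds between the extreme scores of the five constructed
    regions (Formulas (16)-(17)) and keep the one with the highest IoU
    against the constructed ground truth."""
    thres_max = score_map[gt == 1].max()
    thres_min = score_map[gt == 1].min()
    interval = (thres_max - thres_min) / iter_times
    best_th, best_iou = thres_max, 0.0
    for k in range(iter_times + 1):
        th = thres_max - k * interval
        mask = score_map > th
        union = np.logical_or(mask, gt).sum()
        iou = np.logical_and(mask, gt).sum() / union if union else 0.0
        if iou > best_iou:
            best_iou, best_th = iou, th
    return best_th

def filter_small_regions(mask: np.ndarray, s: int = 25) -> np.ndarray:
    """Suppress connected abnormal regions smaller than S = 25 pixels."""
    labeled, n = ndimage.label(mask)
    for region in range(1, n + 1):
        if (labeled == region).sum() < s:
            mask[labeled == region] = 0
    return mask
```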

4. Experiment

4.1. Datasets and Metrics

We first evaluate the performance of our method on two public datasets, MVTec and MPDD, which cover both classical industrial anomaly detection and defect detection in manufacturing processes.
- MVTec dataset: The MVTec dataset is a well-known benchmark for industrial anomaly detection. It consists of 15 categories with a total of 3629 training and validation images and 1725 test images. The training set contains only normal images, while the test set includes both images with various defects and normal images. The images have resolutions ranging from 700 × 700 to 1024 × 1024 pixels, and pixel-level ground truth labels are provided for each defect.
- MPDD dataset: The MPDD dataset is a newly proposed dataset focused on defect detection in the manufacturing process of painted metal parts. It comprises six types of metal parts captured under different spatial orientations, positions, distances, light intensities, and backgrounds. We utilize this dataset to evaluate the registration effect of our method and compare it with several other methods to highlight the superiority of our approach.
- Industrial scenario dataset: To demonstrate the robustness and generalization of our approach, we selected an additional three datasets from the field of high-speed rail (HSR) component inspection. Each of these industrial datasets consists of 460 normal samples and 115 abnormal samples, all of which exhibit positional visual bias. Figure 7 showcases the appearance of the normal samples.
We first adopt the traditional model evaluation protocol, randomly sampling normal and abnormal data and dividing the dataset with a training-to-test ratio of 8:2. To avoid overfitting and other effects caused by unreasonable dataset division, we additionally adopt cross-validation to evaluate the model in Section 4.4.
For the evaluation, we employ two threshold-independent metrics. The first is the area under the receiver operating characteristic curve (AUROC), which plots the true positive rate (the percentage of correctly classified anomalous pixels) against the false positive rate. The image-level AUROC scores assess the model's detection accuracy, denoted as detection (Det.), while the pixel-level AUROC scores evaluate the model's localization and segmentation accuracy, denoted as segmentation (Seg.). The second metric is the per-region-overlap score (PRO-Score) [28], which considers the mean rate of correctly classified pixels as a function of the false positive rate between 0 and 0.3 for each connected component, as shown in Formula (18). Higher scores indicate better localization of both large and small anomalies [29].
$$PRO = \frac{1}{N} \sum_{n=1}^{N} \frac{\left| P \cap G_n \right|}{\left| G_n \right|} = \frac{1}{N} \sum_{n=1}^{N} \frac{TP_n}{TP_n + FN_n} \tag{18}$$
where $P$ represents the prediction result and $G_n$ the $n$-th ground-truth connected component; their intersection gives the true positive samples $TP_n$, and $FN_n$ represents the false negative samples.
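As a sketch, the per-region overlap for a single binarized prediction can be computed as below; the full PRO-Score additionally averages this quantity over thresholds up to a 0.3 false positive rate.

```python
import numpy as np
from scipy import ndimage

def per_region_overlap(pred: np.ndarray, gt: np.ndarray) -> float:
    """Formula (18): average of |P ∩ G_n| / |G_n| over every ground-truth
    connected component G_n, for a binary prediction map P."""
    labeled, n = ndimage.label(gt)
    if n == 0:
        return 0.0
    overlaps = [np.logical_and(pred, labeled == k).sum() / (labeled == k).sum()
                for k in range(1, n + 1)]
    return float(np.mean(overlaps))
```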
The above two indicators are commonly employed in anomaly detection tasks and are widely acknowledged and utilized as benchmarks within the detection industry. However, due to the intricacy of industrial detection scenarios, there exists variation in data quality as well as resolution. To demonstrate the robustness of our method, we also employ three additional indicators, namely the recall rate, accuracy rate, and false alarm rate (FAR), for testing and cross-validation using the high-speed railway dataset. The calculation formulas for these indicators are presented in Formulas (19)–(21), respectively.
$$\mathrm{Recall} = \frac{TP}{TP + FN} \tag{19}$$

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{20}$$

$$\mathrm{FAR} = \frac{FP}{FP + TN} \tag{21}$$
The term FP denotes the number of false positives, while TN represents the count of true negatives.
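For reference, the three indicators can be computed from binary image-level labels as follows (1 = abnormal, 0 = normal):

```python
import numpy as np

def detection_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Recall, accuracy and FAR of Formulas (19)-(21)."""
    tp = np.sum((pred == 1) & (gt == 1))
    tn = np.sum((pred == 0) & (gt == 0))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    return {
        "recall": tp / (tp + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "far": fp / (fp + tn),
    }
```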

4.2. Implementation Details

We utilize wide_resnet_50_2 and the STN as the underlying architecture to construct a convolution-based encoder and predictor. The models are trained on images with a resolution of 224 × 224 using an Asus NVIDIA RTX 3090 GPU sourced from Chengdu, China. For parameter updates, we employ momentum SGD for 55 epochs with a learning rate of 0.0001 and a batch size of 32. Each training round takes approximately 8 min, with an average training epoch lasting about 8.7 s.
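A sketch of the corresponding optimizer configuration; the momentum value is our assumption, since the paper reports only the optimizer type, learning rate, batch size, and epoch count.

```python
import torch

model = torch.nn.Linear(8, 8)   # stand-in for the encoder/predictor modules
# Reported settings: momentum SGD, lr = 1e-4, batch size 32, 55 epochs.
# momentum = 0.9 is an assumed default, not stated in the paper.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
EPOCHS, BATCH_SIZE = 55, 32
```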

4.3. Anomaly Detection

To assess the efficacy of FR-PatchCore, we conduct a comparative analysis with other cutting-edge anomaly detection techniques, such as SPADE, PaDiM, Patch-SVDD [30], Cutpaste [31], RegAD, and PatchCore. We meticulously replicate their reported outcomes using each method’s official source code to ensure an equitable comparison. The last row displays the macro-averaged scores for all categories, highlighting the highest scores in bold.
The MVTec dataset is the most widely used benchmark for anomaly detection, and FR-PatchCore achieves a 98.81% image-level AUROC score and a 97.86% pixel-wise AUROC score in the comparison with the other six methods in Table 1. For “similar categories” such as bottle, leather, and cable, our method is slightly inferior to PatchCore but still attains high detection scores and accuracy, and its average detection scores over all categories are better than those of the other six methods. For the three “spatial transformation” classes in the dataset, hazelnut, screw, and metal nut, we present representative visualization results in Section 4.6.
To evaluate the robustness of our method against the “rotation class” data we defined, we conducted experiments on the MPDD dataset; the results, averaged over ten runs, are displayed in Table 2. The final row presents the macro-averaged scores across all categories, with the best scores indicated in bold.
We present representative visualization results for the MPDD dataset in Section 4.6. Unlike the MVTec dataset, all six categories in the MPDD dataset exhibit spatial differences and position shifts. This is challenging for most methods because anomaly detection models are not trained to deal with morphologically diverse samples. We believe that simply increasing the training time is not enough to make the model learn more; on the contrary, it may cause the model to overfit. Most models cannot effectively detect the “spatial transformation class” of samples. FR-PatchCore is innovative in how the model learns: as shown in Table 2, our method leads the other six methods, which demonstrates that using feature-level registration as a pretext task is worthwhile. Only when the model learns the spatial information that was previously ignored can it better handle such problems, improve generalization, and be applied to high-standard industrial anomaly detection.
To evaluate the applicability of our method in real-world scenarios, we conducted experiments using data from high-speed rail (HSR) components in industrial anomaly detection. The scores, averaged over ten runs, are displayed in Table 3.
We present representative visualization results for the HSR datasets in Section 4.6. As can be seen from Table 3, FR-PatchCore still achieves excellent detection performance even on real industrial datasets with complex environments, noisy images, and many influencing factors, which demonstrates that our method has good generalization and robustness.
Most anomaly detection researchers evaluate the performance of their methods using both image-level and pixel-wise AUROC curves. AUROC is a widely used metric for assessing the ability of models to discriminate between abnormal and normal samples. However, in anomaly detection, it is not only important to identify anomalies but also to accurately localize them. The pixel-wise AUROC evaluation provides a broad assessment, where the score is significantly improved if a large region is correctly localized but has a minimal impact if a small region is incorrectly positioned.
To address this limitation, we also employed the PRO score [28] criterion, which considers the rate of correctly classified pixels as a function of the false positive rate (FPR). We calculated the average PRO score on the three datasets using an FPR of 30%, following the methodology described in [25]. In Table 4, we compare our method with SPADE, PaDiM, and PatchCore, which are all based on feature embedding. Our approach achieves the highest PRO-Score, indicating superior localization performance.
In the PRO score comparison, we compare only the four feature embedding-based methods: SPADE, PaDiM, PatchCore, and our proposed method. Since these four methods share the same basic principle, they are easier to compare and more convenient for controlling variables; for example, all use the same wide_resnet_50_2 backbone. As shown in Figure 8, we report the PRO score comparison results on three datasets of different difficulty, which indicates that FR-PatchCore has an excellent detection performance and still obtains good scores under this more stringent metric.
In addition, as presented in Table 5, Table 6 and Table 7, we report the accuracy rate, recall rate, and false positive rate on the three high-speed rail datasets. Among the four methods compared, our approach performs best, with the lowest occurrence of false alarms. Beyond these metrics, industrial testing also demands efficient inference. Using identical parameter settings across all four methods, we conducted comprehensive evaluations; the inference time considered here encompasses both forward propagation of the samples through the network and the post-processing time. The original image resolution was adjusted from 1280 × 1280 to the 224 × 224 network input size. Our method's inference speed remains comparable to that of the well-known PatchCore network while maintaining superior image-level anomaly detection and segmentation performance.

4.4. Cross-Validation Experiment

To assess the performance of our model on limited data samples, we conducted five-fold cross-validation using the high-speed rail component dataset. The training set was divided into five folds, with four folds used for training and the remaining one for testing. Additionally, within the four-fold training data, 5% of the data was set aside as a validation set. Each training and test iteration thus involved 442 training samples, 18 validation samples, and 115 test samples. This approach ensures that all samples in the training set are utilized for both training and testing.
Since this study is based on self-supervised learning and utilizes unlabeled positive samples for training, no actual labeled data are available. However, anomaly detection inherently involves a classification task in which the “normal” label is carried by the data itself during training. Therefore, in each of the five cross-validations performed, classification accuracy serves as our evaluation metric. Subsequently, we select the model with the highest classification accuracy on the validation set and evaluate its performance on the separate test sets through five experiments. The test error of each experiment is obtained using Formula (22).
$$E_T(\omega) = \sum_{x \in D_n} \left( \hat{f}(x; \omega) - f(x) \right)^2 \tag{22}$$
Set $D$ consists of $N$ samples, where each sample $(x_i, y_i = f(x_i))$ is drawn from the total set $\chi$.
The generalization error in this case can be approximated as the average of the five test errors, as demonstrated in Formula (23).
$$E_G(\omega) = \sum_{x \in \chi} p(x) \left( \hat{f}(x; \omega) - f(x) \right)^2 \approx \overline{E_T(\omega)} \tag{23}$$

where $p(x)$ represents the probability that $x$ occurs in the total set $\chi$.
The results of the five cross-validation iterations are presented in Table 8, Table 9 and Table 10. Taking high-speed rail parts dataset 1 as an example, the recall rate is utilized as the evaluation metric to compare the four methods on the positive/negative classification task, as depicted in Table 8. The model achieving the highest score during iteration is selected for testing.
After selecting the optimal model for each of the four comparison methods, we assessed its performance on the five separate test sets and calculated the average accuracy over these five tests, as presented in Table 9.
Finally, we computed the average test error over the five iterations for the four models. As can be observed from Table 10, our method demonstrates superior generalization on the HSR dataset, as evidenced by its significantly lower generalization error compared with the other methods across the three industrial datasets.

4.5. Ablation Experiment

Experiments were conducted to assess the contributions of the different components of our proposed approach. The MVTec dataset was used for these experiments, in which we evaluated the impact of the pooling, registration, and post-processing methods individually. The experimental results clearly demonstrate that the full method outperforms the ablated variants.

4.5.1. Backbone Selection Process

The CNN and the transformer have emerged as the predominant backbone networks for deep learning in recent years, owing to their robust feature extraction capabilities. Initially designed for natural language processing, the transformer has also been adapted for image processing through vision transformer (VIT) networks. In this section, we randomly select three subgraphs to visualize the first three layers of features extracted by the VIT and wide_resnet_50_2 networks, using the screw class as an example (Figure 9 and Figure 10). Notably, the features extracted by VIT exhibit enhanced recognizability, with reduced noise compared to those obtained by wide_resnet_50_2.
The disparity in the experimental outcomes arises from variations in the scale of the training data. In contrast to VIT's remarkable efficacy on extensive datasets, CNNs exhibit commendable feature extraction capabilities even with limited data, rendering them more amenable to deployment within data-scarce industrial domains.

4.5.2. SPPM

In the registration module depicted in Figure 1, we incorporate a 2D average pooling module alongside the CNN + STN for extracting image features. This addition mitigates the over-sensitivity of the convolutional layers to positional information. We opted not to use the SPPM in this module, as our experiments showed that 2D average pooling yields better results. As shown in Table 11, applying AvgPool2d in the registration module received the highest score because this module plays a crucial role in maximizing the absolute value of the negative cosine similarity loss, thereby significantly enhancing anomaly detection performance.
For evaluation purposes, we employ a pixel-wise anomaly detection AUROC score and PRO score as our chosen metrics. To illustrate the convergence of loss, we utilize the screw category as an example, which is depicted in Figure 11.
The convergence of the negative cosine similarity loss for the screw class is analyzed under different settings. Figure 11a shows that, without pooling, the loss converges to approximately −0.49. When our SPPM is utilized, the loss improves slightly to around −0.57, as depicted in Figure 11b. By incorporating 2D average pooling, however, the loss improves significantly and reaches approximately −0.75, as shown in Figure 11c. This observation suggests that integrating 2D average pooling enhances the model's capacity to learn features during registration.

4.5.3. Registration Module

We conducted ablation experiments to assess the impact of two specific components of our model, the SPPM and the registration module, by removing and combining them individually to understand their contributions to the overall performance. The results in Table 12 show that when neither the SPPM nor the registration module is used, the two evaluation metrics receive their lowest scores and the anomaly detection results are poorest. When both are included in the model, the best anomaly detection results are obtained.

4.5.4. Post-Processing Method

In Figure 12, we demonstrate the application of our proposed post-processing method to generate a visual representation of the mask. Specifically, for an abnormal image from the high-speed rail dataset, we show, from left to right, the original image, the segmentation image, the heat map, the ground truth, the mask generated without the post-processing method, and the mask generated with the post-processing method. It is evident from Figure 12e that there is a large false positive area; this issue is greatly reduced in Figure 12f owing to our post-processing method.
The proposed post-processing method for segmenting anomalies employs intersection over union (IoU) as a metric to evaluate the test image, as depicted in Formula (24):
$$IoU = \frac{p_{ii}}{p_{ij} + p_{ji} - p_{ii}} \tag{24}$$
The true value is represented by $i$, the predicted value by $j$, and $p_{ij}$ denotes the number of pixels whose true value is $i$ but are predicted as $j$. Therefore, Formula (25) is equivalent to Formula (24).
$$IoU = \frac{\left| P \cap G \right|}{\left| P \cup G \right|} \tag{25}$$
The letter P denotes the prediction, while G represents the ground truth.
The intersection over union (IoU) algorithm is commonly utilized in deep learning for object detection and semantic segmentation tasks to compute the overlap ratio between different images. In Table 13, we present the average IoU metrics for the three datasets used in our experiment, while Section 4.6 showcases visualizations of selected test results. The mask generated after applying our proposed post-processing method lies closer to the ground truth, resulting in an elevated average IoU value. This substantiates the efficacy of our proposed approach.
As can be seen from Figure 12e,f, the mask obtained after segmentation via the post-processing method is closer to the heat map. The responsiveness of the sample in Figure 12a is depicted as a histogram in Figure 13b, while the responsiveness of the corresponding class's normal sample is presented in Figure 13a.
The test image predominantly consists of normal areas with a high frequency of response but a low level of responsiveness, whereas abnormal areas exhibit a low frequency of response but a high level of responsiveness. The responsivity distribution in the normal region typically conforms to an approximately normal distribution, while the responsivity in the abnormal region lies outside this distribution. The process of calculating responsiveness is analogous to binary classification: first, the positive and negative attributes of the image are evaluated; then, threshold-based rules can be employed to exclude certain normal regions exhibiting higher responsiveness and to identify regions demonstrating elevated responsiveness. The response values of the normal region are distributed approximately within the range (0, 0.7), while our post-processing method calculates an optimal threshold of 0.75. The proposed approach ensures that the highly responsive normal region in Figure 13b, with a responsivity range of 0.4–0.6, is not erroneously classified as an anomaly, and the mask prediction yields superior results. By employing this post-processing method, the occurrence of false positives is effectively minimized.

4.6. Visualization

A visualization of selected categories with “spatial transformation” characteristics from various datasets used in the experiment was employed to provide a more intuitive analysis of the detection capability of FR-PatchCore, as depicted in Figure 14.
The “spatial rotation” categories of the two public datasets, MVTec and MPDD, are shown in rows A–F of Figure 14. Figure 14b showcases highly accurate anomaly segmentation results, while Figure 14c visualizes the anomaly scores. Once the model successfully locates and evaluates anomalies, the threshold selection method we propose can be used to obtain the mask prediction results depicted in Figure 14e.
The anomaly detection results for three different categories in the HSR 1–3 datasets are shown in rows G–I of Figure 14. In contrast to the single simple scenes of the public datasets, data captured in industrial scenes exhibit higher levels of noise and background complexity, making anomaly detection susceptible to environmental factors. To capture more comprehensive visual information about the inspected objects and to prevent overexposure caused by lighting or other factors during raw data collection, an unfixed shooting point is adopted instead of continuous shooting from a fixed position. This leads to a significant presence of “spatial transformation” category data in industrial inspections, which further validates the necessity of our research. Based on the segmentation and prediction results in these three rows, FR-PatchCore accurately localizes anomalies amidst complex backgrounds while exhibiting high sensitivity toward abnormal areas and low response toward normal areas. Moreover, it maintains a low false positive rate while ensuring reliable anomaly detection, showcasing its application potential and strong generalization capability.

4.7. Discussion

The AUROC and PRO scores were utilized in Table 1, Table 2, Table 3 and Table 4 to perform category-by-category anomaly detection on the three datasets employed. In comparison with other notable methods, FR-PatchCore demonstrated a superior average performance by not only achieving excellent anomaly detection for conventional “similar class” images but also effectively detecting the challenging “spatial transformation class” in the dataset.
An evaluation of the three challenging high-speed rail parts datasets, which are critical in industrial testing, was conducted using the recall, accuracy, and false positive rates in Table 5, Table 6 and Table 7. Additionally, the test speed of the compared methods was assessed; the results indicate that at an image resolution of 224 × 224, the test speed is comparable to that of the PatchCore method, at a maximum of 0.21 s per frame. These findings demonstrate that our approach ensures a superior recall rate and accuracy while maintaining a minimal false positive rate without compromising efficiency.
Furthermore, to provide a more comprehensive discussion on FR-PatchCore’s capabilities, we designed a five-fold cross-validation experiment utilizing the three industrial datasets. The average error from five tests was adopted as the final evaluation index. Our proposed method exhibited the lowest average error among all tests conducted, further highlighting its competitiveness and strong generalization abilities.
In the ablation experiment conducted in Section 4.5, we performed comprehensive analyses and experiments on the utilized model modules and network selection process. Furthermore, IoU was employed as a metric to assess the effectiveness of the proposed post-processing approach. Additionally, a subset of experimental results was visually presented in Section 4.6 to illustrate the recovery process of FR-PatchCore segmentation and anomaly positioning.
The effectiveness of self-supervised learning heavily relies on a well-designed auxiliary task. To enhance the model’s understanding of local feature relationships during registration, we propose a feature-level registration task that captures and stores the maximum distance between normal features and their respective clustering centers in the feature database. In testing, precise image alignment is no longer necessary. Instead, we can simply query the memory database to calculate the maximum distance between target features and their most similar clustering centers. As a result, our approach achieves robust anomaly detection in two-dimensional images. However, it is important to acknowledge that our method also has certain limitations and disadvantages.
In the introduction, we defined the “similarity category” and “spatial transformation category” based on the spatial position relations of objects in the picture, such as rotation and flipping. However, this definition is limited to two-dimensional planes and does not account for variations in depth between images that may exist across different planes. Our method is therefore not suitable for capturing spatial position relationships with significant depth differences. As illustrated in Figure 15, we present detection results for data exhibiting substantial disparities in depth information within industrial detection scenarios, along with the corresponding registration loss curve for this class in Figure 16. The high occurrence of false positives can be attributed to the inability of our feature-level registration method to achieve accurate depth alignment. Figure 16 shows that the registration loss for this category converges at approximately −0.4, indicating that our model struggles to effectively register features of this type of data, resulting in fewer learned features compared with the previous categories.
The integration of two-dimensional images with depth maps in the above scenarios can potentially yield significant benefits, as it encompasses more comprehensive spatial information. By combining these two types of images, the model can acquire an enhanced feature representation, thereby enhancing its performance.
In addition, the proposed post-processing method is based on a single test image and determines the threshold by calculating the responsivity of each part of the image, thereby mitigating the occurrence of false positives to some extent. However, when dealing with large-scale test data and high-resolution images, this approach may introduce significant computational overhead and pose challenges for practical implementation. Furthermore, our exploration of adaptive threshold calculation remains limited and does not yet incorporate a comprehensive, universally applicable algorithm. The responsivity values for each area of the image were visually represented in Section 4.5.4, offering a novel perspective. Considering the data distribution, if responsiveness follows a normal distribution, then according to the three-sigma rule an outlier is a value that deviates from the mean by more than three standard deviations. If the responsiveness does not follow a normal distribution but its distribution can be computed with minimal effort, an outlier can instead be defined as a value deviating from the mean by more than k standard deviations, where k serves as a threshold. We will investigate this issue further in future research.

5. Conclusions

In this study, we propose FR-PatchCore, a feature-level registration method built upon PatchCore, one of the most advanced anomaly detection methods on the MVTec AD benchmark. Feature-level registration is employed as a pretext task, and the features stored in memory are optimized through the registration loss to approach an optimal feature representation. Additionally, we introduce an innovative mask segmentation method to address the false positives that result from manually determined thresholds. FR-PatchCore excels in anomaly detection on the MVTec dataset and achieves state-of-the-art performance on the MPDD dataset (a dataset exhibiting spatial location differences). Furthermore, our method is validated on the high-speed rail dataset. The experimental results demonstrate that FR-PatchCore effectively handles both “similar category” and “spatial transformation category” data, thereby enhancing the generalization capability of PatchCore. Our approach demonstrates its potential for industrial anomaly detection applications while maintaining high accuracy for effective generalization. However, in our discussion we also analyze certain limitations of this approach: when significant depth differences exist between samples within the same category, our registration task is not structured to cope with such variations, leaving room for future improvement.

Author Contributions

Conceptualization, Y.Z. and Z.J.; methodology, Y.W.; software, Z.J.; validation, Z.J. and J.L.; formal analysis, J.L.; investigation, Y.Z.; resources, X.G.; data curation, X.G.; writing—original draft preparation, Z.J.; writing—review and editing, X.G.; visualization, Z.J.; supervision, Y.Z.; project administration, J.L.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Funds for International Cooperation and Exchange of the National Natural Science Foundation of China (Grant No. 61960206010).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data underlying the results presented in this paper are available in Refs. [13,14]. Other data are not publicly available at this time but can be obtained from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Wang, Z.; Zhang, Y.; Luo, L.; Wang, N.; Lim, S. AnoDFDNet: A Deep Feature Difference Network for Anomaly Detection. J. Sens. 2022, 2022, 3538541.
2. Nguyen, D.T.; Lou, Z.; Klar, M.; Brox, T. Anomaly detection with multiple-hypotheses predictions. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 31 May 2019; pp. 4800–4809.
3. Sakurada, M.; Yairi, T. Anomaly detection using autoencoders with nonlinear dimensionality reduction. In Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis, Dunedin, New Zealand, 2 December 2014; pp. 4–11.
4. Akcay, S.; Atapour-Abarghouei, A.; Breckon, T.P. GANomaly: Semi-supervised anomaly detection via adversarial training. In Proceedings of the Computer Vision–ACCV 2018: 14th Asian Conference on Computer Vision, Perth, Australia, 2–6 December 2018; Revised Selected Papers, Part III 14, pp. 622–637.
5. Arisoy, S.; Nasrabadi, N.M.; Kayabol, K. Unsupervised pixel-wise hyperspectral anomaly detection via autoencoding adversarial networks. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5.
6. Li, D.; Chen, D.; Jin, B.; Shi, L.; Goh, J.; Ng, S.-K. MAD-GAN: Multivariate anomaly detection for time series data with generative adversarial networks. In Proceedings of the International Conference on Artificial Neural Networks, Lugano, Switzerland, 15 January 2019; pp. 703–716.
7. Carrara, F.; Amato, G.; Brombin, L.; Falchi, F.; Gennaro, C. Combining GANs and autoencoders for efficient anomaly detection. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 3939–3946.
8. Lyu, Y.; Han, Z.; Zhong, J.; Li, C.; Liu, Z. A Generic Anomaly Detection of Catenary Support Components Based on Generative Adversarial Networks. IEEE Trans. Instrum. Meas. 2020, 69, 2439–2448.
9. Son, H.; Jang, Y.; Kim, S.-E.; Kim, D.; Park, J.-W. Deep Learning-Based Anomaly Detection to Classify Inaccurate Data and Damaged Condition of a Cable-Stayed Bridge. IEEE Access 2021, 9, 124549–124559.
10. Mitra, J.; Qiu, J.; MacDonald, M.; Venugopal, P.; Wallace, K.; Abdou, H.; Richmond, M.; Elansary, N.; Edwards, J.; Patel, N.; et al. Automatic Hemorrhage Detection From Color Doppler Ultrasound Using a Generative Adversarial Network (GAN)-Based Anomaly Detection Method. IEEE J. Transl. Eng. Health Med. 2022, 10, 1800609.
11. Xiao, K.; Cao, J.; Zeng, Z.; Ling, W.-K. Graph-Based Active Learning with Uncertainty and Representativeness for Industrial Anomaly Detection. IEEE Trans. Instrum. Meas. 2023, 72, 5016114.
12. Roth, K.; Pemula, L.; Zepeda, J.; Schölkopf, B.; Brox, T.; Gehler, P. Towards total recall in industrial anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 14318–14328.
13. Bergmann, P.; Fauser, M.; Sattlegger, D.; Steger, C. MVTec AD—A comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9592–9600.
14. Yan, X.; Zhang, H.; Xu, X.; Hu, X.; Heng, P.-A. Learning semantic context from normal samples for unsupervised anomaly detection. Proc. AAAI Conf. Artif. Intell. 2021, 35, 3110–3118.
15. Zhou, C.; Paffenroth, R.C. Anomaly detection with robust deep autoencoders. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; pp. 665–674.
16. Bergmann, P.; Löwe, S.; Fauser, M.; Sattlegger, D.; Steger, C. Improving unsupervised defect segmentation by applying structural similarity to autoencoders. arXiv 2018, arXiv:1807.02011.
17. You, Z.; Cui, L.; Shen, Y.; Yang, K.; Lu, X.; Zheng, Y.; Le, X. A unified model for multi-class anomaly detection. Adv. Neural Inf. Process. Syst. 2022, 35, 4571–4584.
18. Bergman, L.; Cohen, N.; Hoshen, Y. Deep nearest neighbor anomaly detection. arXiv 2020, arXiv:2002.10445.
19. Cohen, N.; Hoshen, Y. Sub-image anomaly detection with deep pyramid correspondences. arXiv 2020, arXiv:2005.02357.
20. Altman, N.S. An Introduction to Kernel and Nearest-Neighbor Nonparametric Regression. Am. Stat. 1992, 46, 175–185.
21. Defard, T.; Setkov, A.; Loesch, A.; Audigier, R. PaDiM: A patch distribution modeling framework for anomaly detection and localization. In Proceedings of the International Conference on Pattern Recognition, Milan, Italy, 10–15 January 2021; pp. 475–489.
22. Zheng, Y.; Wang, X.; Deng, R.; Bao, T.; Zhao, R.; Wu, L. Focus your distribution: Coarse-to-fine non-contrastive learning for anomaly detection and localization. In Proceedings of the 2022 IEEE International Conference on Multimedia and Expo (ICME), Taipei, Taiwan, 18–22 July 2022; pp. 1–6.
23. Jaderberg, M.; Simonyan, K.; Zisserman, A. Spatial transformer networks. In Proceedings of the Advances in Neural Information Processing Systems 28 (NIPS 2015), Montreal, QC, Canada, 7–12 December 2015; Volume 28.
24. Hanley, J.A.; McNeil, B.J. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 1982, 143, 29–36.
25. Huang, C.; Guan, H.; Jiang, A.; Zhang, Y.; Spratling, M.; Wang, Y.-F. Registration based few-shot anomaly detection. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 303–319.
26. Zagoruyko, S.; Komodakis, N. Wide residual networks. arXiv 2016, arXiv:1605.07146.
27. Chen, X.; He, K. Exploring simple Siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 15750–15758.
28. Cai, L.Z.; Lin, J.; Starr, M.R.; Obeid, A.; Ryan, E.H.; Ryan, C.; Forbes, N.J.; Arias, D.; Ammar, M.J.; Patel, L.G.; et al. PRO score: Predictive scoring system for visual outcomes after rhegmatogenous retinal detachment repair. Br. J. Ophthalmol. 2023, 107, 555–559.
29. Yang, J.; Shi, Y.; Qi, Z. DFR: Deep feature reconstruction for unsupervised anomaly segmentation. arXiv 2020, arXiv:2012.07122.
30. Yi, J.; Yoon, S. Patch SVDD: Patch-level SVDD for anomaly detection and segmentation. In Proceedings of the Asian Conference on Computer Vision, Kyoto, Japan, 30 November–4 December 2020.
31. Li, C.-L.; Sohn, K.; Yoon, J.; Pfister, T. CutPaste: Self-supervised learning for anomaly detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 9664–9674.
Figure 1. The architecture of the proposed FR-PatchCore model. The framework comprises three main steps, and the feature extraction processes in the two modules run in parallel. In the registration module, randomly selected images are fed into the CNN + STN to obtain pre-encoded features. In the memory bank module, the features emerging from the second and third STN layers are combined to retain sufficient spatial information, and the fused features are stored in the memory bank.
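As a rough illustration of the layer-fusion step in the memory bank module, the sketch below upsamples a deeper feature map to the resolution of a shallower one and concatenates the two along the channel axis; the stage names and tensor shapes are assumptions for illustration, not the exact FR-PatchCore implementation.

```python
# A rough sketch of mid-level feature fusion, assuming feature maps taken
# after the second and third STN/backbone stages; shapes are illustrative.
import torch
import torch.nn.functional as F

feat2 = torch.rand(1, 512, 28, 28)   # features after the second stage
feat3 = torch.rand(1, 1024, 14, 14)  # features after the third stage

# Upsample the deeper map to the shallower map's spatial size, then
# concatenate along channels to keep spatial detail and semantics together.
feat3_up = F.interpolate(feat3, size=feat2.shape[-2:], mode="bilinear",
                         align_corners=False)
fused = torch.cat([feat2, feat3_up], dim=1)
print(fused.shape)  # torch.Size([1, 1536, 28, 28])
```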
Figure 2. Registration module: two different images of the same category are input, and the CNN + STN combination extracts their features. A Siamese network is employed for feature encoding and prediction, and the registration network is trained so that the predicted features closely resemble the encoded features.
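For reference, the following is a minimal sketch of the negative cosine similarity objective, in the stop-gradient style of SimSiam [27], with which such a Siamese registration network could be trained; tensor shapes and variable names are placeholders rather than the authors' exact implementation.

```python
# A minimal sketch of a SimSiam-style negative cosine similarity loss;
# p* are predictor outputs, z* are encoder outputs for the two views.
import torch
import torch.nn.functional as F

def neg_cosine(p: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    z = z.detach()  # stop-gradient on the encoded branch
    return -F.cosine_similarity(p, z, dim=1).mean()

def registration_loss(p1, z2, p2, z1):
    # Symmetrized loss over the two input images of the same category.
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)

p1, z2 = torch.rand(8, 256), torch.rand(8, 256)
p2, z1 = torch.rand(8, 256), torch.rand(8, 256)
print(registration_loss(p1, z2, p2, z1))
```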
Figure 3. A visualization of fused features from different levels. When too many feature levels are fused, the outline of the original image can no longer be distinguished by the naked eye and the features become fine-grained.
Figure 4. The structure of SPPM.
Figure 5. (a) Source image; (b) artificial construction; (c) heat map; (d) ground truth of the constructed image.
Figure 6. The corresponding anomaly mask under a certain th. (a) th = 9; (b) th = 8.3; (c) th = 7.6; (d) th = 6.4; (e) th = 6.1.
Figure 7. (a) HSR dataset 1; (b) HSR dataset 2; (c) HSR dataset 3.
Figure 8. Average PRO score line chart.
Figure 9. Feature visualization extracted by the wide_resnet_50_2 network.
Figure 10. Feature visualization extracted by the ViT network.
Figure 11. (a) Loss convergence curve with no pooling; (b) our SPPM; (c) 2D average pooling.
Figure 12. (a) Original image; (b) segmentation result, where red marks the initial segmentation of abnormal areas; (c) predicted result; (d) ground truth; (e) mask segmented without any post-processing; (f) mask segmented with the proposed post-processing method.
Figure 13. (a) Responsiveness histogram (normal); (b) responsiveness histogram (abnormal).
Figure 14. (a) Original image; (b) segmentation result, where red marks the initial segmentation of abnormal areas; (c) predicted result; (d) ground truth; (e) segmented mask.
Figure 15. (a) Original image; (b) segmentation result, where red marks the initial segmentation of abnormal areas; (c) predicted result; (d) ground truth; (e) segmented mask.
Figure 16. Registration loss function curve of Figure 15a.
Figure 16. Registration loss function curve of Figure 15a.
Sensors 24 01368 g016
Table 1. The results on the MVTec dataset; each cell shows Det. / Seg. (AUROC—image% / AUROC—pixel%).

| Category | SPADE | PaDiM | SVDD | CutPaste | RegAD | PatchCore | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- |
| carpet | 92.80 / 97.50 | 99.40 / 98.90 | 92.90 / 92.60 | 93.10 / 98.30 | 98.50 / 98.90 | 98.00 / 98.90 | 97.13 / 98.35 |
| grid | 47.30 / 93.70 | 95.70 / 94.90 | 94.60 / 96.20 | 99.90 / 97.50 | 91.50 / 88.70 | 97.50 / 97.50 | 98.65 / 95.14 |
| leather | 95.40 / 97.60 | 99.80 / 99.10 | 90.90 / 97.40 | 100.0 / 99.50 | 100.0 / 98.90 | 100.0 / 99.10 | 99.84 / 98.84 |
| tile | 96.50 / 87.40 | 97.40 / 91.20 | 97.80 / 91.40 | 93.40 / 90.50 | 97.40 / 95.20 | 99.40 / 94.90 | 98.46 / 97.67 |
| wood | 95.80 / 88.50 | 98.80 / 93.60 | 96.50 / 90.80 | 98.60 / 95.50 | 99.40 / 94.60 | 98.90 / 93.60 | 98.73 / 95.45 |
| bottle | 97.20 / 98.40 | 99.80 / 98.10 | 98.60 / 98.10 | 98.30 / 97.60 | 99.80 / 97.50 | 100.0 / 98.30 | 99.76 / 98.39 |
| cable | 84.80 / 97.20 | 92.20 / 95.80 | 90.30 / 96.80 | 80.60 / 90.00 | 80.60 / 94.90 | 99.30 / 98.90 | 99.78 / 98.17 |
| capsule | 89.70 / 99.00 | 91.50 / 98.30 | 76.70 / 95.80 | 96.20 / 97.40 | 76.30 / 98.20 | 97.60 / 98.50 | 98.83 / 98.82 |
| hazelnut | 88.10 / 99.10 | 93.30 / 97.70 | 92.00 / 97.50 | 97.30 / 97.30 | 96.50 / 98.50 | 100.0 / 98.40 | 99.59 / 97.96 |
| metal nut | 71.00 / 98.10 | 95.80 / 96.70 | 94.00 / 98.00 | 99.30 / 93.10 | 98.30 / 96.90 | 99.70 / 97.70 | 99.67 / 98.77 |
| pill | 80.10 / 96.50 | 94.40 / 94.70 | 86.10 / 95.10 | 92.40 / 95.70 | 80.60 / 97.80 | 95.90 / 97.70 | 98.82 / 98.21 |
| screw | 66.70 / 98.90 | 84.40 / 97.40 | 81.30 / 95.70 | 86.30 / 96.70 | 63.40 / 97.10 | 94.90 / 98.60 | 96.86 / 98.50 |
| toothbrush | 88.90 / 97.90 | 97.20 / 98.70 | 100.0 / 98.10 | 98.30 / 98.10 | 98.50 / 98.70 | 100.0 / 97.20 | 98.06 / 98.63 |
| transistor | 90.30 / 94.10 | 97.50 / 97.20 | 91.50 / 97.00 | 95.50 / 93.00 | 93.40 / 96.80 | 99.90 / 94.90 | 98.96 / 97.26 |
| zipper | 96.60 / 96.50 | 90.80 / 98.20 | 97.90 / 95.10 | 99.40 / 99.30 | 94.00 / 97.40 | 99.20 / 98.40 | 99.58 / 97.79 |
| average | 85.41 / 96.03 | 95.20 / 96.70 | 92.07 / 95.71 | 95.19 / 95.97 | 91.21 / 96.67 | 98.69 / 97.51 | 98.81 / 97.86 |
Table 2. The results on the MPDD dataset; each cell shows Det. / Seg. (AUROC—image% / AUROC—pixel%).

| Category | SPADE | PaDiM | SVDD | CutPaste | RegAD | PatchCore | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- |
| bracket black | 62.80 / 97.70 | 75.60 / 94.20 | 60.17 / 89.46 | 77.50 / 85.20 | 67.30 / 72.58 | 68.15 / 90.55 | 85.98 / 96.74 |
| bracket brown | 88.00 / 98.50 | 85.40 / 95.20 | 64.56 / 89.69 | 93.40 / 93.80 | 69.60 / 93.25 | 74.96 / 86.46 | 83.50 / 95.40 |
| bracket white | 83.60 / 97.00 | 82.20 / 98.10 | 50.89 / 89.48 | 74.60 / 82.70 | 61.40 / 95.68 | 72.56 / 98.31 | 88.25 / 99.50 |
| connector | 90.70 / 98.60 | 91.70 / 97.90 | 94.52 / 90.98 | 97.60 / 96.80 | 94.90 / 92.16 | 80.95 / 96.74 | 99.95 / 99.11 |
| metal plate | 91.70 / 97.40 | 56.30 / 93.40 | 92.80 / 92.70 | 88.00 / 92.50 | 90.20 / 89.26 | 89.98 / 96.30 | 98.48 / 98.22 |
| tubes | 51.00 / 97.50 | 57.50 / 92.10 | 22.15 / 85.81 | 98.20 / 97.40 | 67.90 / 92.27 | 69.07 / 94.11 | 81.25 / 98.85 |
| average | 77.97 / 97.78 | 74.78 / 95.15 | 64.18 / 89.69 | 88.22 / 91.40 | 71.90 / 89.20 | 75.95 / 93.75 | 89.57 / 97.97 |
Table 3. The results on the HSR dataset; each cell shows Det. / Seg. (AUROC—image% / AUROC—pixel%).

| HSR dataset | SPADE | PaDiM | SVDD | CutPaste | RegAD (k = 8) | PatchCore | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 98.78 / 90.70 | 98.67 / 93.70 | 95.46 / 93.25 | 78.70 / 85.27 | 92.25 / 90.75 | 98.28 / 97.25 | 99.32 / 96.58 |
| 2 | 97.25 / 95.26 | 97.68 / 97.25 | 98.56 / 98.25 | 99.25 / 98.75 | 94.87 / 95.76 | 98.26 / 99.27 | 98.86 / 98.86 |
| 3 | 95.45 / 94.28 | 96.87 / 97.36 | 94.28 / 96.28 | 98.36 / 97.16 | 98.23 / 96.14 | 97.52 / 97.25 | 99.28 / 99.38 |
| average | 97.16 / 93.41 | 97.74 / 96.10 | 96.10 / 95.93 | 92.10 / 93.73 | 95.12 / 94.22 | 98.02 / 97.92 | 99.15 / 98.27 |
Table 4. Average PRO scores on three datasets.

| Dataset | SPADE | PaDiM | PatchCore | Ours |
| --- | --- | --- | --- | --- |
| MVTec | 91.70 | 90.10 | 90.17 | 92.90 |
| MPDD | 91.20 | 84.70 | 93.20 | 94.90 |
| HSR 1–3 | 90.70 | 94.81 | 94.72 | 95.81 |
| Mean score | 91.20 | 89.87 | 92.69 | 94.54 |
Table 5. Industrial inspection scores on HSR dataset 1.

| Metric | SPADE | PaDiM | PatchCore | Ours |
| --- | --- | --- | --- | --- |
| Recall (%) | 88.00 | 96.00 | 94.00 | 96.00 |
| Accuracy (%) | 87.80 | 97.60 | 97.92 | 97.39 |
| FAR (%) | 12.00 | 4.00 | 3.52 | 4.00 |
| Time (s) | 0.37 | 0.35 | 0.20 | 0.22 |
Table 6. Industrial inspection scores on HSR dataset 2.

| Metric | SPADE | PaDiM | PatchCore | Ours |
| --- | --- | --- | --- | --- |
| Recall (%) | 84.00 | 94.00 | 96.00 | 98.00 |
| Accuracy (%) | 82.60 | 93.04 | 92.17 | 93.91 |
| FAR (%) | 16.00 | 6.00 | 4.00 | 2.00 |
| Time (s) | 0.42 | 0.34 | 0.33 | 0.30 |
Table 7. Industrial inspection scores on HSR dataset 3.

| Metric | SPADE | PaDiM | PatchCore | Ours |
| --- | --- | --- | --- | --- |
| Recall (%) | 86.00 | 92.00 | 96.41 | 98.00 |
| Accuracy (%) | 86.09 | 92.17 | 97.92 | 98.45 |
| FAR (%) | 14.00 | 0.08 | 4.00 | 2.00 |
| Time (s) | 0.31 | 0.26 | 0.21 | 0.21 |
Table 8. Five training iterations on HSR dataset 1 (recall%).

| Iteration | SPADE | PaDiM | PatchCore | Ours |
| --- | --- | --- | --- | --- |
| 1 | 83.33 | 88.89 | 94.44 | 94.44 |
| 2 | 94.44 | 94.44 | 88.89 | 100.00 |
| 3 | 83.33 | 83.33 | 94.44 | 88.89 |
| 4 | 100.00 | 94.44 | 100.00 | 94.44 |
| 5 | 88.89 | 100.00 | 94.44 | 94.44 |
| Mean recall | 89.99 | 92.22 | 94.44 | 94.44 |
Table 9. Five cross-validation iterations on HSR datasets 1–3 (accuracy%).

| Iteration | SPADE | PaDiM | PatchCore | Ours |
| --- | --- | --- | --- | --- |
| 1 | 94.78 | 98.26 | 95.65 | 98.26 |
| 2 | 95.65 | 93.04 | 96.52 | 95.65 |
| 3 | 92.17 | 93.91 | 94.78 | 93.91 |
| 4 | 97.39 | 95.65 | 94.78 | 97.39 |
| 5 | 90.43 | 94.78 | 99.13 | 96.52 |
| Mean accuracy | 94.08 | 95.13 | 96.17 | 96.35 |
Table 10. Mean error calculation.

| Metric | SPADE | PaDiM | PatchCore | Ours |
| --- | --- | --- | --- | --- |
| Mean error | 5.92 | 4.87 | 3.83 | 3.65 |
Table 11. Comparison of different pooling; each cell shows Det. / Seg. (AUROC—image% / AUROC—pixel%).

| Metric | No pooling | SPPM | AvgPool2d |
| --- | --- | --- | --- |
| Scores | 95.50 / 88.50 | 96.80 / 90.60 | 98.20 / 91.70 |
Table 12. Module ablation (AUROC%); each cell shows Det. / Seg.

| Module | MVTec | MPDD |
| --- | --- | --- |
| No SPPM | 95.50 / 97.51 | 96.80 / 90.60 |
| No registration | 95.50 / 97.64 | 96.80 / 90.60 |
| Neither | 95.50 / 97.51 | 95.50 / 88.50 |
| Both | 95.50 / 97.92 | 95.50 / 88.50 |
Table 13. Mean IoU of the masks generated without post-processing and of the masks generated by our method.

| Dataset | Mean IoU (before) | Mean IoU (ours) |
| --- | --- | --- |
| HSR dataset 1 | 0.42 | 0.68 |
| HSR dataset 2 | 0.33 | 0.75 |
| HSR dataset 3 | 0.37 | 0.71 |
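For reference, the mean IoU reported in Table 13 can be computed from binary masks as in the minimal sketch below; the helper name and the toy masks are illustrative.

```python
# A minimal sketch of mean IoU between predicted and ground-truth binary
# masks; names and example arrays are illustrative placeholders.
import numpy as np

def mean_iou(preds: list, gts: list) -> float:
    ious = []
    for pred, gt in zip(preds, gts):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        # Convention: an empty union (both masks empty) counts as perfect.
        ious.append(inter / union if union > 0 else 1.0)
    return float(np.mean(ious))

pred = np.array([[1, 1], [0, 0]], dtype=bool)
gt = np.array([[1, 0], [0, 0]], dtype=bool)
print(mean_iou([pred], [gt]))  # 0.5
```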
