Article

Robust and Accurate Recognition of Carriage Linear Array Images for Train Fault Detection

School of Instrumentation and Optoelectronic Engineering, Beihang University (BUAA), Beijing 100191, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(18), 8525; https://doi.org/10.3390/app14188525
Submission received: 25 August 2024 / Revised: 11 September 2024 / Accepted: 20 September 2024 / Published: 22 September 2024
(This article belongs to the Special Issue Current Advances in Railway and Transportation Technology)

Abstract

Train fault detection often relies on comparing collected images with reference images, making accurate image type recognition crucial. Current systems use Automatic Equipment Identification (AEI) devices to recognize carriage numbers while capturing images, but damaged Radio Frequency (RF) tags or blurred characters can hinder this process. Carriage linear array images, with their high resolution, extreme aspect ratios, and local nonlinear distortions, present challenges for recognition algorithms. This paper proposes a method tailored for recognizing such images. We apply an object detection algorithm to locate key components, simplifying image recognition into a sparse point set alignment task. To handle local distortions, we introduce a weighted radial basis function (RBF) and maximize the similarity between Gaussian mixtures of point sets to determine RBF weights. Experiments show 100% recognition accuracy under nonlinear distortions up to 15%. The algorithm also performs robustly with detection errors and identifies categories from 76 image classes in 24 ms on an i7 CPU without GPU support. This method significantly reduces system costs and advances automatic exterior fault detection for trains.

1. Introduction

With the continuous expansion of railway networks and the increase in train speeds, the importance of train body fault detection technology has become increasingly prominent. Failure to accurately identify train faults in a timely manner poses serious safety risks and can increase operating costs. Therefore, researchers have designed systems such as the Trouble of Moving EMU Detection System (TEDS), Trouble of Moving Freight Detection System (TFDS), and Trouble of Moving Vehicle Detection System (TVDS) to monitor train fault conditions [1,2,3,4]. These systems use multiple cameras installed around the tracks to capture images of the train car bodies from different angles. Figure 1a shows a train carriage line-scan image acquisition system. When the train passes through the camera’s field of view, the cameras capture images of the carriage surface from various observation angles. By extracting standard reference images of the same category from a pre-established reference image library and performing comparative analysis, these systems can evaluate the train’s fault status. Figure 1e illustrates the fault detection process; for more details, please refer to our previous work [3]. The first step in this process is to accurately select the reference images for comparison, which is crucial for the accuracy of subsequent fault analysis. Ensuring the correct selection of reference images depends on accurately identifying the category of the current image. Thus, accurate identification of carriage image categories is essential in the train fault detection system.
Currently, in train image acquisition systems, image category identification is primarily achieved by recognizing the carriage number information using specialized equipment while capturing the line-scan images of the carriage. This information includes the model and a unique identifier of the carriage, from which the model information can be parsed to label the category of the current image. Existing carriage number recognition equipment can be classified into two types based on their working principles. One type uses microwave communication technology. When the train passes between microwave antennas installed on the tracks, the equipment reads the Radio Frequency (RF) tags installed at the bottom of the carriages, which store the carriage number information, as shown in Figure 2a. The other type uses visual character recognition technology to directly recognize the carriage number characters painted on the side of the carriage, as shown in Figure 2b.
However, the real-world operating conditions of trains are complex, especially for freight trains, where strong vibrations and unpredictable environmental factors can cause the pre-installed RFID tags to be lost or damaged. In such cases, the microwave communication-based carriage number recognition equipment cannot mark the collected image categories, rendering the fault detection system inoperative. Additionally, the carriage number markings can be worn out or obscured by dust, leading to blurred characters in the images, as shown in Figure 2b. This compromises the accuracy of character recognition-based carriage number identification. Furthermore, the high cost of Automatic Equipment Identification (AEI) equipment significantly increases the cost of the fault detection system. To address these issues, this study focuses on developing a method for identifying train carriage categories directly from line-scan images, without relying on additional specialized equipment. This approach aims to overcome the limitations of existing carriage image identification technologies and improve the stability of train fault detection systems.
Due to the ease of achieving uniform illumination with line-scan cameras, the high consistency of multiple imaging of the same target, and their unique imaging principle which allows capturing high-resolution images of entire carriages without the need for image stitching algorithms, line-scan cameras are widely used in existing train fault detection systems [1,2,3,4]. However, line-scan carriage images have unique characteristics such as nonlinear scale distortion, ultra-high image resolution, and extreme aspect ratios, which pose new challenges to image recognition tasks. Firstly, when a train passes a line-scan camera at varying speeds, if the train’s speed is not accurately and promptly fed back to the camera’s trigger interface, the captured line-scan images of the carriage will exhibit nonlinear scale distortion in the horizontal direction, as shown in Figure 1c. The relationship between the horizontal coordinates of the distorted image and the standard scale image is illustrated in Figure 1d. This curve intuitively demonstrates the form of nonlinear distortion in the horizontal direction. This type of scale distortion maintains consistency in the vertical direction but presents unknown nonlinearity in the horizontal direction. It not only alters local image features but also significantly impacts global image features, directly affecting the accuracy of image recognition algorithms. Furthermore, to provide sufficient image detail features for accurate fault identification, carriage line-scan images often have extremely high resolutions. Due to the elongated geometric structure of the carriages, there is a significant difference in the aspect ratio of the line-scan images. Figure 3 shows line-scan images of four different types of carriages, captured by a 2K line-scan camera from both bottom and side perspectives. The resolution in the width direction primarily depends on the length of the carriage. As shown in Figure 3, even the smallest image resolution reaches approximately 32 megapixels, with an aspect ratio of about 1:8. Although existing deep learning-based image recognition algorithms have demonstrated excellent performance across multiple datasets, these models are primarily designed for low-resolution and regular aspect ratio images. Processing high-resolution images for feature extraction and handling consumes substantial computer memory or GPU memory, making it challenging to meet the above requirements due to capacity limitations. If high-resolution images with extreme aspect ratios are forcibly scaled down to conventional proportions and regular resolutions, significant image detail features will be lost, and the extreme scale distortion will cause uncertainty in feature representation, which is crucial for distinguishing different carriage types.
To address the challenge of identifying the categories of high-resolution, extreme aspect ratio line-scan images of carriages, this study utilizes the spatial arrangement of key components in the carriage images, which corresponds uniquely to each carriage type. We propose a template matching-based image identification method, with the main contributions summarized as follows:
  • To avoid feature distortion and high memory usage when processing image features during category identification, we leveraged the relationship between the layout of key components in the carriage image and the carriage type. By constructing a sparse point set based on the detected key components, we propose a template matching method that registers the sparse point set of the acquired image to the standard template point set of the target category, thereby identifying the carriage image category.
  • To ensure that the coordinate transformation in the template matching process can accommodate the nonlinear scale distortions present in line-scan images of carriages, this study employs a weighted radial basis function to describe the nonlinear transformation relationship between the horizontal coordinates of the two point sets. Furthermore, to address the issues of unknown point correspondences and mismatched point quantities during point set registration, we designed an objective function that maximizes the similarity of the mixed Gaussian distribution between the point sets, thereby solving for the weights of the radial basis function.
  • Extensive experiments demonstrate that our method exhibits excellent performance in terms of recognition accuracy, processing speed, and robustness. In the task of recognizing high-resolution line-scan images of 76 carriage categories, the algorithm achieves 100% recognition accuracy when the local nonlinear scale distortion is less than 15%. Furthermore, it can accurately identify image categories even when false or missed detections change the number of detected points by up to 10%. The entire image identification process takes an average of only 24 ms on an i7 CPU.
The rest of this paper is organized as follows: Section 2 provides a review of related work. Section 3 introduces the proposed methodology framework in detail. Section 4 discusses experimental results. Finally, Section 5 concludes this article.

2. Related Works

Image classification is a typical task in the field of computer vision, widely applied in object recognition, medical image analysis, and autonomous driving. Based on different image feature descriptions and classification principles, image classification methods are mainly divided into traditional machine learning-based methods [5,6,7,8,9] and deep learning-based methods [10,11,12,13,14,15,16,17]. This article provides a detailed introduction to these methods, focusing on their implementation principles, their feature representation methods, and the advantages of each approach.

2.1. Traditional Machine Learning-Based Image Classification Methods

Support Vector Machines (SVMs) are widely used in image classification by finding an optimal hyperplane to separate classes. Chandra and Bedi [5] reviewed SVM's principles, kernel function selection, and practical performance. Typically, handcrafted features like SIFT and HOG are used for feature extraction, transforming image information into vectors for classification. SVM is effective in high-dimensional spaces but has long training times and is sensitive to parameter selection, making it less suitable for large-scale datasets. Random Forests, an ensemble method, enhance robustness and generalization through multiple decision trees trained on subsamples and feature subsets. Bosch et al. [6] used Random Forests and Ferns, a variant of decision trees known for computational efficiency, for image classification. While Random Forests are fast and handle high-dimensional data, they can overfit with a small number of trees and their interpretability is limited. k-Nearest Neighbors (kNN) determines the class of a sample by calculating its distances to its k nearest neighbors. Amato and Falchi [7] improved kNN's accuracy using local feature descriptors like SURF and ORB. kNN is simple and adaptive but computationally expensive for large datasets, and its performance depends on the distance metric and the value of k. The Naive Bayes classifier applies Bayes' theorem to calculate the conditional probability of features. Timofte et al. [8] used statistical principles in their Naive Bayes-based image classification method. While fast and suitable for high-dimensional data, Naive Bayes assumes feature independence, which is often unrealistic. The Bag-of-Words (BoW) model represents images as unordered collections of feature words. Wang and Huang [9] improved BoW for image classification by refining visual vocabulary generation. BoW is flexible and scalable but lacks spatial information, potentially missing structural details in images.

2.2. Deep Learning-Based Image Classification Methods

EfficientNet is a convolutional neural network that balances depth, width, and resolution through compound scaling, reducing parameters while maintaining performance [10]. Tan and Le [11] improved the architecture with EfficientNetV2, achieving smaller models and faster training. EfficientNet’s hierarchical design efficiently extracts high-level features for image classification. ResMLP uses multilayer perceptrons (MLPs) and residual connections for image classification, offering advantages in data efficiency and generalization [12]. ResNet, introduced by He et al. [13], solved the gradient vanishing issue with residual connections, improving performance in feature extraction. Liang [14] demonstrated ResNet’s superior performance on various benchmarks. Cheng et al. [15] proposed SeNet, enhancing classification by extracting edge information through structured edge detection. ShuffleNet, proposed by Zhang et al. [16], uses depthwise separable convolutions and channel shuffle to optimize performance on mobile devices while reducing complexity. The Vision Transformer (ViT) [17] extends Transformer models from natural language processing to image recognition by treating images as sequences of patches. ViT, pre-trained on large datasets like ImageNet-21k and JFT-300M, has demonstrated superior performance over traditional CNNs on large-scale image tasks.

2.3. High-Resolution Image Detection-Related Research

There has been extensive research on high-resolution image detection and recognition in the field of satellite remote sensing. For example, in [18], buildings in remote sensing images captured by drones were successfully identified. In [19,20], ships in Gaofen-3 SAR images were detected and identified. In [21], sparse representation was combined with various characteristics of remote sensing images for the first time, improving the accuracy of high-resolution image target recognition. Notably, Ref. [22] proposed the HRDNet network model designed for small target detection in high-resolution images. To improve detection speed, Ref. [23] proposed a multi-GPU distributed computing method, optimizing performance for target detection tasks on 4K and 8K images. However, current research on high-resolution image detection primarily focuses on detecting conventional targets such as faces, pedestrians, buildings, ships, and aircraft [24]. These studies mainly focus on small target detection and typically deal with images of regular size ratios. Unlike target detection tasks, this study focuses on image recognition. The carriage line-scan images in this study are characterized by ultra-high resolution and extreme aspect ratios, with each image corresponding to an entire carriage. Thus, existing research results do not address the image recognition issues in this study.

2.4. Image Retrieval Methods

As shown in Figure 1, searching for target images from a reference library for train fault detection is a typical image retrieval task. Various mature methods are available:
Liu et al. [25] proposed Deep Supervised Hashing (DSH), using CNNs to learn binary codes for efficient image retrieval with low storage and computational costs, though it struggles with high-resolution images and requires extensive labeled data. Gordo et al. [26] introduced an end-to-end learning architecture with R-MAC descriptors and a Siamese network, improving retrieval accuracy but requiring complex training and high-quality labeled datasets. Liu et al. [27] proposed Guided Similarity Separation (GSS) using graph convolutional networks and unsupervised learning to enhance retrieval accuracy, but the method has high computational complexity. Ramzi et al. [28] introduced a rank loss optimization framework, improving retrieval performance across multiple datasets, although the optimization process is resource-intensive. Smeulders et al. [29] proposed Content-Based Image Retrieval (CBIR), which relies on extracting visual features like color and texture, but the feature extraction process can be slow for large databases. Noh et al. [30] developed DELF, a deep local feature descriptor effective in handling complex images, though initial training requires significant computational resources. Henkel et al. [31] proposed an end-to-end pipeline combining EfficientNet with hybrid Swin-Transformers, achieving state-of-the-art results in large-scale landmark retrieval, though the model’s complexity demands substantial computational power and labeled data.
In summary, numerous advanced technologies and application methods have been proposed in the field of image retrieval, demonstrating significant performance improvements in specific application scenarios. In train fault detection systems, the extremely high resolution and extreme aspect ratio of carriage line-scan images present significant challenges in memory and GPU usage for existing image retrieval methods. While traditional machine learning methods perform well on small-scale datasets, they often fall short on large-scale image data compared to deep learning algorithms. Despite deep learning methods showcasing outstanding performance in image recognition, these models are primarily designed for low-resolution and regular aspect ratio images. Forcibly resizing high-resolution carriage line-scan images to regular sizes would result in the loss of critical detailed features essential for vehicle type recognition.
Furthermore, accumulating images of different carriage types requires considerable time, and in training-based methods, algorithm accuracy is positively correlated with the number of images collected. When data categories change, the model needs retraining, which is time-consuming and cannot quickly adjust to new target categories. Due to the unique nature of line-scan carriage images, existing methods do not consider these aspects and fail to meet the specific needs of image recognition in train fault detection.
To address these challenges, this study explores methods for acquiring reference images from the perspective of image classification, aiming to develop a method suitable for quickly obtaining reference images of line-scan carriage images. This approach seeks to reduce dependence on computer hardware configuration and computational resources, improving the efficiency and accuracy of line-scan carriage image recognition.

3. Methodology

To tackle the recognition challenges of carriage linear array images, we propose a template matching-based image identification method. By exploiting the distinct correlation between the spatial arrangement of key carriage components and their categories, the high-resolution image recognition challenge was reframed as a sparse point set template matching task. Considering the nonlinear scale distortions in the horizontal dimension of these images, a weighted radial basis function was utilized for coordinate transformations during point set registration. This approach effectively addresses the template matching challenges, enhancing both accuracy and robustness. Before detailing the algorithm, it is essential to categorize the unique features of linear array carriage images.

3.1. Classification of Carriage Linear Array Images

During the acquisition of line-scan images of train carriages, the camera remains stationary, and the movement of the train results in four different categories of line-scan integrated images for the same carriage. As shown in Figure 4a, L and R represent the left and right sides of the carriage, respectively, while A and B represent the front and rear ends. The red line indicates the scanning line of the line-scan camera. The corresponding image categories are denoted as LRAB, LRBA, RLAB, and RLBA. If there are N types of carriages, the total number of line-scan image categories is 4×N. Images of the same side of the carriage exhibit a left–right flip relationship.

3.2. Template Matching-Based Image Recognition Method

This paper proposes a template matching-based image recognition algorithm, consisting of four main steps: image-to-point conversion, point set registration, matching quality evaluation, and score ranking. As shown in Figure 4c, the process begins by converting the target detection results of the line-scan carriage images into point sets. In Figure 3, different colored rectangles represent different detection targets. Based on predefined criteria, only the center coordinates of the bounding boxes are retained and no labels are assigned to the points, forming a two-dimensional point set. This step corresponds to Figure 4b. The same procedure is applied to both the image to be recognized and the template image, yielding a scene point set and a target point set, respectively. The image to be recognized may exhibit local nonlinear scale distortions, while the template image is a standard scale image for each category.
The algorithm then performs point set registration for the specified scene point set and the template point sets of each category, aiming to align the horizontal coordinates of the point sets. Subsequently, each registration result undergoes a quality evaluation to verify the matching status between point sets and determine the success of the match, scoring each accordingly. Finally, all matching scores are ranked and the template category with the highest score exceeding a set threshold is selected as the recognition result.
The key to the recognition process is accurately finding the target point set that matches the scene point set. The core challenge is addressing the point set registration problem under local nonlinear scale distortions in the horizontal direction of the line-scan carriage images. Due to the unique nature of line-scan carriage images, the initial vertical coordinates of correctly paired target and scene point sets are generally aligned. To improve point set registration efficiency, only horizontal alignment needs to be addressed, using an appropriate matching quality evaluation method to ensure accurate carriage category recognition. To achieve this, a coordinate transformation equation based on a weighted radial basis function is proposed, along with a numerical solution. Additionally, a specific point set matching quality evaluation method is provided. The following sections will detail these core components.
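For clarity, the overall flow of this step—converting detection boxes to an unlabeled point set, matching it against every template, and ranking the scores—can be summarized by the following minimal Python sketch. The function and variable names (boxes_to_points, recognise, score_match) are ours, not the authors' C++ implementation; score_match stands in for the registration and quality-evaluation steps detailed in Sections 3.3, 3.4 and 3.5.

```python
import numpy as np

def boxes_to_points(boxes):
    """Convert detector bounding boxes (x1, y1, x2, y2) into an unlabeled 2-D point set of box centers."""
    b = np.asarray(boxes, dtype=float)
    return np.column_stack([(b[:, 0] + b[:, 2]) / 2.0, (b[:, 1] + b[:, 3]) / 2.0])

def recognise(scene_points, templates, score_threshold):
    """Match the scene point set against every template point set and return the best category, if any."""
    # score_match performs horizontal point-set registration (Sections 3.3-3.4) and
    # returns the match_ratio quality score (Section 3.5) for one template.
    scores = {name: score_match(scene_points, template_points)
              for name, template_points in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > score_threshold else None
```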

3.3. Weighted Radial Basis Function for Coordinate Transformation

Radial Basis Function (RBF) neural networks have garnered widespread attention for their excellent capability in approximating nonlinear functions, theoretically able to approximate any nonlinear function [32]. The core of RBF neural networks lies in the linear combination of radial basis functions within the network layer, making them a powerful tool for handling nonlinear problems. Leveraging this characteristic of RBF neural networks, this paper proposes a weighted radial basis function to describe and handle the coordinate transformation problem under local nonlinear scale distortions in line-scan images of train carriages. The horizontal coordinate transformation equation from the scene point set to the target point set is defined as shown in Equation (1).
$$T(x) = x + \sum_{i=1}^{n} w_i \cdot \mathrm{RBF}(x, c_i, \delta) \tag{1}$$
In the equation above, $w_i$ denotes the weighting value of each radial basis function, a parameter to be determined. These weights determine the influence of their corresponding radial basis functions on the overall coordinate transformation. $\mathrm{RBF}(\cdot)$ represents a kernel function, for which the Gaussian radial basis function is selected, as detailed in Equation (2). Here, $x$ designates the horizontal coordinate of a point in the scene point set, $c_i$ is the center of the $i$-th radial basis function, which corresponds to the horizontal coordinate of a point in the target point set, and $\delta$ denotes the standard deviation of the radial basis function. The value of $\delta$ can either be predefined based on the problem's specifics or adjusted for optimal results.
$$\mathrm{RBF}(x, c, \delta) = \exp\!\left(-\frac{(x - c)^2}{2\delta^2}\right) \tag{2}$$
By employing the weighted radial basis function concept, the strengths of RBF neural networks in approximating nonlinear functions are harnessed, providing a robust mathematical model to address the nonlinear distortions in carriage linear array images.
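As a concrete illustration, a minimal NumPy sketch of the transform in Equations (1) and (2) follows; the function names and the vectorized layout are ours, not taken from the authors' C++ implementation.

```python
import numpy as np

def gaussian_rbf(x, c, delta):
    """Gaussian radial basis kernel centered at c with standard deviation delta, Equation (2)."""
    return np.exp(-((x - c) ** 2) / (2.0 * delta ** 2))

def transform_x(x, weights, centers, delta):
    """Weighted-RBF horizontal transform T(x) = x + sum_i w_i * RBF(x, c_i, delta), Equation (1)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    centers = np.asarray(centers, dtype=float)
    # kernel response of every scene coordinate to every RBF center (one center per template point)
    k = gaussian_rbf(x[:, None], centers[None, :], delta)
    return x + k @ np.asarray(weights, dtype=float)
```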

3.4. Objective Function Design

When both the scene and target point sets possess an equal number of points with known correspondence, the weight vector w can be directly determined using the least squares method. However, the correspondence between point sets, based solely on object detection results, is often uncertain. Additionally, object detection imperfections, such as false positives or negatives, can cause point count mismatches between the scene and target sets. This makes an analytical determination of the weight variables challenging. To address this, we employed numerical optimization techniques. Iterative optimization, guided by the objective function, yields the weight variables.
The objective function's design aims to measure the alignment between two point sets, particularly when their sizes differ and correspondence is unclear. The Gaussian Mixture Model (GMM) is introduced to represent these point sets. Alignment is gauged by evaluating the congruence between the GMMs of both sets. The GMM's probability density at location $x$ is defined as $gmm(x, U) = \sum_{i=1}^{k} \omega_i\, \phi(x \mid u_i, \Sigma_i)$, where $u_i$ represents a point in set $U$, and the Gaussian distribution centered at $u_i$ is expressed as follows:
$$\phi(x \mid u_i, \Sigma_i) = \frac{\exp\!\left(-\frac{1}{2}(x - u_i)^{T}\, \Sigma_i^{-1}\, (x - u_i)\right)}{\sqrt{(2\pi)^{d} \det \Sigma_i}} \tag{3}$$
The L2 distance is chosen to determine the similarity between the GMMs of the two sets. $M$ represents the target point set, corresponding to centers of components in a standard carriage image, and $x$ a location within the range of $M$. $S$ denotes the scene point set, representing centers of bounding boxes in the image being identified. $w$ is the vector of transformation parameters from $S$ to $M$, comprising the weights. The goal is to identify the optimal weight vector $w$ that minimizes the function $F(S, M, w)$:
$$F(S, M, w) = \min_{w} \int G(x)\, dx, \qquad G(x) = \left(gmm(x, T(S, M, w)) - gmm(x, M)\right)^{2} \tag{4}$$
The squared term in the objective function can be decomposed into three components as shown in Equation (5). Since the third term, $gmm(x, M)^{2}$, is constant during optimization, focus is placed on the remaining two parts.
$$\left(gmm(x, T(S, M, w)) - gmm(x, M)\right)^{2} = A - B + C$$
$$A = gmm(x, T(S, M, w))^{2}, \quad B = 2\, gmm(x, T(S, M, w))\, gmm(x, M), \quad C = gmm(x, M)^{2} \tag{5}$$
Considering that the scale distortion in linear array images mainly manifests in the scanning direction, only 1D GMM similarity calculations based on horizontal coordinates are considered. If optimal transformation parameters are identified within a specific horizontal range, any point sampled within this range ensures a minimized objective value. Thus, uniform sampling within the model point set’s horizontal range is adopted for quick GMM similarity estimation.
For optimization, the Levenberg–Marquardt (L-M) algorithm is utilized, a nonlinear least squares optimization method that merges the Gauss–Newton method with gradient descent [33]. Each iteration requires gradient and Hessian matrix computations for parameter updates. In this context, the objective function’s Jacobian matrix is essential. Applying the chain rule, we derived the following:
$$\frac{\partial F}{\partial w_i} = 2 \sum_{k=1}^{N} \Delta(x_k)\, \frac{\partial \Delta(x_k)}{\partial w_i} \tag{6}$$
where $\Delta(x) = gmm(x, T(S, M, w)) - gmm(x, M)$. As $gmm(x, M)$ is a known constant, the partial derivative of $\Delta(x)$ with respect to $w_i$ can further be expressed as follows:
$$\frac{\partial \Delta(x)}{\partial w_i} = \sum_{j=1}^{N} \frac{\partial\, gmm(x, T(x_j))}{\partial T(x_j)}\, \frac{\partial T(x_j)}{\partial w_i} \tag{7}$$
Here, $\frac{\partial T(x_j)}{\partial w_i} = \mathrm{RBF}(x_j, c_i, \delta)$, and $N$ corresponds to the sample count along the horizontal axis of the model point set.
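A compact sketch of this registration step is given below, assuming the 1D mixture components share equal weights and a single standard deviation (our reading of the $\Sigma$ in Section 4.1), and delegating the Levenberg–Marquardt iterations to SciPy's least_squares rather than using the hand-derived Jacobian above. All names are illustrative; the defaults $\Sigma = 0.5$ and $\delta = 1$ follow the parameter choices reported in Section 4.1.

```python
import numpy as np
from scipy.optimize import least_squares

def gmm_1d(xs, centers, sigma):
    """Equal-weight 1-D Gaussian mixture built on `centers`, evaluated at sample locations xs."""
    d = (xs[:, None] - centers[None, :]) / sigma
    return (np.exp(-0.5 * d ** 2) / (np.sqrt(2.0 * np.pi) * sigma)).mean(axis=1)

def register_horizontal(scene_x, model_x, sigma=0.5, delta=1.0, n_samples=500):
    """Estimate the RBF weights that align the scene x-coordinates with the model x-coordinates."""
    scene_x = np.asarray(scene_x, dtype=float)
    model_x = np.asarray(model_x, dtype=float)
    xs = np.linspace(model_x.min(), model_x.max(), n_samples)   # uniform sampling over the model range
    target = gmm_1d(xs, model_x, sigma)                         # gmm(x, M), constant during optimization

    def warp(w):                                                # T(x) = x + sum_i w_i * RBF(x, c_i, delta)
        k = np.exp(-((scene_x[:, None] - model_x[None, :]) ** 2) / (2.0 * delta ** 2))
        return scene_x + k @ w

    def residuals(w):                                           # sampled L2 gap between the two mixtures
        return gmm_1d(xs, warp(w), sigma) - target

    sol = least_squares(residuals, np.zeros(len(model_x)), method="lm")  # Levenberg-Marquardt
    return sol.x
```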

3.5. Alignment Quality Assessment Method

In an ideal alignment between scene and target point sets, their Gaussian mixtures across horizontal coordinates should align closely, as depicted in Figure 5a. However, even after optimization, notable disparities can remain between point sets of different categories, as highlighted in Figure 5b. Furthermore, when two carriage images undergo vertical mirroring, their Gaussian mixtures might appear similar despite a mismatch in point set categories, as demonstrated in Figure 5c. Clearly, relying purely on Gaussian mixture similarity for point set matching is not sufficient.
To tackle these issues, this study proposes an intuitive yet effective method to evaluate the quality of point set alignments, providing a reliable metric to gauge the success of template matching. In detail, following the point set alignment, the scene point set, denoted as S , undergoes a transformation into the coordinate space of the target point set M . For each point s in S , its closest counterpart in M is determined, expressed as follows:
$$NN(s) = \operatorname*{arg\,min}_{m \in M} \mathrm{distance}(T(s), m) \tag{8}$$
Here, $NN(s)$ signifies the closest point to $s$ within $M$, while $\mathrm{distance}(a, b)$ quantifies the spatial separation between points $a$ and $b$. Given $\tau$ as a set distance threshold, if $\mathrm{distance}(T(s), NN(s)) < \tau$, it is inferred that point $s$ has been correctly matched. The fraction of accurate matches, denoted as $match\_ratio$, is derived using the subsequent formula. When $match\_ratio$ exceeds a threshold $\gamma$, the template matching process is judged successful. The $match\_ratio$ serves as the score for match quality. Finally, among all templates, the category corresponding to the highest-scoring template is recognized as the image's identification result.
$$match\_ratio = \frac{\text{number of correctly matched points in } S}{\text{total number of points in } S} \tag{9}$$
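The assessment step above can be sketched as follows (NumPy; names are ours). Here warped_scene_xy denotes the scene point set after its horizontal coordinates have been transformed into the template's coordinate space; the defaults $\tau = 200$ and $\gamma = 0.75$ correspond to the values adopted in Section 4.1.

```python
import numpy as np

def match_ratio(warped_scene_xy, model_xy, tau=200.0):
    """Fraction of transformed scene points whose nearest template point lies within distance tau."""
    # pairwise Euclidean distances between every warped scene point and every template point
    d = np.linalg.norm(warped_scene_xy[:, None, :] - model_xy[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) < tau))

# A template match is declared successful when match_ratio(...) exceeds gamma (0.75 in Section 4.1);
# among all successful templates, the highest-scoring one gives the recognized image category.
```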

4. Experiments

Freight train carriages are designed and manufactured to meet the diverse requirements of different types of cargo. Due to the wide variety of rail freight types, there is also a rich variety of corresponding carriage models. Additionally, the complex outdoor environment on railway lines means that the consistency of line-scan images of freight train carriages is far inferior to that of subway and high-speed train carriages. Therefore, using line-scan images of freight train carriages to evaluate the performance of image recognition algorithms is more convincing. In this experiment, we collected line-scan images of 19 different models of freight carriages. Based on the different movement directions of the vehicles, the images for each model were further subdivided into four subcategories: LRAB, LRBA, RLAB, and RLBA. Each subcategory contains 200 different images, exhibiting varying degrees of local nonlinear scale distortion in the horizontal direction. This results in a dataset comprising 76 categories and a total of 15,200 high-resolution carriage images. Each image has a height resolution of 2048 pixels, and the width resolution ranges from n × 2000 pixels (where n = 8 to 13), depending on the length of the carriage. The dataset is evenly divided into a training set and a testing set. Additionally, to evaluate our method, we constructed a standard reference image library consisting of 76 undistorted standard images, with one reference image for each category. Object detection was performed on both the test set and the template set, and the results were converted into corresponding point sets for subsequent algorithm performance evaluation.
In Section 2, we analyzed and discussed existing image classification methods. Among machine learning-based image recognition algorithms, the Bag-of-Words (BoW) model stands out for its high computational efficiency and scalability, particularly in handling large-scale feature descriptors. Given the high resolution of line-scan carriage images, the extracted local feature quantities are substantial. Considering processing speed and algorithmic complexity, other traditional machine learning methods are unsuitable for recognizing line-scan carriage images. Therefore, in our experiments, we selected the BoW model as a representative machine learning method, combined with different feature extraction methods, to compare recognition performance. Additionally, the unique nature of line-scan carriage images requires substantial memory and GPU resources for existing image retrieval methods. Currently, we do not have the hardware capabilities to conduct comparative experiments under these conditions. Ultimately, we selected three categories of methods for comparison: BoW-based methods, deep learning-based multi-class image classification methods, and the proposed method. Performance evaluation on the carriage dataset was conducted in terms of image recognition accuracy, processing speed, and algorithm robustness.
During the evaluation of deep learning-based methods, high-resolution input images caused GPU memory shortages, hindering model training and inference. Conversely, excessively low resolutions resulted in significant loss of image details, affecting recognition accuracy. To balance GPU memory consumption and accuracy loss, we uniformly scaled all network model input images to a resolution of 448 × 3200 pixels. In other comparative experiments, we downsampled the input images by a factor of 4 to ensure a resolution similar to that of the multi-classification network model, thereby ensuring the fairness and rigor of the comparative experiments.

4.1. Parameter Selection

This method involves four main parameters: $\tau$, $\gamma$, $\delta$, and $\Sigma$, each influencing the matching process and the final accuracy statistics differently. Below, we explain the selection rationale for each parameter.
Selection of  τ  and  γ : These two parameters are thresholds set to determine whether template matching is successful. Although they do not directly affect the point set matching process, they determine the accuracy statistics of the matching result.
τ is determined based on the maximum allowable error under scale distortions in the images. Considering the actual detection environment of the train images and the real target detection results, we set τ = 200 to account for potential scale variations.
γ represents the proportion of successfully matched points in the template point set. We set γ to 0.75, meaning that if the proportion exceeds this value, the match is considered successful. A higher γ can reduce false positives but may increase the risk of missing matches, while a lower γ may lead to more false positives. Based on experimental results, we selected this value as an optimal balance.
Selection of  δ : We chose δ = 1 for the iterative optimization process of point set alignment. The point set coordinates are normalized along the height direction of the image during alignment, but there is local nonlinear distortion in the width direction. The train’s speed feedback mechanism somewhat limits this distortion, and the horizontal offset usually does not exceed the height of the image. Therefore, δ = 1 ensures that each point’s Gaussian distribution in the template aligns with the height direction of the image, causing most matching points to fall within the high-response region of the Gaussian distribution. This enhances the robustness of the matching process and avoids excessive variance, which could lead to too much overlap of template points and local optima.
Selection of  Σ :  Σ is a key parameter in the template point set matching process. We tested the impact of Σ values ranging from 0.1 to 1 on the matching success rate. As shown in Table 1, the experimental results demonstrate the following:
When Σ is less than 0.9, the image recognition accuracy remains close to 100%. It is only when Σ exceeds 0.9 that the recognition accuracy starts to decline. This indicates that the method has good adaptability to parameter selection.
For matching success rate, Σ = 0.5 produces the best results, showing that this value yields the most stable matching outcome, as more successfully matched points enhance the robustness of the matching process.
Based on the above analysis, we selected the following final parameters for this study: δ = 1 ,   Σ = 0.5 ,   τ = 200 , and   γ = 0.75 . This combination demonstrated the best matching performance in our experiments.

4.2. Image Recognition Accuracy Evaluation

Due to different movement modes, the same carriage can produce four different types of line-scan integrated images. Since the structure of the carriage is asymmetrical, images captured from the left side differ in features from those captured on the right side. Images captured from the same side but with different movement directions will appear as left–right flipped versions of each other. Spatially, these four subcategories of images have significant differences; however, from an image feature perspective, they exhibit high similarity, which directly affects image recognition accuracy. Although the image category relationships may seem straightforward, they pose significant challenges for existing image recognition algorithms. To fully demonstrate these challenges and highlight the advantages of our proposed method, we reconstructed a dataset with 19 categories based on the original carriage dataset, ignoring distinctions in carriage movement modes. Comparing the accuracy of algorithms on these two datasets allows us to evaluate their ability to recognize different carriage models and their effectiveness in handling images from various movement modes. Given that our method achieves 100% recognition accuracy across all categories, we present the average Top-1 Accuracy of subcategories for each model category to save space.
Table 2 presents the accuracy comparison between our method and the Bag-of-Words (BoW) image recognition method. The BoW model combines five feature extraction methods to construct visual word representations of images: AKAZE [34], BRISK [35], ORB [36], SIFT [37], and SuperPoint [38], a deep learning-based feature extraction method known for its superior local feature representation capabilities compared to traditional methods. These five feature extraction methods are commonly used in computer vision and are representative of the field. From Table 2, it is evident that the representation ability of feature descriptors directly determines the classification accuracy of the BoW model—the stronger the feature representation, the higher the image recognition accuracy. The deep learning-based feature extraction method achieves the best recognition accuracy in the BoW model. However, the BoW model, which relies on local features, is sensitive to local feature differences in images. For carriage types with inconsistent images due to complex usage environments, the recognition accuracy is poor (e.g., B23, BX1K, BX1K-1, NX17K, and X70). For categories with high image consistency, the recognition accuracy approaches 100% (e.g., BDL1, C70, NX70A, and C64K).
To further evaluate the impact of local nonlinear distortions on traditional image recognition methods, we conducted a comparative experiment using the template matching algorithm based on normalized cross-correlation, as described in Yoo and Han [39]. This method is commonly employed in image processing tasks for template matching by calculating the correlation between image regions. In our experiment, we applied this method to the 76-class carriage linear array image classification task. The results demonstrate that the normalized cross-correlation method performs poorly on this task, with a significant decline in recognition accuracy. Specifically, the method fails to effectively handle the local nonlinear scale distortions present in the carriage linear array images, and in many cases it fails to produce valid matches altogether.
The poor performance is attributed to the sensitivity of correlation-based methods to image distortions and inconsistencies. In particular, the height inconsistency between carriage images and the scale distortions cause significant changes in the image features, making it difficult for the normalized cross-correlation to produce reliable results. This further highlights the challenges posed by carriage linear array images and demonstrates the limitations of traditional image recognition methods when dealing with images subject to such distortions.
Comparing the classification accuracy of the two datasets in Table 2, the 19-category dataset performs better than the 76-category one. The total number of images in both datasets is the same, but the reduced number of categories increases the diversity of images within each category. This result indicates that the BoW model has some generalization ability for images under different movement modes, which is beneficial for general image recognition tasks. However, for line-scan carriage images, this generalization is undesirable because the goal is to distinguish images based on different movement modes. The fundamental reason for this phenomenon is that the BoW model represents images as unordered collections of feature words, losing spatial information among features. The analysis of the experimental results in Table 2 further verifies that the BoW model is unsuitable for recognizing line-scan carriage images.
Table 3 presents the accuracy comparison between our method and multi-class neural network methods. The six multi-class network models used for comparison are representative of current image classification methods, exhibiting high recognition accuracy on multiple datasets as detailed in Section 2. Compared with the results in Table 2, the multi-class neural networks significantly outperform the BoW method in overall recognition accuracy. The recognition accuracy distribution is consistent with that of the BoW method, with high accuracy for categories with consistent images (e.g., BDL1, C70, NX70A, and C64K). However, for inconsistent image categories, the recognition accuracy still falls short of practical application standards.
Comparing the accuracy performance on the two datasets in Table 3 also shows that the multi-class network models perform significantly better on the 19-category dataset than on the 76-category dataset and outperform the BoW results in Table 2. This indicates that the generalization ability of multi-class network models far exceeds that of the BoW model, primarily due to the enhanced feature representation capabilities of neural networks trained on the carriage dataset. However, the significant accuracy improvement also reveals the limitations of multi-class network models in the 76-category line-scan carriage image recognition task. Despite extensive training, they cannot accurately recognize carriage movement modes for inconsistent image categories. Yet accurate identification of reference images for fault analysis is crucial in train fault detection.
Combining the accuracy statistics from Table 2 and Table 3, our method achieves 100% Top-1 Accuracy on both datasets, significantly outperforming other methods. Neither the BoW model nor the deep learning-based multi-class network models effectively recognize line-scan carriage images under different movement modes. In contrast, our method can accurately recognize carriage types and further distinguish carriage movement modes, ensuring the accurate acquisition of reference images for fault analysis.

4.3. Comparison of Image Recognition Speeds

To ensure fairness in evaluating the image recognition speeds of different algorithms, all algorithms were run on the same computer. The computer configuration is as follows: CPU—i7-11800, 2.3 GHz; GPU—NVIDIA RTX 3050, 8 GB VRAM; RAM—16 GB. The comparison results of image recognition speeds are shown in Table 4.
As observed, the Bag-of-Words (BoW)-based methods have the slowest recognition speeds. This is primarily due to the large size of the line-scan images of train carriages, which makes the feature extraction process time-consuming. Additionally, the total number of extracted feature points also impacts the processing time. Considering these factors, the BoW method using SIFT features is the fastest among the BoW methods, with an average recognition time of 2.274 s per carriage line-scan image. Compared to the two-stage BoW methods, multi-class neural network methods demonstrate significantly faster recognition speeds, mainly due to their end-to-end structure. Among these, the ResMLP model has the fastest inference speed, with an average detection time of 31 ms per image. Our proposed method exhibits the fastest image recognition speed among all methods. For the 76-category recognition task, the average recognition time per image is only 24 ms, showcasing a clear advantage over the other two categories of methods. The efficiency of our method can be attributed to the sparse point cloud registration in the template matching process, which inherently involves minimal computation. Additionally, the high efficiency of the C++ programming language contributes to the speed of our algorithm.

4.4. Algorithm Robustness Evaluation

In our proposed method, the point sets used for template matching are derived from the object detection results of images. Due to the complexity of real-world scenarios, certain types of carriage images may exhibit poor consistency. Significant feature differences among these images can lead to false detections or missed detections during the object detection process, resulting in point sets with extra or missing points for template matching. Although object detection algorithms are minimally affected by nonlinear scale distortions in images (data augmentation techniques used during model training effectively mitigate the impact of scale distortions on object detection), the use of weighted radial basis functions to describe the nonlinear scale distortions in images introduces potential challenges. The iterative optimization process for solving the parameters can become trapped in local optima when there are extra or missing points in the point set or when there are extreme nonlinear scale distortions in the image, leading to incorrect point set registration results and ultimately affecting image recognition accuracy. To evaluate the accuracy and robustness of our proposed method under such conditions, we assessed the algorithm’s recognition accuracy in terms of resistance to distortion, missed detections, and false detections.

4.4.1. Evaluation of Resistance to Local Nonlinear Distortions

To evaluate our method’s ability to resist local nonlinear scale distortions, we generated test point sets with varying levels of local distortion from 0% to 50% in 5% increments for 76 template point sets. The distortion percentage is relative to the horizontal dimension of the image. For each category, 100 sets of test point sets were generated with randomly positioned distortions at each specified distortion level. To realistically simulate the localization error of object detection algorithms, each point in the target point set was randomly offset by up to 20 pixels in both horizontal and vertical directions. Local nonlinear scale distortions were simulated by applying specific scale offsets to random horizontal positions and using cubic polynomial interpolation for local interpolation. Since the point sets of the same carriage type under four movement modes have a unique spatial relationship, their recognition accuracies are essentially the same. Therefore, we calculated the average recognition accuracy by carriage type when reporting accuracy statistics. Figure 6 shows the recognition accuracy of our method under different distortion scales. As expected, the matching accuracy generally decreases with increasing distortion levels. However, the recognition accuracy remains unaffected in the 0% to 10% distortion range. At a 15% distortion level, minor changes in recognition accuracy are observed for some categories. When the distortion level exceeds 25%, the recognition accuracy significantly decreases, but even at a 50% distortion level, the recognition accuracy for all categories remains above 69%. These results demonstrate the excellent robustness of our method in resisting local nonlinear scale distortions in line-scan images of train carriages.
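For reference, one plausible way to generate such distorted test point sets is sketched below; the exact offset profile and cubic polynomial interpolation used by the authors may differ, so this only illustrates the kind of perturbation applied (a smooth local horizontal shift plus up to 20 pixels of random jitter). All names and the span_ratio assumption are ours.

```python
import numpy as np

def simulate_local_distortion(points, level, span_ratio=0.25, max_jitter=20, rng=None):
    """Apply a local nonlinear horizontal distortion of `level` (fraction of image width) plus jitter."""
    rng = np.random.default_rng() if rng is None else rng
    pts = np.asarray(points, dtype=float).copy()
    # detector localization error: random offsets of up to max_jitter pixels in x and y
    pts += rng.uniform(-max_jitter, max_jitter, size=pts.shape)
    # smooth cubic bump around a random horizontal position, strongest at the center, zero at its edges
    width = pts[:, 0].max() - pts[:, 0].min()
    center = rng.uniform(pts[:, 0].min(), pts[:, 0].max())
    t = np.clip(1.0 - np.abs(pts[:, 0] - center) / (span_ratio * width), 0.0, 1.0)
    pts[:, 0] += level * width * t ** 3
    return pts
```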

4.4.2. Evaluation of Resistance to False Detections

False detections in object detection algorithms mainly fall into two categories: incorrect category identification and extraneous detection results. Since our method does not rely on point categories, the first type of false detection does not affect it. Therefore, this section focuses on evaluating the performance of our method under different levels of extraneous detections. To simulate different false detection levels, we generated new test point sets by adding random points to the original test point sets, increasing the total number of points by 5% increments up to 35%. Figure 7 shows the corresponding accuracy evaluation results using the same statistical method as previously described. The results indicate that although the recognition accuracy generally decreases with increasing false detection rates, recognition accuracy is unaffected when the false detection rate is below 10%. At a 15% false detection rate, some categories experience slight declines in recognition accuracy, but the overall accuracy remains high. When the false detection rate exceeds 20%, overall recognition accuracy starts to decline significantly, but the recognition accuracy for most categories remains relatively high. These experimental results demonstrate the robustness of our method in handling image category recognition problems in the presence of false detections.

4.4.3. Evaluation of Resistance to Missed Detections

To further evaluate our method’s performance under different missed detection rates, we generated new test point sets by randomly removing points from the original test point sets, reducing the total number of points by 5% increments up to 35%. Figure 8 shows the corresponding accuracy evaluation results. As the missed detection rate increases, image recognition accuracy shows a decreasing trend. Recognition results are unaffected when the missed detection rate is below 10%. At a 15% missed detection rate, recognition accuracy for some categories slightly decreases. Beyond a 20% missed detection rate, overall recognition accuracy significantly declines, but the accuracy for some categories remains above 90% even at a 35% missed detection rate. These experimental results confirm that our method retains strong image category recognition capabilities in the presence of missed detections.
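The two detection-error perturbations used in Sections 4.4.2 and 4.4.3 (extra random points and randomly removed points) can be reproduced with a sketch like the following; the function names are ours.

```python
import numpy as np

def add_false_detections(points, rate, rng):
    """Simulate false detections: append rate * N random points within the point set's bounding box."""
    n_extra = int(round(rate * len(points)))
    low, high = points.min(axis=0), points.max(axis=0)
    extra = rng.uniform(low, high, size=(n_extra, 2))
    return np.vstack([points, extra])

def drop_missed_detections(points, rate, rng):
    """Simulate missed detections: randomly remove rate * N points from the point set."""
    n_keep = len(points) - int(round(rate * len(points)))
    keep = rng.choice(len(points), size=n_keep, replace=False)
    return points[keep]

# Example: test point sets with false-detection rates from 5% to 35% in 5% increments.
# rng = np.random.default_rng(0)
# perturbed = [add_false_detections(scene_points, r, rng) for r in np.arange(0.05, 0.40, 0.05)]
```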

5. Conclusions

This study introduces an image recognition method tailored specifically for carriage linear array images, which are characterized by extreme aspect ratios, ultra-high resolution, and local nonlinear scale distortions. These inherent properties significantly limit the recognition accuracy of many existing algorithms, particularly because they require substantial computational resources and memory to handle such complex image data. Traditional methods, such as feature-based approaches, often struggle with the sheer scale and variability of these images, resulting in inefficient training and poor performance.
To address these challenges, we reformulated the recognition problem into a template matching task focused on sparse point set alignment. A coordinate transformation using weighted radial basis functions was proposed to handle the local nonlinear scale distortions typically present in linear array images. Additionally, we designed a unique objective function to enhance the similarity between mixed Gaussian distributions of point sets, ensuring accurate parameter determination for the transformation equation, even in cases where point set correspondences are ambiguous or the point sets vary in size.
Extensive experimental data validate the effectiveness of our method: under local nonlinear scale distortions of up to 15%, the carriage image recognition accuracy reaches 100%. Even with a 10% increase or decrease in the number of detected targets, the recognition accuracy remains unaffected. Moreover, identifying the target category from 76 templates takes only 24 ms. This demonstrates the algorithm's exceptional performance in terms of recognition accuracy, processing speed, and robustness. Our method allows for the direct identification of carriage models from line-scan images, eliminating the need for traditional AEI equipment.
However, we acknowledge that as the number of template categories increases, the image recognition processing time will also increase accordingly. This limitation will be addressed in future research as we explore ways to further improve the algorithm’s efficiency.

Author Contributions

Conceptualization, Z.F. and X.P.; methodology, Z.F.; software, Z.F.; validation, Z.F. and X.P.; formal analysis, Z.F.; investigation, Z.F.; resources, Z.F.; data curation, Z.F.; writing—original draft preparation, Z.F.; writing—review and editing, Z.F.; visualization, Z.F.; supervision, X.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Science Fund for Youth Science Foundation of China under Grant 62105015; in part by the Aviation Science Foundation 2023 under Grant 20230046051003; and in part by the Beihang University Frontier Cross Fund 2023.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available on request due to restrictions, e.g., privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chang, L.; Liu, Z.; Shen, Y.; Zhang, G. Novel multistate fault diagnosis and location method for key components of high-speed trains. IEEE Trans. Ind. Electron. 2020, 68, 3537–3547.
  2. Lu, S.; Liu, Z.; Shen, Y. Automatic fault detection of multiple targets in railway maintenance based on time-scale normalization. IEEE Trans. Instrum. Meas. 2018, 67, 849–865.
  3. Fu, Z.; Pan, X.; Zhang, G. Linear array image alignment under nonlinear scale distortion for train fault detection. IEEE Sens. J. 2024, 24, 23197–23211.
  4. Chen, H.; Jiang, B. A review of fault detection and diagnosis for the traction system in high-speed trains. IEEE Trans. Intell. Transp. Syst. 2019, 21, 450–465.
  5. Chandra, M.A.; Bedi, S.S. Survey on SVM and their application in image classification. Int. J. Inf. Technol. 2021, 13, 1–11.
  6. Bosch, A.; Zisserman, A.; Munoz, X. Image classification using random forests and ferns. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
  7. Amato, G.; Falchi, F. kNN based image classification relying on local feature similarity. In Proceedings of the Third International Conference on Similarity Search and Applications, Istanbul, Turkey, 20–21 September 2010; pp. 101–108.
  8. Timofte, R.; Tuytelaars, T.; Van Gool, L. Naive Bayes image classification: Beyond nearest neighbors. In Asian Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2012; pp. 689–703.
  9. Wang, C.; Huang, K. How to use bag-of-words model better for image classification. Image Vis. Comput. 2015, 38, 65–74.
  10. Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning (ICML), PMLR, 2019; pp. 6105–6114.
  11. Tan, M.; Le, Q. EfficientNetV2: Smaller models and faster training. In Proceedings of the International Conference on Machine Learning (ICML), PMLR, 2021; pp. 10096–10106.
  12. Touvron, H.; Bojanowski, P.; Caron, M. ResMLP: Feedforward networks for image classification with data-efficient training. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 5314–5321.
  13. He, K.; Zhang, X.; Ren, S. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  14. Liang, J. Image classification based on RESNET. J. Phys. Conf. Ser. 2020, 1634, 012110.
  15. Cheng, D.; Meng, G.; Cheng, G. SeNet: Structured edge network for sea–land segmentation. IEEE Geosci. Remote Sens. Lett. 2016, 14, 247–251.
  16. Zhang, X.; Zhou, X.; Lin, M. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856.
  17. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
  18. Zheng, L.; Ai, P.; Wu, Y. Building recognition of UAV remote sensing images by deep learning. In Proceedings of the IGARSS 2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 1185–1188.
  19. Hou, X.; Ao, W.; Xu, F. End-to-end automatic ship detection and recognition in high-resolution Gaofen-3 spaceborne SAR images. In Proceedings of the IGARSS 2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 9486–9489.
  20. Sun, Z.; Xiong, B.; Lei, Y. Ship classification in high-resolution SAR images based on CNN regional feature fusion. In Proceedings of the 2021 CIE International Conference on Radar, Haikou, China, 15–19 December 2021; pp. 1445–1449.
  21. Liu, Y.; Chang, M.; Xu, J. High-resolution remote sensing image information extraction and target recognition based on multiple information fusion. IEEE Access 2020, 8, 121486–121500.
  22. Liu, Z.; Gao, G.; Sun, L. HRDNet: High-resolution detection network for small objects. In Proceedings of the 2021 IEEE International Conference on Multimedia and Expo (ICME), Shenzhen, China, 5–9 July 2021; pp. 1–6.
  23. Růžička, V.; Franchetti, F. Fast and accurate object detection in high resolution 4K and 8K video using GPUs. In Proceedings of the 2018 IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, USA, 25–27 September 2018; pp. 1–7.
  24. Li, Y. Research and application of deep learning in image recognition. In Proceedings of the 2022 IEEE 2nd International Conference on Power, Electronics and Computer Applications (ICPECA), Shenyang, China, 21–23 January 2022; pp. 994–999.
  25. Liu, H.; Wang, R.; Shan, S. Deep supervised hashing for fast image retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2064–2072.
  26. Gordo, A.; Almazan, J.; Revaud, J. End-to-end learning of deep visual representations for image retrieval. Int. J. Comput. Vis. 2017, 124, 237–254.
  27. Liu, C.; Yu, G.; Volkovs, M. Guided similarity separation for image retrieval. Adv. Neural Inf. Process. Syst. 2019, 32.
  28. Ramzi, E.; Audebert, N.; Rambour, C. Optimization of rank losses for image retrieval. arXiv 2023, arXiv:2309.08250.
  29. Smeulders, A.W.M.; Worring, M.; Santini, S. Content-based image retrieval at the end of the early years. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1349–1380.
  30. Noh, H.; Araujo, A.; Sim, J. Large-scale image retrieval with attentive deep local features. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 3456–3465.
  31. Henkel, C. Efficient large-scale image retrieval with deep feature orthogonality and hybrid-swin-transformers. arXiv 2021, arXiv:2110.03786.
  32. Lee, C.C.; Chung, P.; Tsai, J. Robust radial basis function neural networks. IEEE Trans. Syst. Man Cybern. B 1999, 29, 674–685.
  33. Ranganathan, A. The Levenberg–Marquardt algorithm. Tutorial on LM Algorithm 2004, 11, 101–110.
  34. Alcantarilla, P.F.; Nuevo, J.; Bartoli, A. Fast explicit diffusion for accelerated features in nonlinear scale spaces. In Proceedings of the British Machine Vision Conference (BMVC), Bristol, UK, September 2013.
  35. Leutenegger, S.; Chli, M.; Siegwart, R. BRISK: Binary robust invariant scalable keypoints. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2548–2555.
  36. Rublee, E.; Rabaud, V.; Konolige, K. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
  37. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  38. DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperPoint: Self-supervised interest point detection and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 224–236.
  39. Yoo, J.C.; Han, T.H. Fast normalized cross-correlation. Circuits Syst. Signal Process. 2009, 28, 819–843.
Figure 1. Train fault detection system. (a) Train line-scan image acquisition system; (b) the reference image library contains standard images of all carriage types without scale distortion, each annotated with key component information for subsequent fault analysis; (c) acquired line-scan images may have nonlinear scale distortions in the horizontal direction; (d) correspondence between horizontal pixel coordinates in (c) and standard reference image coordinates; (e) main processing steps in train fault detection.
Figure 2. Carriage identification equipment. (a) Carriage identification equipment based on microwave communication; (b) carriage identification equipment based on visual character recognition.
Figure 3. Line-scan images of different types of carriages and their corresponding key component detection results.
Figure 4. Framework of the image recognition algorithm. (a) Shows four different types of images corresponding to the same carriage model. (b) Illustrates the transformation from the image to a point set. (c) Depicts the process of retrieving the matching image category from the template image library.
Figure 5. Comparison of point sets and their GMM probability densities before and after RBF transformation with optimized weights. (a,d) show the intra-category transformation within 1-LRAB. (b,e) highlight the transformation from 1-LRBA to 1-LRAB. (c,f) present the transformation from 1-RLAB to 1-LRAB.
Figure 6. Recognition accuracy under different distortion scales.
Figure 7. Recognition accuracy under different false detection rates.
Figure 8. Recognition accuracy under different missed detection rates.
Table 1. Impact of Σ on mean recognition accuracy (%) and mean matching ratio (%).
| Σ | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |
| Mean Recognition Accuracy (%) | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 87.50 |
| Mean Matching Ratio (%) | 99.77 | 99.84 | 99.71 | 99.96 | 100.00 | 97.46 | 96.86 | 96.66 | 96.25 | 95.32 |
Table 2. Comparison of recognition accuracy: proposed method vs. Bag-of-Words method.
76-Category:
| Class | AKAZE [34] | BRISK [35] | ORB [36] | SIFT [37] | SuperPoint [38] | NCC [39] | Ours |
| B22 | 0.087 | 0.095 | 0.11 | 0.13 | 0.122 | 0.059 | 1.0 |
| B22-1 | 0.397 | 0.225 | 0.23 | 0.27 | 0.32 | 0.238 | 1.0 |
| B23 | 0.14 | 0.167 | 0.112 | 0.1275 | 0.17 | 0.093 | 1.0 |
| B23-1 | 0.445 | 0.502 | 0.477 | 0.455 | 0.477 | 0.421 | 1.0 |
| BDL1 | 1.0 | 1.0 | 0.975 | 0.998 | 0.985 | 0.942 | 1.0 |
| BH1 | 0.16 | 0.09 | 0.057 | 0.237 | 0.45 | 0.149 | 1.0 |
| BX1K | 0.112 | 0.109 | 0.108 | 0.152 | 0.252 | 0.097 | 1.0 |
| BX1K-1 | 0.082 | 0.129 | 0.102 | 0.087 | 0.112 | 0.052 | 1.0 |
| C64K | 1.0 | 0.982 | 1.0 | 1.0 | 0.93 | 0.932 | 1.0 |
| C70E | 0.243 | 0.043 | 0.058 | 0.23 | 0.02 | 0.069 | 1.0 |
| C70 | 1.0 | 0.968 | 1.0 | 0.998 | 0.995 | 0.942 | 1.0 |
| JSQ5 | 0.273 | 0.175 | 0.168 | 0.188 | 0.335 | 0.178 | 1.0 |
| JSQ6 | 0.089 | 0.164 | 0.196 | 0.199 | 0.139 | 0.107 | 1.0 |
| JSQ6-1 | 0.163 | 0.155 | 0.143 | 0.155 | 0.203 | 0.114 | 1.0 |
| NX17K | 0.133 | 0.233 | 0.165 | 0.118 | 0.118 | 0.103 | 1.0 |
| NX70A | 0.998 | 0.995 | 0.963 | 0.998 | 0.995 | 0.940 | 1.0 |
| NX70 | 0.515 | 0.503 | 0.548 | 0.470 | 0.483 | 0.454 | 1.0 |
| X68BK | 0.5 | 0.573 | 0.543 | 0.455 | 0.473 | 0.459 | 1.0 |
| X70 | 0.1 | 0.113 | 0.143 | 0.280 | 0.018 | 0.081 | 1.0 |
| Ave. ACC | 0.391 | 0.379 | 0.373 | 0.397 | 0.399 | 0.338 | 1.0 |

19-Category:
| Class | AKAZE [34] | BRISK [35] | ORB [36] | SIFT [37] | SuperPoint [38] | Ours |
| B22 | 0.133 | 0.255 | 0.225 | 0.69 | 0.355 | 1.0 |
| B22-1 | 0.488 | 0.278 | 0.183 | 0.71 | 0.773 | 1.0 |
| B23 | 0.44 | 0.518 | 0.365 | 0.227 | 0.304 | 1.0 |
| B23-1 | 0.688 | 0.82 | 0.988 | 0.30 | 0.361 | 1.0 |
| BDL1 | 1.0 | 0.715 | 0.59 | 0.295 | 0.613 | 1.0 |
| BH1 | 0.308 | 0.113 | 0.14 | 0.41 | 0.19 | 1.0 |
| BX1K | 0.444 | 0.438 | 0.385 | 0.644 | 1.0 | 1.0 |
| BX1K-1 | 0.179 | 0.185 | 0.185 | 0.35 | 0.448 | 1.0 |
| C64K | 1.0 | 0.63 | 0.565 | 0.69 | 0.68 | 1.0 |
| C70E | 0.8 | 1.0 | 1.0 | 0.68 | 0.57 | 1.0 |
| C70 | 1.0 | 0.578 | 0.595 | 0.843 | 0.503 | 1.0 |
| JSQ5 | 0.388 | 0.315 | 0.401 | 0.688 | 0.995 | 1.0 |
| JSQ6 | 0.181 | 0.544 | 0.628 | 0.818 | 0.74 | 1.0 |
| JSQ6-1 | 0.315 | 0.3325 | 0.45 | 0.2525 | 0.435 | 1.0 |
| NX17K | 0.14 | 0.388 | 0.36 | 0.308 | 0.463 | 1.0 |
| NX70A | 1.0 | 1.0 | 0.963 | 0.998 | 0.593 | 1.0 |
| NX70 | 0.53 | 0.683 | 0.538 | 0.373 | 0.123 | 1.0 |
| X68BK | 0.728 | 0.363 | 0.18 | 0.413 | 0.303 | 1.0 |
| X70 | 0.16 | 0.793 | 0.668 | 0.605 | 1.0 | 1.0 |
| Ave. ACC | 0.515 | 0.51 | 0.481 | 0.538 | 0.549 | 1.0 |
Table 3. Comparison of recognition accuracy: proposed method vs. multi-classification models.
76-Category:
| Class | EfficientNetV2 [11] | ResMLP [12] | ResNet [13] | SENet [15] | ShuffleNet [16] | ViT [17] | Ours |
| B22 | 0.123 | 0.513 | 0.513 | 0.600 | 0.128 | 0.253 | 1.0 |
| B22-1 | 0.475 | 0.380 | 0.380 | 0.348 | 0.438 | 0.338 | 1.0 |
| B23 | 0.298 | 0.488 | 0.488 | 0.393 | 0.158 | 0.383 | 1.0 |
| B23-1 | 0.500 | 0.503 | 0.503 | 0.580 | 0.498 | 0.535 | 1.0 |
| BDL1 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| BH1 | 0.215 | 0.308 | 0.308 | 0.220 | 0.085 | 0.120 | 1.0 |
| BX1K | 0.149 | 0.423 | 0.423 | 0.460 | 0.168 | 0.402 | 1.0 |
| BX1K-1 | 0.139 | 0.309 | 0.309 | 0.386 | 0.094 | 0.215 | 1.0 |
| C64K | 1.0 | 1.0 | 1.0 | 1.0 | 0.985 | 1.0 | 1.0 |
| C70E | 0.033 | 0.260 | 0.260 | 0.160 | 0.213 | 0.035 | 1.0 |
| C70 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| JSQ5 | 0.788 | 0.510 | 0.510 | 0.623 | 0.300 | 0.625 | 1.0 |
| JSQ6 | 0.366 | 0.528 | 0.528 | 0.558 | 0.145 | 0.319 | 1.0 |
| JSQ6-1 | 0.250 | 0.268 | 0.268 | 0.378 | 0.158 | 0.433 | 1.0 |
| NX17K | 0.248 | 0.158 | 0.158 | 0.238 | 0.133 | 0.118 | 1.0 |
| NX70A | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| NX70 | 0.520 | 0.513 | 0.513 | 0.513 | 0.513 | 0.513 | 1.0 |
| X68BK | 0.500 | 0.503 | 0.503 | 0.500 | 0.500 | 0.500 | 1.0 |
| X70 | 0.555 | 0.583 | 0.583 | 0.553 | 0.133 | 0.178 | 1.0 |
| Ave. ACC | 0.482 | 0.539 | 0.539 | 0.553 | 0.402 | 0.472 | 1.0 |

19-Category:
| Class | EfficientNetV2 [11] | ResMLP [12] | ResNet [13] | SENet [15] | ShuffleNet [16] | ViT [17] | Ours |
| B22 | 0.165 | 0.193 | 0.680 | 0.787 | 0.573 | 0.655 | 1.0 |
| B22-1 | 0.673 | 0.440 | 0.818 | 0.448 | 0.328 | 0.168 | 1.0 |
| B23 | 0.563 | 0.406 | 0.405 | 0.993 | 0.660 | 0.790 | 1.0 |
| B23-1 | 0.750 | 0.120 | 0.635 | 0.545 | 0.553 | 0.693 | 1.0 |
| BDL1 | 1.0 | 0.520 | 0.678 | 0.750 | 0.663 | 0.838 | 1.0 |
| BH1 | 0.228 | 0.513 | 0.713 | 0.815 | 0.808 | 0.011 | 1.0 |
| BX1K | 0.149 | 0.703 | 0.418 | 0.753 | 0.413 | 0.305 | 1.0 |
| BX1K-1 | 0.160 | 0.548 | 0.680 | 0.785 | 0.500 | 0.453 | 1.0 |
| C64K | 1.0 | 0.730 | 0.728 | 0.238 | 0.185 | 0.703 | 1.0 |
| C70E | 0.033 | 0.693 | 1.0 | 0.873 | 0.423 | 0.455 | 1.0 |
| C70 | 1.0 | 0.443 | 1.0 | 0.765 | 0.137 | 0.565 | 1.0 |
| JSQ5 | 0.908 | 0.485 | 0.693 | 0.560 | 0.965 | 0.708 | 1.0 |
| JSQ6 | 0.468 | 0.533 | 0.643 | 0.778 | 0.340 | 0.680 | 1.0 |
| JSQ6-1 | 0.355 | 0.573 | 0.500 | 1.0 | 0.513 | 1.0 | 1.0 |
| NX17K | 0.248 | 0.260 | 0.608 | 0.398 | 0.990 | 0.398 | 1.0 |
| NX70A | 1.0 | 0.245 | 0.613 | 0.440 | 0.645 | 0.618 | 1.0 |
| NX70 | 0.523 | 0.343 | 0.659 | 0.928 | 0.570 | 0.280 | 1.0 |
| X68BK | 0.750 | 0.490 | 0.860 | 0.270 | 0.298 | 0.250 | 1.0 |
| X70 | 0.630 | 0.513 | 0.298 | 0.680 | 0.618 | 0.613 | 1.0 |
| Ave. ACC | 0.558 | 0.460 | 0.664 | 0.674 | 0.536 | 0.536 | 1.0 |
Table 4. Comparison of average recognition speeds among different methods.
| Method | AKAZE | BRISK | ORB | SIFT | SuperPoint | EfficientNetV2 | ResMLP | ResNet | SENet | ShuffleNet | ViT | Ours |
| Time (s) | 3.263 | 4.586 | 5.312 | 2.274 | 4.762 | 0.082 | 0.031 | 0.054 | 0.063 | 0.139 | 0.082 | 0.024 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
