Article

Charge-Coupled Frequency Response Multispectral Inversion Network-Based Detection Method of Oil Contamination on Airport Runway

1 College of Mechanical Engineering, Xi’an University of Science and Technology, Xi’an 710054, China
2 School of Rehabilitation, Kunming Medical University, Kunming 650500, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(12), 3716; https://doi.org/10.3390/s24123716
Submission received: 16 April 2024 / Revised: 29 May 2024 / Accepted: 5 June 2024 / Published: 7 June 2024
(This article belongs to the Section Environmental Sensing)

Abstract:
Aircraft failures can result in the leakage of fuel, hydraulic oil, or other lubricants onto the runway during landing or taxiing. Damage to fuel tanks or oil lines during hard landings or accidents can also contribute to these spills. Further, improper maintenance or operational errors may leave oil traces on the runway before take-off or after landing. Identifying oil spills in airport runway videos is crucial to flight safety and accident investigation. Advanced image processing techniques can overcome the limitations of conventional RGB-based detection, which struggles to differentiate between oil spills and sewage due to their similar coloration; given that oil and sewage have distinct spectral absorption patterns, precise detection can instead be performed on multispectral images. In this study, we developed a method for spectrally enhancing RGB images of oil spills on airport runways to generate HSI images, facilitating oil spill detection in conventional RGB imagery. To this end, we employed the MST++ spectral reconstruction network model to reconstruct RGB images into multispectral images, yielding improved accuracy in oil detection compared with other models. Additionally, we utilized the Faster R-CNN oil spill detection model, with the reconstructed HSI images yielding a 5% increase in Intersection over Union (IoU). Moreover, compared with RGB images, this approach significantly enhanced detection accuracy and completeness by 25.3% and 26.5%, respectively. These findings demonstrate the superior precision and accuracy of spectrally reconstructed HSI images in oil spill detection compared with traditional RGB images. With the spectral reconstruction technique, we can effectively exploit the spectral information inherent in oil spills, thereby enhancing detection accuracy. Future research could delve deeper into optimization techniques and conduct extensive validation in real airport environments.
In conclusion, this spectral reconstruction-based technique for detecting oil spills on airport runways offers a novel and efficient approach that upholds both efficacy and accuracy. Its wide-scale implementation in airport operations holds great potential for improving aviation safety and environmental protection.

1. Introduction

When an airport accident occurs, it is extremely important to be able to reconstruct the accident in a timely manner. In this process, the full use of video information during take-off or landing is of great importance in analyzing the causes of accidents and preventing potential ones. According to historical data and research, the processes of aircraft take-off and landing are phases at high risk of in-flight accidents. Identifying oil spills on the runway is one of the key factors in analyzing the causes of these events [1]. In some cases, aircraft may leave oil stains on the runway before take-off or after landing due to improper maintenance or operational errors, representing a potential factor contributing to an accident. The main sources of oil spills in airports are fuel spills from aircraft parking and landing and fuel system leaks from ground service vehicles. Although runway video information can provide a visual record of, for example, oil spills, there are still some challenges to its practical use, such as the similarity in color between oil and sewage, as well as other factors. However, oil and sewage have different absorption patterns in spectra of different wavelengths, so they present discernible differences in multispectral images, allowing for their accurate detection [2]. According to International Civil Aviation Organization (ICAO) Circular 355, the presence of deposits such as oil and mud between aircraft tires and the runway poses a significant hazard to aircraft during take-off and landing and a serious threat to public safety. Based on the above, in this study, we designed a new method to generate multispectral images by spectrally enhancing traditional RGB images of oil spills on airport runways for detection. This method can be applied not only to reconstruct airport accidents but also to keep airport runways safe. 
By using spectral enhancement techniques, we are able to accurately identify and locate oil spills in images so that timely cleaning and maintenance measures can be taken to ensure the safety of the aircraft landing and take-off processes. The application of this method could improve accuracy in airport accident reconstruction and help to ensure runway safety, providing a higher overall level of air transport safety [3]. By making full use of video information relative to take-off and landing and obtaining accurate data on oil spills, we will be able to better analyze the causes of accidents and identify potential accident factors, thus improving flight safety.
Currently, several researchers have made noteworthy contributions to oil spill detection. Zahra Ghorbani and colleagues [4] successfully utilized multi-class convolutional neural networks for offshore oil spill detection, while Mohammed S. Ozigis and others [5] employed multispectral images along with fuzzy forest and random forest methods to detect oil spills. In the context of airport runway oil spill detection, the prevalent methods primarily rely on RGB images captured with RGB cameras. However, it is important to note that these devices solely collect spectral information within the red, green, and blue bands; consequently, they fail to directly capture the specific spectral characteristics relevant to oil spill detection, thereby limiting the precise detection and differentiation of oil spills. Moreover, in runway oil spill detection, environmental factors introduce further interference in RGB images, such as the presence of sewage and other forms of spillage, as well as the potential similarity in coloration between sewage and oil. Consequently, environmental factors can impede accurate detection when relying solely on RGB images. In contrast, multispectral images offer a wealth of spectral information [6], enhanced differentiation capabilities [7], and improved quantitative analysis abilities [8]. They exhibit higher levels of detection accuracy and reliability. Multispectral images have found extensive applications in diverse fields, including agriculture [9], food detection [10], plant pathology analysis [11], material identification [12], and even oil spill detection on the sea surface [13]. Despite the wide-ranging applications of multispectral imagery, the direct implementation of multispectral cameras for detecting oil spills on airport runways presents certain drawbacks. Firstly, acquiring multispectral images can be time-consuming, and the equipment required for multispectral cameras is typically bulkier and less portable. 
Given that swift deployment and detection of foreign objects are essential in airport environments, the direct use of multispectral cameras for airport runway oil spill detection is constrained. Moreover, if an accident occurs at an airport, it is difficult to obtain oil information from RGB images, and so is tracing the accident back to its origin; however, because of the specific spectral characteristics of oil, the integration of multispectral and RGB imaging might offer a solution [14]. In light of these considerations, in this study, we established a method whereby RGB images of airport runway oil spills, acquired with RGB cameras, undergo spectral amplification to generate multispectral images specifically for oil spill detection. This approach presents superior detection accuracy and efficiency, thereby enabling the use of ordinary RGB cameras for airport runway oil spill detection.
Airport runway oil spill detection plays a crucial role in ensuring aviation safety and facilitating prompt accident investigations. Traditional methods based on RGB image analysis often struggle to accurately distinguish between oil spills and water stains due to their similar visual appearances. To address this limitation, we propose a novel airport runway oil spill detection method that leverages spectral reconstruction techniques to enhance the discriminative power of the captured images.
The proposed method consists of the following key steps: Firstly, RGB image data of oil spills on airport runways are acquired with a camera, and the camera’s spectral response curve is obtained. Subsequently, a spectral reconstruction dataset is generated by utilizing the response curve and is then used to train a spectral reconstruction network model. Next, the acquired RGB images are reconstructed into multispectral images, and oil spill detection is performed on both the original and reconstructed images with the Faster R-CNN model. Finally, evaluation metrics such as Intersection over Union (IoU) are employed to assess the differences in detection performance before and after reconstruction, validating the effectiveness and practicality of the proposed method.
The main contributions of this study in the field of airport runway oil spill detection are as follows:
  • An innovative airport runway oil spill detection method that effectively overcomes the limitations of traditional RGB image detection in distinguishing oil spills from water stains by spectrally reconstructing RGB images into multispectral images, thereby achieving high-precision oil spill detection, is proposed.
  • The high-performance MST++ spectral reconstruction network model is adopted in combination with the Faster R-CNN oil spill detection model, resulting in significant improvements in accuracy and completeness in oil spill detection. Compared with directly utilizing RGB images, the proposed method increases Intersection over Union (IoU) by 5%, while detection accuracy and completeness are enhanced by 25.3% and 26.5%, respectively. Moreover, successful detection is achieved in various scenarios.
  • The proposed method offers higher practicality and convenience. By capturing images with ordinary RGB cameras and then generating multispectral images through spectral reconstruction for detection, the method significantly improves detection accuracy while greatly enhancing equipment portability, making it more suitable for the practical application requirements of airport runways.

2. Materials and Methods

2.1. Preparation of the Dataset

2.1.1. Characteristics of Oil Contamination

Oil spillage on airport pavement is a prevalent issue and predominantly comprises aviation kerosene, motor oil, lubricants, and similar substances. These oil contaminants often exhibit strikingly similar colors when captured in RGB images, posing challenges for traditional visual inspection and detection methods and for effective cleanup. However, different types of oil spills exhibit varying levels of reflectivity. This fundamental insight forms the theoretical basis for leveraging multispectral images in the detection of oil spills on airport pavement. Lai, X. [15] conducted extensive testing and analysis of oil spills, including paraffin, motor oil, and lubricating oil, in oceanic environments. The results revealed discernible differences in the reflectance properties of various oil spill types, as depicted in Figure 1.
Moreover, it is worth noting that the reflectivity of different oil types on the road surface exhibits variations, presenting a promising avenue for leveraging multispectral imagery in the detection of oil spills on airport runways. Consequently, the utilization of multispectral imagery holds the potential to enhance the accuracy and dependability of identification and detection processes in the context of airport runway oil spillage. This breakthrough opens up new possibilities for improving efficiency in oil spill detection in airports, thus paving the way for enhanced environmental stewardship in aviation settings.

2.1.2. Data Collection

The dataset utilized in this study was collected from an airport located in Shanxi, China, and encompassed a diverse range of samples representing airport runway oil spills, including instances of aviation paraffin, motor oil, and lubricants. To optimize the reconstruction speed and accuracy of RGB images depicting oil spills on airport pavement, the data acquisition process was specifically conducted during periods of clear weather conditions. To account for the intricate nature of airport runway oil spillage, the dataset was further enriched by incorporating a mixture of aviation paraffin, motor oil, and lubricating oil, thereby encompassing a comprehensive representation of oil spill types commonly encountered on airport runways. Furthermore, given the varied composition of airport pavement, separate datasets were created for both asphalt and concrete surfaces, ensuring a comprehensive coverage of diverse pavement materials. Figure 2 provides a visual representation of some selected samples from the dataset, offering a glimpse into its richness and diversity.

2.1.3. Processing of Datasets

In general, if the light intensity is high, the photos of a dataset may be distorted, which may affect the reconstruction results. To ensure the reliability of a dataset, two reference colors (white and dark) can be used to calibrate the multispectral data according to Equation (1), with the aim of eliminating the effect of lighting conditions on the data to make the latter comparable and consistent under different conditions.
\[ I_c = \frac{I_r - I_d}{I_w - I_d} \tag{1} \]
where I_r is the raw image, I_w the white-reference image, and I_d the dark-reference image.
In this study, the collected images of the dataset were 1280 × 1300 pixels, but when they were used as input for spectral reconstruction, the network model reported errors due to the large image size. To reduce the load on the network, we cropped the acquired dataset images to 1000 × 1000 pixels. By doing so, the input requirements of the network model were met, and the computational load was reduced, improving the operational stability of the reconstruction network.
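As a minimal sketch (not the authors' code), the white/dark calibration of Equation (1) and the subsequent cropping step can be written in NumPy as follows; the array shapes and function names are illustrative assumptions:

```python
import numpy as np

def calibrate(raw, white, dark):
    # Equation (1): I_c = (I_r - I_d) / (I_w - I_d). The white and dark
    # references remove the illumination term so that captures taken under
    # different lighting conditions become comparable.
    num = raw.astype(np.float64) - dark
    den = np.clip(white.astype(np.float64) - dark, 1e-6, None)  # guard against /0
    return np.clip(num / den, 0.0, 1.0)

def center_crop(img, size=1000):
    # Crop the 1280 x 1300 captures down to the 1000 x 1000 network input.
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]
```

For example, a scene whose raw reading sits halfway between the dark and white references calibrates to 0.5 regardless of the absolute light level.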

2.2. Spectral Reconstruction Methods

Multispectral imaging, characterized by its broader bandwidth and reduced susceptibility to visible light interference [16], presents a remarkable opportunity for target detection due to its ability to capture richer spectral information. Consequently, multispectral imaging holds great promise for the detection of contaminants in airports, including oil spills on runways. However, it is important to acknowledge that the adoption of multispectral cameras poses certain limitations. These cameras tend to be expensive and less portable, making them unsuitable for the swift and efficient detection of foreign objects in airport environments. To overcome these challenges, in this study, we designed a novel approach that involves reconstructing multispectral images from a single RGB image, which can then be applied to the detection of oil spills on airport runways. This method enables the utilization of existing RGB cameras, offering a cost-effective and practical solution for enhancing oil spill detection without the need for specialized multispectral equipment.

2.2.1. Sampling Imaging Principle

An RGB image is an image containing three bands. An RGB image is obtained by combining red, green, and blue colors of light to obtain other colors [17]. When the light source is daylight, the spectrum is continuous, and the output of the RGB image is given by the following Equation (2):
\[ p_k = \int_{\Omega} o(\lambda)\, c_k(\lambda)\, d\lambda \tag{2} \]
where k indexes the R, G, and B channels; λ denotes the wavelength; o(λ) is the incident light spectrum; c_k(λ) is the spectral sensitivity of channel k; and Ω is the visible spectral range.
An RGB camera is an imaging device that records images based on the optical imaging principle: the target’s image is transmitted through an optical lens and received by a charge-coupled device (CCD) [18]. The light source emits electromagnetic waves of certain wavelengths, and different objects reflect these waves with different reflectivity. When the light source irradiates the target object, the reflection spectrum of the object’s surface is captured by the camera and converted by its imaging sensor into an RGB image. Ignoring other influencing factors, the formation of the RGB image can be expressed by the following Equation (3):
\[ I = \int E(\lambda)\, S(\lambda)\, CSS(\lambda)\, d\lambda \tag{3} \]
where E(λ) is the spectral energy distribution of the light source, S(λ) is the spectral reflectance of the object, CSS(λ) is the spectral response function of the camera, and λ is the wavelength of the electromagnetic wave, which typically ranges over the 400–700 nm visible band.
Mathematically, the imaging process of the RGB camera can be regarded as the product of the spectrum of the object surface and the spectral response function of the RGB camera [19]. Since the product E(λ)S(λ) in Equation (3) is the spectral value of the object surface, Equation (3) can be rewritten as the following Equation (4):
\[ I = \int_{\Omega} HSI_n(\lambda) \times CSS_n(\lambda)\, d\lambda \tag{4} \]
where HSI denotes the object surface spectrum and CSS the spectral response function of the RGB camera. Equation (4) makes clear that the key prerequisite for spectral reconstruction is obtaining the spectral response curve of the RGB camera.
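Assuming the spectra are sampled at discrete bands (e.g., the 31 bands used later in this study), the integral in Equation (4) reduces to a band-wise sum. A minimal NumPy sketch, with hypothetical array shapes rather than the authors' implementation:

```python
import numpy as np

def hsi_to_rgb(hsi, css):
    # Discrete form of Equation (4): integrate (sum) the band-wise product
    # of the surface spectrum and the camera response over wavelength.
    # hsi: (H, W, B) reflectance cube sampled at B bands;
    # css: (B, 3) camera spectral response, one column per R/G/B channel.
    rgb = np.tensordot(hsi, css, axes=([2], [0]))  # (H, W, 3)
    return rgb / max(rgb.max(), 1e-8)              # scale into [0, 1]
```

This is also the relation used to simulate the RGB half of a spectral reconstruction dataset from hyperspectral images and a known response curve.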

2.2.2. Acquisition of Camera Spectral Response Curves

When imaging with an RGB camera, a number of factors affect the camera’s spectral response curve, such as automatic white balance (AWB) and exposure time [20]. These factors in turn affect the subsequent training of the network. In particular, AWB alters the captured RGB values, leading to uncertainty in the mapping between RGB values and the corresponding spectra. Therefore, during the calibration process, we set the RGB gain to 1:1:1 and fixed the exposure time of the camera.
The experimental system platform developed in this study is depicted in Figure 3, with Figure 3a showcasing the components and arrangement. The platform comprised essential elements such as a standardized light source, a convex lens, a prism, and an RGB camera, which collectively facilitate the acquisition of spectral images. The experimental process for acquiring these spectral images is illustrated in Figure 3b. Notably, the spectral acquisition procedure necessitates a controlled environment within a dark room to minimize external light interference and ensure accurate and reliable results. Meticulously maintaining a dark room environment during the spectral acquisition process was crucial to the success of this study.
To establish the color bar for the CCD camera under varying light intensities, an experimental platform was meticulously constructed. Each color block on the bar corresponded to the reflection or transmission characteristics of light at a particular wavelength. Based on precise measurements, the color bars were obtained, and the spectral data were subsequently transformed into a standardized spectral response curve. This process involved carefully processing and curve-fitting the spectral data, resulting in the derivation of the CCD camera’s spectral response curve, graphically represented in Figure 4, providing a comprehensive visual representation of the camera’s sensitivity to different wavelengths of light. Such a curve assisted in accurately interpreting and analyzing the acquired spectral images during the subsequent stages of this study.
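As an illustration of the final processing step, a standardized response curve can be derived from per-wavelength readings by dividing out the source intensity and smoothing with a low-order polynomial fit. The data below are stand-ins, not the measurements behind Figure 4:

```python
import numpy as np

# Hypothetical calibration readings: the camera's R-channel output for each
# color block and the source intensity at that block's wavelength (nm).
wavelengths = np.arange(400, 701, 20)
channel_out = np.exp(-0.5 * ((wavelengths - 600) / 40.0) ** 2)  # stand-in data
source_intensity = np.ones_like(channel_out)

# Divide out the illuminant, then smooth with a polynomial fit to obtain a
# peak-normalized response curve. The wavelength axis is rescaled first so
# the fit stays well conditioned.
response = channel_out / source_intensity
xn = (wavelengths - 550.0) / 150.0
coeffs = np.polyfit(xn, response, deg=6)
curve = np.polyval(coeffs, xn)
curve = curve / curve.max()
```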

2.3. Spectral Reconstruction and Runway Oil Detection Framework

The overall framework of this research study is depicted in Figure 5, providing a comprehensive overview of the methodology. The initial step involved acquiring the spectral response curve of the RGB camera employed in this investigation, as determined through the aforementioned experiments. Subsequently, the oil spill spectral reconstruction network model was trained on the original RGB data. In this process, the original RGB image of an oil spill is inputted into the oil spill spectral reconstruction network, resulting in the generation of a reconstructed oil spill hyperspectral imaging (HSI) image. Then, both the reconstructed and original images are utilized as inputs for the oil spill detection network. By comparing the outcomes of the respective tests, the final detection results are obtained. This comprehensive framework enabled the evaluation and comparison of detection performance between the reconstructed HSI image and the conventional RGB image, thereby providing valuable insights into the efficacy of the proposed spectral reconstruction method for airport runway oil spill detection.

2.3.1. Spectral Reconstruction Neural Network

The main difficulty in reconstructing multispectral images from RGB images is the loss of information. Traditional spectral reconstruction methods, such as Principal Component Analysis (PCA) and Non-negative Matrix Factorization (NMF) [21], can reduce the dimensionality of and compress spectral data but are sensitive to noise, limited by linearity assumptions, and low in model complexity. For certain datasets, such as oil spills on airport runways, the performance of traditional methods may not be satisfactory.
In recent years, the field of computer vision has witnessed the widespread adoption of deep learning techniques, including their application in the domain of spectral reconstruction. Through the utilization of extensive training datasets, deep learning networks have demonstrated the capacity to learn the intricate mapping relationships between RGB and multispectral imagery [22]. Though convolutional neural networks (CNNs) and generative adversarial networks (GANs) have served as popular models, CNN-based approaches for spectral reconstruction face challenges such as data imbalance and computational complexity [23]. However, a noteworthy development in this arena has been the rise of Transformer models, which have achieved remarkable success in natural language processing (NLP) and have also found application in computer vision. In contrast to recurrent neural networks (RNNs) and convolutional neural networks (CNNs), Transformer models introduce self-attention mechanisms to tackle issues such as information loss and gradient vanishing [24]. Notably, the multi-head self-attention (MSA) mechanism within Transformers excels at capturing long-range dependencies and non-local self-similarity, thus mitigating the limitations associated with CNN-based spectral reconstruction algorithms.
Nonetheless, employing the Transformer model directly for spectral reconstruction presents certain challenges. Firstly, the multi-head self-attention (MSA) mechanism entails high computational complexity. Secondly, MSA is constrained by the input sequence length [25]. To address these issues, this study adopts the Multi-stage Spectral-wise Transformer (MST++) [26], built on the Transformer framework. The model structure is shown in Figure 6. Leveraging the sparse spatial distribution and self-similarity of multispectral image signals, the approach incorporates a spectral-wise multi-head self-attention mechanism (S-MSA) as its fundamental unit, the spectral-wise attention block (SAB), which efficiently reconstructs spectral information from RGB images. By stacking SABs, a single-stage spectral-wise Transformer (SST) is constructed. The SST represents the input RGB image as a three-dimensional tensor and decomposes it into spectral and spatial dimensions. It follows a U-shaped architecture with an encoder and a decoder, both comprising multiple SABs: the encoder extracts spectral information from the RGB image, and the decoder reconstructs the extracted information into a multispectral image. Finally, the MST++ model cascades multiple SSTs, which work jointly to refine the spectral reconstruction.
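The spectral-wise self-attention at the heart of this design can be sketched in a few lines of NumPy. This single-head toy version (the real S-MSA additionally normalizes Q and K and learns a per-head temperature, which is omitted here) shows why attending over channels rather than pixels keeps the attention matrix small:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spectral_self_attention(x, wq, wk, wv):
    # Single-head sketch of spectral-wise attention: each spectral channel
    # (not each pixel) acts as a token, so the attention matrix is C x C
    # rather than (HW) x (HW), keeping the cost manageable for large images.
    h, w, c = x.shape
    tokens = x.reshape(h * w, c)                      # columns = spectral tokens
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv   # (HW, C) each
    attn = softmax(k.T @ q / np.sqrt(h * w), axis=0)  # (C, C) spectral attention
    return (v @ attn).reshape(h, w, c)
```

With identity projection matrices, the output is simply each channel re-expressed as an attention-weighted mixture of all channels.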

2.3.2. Neural Network for Runway Oil Detection

Presently, RGB image-based detection stands as the dominant method for identifying oil spills on airport tarmac. Nevertheless, this method is susceptible to various factors, such as lighting conditions, color variations of the ground, viewpoint differences, and limited data, which can lead to false positives or missed detection. In contrast, the utilization of multispectral imagery offers a promising solution to overcome these challenges effectively. By capturing information beyond the visible spectrum, multispectral imaging offers enhanced discrimination and detection capabilities, thereby mitigating the aforementioned issues associated with RGB-based detection.
In this study, we employed the Faster R-CNN algorithm, a deep learning-based approach, for the detection of oil spills in multispectral images of airport environments. Faster R-CNN comprises four core modules: Feature Extraction Network, Region Proposal Network (RPN), Generation of Region of Interest (ROI), and Classification and Regression [27]. The Feature Extraction Network extracts relevant features from the input image by employing convolutional, ReLU activation, and pooling layers, which collectively capture high-level representations of the input image. The Region Proposal Network generates multiple Regions of Interest (ROIs) based on the extracted feature map; each candidate region is assigned a probability value and subsequently classified. The ROI generation module takes the output regions from the RPN and the feature maps from the Feature Extraction Network as input, combines the region features from both sources, and feeds them to a fully connected network for classification. The Classification and Regression module takes all the feature maps as input and outputs both the object category and its position within the image, employing bounding-box regression to refine the object’s position and improve localization accuracy. Figure 7 illustrates the architecture of Faster R-CNN, showcasing the interconnectedness of its components.
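To illustrate how the RPN proposes candidate regions, the following NumPy sketch generates the per-cell anchor boxes it scores; the scales and ratios are the common defaults for Faster R-CNN, not values confirmed by this paper:

```python
import numpy as np

def make_anchors(base=16, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    # The candidate boxes an RPN scores at one feature-map cell: one anchor
    # per (scale, aspect-ratio) pair, all centered on the cell. Returned as
    # (x1, y1, x2, y2) rows; the RPN then classifies and regresses each one.
    cx = cy = base / 2.0
    anchors = []
    for s in scales:
        area = float(base * s) ** 2
        for r in ratios:                 # r = height / width
            w = np.sqrt(area / r)
            h = w * r
            anchors.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(anchors)
```

Sliding this fixed set of nine anchors across every cell of the feature map is what lets the RPN cover objects of varying size and shape, including elongated oil streaks.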

2.4. Model Training

In this study, MATLAB R2022a, Anaconda3 (python3.6), and PyTorch libraries were used for the implementation of the spectral reconstruction algorithms. The training and testing of the spectral reconstruction network were performed on an Intel i7-12700k CPU (3.60 GHz) (Intel Corporation, headquartered in Santa Clara, CA, USA) with 32 GB RAM (Corsair Components, Inc., based in Fremont, CA, USA) and NVIDIA RTX3080 (CUDA 11.4) (Nvidia Corporation, also located in Santa Clara, CA, USA).
The training of the spectral reconstruction network model in this study was not based on a multispectral camera for data acquisition but instead on the NTIRE 2022 Spectral Dataset [28], which consists of 1000 hyperspectral images and their corresponding RGB images and covers a wide range of scenes, such as urban, rural, and natural landscapes. The hyperspectral images had a spatial resolution of 512 × 482 and a spectral resolution of 31 bands ranging from 400 to 700 nm. The RGB images were simulated from the hyperspectral images by using the spectral response functions of a standard digital camera.
To ensure a robust evaluation of the model’s performance and generalization ability, we randomly split the dataset into a training set and a validation set in a ratio of 9:1, which provided a sufficient amount of data for model training while reserving an independent subset for model validation and parameter tuning. The random splitting process was repeated 10 times to minimize the potential bias introduced by a single split. During training, the RGB images were normalized and rescaled to the range [0, 1]. The Adam optimizer was employed for parameter optimization, with values of β1 = 0.9 and β2 = 0.999. To control the learning rate, the cosine annealing method was implemented, with the rate initially set to 0.004. If the loss did not exhibit significant decay, the training process was halted. The evaluation metrics loss, MRAE, and RMSE were employed to assess the performance of the model.
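The cosine annealing schedule mentioned above (PyTorch ships it as torch.optim.lr_scheduler.CosineAnnealingLR) can be written out explicitly; a sketch using the 0.004 initial rate from the text:

```python
import numpy as np

def cosine_lr(step, total_steps, lr_max=0.004, lr_min=0.0):
    # Cosine annealing: the learning rate starts at lr_max and follows half
    # a cosine period down to lr_min over the full training run, decaying
    # slowly at first, fastest in the middle, and slowly again at the end.
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + np.cos(np.pi * step / total_steps))
```

At the midpoint of training the rate is exactly halfway between the two endpoints, e.g. 0.002 for the values above.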
In the oil detection network, the input consists of the reconstructed hyperspectral imaging (HSI) image, which is obtained as the output of the spectral reconstruction network. The HSI images were divided into a training set and a test set in a ratio of 9:1, following a similar procedure to that employed for the spectral reconstruction dataset. The choice of a 9:1 split ratio for both the spectral reconstruction and oil detection tasks follows the common practice in deep learning of allocating a larger portion of data for training while reserving a smaller subset for validation and testing [29]. This ratio balances model training and evaluation, providing sufficient data to learn representative features while still assessing the model’s performance on unseen data. The repetition of random splitting further enhances the reliability and reproducibility of the results by reducing the impact of data split variations [30]. Considering the size of the HSI image, patches of size 3 × 3 were utilized to extract local features. The initial learning rate was set to 0.003, and the cross-entropy loss function was employed to measure the discrepancy between predicted and ground-truth labels. By leveraging these settings, we aimed to employ the oil detection network to effectively identify and classify oil spills in hyperspectral images.
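The repeated 9:1 random split described above might be implemented as follows; the function and variable names are illustrative, not from the authors' code:

```python
import numpy as np

def repeated_split(n_samples, n_repeats=10, train_frac=0.9, seed=0):
    # Repeat the 9:1 random train/validation split: each repetition draws a
    # fresh permutation of the sample indices, so metrics averaged over the
    # repetitions are less sensitive to any single unlucky split.
    rng = np.random.default_rng(seed)
    cut = int(n_samples * train_frac)
    return [(idx[:cut], idx[cut:])
            for idx in (rng.permutation(n_samples) for _ in range(n_repeats))]
```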

2.5. Evaluation Indicators

In this study, the mean relative absolute error (MRAE) and root mean square error (RMSE) are selected as evaluation metrics for spectral reconstruction [31,32]. MRAE measures the average relative error between the reconstructed spectra and the actual spectra, as defined below. The MRAE value directly reflects the prediction error of the spectral reconstruction, and a smaller MRAE value indicates better spectral reconstruction performance [33]. RMSE, in turn, quantifies the discrepancy between the reconstructed spectra and the actual spectra, as also defined below. A smaller RMSE value indicates a smaller prediction error in spectral reconstruction, demonstrating superior performance of the spectral reconstruction model. However, when using RMSE for model evaluation, it is important to consider whether the spectral ranges of the compared objects are the same; different spectral ranges can lead to variations in the RMSE value, potentially impacting the evaluation results [34]. These evaluation indices provide valuable insights into the accuracy and effectiveness of the spectral reconstruction model in this study.
\[ \mathrm{MRAE} = \frac{1}{N} \sum_{i=1}^{N} \frac{\left| I_R^{\,i} - I_G^{\,i} \right|}{I_G^{\,i}} \]
\[ \mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( I_R^{\,i} - I_G^{\,i} \right)^2} \]
In the above equations, N is the total number of pixels, and I_R^i and I_G^i denote the ith pixel value of the reconstructed spectral image and the real image, respectively.
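A direct NumPy implementation of the two metrics (the small eps guarding against division by zero is our addition, not from the paper):

```python
import numpy as np

def mrae(rec, gt, eps=1e-8):
    # Mean relative absolute error: per-pixel absolute error scaled by the
    # ground-truth value, averaged over all N pixels.
    return float(np.mean(np.abs(rec - gt) / (np.abs(gt) + eps)))

def rmse(rec, gt):
    # Root mean square error over all N pixels.
    return float(np.sqrt(np.mean((rec - gt) ** 2)))
```

Because MRAE normalizes by the ground truth, it is scale-invariant, whereas RMSE depends on the spectral range of the compared images, which is why matching ranges matter when comparing RMSE values.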

3. Experiments and Discussions

3.1. Spectral Reconstruction Quality Assessment

To evaluate the effectiveness of the proposed spectral reconstruction network, MST++, in this study, we compared it with several advanced reconstruction networks, including HSCNN+, HRNET, and MIRNET, on the NTIRE 2022 Spectral Dataset. The evaluation metrics employed were loss, mean relative error (MRAE), and root mean square error (RMSE). The experimental results are presented in Table 1, showcasing the performance of different models on the test set. From the results in Table 1, it is evident that MST++ achieved a significantly lower MRAE score (0.1595) than HSCNN+ (0.1727), HRNET (0.1685), and MIRNET (0.1945), with reductions of 7.6%, 5.3%, and 18%, respectively. Additionally, the RMSE of MST++ was reported as 0.0194, exhibiting decreases of 9.7%, 4.7%, and 15.3%, respectively, compared with the other network models. These results clearly demonstrate that the MST++ model outperformed the other models in terms of accuracy, strongly supporting its superiority in achieving the best results in spectral reconstruction.
To provide a more intuitive demonstration of each network model's spectral reconstruction performance and to validate MST++, we randomly selected an image from the validation set to showcase the reconstruction results. Because this image was not used during training, it gives an unbiased view of the model's generalization ability and its effectiveness in reconstructing spectral information. The reconstruction results are presented in Figure 8. Four wavelengths, namely, 500 nm, 550 nm, 650 nm, and 700 nm, were chosen to display the multispectral reconstruction outcomes; these span the visible spectrum up to the near-infrared boundary, allowing the model's performance to be assessed across different spectral regions [35,36]. The last row shows the ground-truth spectral images for comparison and verification.
To further examine the model's reconstruction performance on different objects within the same image, we selected two validation points: one from the main subject area (Position 1, coordinates (323, 250)) and another from the background region (Position 2, coordinates (439, 169)), representing areas with distinct spectral characteristics. Analyzing points from both the subject and the background showcases the model's ability to reconstruct objects with varying spectral features and provides a detailed visualization of how well the MST++ network preserves the spectral information of various objects. Figure 9 compares the pixel-level reconstructions produced by MST++ and the three other models at the selected points. The MST++ results resemble the ground-truth spectra more closely than those of the other three models, demonstrating its superior pixel-level reconstruction of spectral information for both the main subject area and the background region.
To quantitatively evaluate and clarify the superior performance of the MST++ model in pixel-level spectral reconstruction, we calculated and compared the RMSE and MRAE values of the reconstruction results obtained by different models at the selected points with respect to the ground truth, as shown in the bar chart in Figure 9. For Position 1 (main subject area), the RMSE value of the MST++ model was 0.0356, which was 0.0178, 0.0317, and 0.0499 lower than those of the MIRNET, HRNET, and HSCNN+ models, respectively. The MRAE value of the MST++ model at Position 1 was 0.0900, which was 0.0605, 0.0856, and 0.1329 lower than those of the MIRNET, HRNET, and HSCNN+ models, respectively. For Position 2 (background region), the RMSE value of the MST++ model was 0.0128, which was 0.0013, 0.0094, and 0.0262 lower than those of the MIRNET, HRNET, and HSCNN+ models, respectively. The MRAE value of the MST++ model at Position 2 was 0.0645, which was 0.0266, 0.0529, and 0.1445 lower than those of the MIRNET, HRNET, and HSCNN+ models, respectively.
These results demonstrate that the MST++ model outperformed the other three models in reconstruction performance at both positions. Moreover, the superiority of the MST++ model was more pronounced in the main subject area compared with the background region, with larger differences in RMSE and MRAE values between the other models and the MST++ model. This further validates the stronger capability and stability of the MST++ model in capturing and reconstructing the spectral features of different objects, making it suitable for integration into multispectral oil spill detection models.
To validate the effectiveness of the proposed spectral reconstruction network on the target application, a portion of the dataset containing oil stains on concrete pavement was used as input to the MST++ model. Multispectral images in four bands were selected for analysis, as illustrated in Figure 10. In the reconstructed images, the spectral details of the oil spill are effectively preserved, and the spill is clearly distinguishable from the road surface. This ability to retain important spectral details and accurately differentiate an oil spill from the pavement underscores the MST++ model's potential for efficient and reliable multispectral oil spill detection.
To further verify the spectral reconstruction on mixed contaminants, three images were randomly selected from the mixed-oil dataset. Figure 11 shows RGB images of aviation paraffin with lubricating oil, aviation paraffin with engine oil, and engine oil with lubricating oil; these were reconstructed, and multispectral images in four bands were selected. In the different bands, the different types of oil spillage show clear distinctions from one another and from the road surface. This provides a theoretical basis for applying the proposed oil spill detection model to airport pavement monitoring.

3.2. Comparison of Oil Detection Models in Different Scenarios

Based on the results of our spectral reconstruction experiments, we determined that the MST++ model is highly suitable for reconstructing the spectral information of oil spills on airport runways. We therefore employed MST++ to transform the acquired RGB images of airport runway oil spills into HSI images. To evaluate the effectiveness of the reconstructed oil spill HSI images, we conducted detection performance tests in various airport pavement scenarios, including single oil spills on both concrete and asphalt pavement as well as complex oil spill scenes on the same pavement. The oil spill detection model took the acquired RGB images and the reconstructed HSI images as inputs, generating two sets of detection results, which we compared in a control test. Figure 12 shows a selection of original, unreconstructed data from four different scenes. These original RGB oil spill images were input into the spectral reconstruction model to obtain the reconstructed HSI images of oil spillage on the airport pavement, and both image sets were then fed to the airport pavement oil spill detection model to yield the final detection results. By comparing the detection results obtained from the original RGB images and the reconstructed HSI images, we assessed the effectiveness of the HSI-based method in accurately detecting oil spills on airport runways.
In this study, we developed an oil spill detection network model specifically designed for airport pavement, combining feature extraction and target classification to achieve accurate detection. For compatibility with the detection model, both the RGB images and the reconstructed HSI images are resized to 800 × 800 pixels. The feature extraction network consists of convolutional layers that extract oil-related features in the four scenes, specifically those of aviation paraffin, engine oil, and lubricating oil. A region proposal network is then employed for classification, and region-of-interest pooling is used for regression. To evaluate recognition accuracy, we adopted the IOU (Intersection over Union) metric, which quantifies the degree of overlap between the predicted target box location and the actual target box location and thus measures the accuracy of the model's predictions [37]. By combining feature extraction, target classification, and IOU-based evaluation, the model accurately identifies and classifies oil spills on airport pavement, supporting maintenance and safety efforts in airport environments.
The acquired original RGB image and the reconstructed multispectral image of the same scene were used for oil detection, and the detection results are shown in Figure 13 below.
By comparing the predicted oil spill locations generated by the oil spill detection model with the actual oil spill location information obtained during data annotation and calculating the Intersection over Union (IOU), we evaluated the accuracy of the model. IOU is computed as the ratio of the area of intersection between two bounding boxes to the area of their union. The results of this evaluation are presented in Figure 14, where we can observe the calculated IOU values, which indicate the degree of overlap between the predicted and actual oil spill locations. Higher IOU values indicate better alignment between the predicted and ground-truth bounding boxes, suggesting higher accuracy in detecting and localizing oil spills on airport pavement.
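For axis-aligned bounding boxes in (x1, y1, x2, y2) form, this intersection-over-union ratio can be sketched as follows (a generic illustration of the metric, not the paper's exact implementation):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle (empty if boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union area = sum of both box areas minus the double-counted intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes score 1.0, disjoint boxes score 0.0, and partial overlaps fall in between, which is why higher IOU values indicate tighter alignment between predicted and ground-truth boxes.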
In Figure 14, the yellow border represents the position of the real target box obtained from the labeled data, the blue border indicates the predicted target box detected by the RGB image model, and the red border represents the overlap between the two. The large overlap indicates a high degree of alignment between the predicted and real target boxes, validating the detection model's ability to precisely locate oil spills in the images. Reliable localization of this kind is crucial to implementing appropriate measures to address and mitigate oil spills on airport pavement.
IOU statistics were computed for each of the 60 images in the RGB and HSI prediction sets; the results are shown in Table 2 below. For all three types of oil spill, the predicted regions covered the actual spill locations, with IOU scores above 60%. The average IOU values for the RGB images were 0.825, 0.814, and 0.842, for an overall average of 0.827; for the HSI images, they were 0.855, 0.816, and 0.921, for an overall average of 0.864. These data show that the detection network model detects oil spills more effectively in HSI images.
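The per-set statistics above can be reproduced from a list of per-image IOU values with a small helper; the default 0.6 threshold mirrors the 60% criterion mentioned above (the function name and interface are our own, for illustration only):

```python
def summarize_ious(ious, threshold=0.6):
    """Return (mean IOU, fraction of images whose IOU exceeds the threshold)."""
    if not ious:
        raise ValueError("empty IOU list")
    mean_iou = sum(ious) / len(ious)
    hit_rate = sum(1 for v in ious if v > threshold) / len(ious)
    return mean_iou, hit_rate
```

Running it once per oil type over the 60-image prediction set yields the per-class averages in Table 2; averaging those gives the overall mean IOU for each modality.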
The level of hazard posed by different oil spill incidents on airport pavement can vary significantly, so detection accuracy based on IOU alone may not adequately reflect the effectiveness of detecting each type of oil spill. We therefore also used the detection rate and recognition rate as evaluation metrics. Table 3 below presents the detection rate, recognition rate, and detection accuracy for both the RGB images and the HSI images, capturing the differences in detection performance between the two image types.
As Table 3 shows, the detection rates for aviation paraffin, engine oil, and lubricating oil in the HSI images were 85.6%, 82.5%, and 93.2%, respectively, significantly higher than the corresponding rates in the RGB images (56.7%, 54.9%, and 74.0%). The overall detection rates in the HSI images were likewise notably higher than those in the RGB images. These results indicate that spectrally reconstructed HSI images are better suited than traditional RGB images for accurately identifying and recognizing oil spills on airport pavement, offering improved outcomes and higher detection accuracy.
In this study, we demonstrated that oil spills on airport pavement can be identified from low-cost RGB images: the images are input into the proposed spectral reconstruction network to obtain the corresponding HSI images, which are then used for detection. Given the high cost of acquiring original HSI images, the complexity of operating multispectral cameras, and the varied composition of airport pavement, this approach offers a more practical method for detecting oil spills on airport pavement.

4. Conclusions

Airport runway oil spill detection plays a crucial role in ensuring aviation safety and facilitating accident investigations. Traditional RGB image-based detection methods struggle to distinguish between oil spills and water stains due to their similar color appearances in the visible spectrum. However, oil spills and water stains exhibit distinct absorption characteristics in multispectral imagery, enabling precise oil spill detection based on hyperspectral images. In this study, we established a low-cost and efficient framework for airport runway oil spill detection based on spectral reconstruction and oil spill detection.
This research study was divided into two parts. The first part focused on the spectral reconstruction of airport runway RGB images. Low-cost RGB cameras were used to quickly acquire raw data on oil spills on airport runways, which were then input into a spectral reconstruction network to obtain hyperspectral images of the spills. In this phase, we employed the MST++ spectral reconstruction network model to effectively reconstruct RGB images into hyperspectral images. Compared with the other models, namely, HSCNN+, HRNET, and MIRNET, MST++ reduced the mean relative absolute error (MRAE) and root mean square error (RMSE) by 0.0090–0.0350 and 0.0010–0.0033, respectively, achieving excellent results of 0.1595 and 0.0194. Notably, MST++ outperformed the other three models in reconstruction quality regardless of whether the selected coordinates corresponded to the main oil spill area or the background region, and its advantage was more pronounced in the main oil spill area. This indicates that MST++ can reconstruct hyperspectral images more accurately, thereby helping to improve accuracy in oil spill detection.
The second part of the research focused on oil spill detection. We performed detection on the reconstructed hyperspectral oil spill images using the Faster R-CNN model. Compared with RGB images, the hyperspectral-image-based method increased the Intersection over Union (IOU) value by 5%, while detection accuracy and completeness improved by 25.3% and 26.5%, respectively. Successful detection was achieved in various scenarios, validating the practicality and effectiveness of the proposed method. By leveraging spectral reconstruction to mine the spectral information of oil spills, the method overcomes the limitations of traditional RGB image detection and effectively improves detection accuracy. It offers low cost and high efficiency, presenting broad application prospects in airport operations and contributing to aviation safety and environmental protection. Furthermore, if the dataset were further expanded and enriched, the method could also prove valuable in other transportation domains and infrastructure maintenance, providing new solutions for safety monitoring and maintenance in related fields.

Author Contributions

Conceptualization, Z.L. and S.Z.; methodology, Z.L. and L.W.; software, X.L.; validation, X.L. and Z.X.; formal analysis, Z.L. and Z.X.; investigation, L.W. and X.L.; resources, S.Z.; data curation, L.W. and X.L.; writing—original draft preparation, Z.L. and L.W.; writing—review and editing, Z.L.; visualization, Z.L. and X.L.; supervision, S.Z. and Z.X.; project administration, S.Z. and Z.X.; funding acquisition, S.Z. and Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Shaanxi Provincial Key Research and Development Program project no. 2020ZDLGY04-06, Qin Chuan Yuan “Scientist + Engineer” Team Construction project no. 2023KXJ-249.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hao, G.; Wang, Y. Identification and Evaluation of Oil or Fuel Contaminants in Airport Asphalt Pavements. J. Mater. Civ. Eng. 2022, 34, 04022265. [Google Scholar] [CrossRef]
  2. John, R.C.; Essien, J.P.; Akpan, S.B.; Okpokwasilli, G.C. Polycyclic aromatic hydrocarbon-degrading bacteria from aviation fuel spill site at Ibeno, Nigeria. Bull. Environ. Contam. Toxicol. 2012, 88, 1014–1019. [Google Scholar] [CrossRef]
  3. Michel, J.; Fingas, M. Oil Spills: Causes, consequences, prevention, and countermeasures. In Fossil Fuels: Current Status and Future Directions; World Scientific Publishing Company: Singapore, 2016; pp. 159–201. [Google Scholar]
  4. Ghorbani, Z.; Behzadan, A.H. Monitoring offshore oil pollution using multi-class convolutional neural networks. Environ. Pollut. 2021, 289, 117884. [Google Scholar] [CrossRef]
  5. Ozigis, M.S.; Kaduk, J.D.; Jarvis, C.H.; da Conceição Bispo, P.; Balzter, H. Detection of oil pollution impacts on vegetation using multifrequency SAR, multispectral images with fuzzy forest and random forest methods. Environ. Pollut. 2020, 256, 113360. [Google Scholar] [CrossRef]
  6. Liu, X.; Gherbi, A.; Li, W.; Wei, Z.; Cheriet, M. TaijiGNN: A new cycle-consistent generative neural network for high-quality bidirectional transformation between RGB and multispectral domains. Sensors 2021, 21, 5394. [Google Scholar] [CrossRef]
  7. He, Y.; Jia, Z.; Yang, J.; Kasabov, N.K. Multispectral image change detection based on single-band slow feature analysis. Remote Sens. 2021, 13, 2969. [Google Scholar] [CrossRef]
  8. Bozsányi, S.; Farkas, K.; Bánvölgyi, A.; Lőrincz, K.; Fésűs, L.; Anker, P.; Zakariás, S.; Jobbágy, A.; Lihacova, I.; Lihachev, A.; et al. Quantitative Multispectral Imaging Differentiates Melanoma from Seborrheic Keratosis. Diagnostics 2021, 11, 1315. [Google Scholar] [CrossRef]
  9. Kim, S.; Bae, S.H.; Kim, M.K.; Kang, L.H. Deep Learning Performance Comparison Using Multispectral Images and Vegetation Index for Farmland Classification. Int. J. Aeronaut. Space Sci. 2023, 24, 1533–1545. [Google Scholar] [CrossRef]
  10. Wei, Y.; He, Y.; Li, X. Tea moisture content detection with multispectral and depth images. Comput. Electron. Agric. 2021, 183, 106082. [Google Scholar] [CrossRef]
  11. Tirelli, P.; Marchi, M.; Calcante, A.; Vitalini, S.; Iriti, M.; Borghese, N.A.; Oberti, R. Multispectral image analysis for grapevine diseases automatic detection in field conditions. In Proceedings of the International Conference of Agricultural Engineering CIGR-AgEng, Valencia, Spain, 8–12 July 2012; pp. 8–12. [Google Scholar]
  12. Pronti, L.; Ferrara, P.; Uccheddu, F.; Pelagotti, A.; Piva, A. Identification of pictorial materials by means of optimized multispectral reflectance image processing. In Proceedings of the 2015 IEEE International Workshop on Information Forensics and Security (WIFS), Rome, Italy, 16–19 November 2015; pp. 1–6. [Google Scholar]
  13. Liu, S.; Chi, M.; Zou, Y.; Samat, A.; Benediktsson, J.A.; Plaza, A. Oil spill detection via multitemporal optical remote sensing images: A change detection perspective. IEEE Geosci. Remote Sens. Lett. 2017, 14, 324–328. [Google Scholar] [CrossRef]
  14. Cao, X.; Yue, T.; Lin, X.; Lin, S.; Yuan, X.; Dai, Q.; Brady, D.J. Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world. IEEE Signal Process Mag. 2016, 33, 95–108. [Google Scholar] [CrossRef]
  15. Lai, Q.; Xie, Y.; Guo, P.; Wang, M.; Wang, C.; Tan, J. Multi-spectral bidirectional reflectance characteristics of crude oils. Infrared Phys. Technol. 2020, 109, 103420. [Google Scholar] [CrossRef]
  16. Bousefsaf, F.; Tamaazousti, M.; Said, S.H.; Michel, R. Image completion using multispectral imaging. IET Image Proc. 2018, 12, 1164–1174. [Google Scholar] [CrossRef]
  17. Fu, Y.; Zheng, Y.; Zhang, L.; Huang, H. Spectral reflectance recovery from a single RGB image. IEEE Trans. Comput. Imaging 2018, 4, 382–394. [Google Scholar] [CrossRef]
  18. Gao, X.; Wang, T.; Yang, J.; Tao, J.; Qiu, Y.; Meng, Y.; Li, Y. Deep-learning-based hyperspectral imaging through a RGB camera. J. Electron. Imaging 2021, 30, 053014. [Google Scholar] [CrossRef]
  19. Tran, V.L.; Lin, H.Y. A structured light RGB-D camera system for accurate depth measurement. Int. J. Opt. 2018, 2018, 8659847. [Google Scholar] [CrossRef]
  20. Garcia, J.E.; Girard, M.B.; Kasumovic, M.; Petersen, P.; Wilksch, P.A.; Dyer, A.G. Differentiating biological colours with few and many sensors: Spectral reconstruction with RGB and hyperspectral cameras. PLoS ONE 2015, 10, e0125817. [Google Scholar] [CrossRef]
  21. Ulloa, C.C.; Garrido, L.; Del Cerro, J.; Barrientos, A. Autonomous victim detection system based on deep learning and multispectral imagery. Mach. Learn. Sci. Technol. 2023, 4, 015018. [Google Scholar] [CrossRef]
  22. Zhao, Y.; Zhang, X.; Feng, W.; Xu, J. Deep learning classification by ResNet-18 based on the real spectral dataset from multispectral remote sensing images. Remote Sens. 2022, 14, 4883. [Google Scholar] [CrossRef]
  23. Jameel, S.M.; Hashmani, M.A.; Rehman, M.; Budiman, A. Adaptive CNN ensemble for complex multispectral image analysis. Complexity 2020, 2020, 8361989. [Google Scholar] [CrossRef]
  24. Wang, X.; Li, Y.; Qiu, J.; Chen, G.; Liu, H.; Liao, B.; Yao, X. Retroprime: A diverse, plausible and transformer-based method for single-step retrosynthesis predictions. Chem. Eng. J. 2021, 420, 129845. [Google Scholar] [CrossRef]
  25. Liu, Y.; Li, X.; Xu, Z.; Hua, Z. BSFormer: Transformer-Based Reconstruction Network for Hyperspectral Band Selection. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5. [Google Scholar] [CrossRef]
  26. Cai, Y.; Lin, J.; Lin, Z.; Wang, H.; Zhang, Y.; Pfister, H.; Van Gool, L. Mst++: Multi-stage spectral-wise transformer for efficient spectral reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 745–755. [Google Scholar]
  27. Chen, X.; Gupta, A. An implementation of faster rcnn with study for region sampling. arXiv 2017, arXiv:1702.02138. [Google Scholar]
  28. Arad, B.; Timofte, R.; Yahel, R.; Morag, N.; Bernat, A.; Cai, Y.; Lin, J.; Lin, Z.; Wang, H.; Zhang, Y.; et al. Ntire 2022 spectral recovery challenge and data set. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 863–881. [Google Scholar]
  29. Xu, Y.; Goodacre, R. On splitting training and validation set: A comparative study of cross-validation, bootstrap and systematic sampling for estimating the generalization performance of supervised learning. J. Anal. Test. 2018, 2, 249–262. [Google Scholar] [CrossRef]
  30. Ying, X. An overview of overfitting and its solutions. J. Phys. Conf. Ser. 2019, 1168, 022022. [Google Scholar] [CrossRef]
  31. Mu, K.; Zhang, Z.; Qian, Y.; Liu, S.; Sun, M.; Qi, R. SRT: A spectral reconstruction network for GF-1 PMS data based on transformer and ResNet. Remote Sens. 2022, 14, 3163. [Google Scholar] [CrossRef]
  32. Zou, C.; Wei, M. Cluster-based deep convolutional networks for spectral reconstruction from RGB images. Neurocomputing 2021, 464, 342–351. [Google Scholar] [CrossRef]
  33. Wang, W.; Wang, J. Double ghost convolution attention mechanism network: A framework for hyperspectral reconstruction of a single RGB image. Sensors 2021, 21, 666. [Google Scholar] [CrossRef]
  34. Zhao, J.; Kumar, A.; Banoth, B.N.; Marathi, B.; Rajalakshmi, P.; Rewald, B.; Guo, W. Deep-learning-based multispectral image reconstruction from single natural color RGB image—Enhancing UAV-based phenotyping. Remote Sens. 2022, 14, 1272. [Google Scholar] [CrossRef]
  35. Fingas, M.; Brown, C. Review of oil spill remote sensing. Mar. Pollut. Bull. 2014, 83, 9–23. [Google Scholar] [CrossRef]
  36. Gholizadeh, M.H.; Melesse, A.M.; Reddi, L. A comprehensive review on water quality parameters estimation using remote sensing techniques. Sensors 2016, 16, 1298. [Google Scholar] [CrossRef]
  37. Fang, F.; Li, L.; Zhu, H.; Lim, J.H. Combining faster R-CNN and model-driven clustering for elongated object detection. IEEE Trans. Image Process. 2019, 29, 2052–2065. [Google Scholar] [CrossRef]
Figure 1. Oil characteristics.
Figure 2. Partial dataset.
Figure 3. Spectral response curve acquisition platform. (a) Component display diagram of the platform. (b) Schematic diagram of the experimental collection process.
Figure 4. Spectral response curve acquisition. The coloured curves represent the wavelength response values of the red, green, and blue RGB channels, respectively.
Figure 5. Schematic diagram of the oil spectral reconstruction and oil detection network architecture.
Figure 6. MST++ network structure diagram. (a) Multi-stage spectral-wise Transformer. (b) Single-stage spectral-wise Transformer. (c) Spectral-wise attention block. (d) Feed-forward network.
Figure 7. Structure of the Faster R-CNN network.
Figure 8. Comparison of spectral recovery of 4 bands in an RGB image.
Figure 9. Reconstructed pixel results at coordinates p1 (323, 250) and p2 (439, 169), and the RMSE and MRAE error histograms calculated from the model reconstruction results and the ground truth.
Figure 10. The effect of oil reconstruction.
Figure 11. Reconstruction effect of different oil stains in different bands.
Figure 12. RGB images of oil in different scenes.
Figure 13. Schematic illustration of the effect of RGB and HSI image detection of oil on airport pavement in multiple scenarios.
Figure 14. Results of oil IOUs in multiple scenarios.
Table 1. Spectral reconstruction evaluation indicator results.
Model | Loss | MRAE | RMSE
HSCNN+ | 0.2528 | 0.1727 | 0.0215
HRNET | 0.2519 | 0.1685 | 0.0204
MIRNET | 0.2484 | 0.1945 | 0.0227
MST++ | 0.2415 | 0.1595 | 0.0194
Table 2. Mean IOU statistics of the airport pavement oil HSI and RGB image prediction sets.
Oil Contamination Type | Mean IOU (HSI) | Mean IOU (RGB)
Aviation kerosene | 0.855 | 0.825
Engine oil | 0.816 | 0.814
Lubricating oil | 0.921 | 0.842
Overall average | 0.864 | 0.827
Table 3. Detection accuracy and completeness statistics for the prediction sets of HSI and RGB images of oil contamination on airport pavements.
Oil Category | HSI Accuracy Rate | HSI Detection Rate | RGB Accuracy Rate | RGB Detection Rate
Aviation kerosene | 0.856 | 0.826 | 0.567 | 0.574
Engine oil | 0.825 | 0.855 | 0.549 | 0.560
Lubricating oil | 0.932 | 0.964 | 0.740 | 0.960

Zhao, S.; Luo, Z.; Wang, L.; Li, X.; Xing, Z. Charge-Coupled Frequency Response Multispectral Inversion Network-Based Detection Method of Oil Contamination on Airport Runway. Sensors 2024, 24, 3716. https://doi.org/10.3390/s24123716


