Article

INSANet: INtra-INter Spectral Attention Network for Effective Feature Fusion of Multispectral Pedestrian Detection

1 Department of Software, Sejong University, Seoul 05006, Republic of Korea
2 Department of Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea
3 NAVER LABS, Seongnam 13561, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2024, 24(4), 1168; https://doi.org/10.3390/s24041168
Submission received: 14 January 2024 / Revised: 2 February 2024 / Accepted: 8 February 2024 / Published: 10 February 2024
(This article belongs to the Section Optical Sensors)

Abstract

Pedestrian detection is a critical task for safety-critical systems, but detecting pedestrians is challenging in low-light and adverse weather conditions. Thermal images can be used to improve robustness by providing complementary information to RGB images. Previous studies have shown that multi-modal feature fusion using convolution operations can be effective, but such methods rely solely on local feature correlations, which can degrade detection performance. To address this issue, we propose a novel attention-based fusion network, referred to as INSANet (INtra-INter Spectral Attention Network), that captures global intra- and inter-spectral information. It consists of intra- and inter-spectral attention blocks that allow the model to learn mutual spectral relationships. Additionally, we identified an imbalance in the multispectral dataset caused by several factors and designed an augmentation strategy that mitigates concentrated distributions and enables the model to learn the diverse locations of pedestrians. Extensive experiments demonstrate the effectiveness of the proposed methods, which achieve state-of-the-art performance on the KAIST dataset and LLVIP dataset. Finally, we conduct a regional performance evaluation to demonstrate the effectiveness of our proposed network in various regions.

1. Introduction

Pedestrian detection, which involves predicting bounding boxes to locate pedestrians in an image, has long been studied due to its utility in various real-world applications, such as autonomous vehicles, video surveillance, and unmanned aerial vehicles [1,2,3,4]. In particular, robust pedestrian detection in challenging scenarios is essential in autonomous driving applications, since it is directly related to human safety. However, modern RGB-based pedestrian detection methods often fail to operate reliably in challenging environments characterized by low illumination, rain, and fog [5,6,7,8]. To alleviate this problem, several methods [5,9,10] have emerged that leverage a thermal camera as a sensor complementary to the RGB camera already in use. Thermal cameras offer visual cues in challenging environments by capturing the long-wavelength radiation emitted by subjects, thereby overcoming the limitations of RGB cameras in complex conditions.
To achieve successful multispectral pedestrian detection, it is important to consider three key factors: enhancing individual spectral features, understanding the relationships between inter-spectral features, and effectively aggregating these features. Building upon these principles, diverse multispectral pedestrian detection approaches have emerged, including single/multi-scale feature fusion [11,12,13,14,15,16] as well as iterative fusion-and-refinement methods [17,18]. These approaches have achieved impressive results with novel fusion techniques. However, most previous methods rely on convolutional layers to enhance the modality-specific features and capture the correlations between them. Due to the limited receptive field of such convolution layers, given their small kernel sizes, they have trouble capturing the long-range spatial dependencies of both intra- and inter-spectral images.
Recently, transformer-based fusion methods [19,20] that enhance the representation of each spectral feature map to improve the multispectral feature fusion have emerged. These methods capture the complementary information between multispectral images by employing an attention mechanism that assigns importance to input sequences by considering their relationships. While existing approaches achieve satisfactory detection results, they still have the disadvantage of neglecting or inadequately addressing the inherent relationship among intra-modality features.
In addition, we observed that detection performance is restricted by the imbalanced distribution of locations where pedestrians appear. This imbalanced distribution frequently occurs in both multispectral [5,10] and single-spectral thermal pedestrian detection datasets [21,22]. To analyze this phenomenon, we plot the distribution of the centers of annotated pedestrians in the KAIST multispectral dataset and the LLVIP dataset in Figure 1. As shown in the yellow square in Figure 1a, pedestrian appearances are concentrated in specific regions biased toward the right side. This stems from the fact that the KAIST dataset was acquired under right-hand traffic conditions, making it difficult to obtain sufficient views of pedestrians on the left side. In particular, pedestrian counts become intensely imbalanced in road scenarios, where images were collected along arterial roads with sharply divided sidewalks and traffic lanes (as shown in Figure 1b). As observed in Figure 1c, the phenomenon of pedestrian concentration persists even though the LLVIP dataset was captured from a video surveillance camera angle. To mitigate the distribution imbalance, it is common practice to employ standard geometric data augmentations such as cropping and flipping. However, even when applying these data augmentation methods, we found that the over-appearance problem persisted in some regions.
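To make this analysis concrete, the short sketch below shows one way such a density map could be produced with a Gaussian KDE over pedestrian box centers, in the spirit of Figure 1. It is only an illustration: the synthetic right-skewed centers stand in for parsed KAIST annotations, and the 640 × 512 resolution and plasma colormap follow the description above.

```python
# Minimal sketch: visualize the density of pedestrian box centers with a Gaussian KDE.
# The annotation loading is a placeholder; real KAIST annotations would need to be
# parsed into (cx, cy) center coordinates first.
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

def plot_center_density(centers_xy, img_w=640, img_h=512):
    """centers_xy: (N, 2) array of pedestrian bounding-box centers in pixels."""
    kde = gaussian_kde(centers_xy.T)                      # fit a 2-D Gaussian KDE
    xs, ys = np.mgrid[0:img_w:4, 0:img_h:4]               # evaluation grid (stride 4)
    density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
    plt.imshow(density.T, origin="upper", extent=[0, img_w, img_h, 0], cmap="plasma")
    plt.colorbar(label="pedestrian-center density")
    plt.title("Pedestrian center distribution (Gaussian KDE)")
    plt.show()

# Example with synthetic right-skewed centers standing in for real annotations.
rng = np.random.default_rng(0)
fake_centers = np.column_stack([rng.normal(480, 60, 2000), rng.normal(300, 40, 2000)])
plot_center_density(np.clip(fake_centers, 0, [639, 511]))
```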
This paper presents a comprehensive study of a method to improve the performance of a multispectral pedestrian detection framework by addressing the issues described above. We propose a novel fusion module, INtra-INter Spectral Attention, which consists of intra- and inter-modality attention blocks that effectively integrate complementary information across different spectral modalities. Specifically, the intra-modality attention block performs the self-attention within each modality feature map to suppress irrelevant information, effectively enhancing modality-specific information. These enhanced feature maps encourage the inter-modality attention block to calculate the mutual relationships between cross-modalities to improve the multispectral feature fusion outcome. We also analyze standard geometric transformations to address the imbalanced distribution of pedestrian locations in the training data. As a result, we find that shifting the image along the x-axis within a specific range mitigates the over-representation of pedestrians in certain regions. Our method achieves state-of-the-art performance on the KAIST multispectral pedestrian detection dataset and LLVIP dataset, demonstrating the effectiveness of our contributions.

2. Related Work

2.1. Multispectral Pedestrian Detection

Multispectral pedestrian detection research has made significant progress by leveraging thermal images to accurately detect pedestrians in a variety of challenging conditions. Hwang et al. [5] released a large-scale multispectral pedestrian dataset and proposed a hand-crafted Aggregated Channel Feature (ACF) approach that utilized the thermal channel features. This work had a significant impact on subsequent multispectral pedestrian detection research. Liu et al. [23] analyzed the feature fusion performance at different stages using the NIN (Network-In-Network) fusion strategy. Li et al. [16] demonstrated that multi-task learning using semantic segmentation could improve object detection performance compared to a detection-only approach. Zhang et al. [17] proposed a cyclic multispectral feature fusion and refinement method that improves the representation of each modality feature. Yang et al. [24] and Li et al. [25] designed an illumination-aware gate that adaptively modulates the fusion weights between RGB and thermal features using illumination information predicted from RGB images. Zhou et al. [18] leveraged common- and differential-mode information simultaneously to address modality imbalance problems considering both illumination and feature factors. Zhang et al. [11] proposed a Region Feature Alignment (RFA) module that adaptively predicts the feature offset between modalities in an effort to address weakly aligned phenomena. Kim et al. [15] proposed a novel multi-label learning method to distinguish between paired and unpaired images for robust pedestrian detection in commercialized sensor configurations such as stereo vision systems. Although previous studies have achieved remarkable performance gains, convolution-based fusion strategies struggle to capture the global context effectively in both intra- and inter-spectral images despite the importance of doing so during the feature fusion process. To address this issue, we design a transformer-based attention scheme in this paper.

2.2. Attention-Based Fusion Strategies

Attention mechanisms [26,27,28] have enabled models to learn enhanced modality-specific information. Zhang et al. [12] proposed a cross-modality interactive attention mechanism that encodes the interaction between RGB and thermal modalities and adaptively fuses features to improve pedestrian detection performance. Fu et al. introduced a pixel-level feature fusion attention module that incorporates spatial and channel dimensions. Zhang et al. [13] designed Guided Attentive Feature Fusion (GAFF) to guide the fusion of intra-modality and inter-modality features with an auxiliary pedestrian mask. With the success of the attention-based transformer mechanism [29] in natural language processing (NLP) and the subsequent development of the vision transformer (ViT) [30], several methods have attempted to utilize transformer-based attention schemes for multispectral pedestrian detection. Shen et al. [20] proposed a dual cross-attention transformer feature fusion framework for simultaneous global feature interaction and complementary information capture across modalities. The proposed framework uses a query-guided cross-attention mechanism to interact with cross-modal information. Zhu et al. [31] proposed a Multi-modal Feature Pyramid Transformer (MFPT) using a feature pyramid architecture that simultaneously attends to spatial and scale information within and between modalities. Fang et al. [19] leveraged self-attention to perform intra-modality and inter-modality fusion simultaneously and to capture the latent interactions between RGB and thermal spectral information more effectively. However, transformer-based feature fusion methods have not yet fully realized the potential of attention mechanisms, as they do not effectively learn the complementary information between modalities. In this paper, we propose an effective transformer-based module that enhances and relates intra- and inter-spectral information.

2.3. Data Augmentations in Pedestrian Detection

Data augmentation is a key technique for improving the robustness and generalization of object detection. Pedestrian detection models commonly use augmentation approaches such as geometric transformations, including flips, rotation, and cropping, as well as other techniques such as zoom in, zoom out, cutmix [32], mixup [33], and others. In a previous study, Cygert et al. [34] proposed patch-based augmentation that utilized image distortions and stylized textures to achieve competitive results. Chen et al. [35] proposed shape transformations to generate more realistic-looking pedestrians. Chi et al. [36] and Tang et al. [37] introduced an occlusion-simulated augmentation method that divides pedestrians into parts and fills them with ImageNet [38] mean values or image patches to improve robustness to occlusion. To address the motion blur problem in autonomous driving scenes, Khan et al. [39] designed hard mixup augmentation, an image-aware technique that combines mixup [33] augmentation with hard labels. To address the paucity of data on severe weather conditions, Tumas et al. [40] used a DNN-based augmentation that modified training images with Gaussian noise to mimic adverse weather conditions. Kim et al. [15] proposed semi-unpaired augmentation, which stochastically applies augmentation to one of the multispectral images. Breaking the pair in this way allows the model to learn from both paired and unpaired conditions, demonstrating good generalization performance. In this paper, we propose a simple yet effective shift augmentation method that disperses peak regions in the image, allowing the model to learn from a variety of regions.

3. Materials and Methods

This section presents a comprehensive study on multispectral pedestrian detection. First, we describe the overall architecture of our detection network and our novel INtra-INter Spectral Attention module. We also introduce an effective data augmentation method, the shift augmentation technique, to address the imbalanced distribution of pedestrian locations. Details about the architecture design and the shift augmentation method are provided in Section 3.1 and Section 3.2, respectively.

3.1. Model Architecture

3.1.1. Overall Framework

The key concern when undertaking robust multispectral pedestrian detection is to properly integrate complementary information from different spectral images. In this aspect, many researchers have adopted a halfway-based architecture that extracts modality-specific features from the intermediate layers of convolutional neural networks and interacts with multispectral feature maps before forwarding them to the detection heads. Similar to how halfway-based fusion methods work, the model proposed here consists of three major parts (Figure 2): (1) modality-specific feature extractors, (2) an INtra-INter Spectral Attention (INSA) module for multispectral feature fusion, and (3) an auxiliary network followed by detection heads. Note that the INSA module and the auxiliary network share weights across modalities, in contrast to the modality-specific feature extractors. This weight-sharing design encourages the INSA module and auxiliary network to facilitate the integration of complementary information between multispectral features. On the other hand, the two independent feature extractors without weight sharing explicitly consider the modality-specific information.
Specifically, $f_{rgb}$ and $f_{ther}$ represent the RGB and thermal feature extractors, respectively. These extractors take an RGB image and a thermal image as input, respectively, and extract modality-inherent feature maps downsampled to one-quarter of the original resolution, as follows:
$F_{\theta} = f_{\theta}(I_{\theta}), \quad \theta \in \{rgb, ther\}$ (1)
Here, $F_{\theta}$ and $I_{\theta}$ refer to the modality-specific feature map and the image, respectively. After feature extraction, these independent features are fed to our INSA module, which consists of self-attention and cross-attention, followed by the feed-forward network. This module enhances feature representation within each modality while also facilitating exchanges of complementary information between modalities. A detailed explanation of the INSA module is given in Section 3.1.3. After passing through the INSA module, the enhanced spectral feature maps are merged by weighted summation. This fused feature map is then subjected to max pooling to form a multispectral feature map.
$F_{ms} = \mathrm{maxpool}\big(\alpha \hat{F}_{rgb} + (1-\alpha)\hat{F}_{ther}\big) \in \mathbb{R}^{\frac{H}{8} \times \frac{W}{8} \times C}, \quad \alpha = 0.5$ (2)
where $\hat{F}_{rgb}$ and $\hat{F}_{ther}$ are the enhanced RGB and thermal feature maps produced by the proposed INSA module, respectively. Finally, the auxiliary network takes the merged feature map $F_{ms}$ and generates multi-scale feature maps $F_{ms}^{N}$ through a series of convolutional and pooling layers. These multi-scale feature maps are then passed to the detection heads, which consist of two separate convolution layers for classifying and localizing pedestrians. Our detection network reduces the computational cost and the number of trainable parameters by directly passing the merged features through the modality-sharing auxiliary network and detection heads.
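The following sketch illustrates the overall flow described above (Equations (1) and (2)): two independent extractors, a shared INSA module, a weighted sum, max pooling, and a shared auxiliary network with detection heads. The constructor arguments are placeholders standing in for the actual submodules, not the authors' implementation.

```python
# Sketch of the halfway fusion flow in Equations (1)-(2); module internals are
# placeholders and do not reproduce the authors' exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HalfwayFusionDetector(nn.Module):
    def __init__(self, rgb_extractor, thermal_extractor, insa_module, aux_net, heads, alpha=0.5):
        super().__init__()
        self.f_rgb, self.f_ther = rgb_extractor, thermal_extractor   # independent, no weight sharing
        self.insa, self.aux, self.heads = insa_module, aux_net, heads  # shared across modalities
        self.alpha = alpha

    def forward(self, img_rgb, img_ther):
        F_rgb = self.f_rgb(img_rgb)                        # Eq. (1): modality-specific features
        F_ther = self.f_ther(img_ther)
        F_rgb_hat, F_ther_hat = self.insa(F_rgb, F_ther)   # intra/inter spectral attention
        fused = self.alpha * F_rgb_hat + (1.0 - self.alpha) * F_ther_hat  # weighted summation
        F_ms = F.max_pool2d(fused, kernel_size=2)          # Eq. (2): multispectral feature map
        pyramid = self.aux(F_ms)                           # multi-scale maps for the heads
        return self.heads(pyramid)                         # classification + localization
```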

3.1.2. Attention-Based Fusion

Preliminary Transformer-Based Fusion

We briefly introduce the attention mechanism of the transformer, which is a powerful technique that calculates the relationships among input sequences. The attention mechanism can be implemented as follows:
$\mathrm{Attn}(X_a, X_b) = \mathrm{softmax}\!\left(\dfrac{X_a W_Q \cdot W_K^{T} X_b^{T}}{\sqrt{d}}\right) \cdot X_b W_V$
$\mathrm{Cross\text{-}Attn}(X_a, X_b) = \big[\mathrm{Attn}(X_a, X_b),\ \mathrm{Attn}(X_b, X_a)\big]$
$\mathrm{Self\text{-}Attn}(X_a) = \mathrm{Attn}(X_a, X_a)$ (3)
where $W_Q$, $W_K$, and $W_V$ are learnable parameters that project the input tokens to the query, key, and value, respectively. $X$ and $d$ indicate the input token and the dimensionality of the query, respectively. In other words, the attention mechanism calculates the attention weights between the query and key using inner products and then applies softmax to normalize the attention weights. Finally, the attention weights are applied to the value matrix. This mechanism helps the model focus on the most relevant parts of the input data.
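As a minimal illustration of Equation (3), the sketch below implements the attention operation for token tensors of shape (batch, tokens, channels); self-attention is obtained by passing the same tensor twice, and cross-attention by applying the operation in both directions. The scaling by the square root of the query dimension follows the standard transformer convention.

```python
# Minimal sketch of the attention operation in Equation (3).
import math
import torch
import torch.nn as nn

class Attn(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W_q = nn.Linear(dim, dim, bias=False)   # query projection
        self.W_k = nn.Linear(dim, dim, bias=False)   # key projection
        self.W_v = nn.Linear(dim, dim, bias=False)   # value projection
        self.scale = math.sqrt(dim)

    def forward(self, x_a, x_b):
        q, k, v = self.W_q(x_a), self.W_k(x_b), self.W_v(x_b)
        weights = torch.softmax(q @ k.transpose(-2, -1) / self.scale, dim=-1)
        return weights @ v

def cross_attn(attn, x_a, x_b):
    # Cross-attention applies the operation in both directions, as in Eq. (3).
    return attn(x_a, x_b), attn(x_b, x_a)

attn = Attn(dim=64)
x_a, x_b = torch.randn(1, 100, 64), torch.randn(1, 100, 64)
self_out = attn(x_a, x_a)                     # Self-Attn(X_a)
cross_a, cross_b = cross_attn(attn, x_a, x_b) # Cross-Attn(X_a, X_b)
```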
In multispectral pedestrian detection, the aforementioned attention mechanism can be leveraged to consider the complementary information between multispectral images, as follows:
$\hat{X}_{rgb}, \hat{X}_{ther} = \mathrm{Cross\text{-}Attn}(X_{rgb}, X_{ther})$ (4)
where $\hat{X}_{rgb}$ and $\hat{X}_{ther}$ are the enhanced RGB and thermal feature maps, respectively, as determined by calculating the correlation among multispectral features. Shen et al. [20] proposed the following cross-modal feature-enhanced module that modifies the equation above:
$\hat{X}_{rgb}, \hat{X}_{ther} = \mathrm{upsample}\big(\mathrm{Cross\text{-}Attn}(\mathrm{pool}(X_{rgb}), \mathrm{pool}(X_{ther}))\big)$ (5)
In Equation (5), pool and upsample indicate the pooling operations for downsampling and upsampling, respectively. Because the computational complexity of the attention mechanism is quadratic in the input resolution, they applied a pooling operation before calculating the attention weights to reduce the computational cost. However, cross-attention-based feature enhancement may sacrifice a potential performance gain because it neglects to capture the context within each modality.
In another approach, concatenated self-attention (CatSelf-Attn), which performs self-attention over multispectral features concatenated along the spatial axis, was developed:
$\mathrm{CatSelf\text{-}Attn} = \mathrm{upsample}\big(\mathrm{Self\text{-}Attn}(X_{ms})\big), \quad X_{ms} = \mathrm{Cat}\big(\mathrm{pool}(X_{rgb}), \mathrm{pool}(X_{ther})\big)$ (6)
CatSelf-Attn can simultaneously aggregate intra-modality and inter-modality information, but its computational complexity grows quadratically with the number of input tokens involved in the attention operation. Furthermore, because excessive pooling is applied to the feature maps before they enter these feature enhancement modules for computational efficiency, the modules face the challenge of insufficient feature representation during the attention calculation.

3.1.3. INtra-INter Spectral Attention

The main concerns when designing a feature enhancement module are as follows: computing relationships within each modality and across modalities, and balancing the trade-off between computational efficiency and information loss. With these considerations in mind, we propose the INtra-INter Spectral Attention (INSA) module, which enhances the representation of each spectral feature map through a combination of intra- and inter-spectral attention.
As illustrated in Figure 3, our INSA module comprises three major parts: an intra-spectral attention block, an inter-spectral attention block, and a feed-forward network. The intra-spectral attention and inter-spectral attention blocks are implemented in a standard self-attention and cross-attention manner, as expressed in Equation (3). Specifically, when two spectral feature maps, $F_{rgb}$ and $F_{ther}$, are input into the INSA module, the module initially applies intra-spectral attention blocks to the RGB and thermal feature maps independently. The goal is to enhance the modality-specific information within each spectrum by focusing on salient features and suppressing irrelevant features using an attention mechanism.
$\bar{F}_{rgb} = \mathrm{Self\text{-}Attn}(F_{rgb}), \quad \bar{F}_{ther} = \mathrm{Self\text{-}Attn}(F_{ther})$ (7)
Next, inter-spectral attention blocks are employed to capture the cross-spectral interactions between $\bar{F}_{rgb}$ and $\bar{F}_{ther}$. This stage allows the model to understand the mutual relationship by aggregating complementary information across modalities. Finally, the processed feature maps are passed through a feed-forward network to refine the extracted information further.
$\hat{F}_{rgb}, \hat{F}_{ther} = \mathrm{INSA}(F_{rgb}, F_{ther}) = \mathrm{FFN}\big(\mathrm{Cross\text{-}Attn}(\bar{F}_{rgb}, \bar{F}_{ther})\big)$ (8)
In Equation (8), FFN means the feed-forward network. The output feature maps of the INSA module contain both modality-specific and cross-modality complementary information, improving the feature fusion and detection performance.
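A minimal sketch of the INSA flow in Equations (7) and (8) is given below, using PyTorch's nn.MultiheadAttention for the intra- (self) and inter- (cross) spectral attention blocks; the head count and FFN width are illustrative assumptions, and reusing the same blocks for both modalities reflects the weight sharing described in Section 3.1.1, not the authors' exact implementation.

```python
# Hedged sketch of the INSA flow: intra-spectral self-attention per modality,
# inter-spectral cross-attention, then a feed-forward network (Eqs. (7)-(8)).
import torch
import torch.nn as nn

class INSABlock(nn.Module):
    def __init__(self, dim, heads=8):
        super().__init__()
        self.intra = nn.MultiheadAttention(dim, heads, batch_first=True)  # self-attention
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)  # cross-attention
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, F_rgb, F_ther):                          # tokens: (batch, N, dim)
        F_rgb_bar, _ = self.intra(F_rgb, F_rgb, F_rgb)         # Eq. (7), RGB branch
        F_ther_bar, _ = self.intra(F_ther, F_ther, F_ther)     # Eq. (7), thermal branch
        F_rgb_x, _ = self.inter(F_rgb_bar, F_ther_bar, F_ther_bar)  # query RGB, key/value thermal
        F_ther_x, _ = self.inter(F_ther_bar, F_rgb_bar, F_rgb_bar)  # query thermal, key/value RGB
        return self.ffn(F_rgb_x), self.ffn(F_ther_x)           # Eq. (8)

block = INSABlock(dim=256)
f_rgb, f_ther = torch.randn(2, 16, 256), torch.randn(2, 16, 256)   # e.g. 16 tokens per window
out_rgb, out_ther = block(f_rgb, f_ther)
```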
To boost the efficiency, the INSA module also employs a shifted local window attention strategy for both intra- and inter-spectral attention. Specifically, we divide the entire input feature into K × K windows. The intra- and inter-attention processes are then applied independently within each window. To capture a broader context and avoid local optima, we also shift the window partition after processing one intra- and inter-spectral attention cycle. Consequently, this strategy reduces the computational complexity by focusing attention on shifted windows, thereby achieving significant efficiency gains compared to global attention mechanisms.
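The sketch below shows one way the local window partition could be implemented: the feature map is split into non-overlapping k × k windows, attention is then run independently on the tokens of each window, and the partition grid is shifted (here with a simple roll) before the next intra/inter cycle. The window size, the roll-based shifting, and the pixel-token interpretation are assumptions for illustration, not the authors' code.

```python
# Sketch of a shifted local window partition for window-restricted attention.
import torch

def window_partition(feat, k, shift=0):
    """feat: (B, C, H, W) with H and W divisible by k; returns (B*nWin, k*k, C) tokens."""
    if shift:                                           # shift the partition grid
        feat = torch.roll(feat, shifts=(-shift, -shift), dims=(2, 3))
    B, C, H, W = feat.shape
    feat = feat.view(B, C, H // k, k, W // k, k)
    windows = feat.permute(0, 2, 4, 3, 5, 1).reshape(-1, k * k, C)
    return windows                                      # attention runs per window

tokens = window_partition(torch.randn(2, 256, 64, 80), k=16, shift=8)
print(tokens.shape)   # torch.Size([40, 256, 256]): 40 windows, 256 tokens each, 256 channels
```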

3.2. Analysis of Geometric Data Augmentation

The distribution of objects within an image plays a crucial role in the performance of anchor-based detectors, in which densely tiled anchor boxes are leveraged to localize objects. Most methods resort to a general strategy that uniformly distributes anchors across the entire image under the assumption of equal importance for all image regions. However, most object detection datasets, such as Pascal-VOC [41] and MS-COCO [42], violate this assumption. In particular, multispectral pedestrian datasets such as KAIST [5] often suffer from an imbalance in pedestrian locations. This occurs because these datasets frequently include images taken in situations such as arterial roads, where the sidewalk and the road are clearly separated. These imbalances may cause the model to focus on regions where pedestrians frequently appear, leading to trivial solutions that only detect pedestrians around such over-appearance areas.
To mitigate this issue, numerous studies on pedestrian detection [11,14,15,43] employ common geometric augmentation techniques, such as cropping and flipping. As depicted in Figure 4, applying geometric data augmentation to the histogram of pedestrian locations in the KAIST dataset (shown in rows 1 and 2 of Figure 4b) results in a distribution that is relatively uniform, in contrast to the right-skewed original distribution (shown in rows 1 and 2 of Figure 4a). However, as illustrated in the third row of Figure 4b, where we set the pedestrian location count threshold of the histogram above 65 to highlight the concentrations in specific areas, applying only geometric augmentation still leads to pedestrian concentrations in certain regions (also shown in the red circle of row 1).
To address the aforementioned problem, we design a shift augmentation, which performs translation transformation alongside geometric data augmentation. This method involves randomly shifting the image within a certain range in a direction opposite to the over-appearance area in the dataset. This serves to disperse the locations in over-appearance areas, thereby mitigating the concentration phenomenon.
As can be seen in Figure 4c, our shift augmentation strategy disperses the locations in over-appearance areas, alleviating the concentration phenomenon. This result demonstrates that our method can effectively alleviate the imbalance problem of pedestrian locations as opposed to applying only common geometric augmentation methods.
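A minimal sketch of the shift augmentation is given below: with probability p, both modality images and their boxes are translated along the x-axis by a random offset of up to 20 pixels. The leftward sign convention (shifting away from the right-side over-appearance region of KAIST) and the wrap-around border handling are assumptions made for illustration.

```python
# Minimal sketch of shift augmentation applied jointly to the RGB-thermal pair.
import random
import numpy as np

def shift_augment(img_rgb, img_ther, boxes, max_shift=20, p=0.3):
    """img_*: (H, W, C) arrays; boxes: (N, 4) [x1, y1, x2, y2]. Negative dx shifts left."""
    if random.random() > p:
        return img_rgb, img_ther, boxes
    dx = random.randint(-max_shift, 0)           # shift away from the over-appearance side
    shifted_rgb = np.roll(img_rgb, dx, axis=1)   # simple wrap-around; zero padding also works
    shifted_ther = np.roll(img_ther, dx, axis=1)
    new_boxes = boxes.copy().astype(np.float32)
    new_boxes[:, [0, 2]] = np.clip(new_boxes[:, [0, 2]] + dx, 0, img_rgb.shape[1] - 1)
    return shifted_rgb, shifted_ther, new_boxes

rgb = np.zeros((512, 640, 3), dtype=np.uint8)
ther = np.zeros((512, 640, 1), dtype=np.uint8)
boxes = np.array([[500.0, 200.0, 540.0, 320.0]])
out_rgb, out_ther, out_boxes = shift_augment(rgb, ther, boxes)
```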

4. Experiments

4.1. Experimental Setup

4.1.1. KAIST Dataset

The KAIST multispectral pedestrian dataset [5] consists of 95,328 fully overlapped RGB–thermal pairs captured in an urban environment. The provided ground truth consists of 103,128 pedestrian bounding boxes. In the experiments, we follow the standard training criterion (train02), which samples frames such that a total of 25,076 images are used for training. For evaluation, we also follow the standard evaluation criterion (test20), sampling 1 out of every 20 frames, so that all results are evaluated on 2252 frames consisting of 1455 day images and 797 night images. Additionally, we conducted experiments on different driving scenes divided into three subsets: Campus (set06, set09) with 823 frames, Road (set07, set10) with 850 frames, and Downtown (set08, set11) with 579 frames. Note that we use paired annotations for training [11] and sanitized annotations for evaluation [16], which is the standard criterion for a fair comparison with recent works.

4.1.2. LLVIP Dataset

The LLVIP dataset [10] is a recently released multispectral pedestrian dataset for low-light vision environments. It is composed of 30,976 RGB–IR images, or 15,488 pairs, collected in challenging environments with insufficient illumination or heavy occlusion. In contrast to the KAIST dataset, which relies on a dedicated hardware configuration for alignment, the LLVIP dataset was captured using a binocular camera in a stereo configuration. However, strict spatial and temporal alignment was achieved through post-processing image registration. In the experiments, we adhere to the established protocol of prior studies [10,19], utilizing 12,025 and 3463 image pairs for training and testing, respectively.

4.2. Implementation Details

We conducted experiments using NVIDIA A100 GPUs with PyTorch. Our baseline network was modified from a fusion architecture [23] based on SSD [43]. We employed batch-normalized VGG-16 as a backbone, initialized with ImageNet pre-trained weights up to conv3 before the fusion stage. We also reduced the model complexity by modifying the auxiliary network of SSD [43], removing the conv11 layer. For training, we utilized Momentum Stochastic Gradient Descent (Momentum SGD) with an initial learning rate, momentum, and weight decay of $10^{-4}$, 0.9, and $5 \times 10^{-4}$, respectively. The batch size was set to 8, and both training and evaluation input images were resized to 640 (W) × 512 (H). We applied data augmentation in the following order: the proposed shift augmentation, a spectral-independent horizontal flip, and a random resized crop, with the probability of applying each transformation set to 0.3, 0.5, and 0.5, respectively. Specifically, shift augmentation randomly shifts the multispectral images along the x-axis by an integer value of up to 20 pixels. While shift augmentation and the random resized crop are applied to both RGB and thermal images, the spectral-independent horizontal flip is processed separately on each modality image. We utilized a general detection loss consisting of classification and localization losses and employed the multi-label classification loss [15] to enable the model to learn modality-inherent features. Finally, the network was trained for 40 epochs.
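For reference, the optimizer settings listed above translate directly into the following PyTorch configuration; the single convolution layer is only a placeholder standing in for the detection network.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 8, kernel_size=3)   # placeholder for the actual detection network
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-4,            # initial learning rate
    momentum=0.9,
    weight_decay=5e-4,
)
# Training then runs for 40 epochs with a batch size of 8 on 640 x 512 inputs.
```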

4.3. Evaluation Metric

We use the standard log-average miss rate (LAMR), the most popular metric for pedestrian detection tasks, as the representative metric, sampled over a false positives per image (FPPI) range of $[10^{-2}, 10^{0}]$, as proposed by Dollar et al. [44]. This metric is more appropriate for commercial solutions because it focuses on the high-accuracy operating region rather than the low-accuracy region.
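The sketch below computes a log-average miss rate from a miss-rate-versus-FPPI curve, following the common Caltech-style convention of averaging log miss rates at nine FPPI reference points spaced evenly in log space over [10^-2, 10^0]; the fallback used for unreachable reference points is an assumption of this illustration.

```python
# Sketch of the log-average miss rate (LAMR) computation over an FPPI curve.
import numpy as np

def log_average_miss_rate(fppi, miss_rate, num_points=9):
    """fppi, miss_rate: arrays tracing the detector's miss-rate-vs-FPPI curve (fppi ascending)."""
    refs = np.logspace(-2.0, 0.0, num_points)           # 9 reference FPPI values
    sampled = []
    for r in refs:
        idx = np.where(fppi <= r)[0]                    # operating points at or below this FPPI
        # If the curve never reaches this FPPI, fall back to the highest miss rate.
        mr = miss_rate[idx[-1]] if idx.size else miss_rate.max()
        sampled.append(max(mr, 1e-10))                  # avoid log(0)
    return np.exp(np.mean(np.log(sampled)))

# Toy example with a synthetic monotone curve.
fppi = np.logspace(-3, 1, 50)
miss = np.clip(0.5 - 0.1 * np.log10(fppi + 1e-12), 0.01, 1.0)
print(f"LAMR = {log_average_miss_rate(fppi, miss):.3f}")
```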

4.4. Comparison with State-of-the-Art Multispectral Pedestrian Detection Methods

4.4.1. KAIST Dataset

To validate the effectiveness of the proposed method, we compare it with state-of-the-art multispectral pedestrian detectors, in this case ACF [45], Halfway Fusion [23], MSDS-RCNN [16], AR-CNN [11], MBNet [18], MLPD [15], CFT [19], GAFF [13], and CFR [17]. Table 1 shows the detection results of our method and of the state-of-the-art detectors on the KAIST dataset. On ALL, which includes both day and night scenes, we achieved a miss rate of 5.50%, which is 0.46% lower than the previous best method, CFR [17]. These results show that our method is effective as a fusion method that performs complementary information exchanges while preserving the unique characteristics of the two modalities across day and night scenes. Furthermore, despite the different pedestrian distributions on Campus, Road, and Downtown, the proposed method shows the best performance on Road and Downtown while maintaining competitive accuracy on Campus (7.45 from CFR vs. 7.64 from ours).
Figure 5 illustrates the qualitative results of our method in comparison with MLPD [15], GAFF [13], and CFR [17] on the KAIST dataset. In comparison with these other methods, our method shows better detection results by explicitly detecting ambiguous targets during both the day and night. In addition, while other methods tend to produce more false positive results on the left side due to the right-skewed pedestrian distributions, our method shows reliable detection results by alleviating the over-appearance of pedestrians using the shift augmentation strategy.

4.4.2. LLVIP Dataset

To further demonstrate the generality of the proposed method, we conducted experiments on the LLVIP dataset. Note that, as mentioned in Section 1, LLVIP has a more uniform distribution than KAIST, but it still exhibits an over-appearance region on the right side of the images. Therefore, we conducted experiments with the same setup as for KAIST and compared the results with state-of-the-art detectors, in this case Yolo [46], FBCNet [47], and CFT [19]. Table 2 shows the detection results of the proposed method and of the state-of-the-art detectors on the LLVIP dataset. We achieved a miss rate of 4.43%, which is 0.97% lower than the previous best method, CFT [19]. It is interesting to note that, when comparing with and without shift augmentation, the miss rate is 5.64% without shift augmentation, which is 0.24% higher than CFT. However, the miss rate decreases by 1.21% after applying shift augmentation, making it 0.97% lower than CFT. These results also demonstrate the effectiveness of our method on another benchmark with state-of-the-art performance.

4.5. Ablation Study

4.5.1. Effects of INtra-INter Spectral Attention

We ablate the components of the proposed INtra-INter Spectral Attention (INSA) module in Table 3. First, as the baseline model, we chose the SSD architecture with a halfway fusion mechanism that performs multispectral feature fusion by a direct weighted sum of the intermediate feature maps. Note that these models are trained without our shift augmentation strategy so as to focus on the effectiveness of the INSA components.
As shown in Table 3, inter-spectral attention, which allows the model to focus on relevant regions between multispectral images, achieves satisfactory results compared to the baseline (7.50 → 6.66 miss rate). This suggests that inter-spectral attention effectively captures complementary information across different spectral images. Moreover, intra-spectral attention improves the detection performance (7.50 → 6.81 miss rate) by enhancing the modality-inherent information within each spectral image. This result indicates that enhancing the individual spectral features is as important as considering the mutual relationships across modalities in multispectral pedestrian detection. Lastly, because we carefully designed the INSA module to first enhance the individual spectral features and then capture the mutual relationships among them, our model achieves the best performance, as shown in the last row of Table 3.

4.5.2. Hyperparameters in INSA

We ablate the hyperparameters of the proposed INSA module in Table 4. As shown in Table 4, we found that the INSA module with two iterations, where the output of the first INSA module is fed back to the intra-attention block of the second INSA module, achieves the best detection performance (6.12%). We also ablate the number of input tokens for the INSA module in Table 4. The results show that using 16 tokens, i.e., dividing the input feature map into 4 × 4 patches, achieves the best detection performance (6.12%). In accordance with the optimal hyperparameters identified in these ablation studies, we utilized 16 patches and two iterations of the INSA module in all experiments.

4.5.3. The Impact of Geometric and Shift Augmentation on Performance

As discussed in Section 3.2, the pedestrian distribution within the images significantly affects model training and accuracy. To investigate how the impact of the pedestrian distribution generalizes across models, we assess the detection performance of SSD with halfway fusion as well as that of the proposed framework with the INSA module in Table 5.
When geo is applied, both Baseline and INSA show a clear performance gain compared to the models without geo. These results demonstrate that the imbalanced distribution of pedestrians significantly reduces detection performance, whereas geometric transformation mitigates this impact effectively by augmenting the training data. It is interesting to note that Baseline with geo shows impressive results on Road, as shown in the third row of Table 5 (1.93 miss rate). However, on the overall test set (ALL), geo alone falls short of using both geo and shift (7.50 miss rate with geo only vs. 7.03 with geo and shift). This performance gap is attributed to Baseline with geo and shift achieving better performance on Campus and Downtown, where the pedestrian distributions are more diverse than on Road.
We also find in Table 5 that, for each method, shift augmentation alone outperforms training without augmentation but underperforms geo alone. This is because geo utilizes diverse transformations, such as cropping and flipping, which yield a more uniform distribution than shift augmentation alone. However, we highlight the significant performance improvement obtained with shift augmentation alone on road scenes, where pedestrians are extremely skewed to the right of the image; even on its own, shift augmentation mitigates the over-appearance regions.
From these experimental results, together with the observation that the overall miss rate is lowest when both geo and shift augmentation are applied, we confirm that shift augmentation plays a complementary role to geo. In other words, shift augmentation helps address the issue of pedestrian over-appearance in certain areas that persists even after applying geo. As a result, shift augmentation encourages the model to learn broader pedestrian features and ultimately achieves better generalization in detection performance.

4.5.4. Hyperparameters in Shift Augmentation

In Table 6, we compare performance relative to the random movement range and the probability of shift augmentation. Note that positive and negative values represent shifting an image to the right and left side, respectively. The proposed model achieves its best miss rate of 5.50% at $\Delta$−20 with an application probability of p = 0.3. When the image is shifted rightward, i.e., in the same direction as the concentration of pedestrians in the KAIST dataset, the performance decreases compared to the performance without shift augmentation. This demonstrates that shift augmentation can alter the distribution of pedestrians within the dataset by moving the image. Moreover, it implies that shifting the image away from the concentrated distribution can improve detection performance by mitigating the over-appearance issue in those regions.

5. Conclusions

In this paper, we design INSANet to address the limitations of CNN-based multispectral fusion strategies, which mainly focus on local feature interactions due to their limited receptive field. More specifically, our attention-based fusion module effectively integrates intra- and inter-modality information, overcoming the limitations of existing strategies that prevent the corresponding models from interpreting relationships across modalities. Furthermore, we investigate the effect of data augmentation to address the imbalanced, over-appearing pedestrian location distributions in the training data. With our contributions, including the INSA module and shift augmentation, our model can learn representations of pedestrians at various locations as well as the complementary information between multispectral images. In the experimental section, we demonstrate that the proposed method outperforms recent state-of-the-art methods in terms of detection accuracy on the KAIST multispectral dataset [5]. Although our shift augmentation method can effectively improve pedestrian detection performance on the datasets tested here, the optimal shift range may vary depending on the dataset used. To address this issue, our future work will focus on designing a generalizable augmentation framework that automatically selects the optimal hyperparameters. We believe that this will lead to the development of an effective multispectral pedestrian detection framework applicable to a wider range of real-world scenarios.

Author Contributions

Conceptualization, S.L., T.K. and J.S.; methodology, S.L. and J.S.; software, S.L.; validation, S.L.; formal analysis, J.S.; investigation, S.L.; resources, S.L.; data curation, S.L.; writing—original draft preparation, S.L.; writing—review and editing, T.K., J.S. and N.K.; visualization, S.L. and T.K.; supervision, Y.C.; project administration, Y.C.; funding acquisition, Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) under the metaverse support program to nurture the best talents (IITP-2023-RS-2023-00254529, 50%) grant funded by the Korea government (MSIT). This work was partly supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (RS-2023-00262891, Development of AI-based HD map building and crop image analysis for smart farm agricultural automation robots, 50%).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The provided data can be only used for nonprofit purposes.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? the kitti vision benchmark suite. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar]
  2. Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19–25 June 2020; pp. 11621–11631. [Google Scholar]
  3. Wang, X.; Wang, M.; Li, W. Scene-specific pedestrian detection for static video surveillance. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 361–374. [Google Scholar] [CrossRef]
  4. Du, D.; Zhu, P.; Wen, L.; Bian, X.; Lin, H.; Hu, Q.; Peng, T.; Zheng, J.; Wang, X.; Zhang, Y.; et al. VisDrone-DET2019: The vision meets drone object detection in image challenge results. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27–28 October 2019. [Google Scholar]
  5. Hwang, S.; Park, J.; Kim, N.; Choi, Y.; So Kweon, I. Multispectral pedestrian detection: Benchmark dataset and baseline. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1037–1045. [Google Scholar]
  6. Xu, D.; Ouyang, W.; Ricci, E.; Wang, X.; Sebe, N. Learning cross-modal deep representations for robust pedestrian detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5363–5371. [Google Scholar]
  7. Devaguptapu, C.; Akolekar, N.; M Sharma, M.; N Balasubramanian, V. Borrow from anywhere: Pseudo multi-modal object detection in thermal imagery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPR), Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
  8. Kieu, M.; Bagdanov, A.D.; Bertini, M.; Del Bimbo, A. Task-conditioned domain adaptation for pedestrian detection in thermal imagery. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 546–562. [Google Scholar]
  9. González, A.; Fang, Z.; Socarras, Y.; Serrat, J.; Vázquez, D.; Xu, J.; López, A.M. Pedestrian detection at day/night time with visible and FIR cameras: A comparison. Sensors 2016, 16, 820. [Google Scholar] [CrossRef]
  10. Jia, X.; Zhu, C.; Li, M.; Tang, W.; Zhou, W. LLVIP: A visible-infrared paired dataset for low-light vision. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Virtual, 10–17 October 2021; pp. 3496–3504. [Google Scholar]
  11. Zhang, L.; Zhu, X.; Chen, X.; Yang, X.; Lei, Z.; Liu, Z. Weakly aligned cross-modal learning for multispectral pedestrian detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5127–5137. [Google Scholar]
  12. Zhang, L.; Liu, Z.; Zhang, S.; Yang, X.; Qiao, H.; Huang, K.; Hussain, A. Cross-modality interactive attention network for multispectral pedestrian detection. Inf. Fusion 2019, 50, 20–29. [Google Scholar] [CrossRef]
  13. Zhang, H.; Fromont, E.; Lefèvre, S.; Avignon, B. Guided attentive feature fusion for multispectral pedestrian detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Virtual, 5–9 January 2021; pp. 72–80. [Google Scholar]
  14. Zheng, Y.; Izzat, I.H.; Ziaee, S. GFD-SSD: Gated fusion double SSD for multispectral pedestrian detection. arXiv 2019, arXiv:1903.06999. [Google Scholar]
  15. Kim, J.; Kim, H.; Kim, T.; Kim, N.; Choi, Y. MLPD: Multi-Label Pedestrian Detector in Multispectral Domain. IEEE Robot. Autom. Lett. 2021, 6, 7846–7853. [Google Scholar] [CrossRef]
  16. Li, C.; Song, D.; Tong, R.; Tang, M. Multispectral pedestrian detection via simultaneous detection and segmentation. In Proceedings of the British Machine Vision Conference (BMVC), Newcastle, UK, 3–6 September 2018; pp. 225.1–225.12. [Google Scholar]
  17. Zhang, H.; Fromont, E.; Lefevre, S.; Avignon, B. Multispectral fusion for object detection with cyclic fuse-and-refine blocks. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Virtual, 25–28 October 2020; pp. 276–280. [Google Scholar]
  18. Zhou, K.; Chen, L.; Cao, X. Improving multispectral pedestrian detection by addressing modality imbalance problems. In Proceedings of the European Conference on Computer Vision (ECCV), Springer, Glasgow, UK, 23–28 August 2020; pp. 787–803. [Google Scholar]
  19. Qingyun, F.; Dapeng, H.; Zhaokui, W. Cross-modality fusion transformer for multispectral object detection. arXiv 2021, arXiv:2111.00273. [Google Scholar]
  20. Shen, J.; Chen, Y.; Liu, Y.; Zuo, X.; Fan, H.; Yang, W. ICAFusion: Iterative cross-attention guided feature fusion for multispectral object detection. Pattern Recognit. 2024, 145, 109913. [Google Scholar] [CrossRef]
  21. Xu, Z.; Zhuang, J.; Liu, Q.; Zhou, J.; Peng, S. Benchmarking a large-scale FIR dataset for on-road pedestrian detection. Infrared Phys. Technol. 2019, 96, 199–208. [Google Scholar] [CrossRef]
  22. Tumas, P.; Nowosielski, A.; Serackis, A. Pedestrian detection in severe weather conditions. IEEE Access 2020, 8, 62775–62784. [Google Scholar] [CrossRef]
  23. Liu, J.; Zhang, S.; Wang, S.; Metaxas, D.N. Multispectral deep neural networks for pedestrian detection. arXiv 2016, arXiv:1611.02644. [Google Scholar]
  24. Yang, X.; Qian, Y.; Zhu, H.; Wang, C.; Yang, M. BAANet: Learning bi-directional adaptive attention gates for multispectral pedestrian detection. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 2920–2926. [Google Scholar]
  25. Li, C.; Song, D.; Tong, R.; Tang, M. Illumination-aware faster R-CNN for robust multispectral pedestrian detection. Pattern Recognit. 2019, 85, 161–171. [Google Scholar] [CrossRef]
  26. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  27. Li, X.; Wang, W.; Hu, X.; Yang, J. Selective kernel networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 510–519. [Google Scholar]
  28. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Munich, Germany, 8–14 September 2018; pp. 7132–7141. [Google Scholar]
  29. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; Volume 30. [Google Scholar]
  30. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  31. Zhu, Y.; Sun, X.; Wang, M.; Huang, H. Multi-Modal Feature Pyramid Transformer for RGB-Infrared Object Detection. IEEE Trans. Intell. Transp. Syst. 2023, 24, 9984–9995. [Google Scholar] [CrossRef]
  32. Yun, S.; Han, D.; Oh, S.J.; Chun, S.; Choe, J.; Yoo, Y. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6023–6032. [Google Scholar]
  33. Zhang, H.; Cisse, M.; Dauphin, Y.N.; Lopez-Paz, D. mixup: Beyond empirical risk minimization. arXiv 2017, arXiv:1710.09412. [Google Scholar]
  34. Cygert, S.; Czyżewski, A. Toward robust pedestrian detection with data augmentation. IEEE Access 2020, 8, 136674–136683. [Google Scholar] [CrossRef]
  35. Chen, Z.; Ouyang, W.; Liu, T.; Tao, D. A shape transformation-based dataset augmentation framework for pedestrian detection. Int. J. Comput. Vis. 2021, 129, 1121–1138. [Google Scholar] [CrossRef]
  36. Chi, C.; Zhang, S.; Xing, J.; Lei, Z.; Li, S.Z.; Zou, X. Pedhunter: Occlusion robust pedestrian detector in crowded scenes. Proc. AAAI Conf. Artif. Intell. 2020, 34, 10639–10646. [Google Scholar] [CrossRef]
  37. Tang, Y.; Li, B.; Liu, M.; Chen, B.; Wang, Y.; Ouyang, W. Autopedestrian: An automatic data augmentation and loss function search scheme for pedestrian detection. IEEE Trans. Image Process. 2021, 30, 8483–8496. [Google Scholar] [CrossRef] [PubMed]
  38. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
  39. Khan, A.H.; Nawaz, M.S.; Dengel, A. Localized Semantic Feature Mixers for Efficient Pedestrian Detection in Autonomous Driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023; pp. 5476–5485. [Google Scholar]
  40. Tumas, P.; Serackis, A.; Nowosielski, A. Augmentation of severe weather impact to far-infrared sensor images to improve pedestrian detection system. Electronics 2021, 10, 934. [Google Scholar] [CrossRef]
  41. Everingham, M.; Van Gool, L.; Williams, C.K.; Winn, J.; Zisserman, A. The pascal visual object classes (voc) challenge. Int. J. Comput. Vis. 2010, 88, 303–338. [Google Scholar] [CrossRef]
  42. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 740–755. [Google Scholar]
  43. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016; pp. 21–37. [Google Scholar]
  44. Dollar, P.; Wojek, C.; Schiele, B.; Perona, P. Pedestrian detection: An evaluation of the state of the art. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 743–761. [Google Scholar] [CrossRef]
  45. Dollár, P.; Appel, R.; Belongie, S.; Perona, P. Fast feature pyramids for object detection. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1532–1545. [Google Scholar] [CrossRef]
  46. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  47. Yao, H.; Zhang, Y.; Jian, H.; Zhang, L.; Cheng, R. Nighttime pedestrian detection based on Fore-Background contrast learning. Knowl.-Based Syst. 2023, 275, 110719. [Google Scholar] [CrossRef]
Figure 1. Analyzing the distribution of pedestrians in the KAIST multispectral dataset and LLVIP dataset using Gaussian Kernel Density Estimation (Gaussian KDE). In the (a) KAIST dataset, especially in the (b) road scene, pedestrians are more concentrated on the right side of the image for several reasons, including the road environment, where sidewalks are clearly divided and a right-hand driving condition prevails. In the (c) LLVIP dataset, while displaying a more uniform distribution, there is a persistent bias toward pedestrian over-appearance on the right side of the images. A plasma colormap is used to encode the distribution of the density, with blue indicating low density and yellow indicating high density. High density is marked with a yellow square.
Figure 2. Overall framework of the proposed network: INtra-INter Spectral Attention Network (INSANet). $F_{rgb}$, $F_{ther}$, and $f_{ms}^{i}$ indicate the RGB feature map, the thermal feature map, and the i-th merged feature map, respectively. Q, K, and V correspondingly indicate query, key, and value. After passing through the INSA module, $F_{rgb}$ and $F_{ther}$ are merged by weighted summation ($\alpha = 0.5$).
Figure 3. Proposed INtra-INter Spectral Attention (INSA) module. Intra-Attn and Inter-Attn indicate the intra- and inter-spectral attention blocks, respectively. $F_{rgb}$ and $F_{ther}$ are the inputs of the INSA module. They are initially passed through Intra-Attn to enhance their representation, as indicated by $\bar{F}_{rgb}$ and $\bar{F}_{ther}$. Then, they are passed through Inter-Attn to capture the cross-modality interaction, resulting in the final outputs $\hat{F}_{rgb}$ and $\hat{F}_{ther}$, while also maintaining the feature map size of the input.
Figure 4. Histograms of pedestrian positions in the KAIST multispectral dataset, indicating the effects of different augmentations on pedestrian distribution. Note that we utilize the sanitized annotations of the training set to draw histograms. (a) Original: pedestrians heavily clustered in specific areas. (b) Geometric augmentation: distribution becomes more uniform, but some over-appearance persists. (c) Geo. w/Shift Aug: Combining geometric and shift augmentation significantly mitigates over-appearance, leading to a more uniform distribution. To visualize the phenomenon clearly, we highlight the over-appearance areas in row 1 with a red circle.
Figure 5. Qualitative results of the proposed method on the KAIST dataset. The comparison results demonstrate that our method (e) effectively alleviates the concentrated distribution and efficiently learns the mutual spectral relationships. We compare the ground truth (a) with three state-of-the-art multispectral pedestrian detectors: (b) MLPD [15], (c) GAFF [13], and (d) CFR [17]. For comparison according to the driving environment, the KAIST results are arranged as Campus, Road, and Downtown from the top, in units of two rows. The first row of each pair shows Day and the second shows Night. Following the standard evaluation protocol [5], we exclude predicted boxes with a height of 55 pixels or less.
Table 1. Benchmark of the pedestrian detection task on the KAIST dataset. † is the re-implemented performance with the proposed fusion method. The highest performance is highlighted in bold, while the second-highest performance is underlined. All values are miss rate (%).

Method               | ALL   | DAY   | NIGHT | Campus | Road | Downtown
ACF [45]             | 47.32 | 42.57 | 56.17 | 16.50  | 6.68 | 18.45
Halfway Fusion [23]  | 25.75 | 24.88 | 26.59 | -      | -    | -
MSDS-RCNN [16]       | 11.34 | 10.53 | 12.94 | 11.26  | 3.60 | 14.80
AR-CNN [11]          | 9.34  | 9.94  | 8.38  | 11.73  | 3.38 | 11.73
Halfway Fusion †     | 8.31  | 8.36  | 8.27  | 10.80  | 3.74 | 11.00
MBNet [18]           | 8.13  | 8.28  | 7.86  | 10.65  | 4.25 | 9.18
MLPD [15]            | 7.58  | 7.95  | 6.95  | 9.21   | 5.04 | 9.32
ICAFusion [20]       | 7.17  | 6.82  | 7.85  | -      | -    | -
CFT † [19]           | 6.75  | 7.76  | 4.59  | 9.45   | 3.47 | 8.72
GAFF [13]            | 6.48  | 8.35  | 3.46  | 7.95   | 3.70 | 8.35
CFR [17]             | 5.96  | 8.35  | 3.46  | 7.45   | 4.10 | 7.25
Ours (w/o shift)     | 6.12  | 7.19  | 4.37  | 9.05   | 3.24 | 7.25
Ours (w/ shift)      | 5.50  | 6.29  | 4.20  | 7.64   | 3.06 | 6.72
Table 2. Benchmark of the pedestrian detection task on the LLVIP dataset. The highest performance is highlighted in bold, while the second-highest performance is underlined.

Method            | Spectral | Miss Rate (%)
Yolov3 [46]       | visible  | 37.70
Yolov3 [46]       | infrared | 19.73
Yolov5 [10]       | visible  | 22.59
Yolov5 [10]       | infrared | 10.66
FBCNet [47]       | visible  | 19.78
FBCNet [47]       | infrared | 7.98
MLPD [15]         | multi    | 6.01
CFT [19]          | multi    | 5.40
Ours (w/o shift)  | multi    | 5.64
Ours (w/ shift)   | multi    | 4.43
Table 3. Attention-wise ablation for the proposed INtra-INter Spectral Attention module. Self refers to models using only self-attention, and Cross refers to models using only cross-attention. The highest performance is highlighted in bold.

Intra (Self) | Inter (Cross) | Miss Rate (%, ALL)
-            | -             | 7.50
✓            | -             | 6.81
-            | ✓             | 6.66
✓            | ✓             | 6.12
Table 4. Comparisons of performance results on the proposed INtra-INter Spectral Attention module hyperparameters. The highest performance is highlighted in bold.

Iterations of modules (N) | MR (%)
1                         | 6.16
2                         | 6.12
4                         | 6.20

Number of patches | MR (%)
8                 | 6.61
16                | 6.12
32                | 6.88
Table 5. Ablation study of shift augmentation. geo refers to geometric transformations that use random flips and random crops with a given probability. Baseline is a detector that applies a weighted sum for both modalities without using the INSA module. When both geo and shift are not marked, the model is only trained using color jitter augmentation. The highest performance is highlighted in bold. All values are miss rate (%).

Method       | Geo | Shift | ALL   | DAY   | NIGHT | Campus | Road | Downtown
Baseline     | -   | -     | 11.50 | 13.98 | 6.83  | 14.82  | 8.22 | 13.73
Baseline     | -   | ✓     | 10.58 | 12.29 | 7.11  | 13.67  | 3.17 | 13.59
Baseline     | ✓   | -     | 7.50  | 8.84  | 4.70  | 11.06  | 1.93 | 9.18
Baseline     | ✓   | ✓     | 7.03  | 7.85  | 5.38  | 9.40   | 3.29 | 8.81
INSA (Ours)  | -   | -     | 10.11 | 11.18 | 7.86  | 12.74  | 6.04 | 12.36
INSA (Ours)  | -   | ✓     | 9.30  | 10.42 | 7.35  | 11.89  | 2.92 | 12.24
INSA (Ours)  | ✓   | -     | 6.12  | 7.19  | 4.37  | 9.05   | 3.24 | 7.25
INSA (Ours)  | ✓   | ✓     | 5.50  | 6.29  | 4.20  | 7.64   | 3.06 | 6.72
Table 6. Performance comparison according to the hyperparameters of shift augmentation. In the random movement range Δ, negative values indicate a move to the left and positive values indicate a move to the right. The probability p represents the probability that augmentation will be applied. The bold value is the highest performance, the underlined value is the result without applying shift, and values that improve on this are indicated in italics. All values are miss rate (%); the Δ = 0 column corresponds to p = 0 (no shift) and is shared across rows.

p    | Δ−30 | Δ−20 | Δ−10 | 0 (p = 0) | Δ+10 | Δ+20 | Δ+30
1.0  | 7.26 | 6.96 | 6.87 | 6.12      | 7.18 | 7.49 | 7.57
0.7  | 6.71 | 6.07 | 6.63 | 6.12      | 6.80 | 6.70 | 7.61
0.5  | 6.76 | 5.85 | 5.93 | 6.12      | 7.00 | 7.21 | 7.16
0.3  | 6.33 | 5.50 | 5.95 | 6.12      | 6.27 | 6.22 | 6.93
