Article

Attention-Assisted Feature Comparison and Feature Enhancement for Class-Agnostic Counting

College of Information Engineering, Shenyang University, Shenyang 110044, China
*
Author to whom correspondence should be addressed.
Sensors 2023, 23(22), 9126; https://doi.org/10.3390/s23229126
Submission received: 7 October 2023 / Revised: 2 November 2023 / Accepted: 9 November 2023 / Published: 11 November 2023

Abstract

In this study, we address the class-agnostic counting (CAC) challenge: counting instances in a query image using just a few exemplars. Recent research has shifted towards few-shot counting (FSC), which involves counting previously unseen object classes. We present ACECount, an FSC framework that combines attention mechanisms and convolutional neural networks (CNNs). ACECount identifies query image–exemplar similarities using cross-attention mechanisms, enhances feature representations with a feature attention module, and employs a multi-scale regression head to handle scale variations in CAC. ACECount achieved strong results on the FSC-147 dataset, reducing the mean absolute error (MAE) by 0.3 on the validation set and by 0.26 on the test set compared to previous methods. Notably, ACECount also demonstrated convincing performance in class-specific counting (CSC) tasks. Evaluation on crowd and vehicle counting datasets revealed that ACECount surpasses FSC algorithms such as GMN, FamNet, SAFECount, LOCA, and SPDCN. These results highlight the robust cross-dataset generalization capabilities of our proposed algorithm.

1. Introduction

Image-based object counting is a prominent research challenge, with diverse applications in fields such as video surveillance, traffic monitoring, and beyond. Counting objects swiftly, especially in scenarios with dense or unevenly distributed objects, poses a challenge, due to limitations in the human visual system. Consequently, the development of a dependable object counting system has emerged as a recent focal point in computer vision research.
Following a comprehensive review of prior research, it is evident that the majority of studies assume the presence of a single class in the training dataset, such as crowds, cells, animals, or cars. Examples are illustrated in Figure 1a. This approach is commonly referred to as class-specific counting (CSC). Consequently, a deployed model can only handle task objectives that belong to the same class as the training data. Mainstream approaches typically perform object counting using a series of convolutional neural networks (CNNs) with fixed-size convolution kernels. However, they still require a substantial amount of training data corresponding to specific semantic classes for effective learning.
The few-shot counting (FSC) algorithm, initially proposed by Lu et al. [1], addresses the class-agnostic counting (CAC) problem, as exemplified in Figure 1b. Unlike class-specific counting models, FSC is trained on data encompassing various semantic categories. During inference, FSC can effectively capture the visual features of a query image containing new categories by leveraging exemplars from the corresponding classes, enabling it to count the instances of these categories in a new scene. As articulated by Min in BMNet [2], CAC represents a promising direction in object counting: a shift from learning to count a specific object class to learning how to count across arbitrary categories.
Specifically, FSC is designed to determine the quantity of salient objects in an image, regardless of their semantic class, by utilizing a user-provided set of “exemplars” that refer to the specific objects to be counted. Typically, existing FSC methods consist of two main components: feature extraction and matching. Initially, they extract visual features from the exemplars and then compare these features to those extracted from the query image. The outcome of this similarity matching serves as an intermediate representation for inferring object counts.
In essence, FSC comprises two crucial elements: expressive features and a similarity metric. A recent study, BMNet [2], focused on enhancing the similarity metric by applying predefined rules to identify self-similarity within images. Another approach, SAFECount [3], categorized CAC algorithms into two groups, feature-based and similarity-based approaches, and proposed fusing the strengths of both to achieve superior counting results. As depicted in Figure 2c, this paper builds upon the principles of SAFECount [3] for the CAC task, improving its similarity comparison techniques and feature enhancements. The result is a CAC network with better counting performance and more robust generalization capabilities.
This paper presents three key contributions:
Introduction of ACECount: We introduce ACECount, a versatile visual object counting architecture that combines Transformer [4,5] and CNN models and features multiple attention mechanisms. ACECount utilizes local and global attention to explicitly capture features within and between image patches, and it incorporates a cross-attention mechanism to compute similarity to user-provided few-shot exemplars. Additionally, ACECount employs densely connected multi-scale regression heads to capture feature similarity at different scales. This comprehensive approach enhances its counting performance.
Cross-Dataset Generality: We thoroughly investigated ACECount's ability to generalize across different datasets. In the zero-shot scenario, we utilized two learnable tensors to replace the exemplar features as inputs. This adaptation enabled the network to disregard irrelevant input portions and operate as a single-class counting network. ACECount demonstrated remarkable performance on both crowd counting datasets and the FSC-147 dataset under zero-shot conditions, highlighting the effectiveness of our multi-scale regression head for diverse counting tasks.
Ablation Study on CAC Dataset: This study highlights the significant performance improvement achieved by our core module, the feature interaction enhancement module. We also emphasize our careful network structure design and hyperparameter selection, which collectively contribute to ACECount’s outstanding performance in FSC scenarios, establishing it as one of the top-performing solutions.
The remainder of this paper is organized as follows: in Section 2, we discuss related work, covering the research background and commonly used algorithms in both class-specific and class-agnostic counting tasks; in Section 3, we introduce the ACECount framework and elaborate on the principles and individual roles of each module within the network; in Section 4, we outline the experimental program, encompassing an introduction to the datasets, technical details, comparisons with other methods, and ablation experiments; in Section 5, we provide visualization results for the FSC-147 dataset; in Section 6, we offer a comprehensive summary, highlighting the algorithm's strengths and weaknesses, along with potential future solutions and directions.

2. Related Works

2.1. Class-Specific Object Counting

The counting of specific object classes, including crowds [6], vehicles [7], and animals [8], is a common task. Among these, crowd counting has been extensively researched. Traditional methods usually employ machine learning [9,10] for crowd counting. However, these methods encounter challenges related to target occlusion and the 2D perspective of images. Consequently, traditional methods are primarily suitable for crowd counting in sparsely populated environments.
With the advent of deep learning, deep-learning-based methods have gradually excelled in crowd counting. The existing crowd counting methods have been predominantly developed around deep learning research.
Deep-learning-based crowd counting methods can be categorized into detection-based counting, regression-based counting, and density-map-based counting. Methods such as [11,12,13] fall into the category of detection-based methods. These methods achieve crowd counting by first detecting individuals and subsequently counting them. Typically, these methods employ object detection algorithms to identify whole bodies, faces, or upper bodies of individuals, with the final count obtained by summing the number of detected instances. However, complex urban scenes or large crowds densely packed together often result in significant occlusion issues. Consequently, such methods struggle to handle occlusion problems within crowds.
To address the challenge of crowd occlusion, Lian and Li et al. proposed the RGBD-Net [14], a detection model that leverages additional depth information. However, this approach necessitates the availability of depth annotation information during training, limiting its applicability.
The regression-based approach [15,16] involves learning the mapping between features extracted from local image patches and their corresponding counts, eliminating the need for object detectors. This approach effectively handles individual occlusion and feature tracking, making it suitable for crowd counting in complex scenes. Unlike detection-based methods, regression-based methods estimate crowd density using an overall crowd description, which is not constrained by high-density scenes. As a result, regression-based methods can more effectively estimate crowd density in complex scenes without requiring explicit object boundaries or individual tracking.
Methods like [17,18,19] are density-map-based, capable of achieving high-performance crowd counting in crowded and complex scenes. Most recent studies in crowd counting have been conducted based on this method. For instance, Pham et al. [18] proposed a density-based method to regress the density map from input images. Additionally, researchers in [17,19] designed multi-column convolutional networks and applied CNNs with various convolutional kernel sizes to extract features at different scales separately, addressing the scale variance problem in crowd counting.
Nonetheless, in any case, these methods face challenges when dealing with object categories not present in the training dataset or counting multiple categories of objects.

2.2. Class-Agnostic Counting

Lu et al. were the first to propose class-agnostic counting (CAC) [20,21], introducing a generic matching network called GMN [1]. In this approach, a shared convolutional neural network (CNN) is employed to extract features from both the query and the exemplar. These features are then used to calculate similarity, which is subsequently regressed to generate the density map.
Recognizing that direct regression from connected features might lead to overfitting, recent advancements have focused on explicit similarity modeling. For instance, CFOCNet [21] employs the feature map of the exemplar as a 2D convolutional kernel, convolving the query image with this kernel, to calculate similarity.
To address the instance-scale challenge encountered in CAC, FamNet [20] proposed a model based on self-similarity matching and presented the first large CAC dataset, FSC-147. Meanwhile, SAFECount [3] utilizes support features to enhance the query feature through a feature enhancement module. This enhancement refines the extracted features and leads to a regressed density map with improved accuracy.
Additionally, BMNet [2] incorporated a similarity loss to supervise the results of similarity matching, drawing inspiration from metric learning. LOCA [22] introduced a novel module for object prototype extraction, iteratively combining shape and appearance data with image features. CounTR [23] streamlined training image generation with a two-stage process, enforcing model use of a specified sample. SPDCN [24] presented scale-prior deformable convolution, enhancing counting network efficiency by incorporating exemplar information for better semantic feature extraction. These existing algorithms demonstrate proficiency in the CAC task but still fall short of the accuracy levels attained by single-class counting algorithms in the class-specific counting (CSC) task.

3. Proposed Method

Overview. Few-shot counting (FSC) aims to determine the number of objects in a query image that belong to specific classes. Unlike class-specific counting (CSC), FSC’s primary goal is to learn the capability to count new classes based on existing classes, rather than solely counting objects within predefined classes. In CSC, training data only consist of pixel-level point annotations for object classes. In FSC, object classes are split into two parts: one for training and the other for testing.
We denote the training classes as $C_t$ and the inference classes as $C_i$. $C_t$ and $C_i$ encompass different semantic classes of counted objects, with $C_t \cap C_i = \emptyset$. During network training, for each query image from $C_t$, represented as $X_i^t \in \mathbb{R}^{H \times W \times 3}$, we provide a certain number of box coordinates to represent the exemplars in the image. These exemplars are denoted as $y_i^t = \{b_i^t\}_K$, where $K \in \{0, 1, 2, 3, \dots\}$, and are used to calculate the number of objects of the corresponding categories in the image. Additionally, a ground-truth density map, denoted as $D_i^t \in \mathbb{R}^{H \times W \times 1}$, is provided for supervised network training.
For query images from $C_i$, represented as $X_j^i \in \mathbb{R}^{H \times W \times 3}$, only a certain number of exemplars, i.e., $y_j^i = \{b_j^i\}_K$ with $K \in \{0, 1, 2, 3, \dots\}$, are provided, to enable the network to estimate the number of objects of the corresponding categories. FSC's objective is to use a minimal number of exemplars to count the objects belonging to $C_i$. When the user specifies the number of exemplars in a query image as $K$, the task is referred to as K-shot FSC.
Framework Architecture. The ACECount framework comprises six components: the query encoder, the exemplar encoder, the pyramid feature aggregation module (PFA), the similarity comparison module (SCM), the feature attention enhancement module (FAEM), and a multi-scale dense counting regression head (MDCR). The SCM and FAEM collectively form the core part of the framework, known as the feature interaction enhancement module (FIEM).
In broad strokes, the query encoder and exemplar encoder are responsible for extracting features from input queries and exemplars, respectively. The PFA aggregates low-level and high-level features. The SCM integrates image regions with arbitrary given regions through cross-attention, facilitating the comparison of image regions to any given shots and the calculation of their similarity. The FAEM employs spatial and channel attention to enhance the input feature map, augmenting the overall feature representation. The MDCR utilizes densely connected dilated convolution, to expand the receptive fields of the regression head, enhancing multi-scale object recognition and extending network generality for counting different categories.
The network’s versatility for counting different categories is extended, as illustrated in Figure 3.
The pipeline of ACECount involves several key steps. Queries and exemplars are processed by two different encoders, a Twins-based encoder and a CNN-based encoder, respectively. Twins reshapes the sequence of vectors back into a 2D feature map, $F_i \in \mathbb{R}^{C_i \times H_i \times W_i}$, at each downsampling stage. These features from the different stages are sent to the pyramid feature aggregation module (PFA) for feature fusion, yielding a feature map containing global information, $F_Q \in \mathbb{R}^{C_Q \times H_Q \times W_Q}$. Meanwhile, the CNN-based encoder transforms the exemplar into exemplar features, $F_E \in \mathbb{R}^{M \times D}$, which serve as key and value (K and V), and support features, $F_S \in \mathbb{R}^{C_S \times H_S \times W_S}$. The exemplar features are then fed into the similarity comparison module (SCM) together with the query feature $F_Q$, producing the similarity map $R$. In the feature attention enhancement module (FAEM), the weights of similar features in $R$ are adjusted using the support feature, $F_S$, to obtain the reshaped similarity map, $R'$. This map is then used to enhance the original feature, $F_Q$, resulting in the enhanced feature, $F_R$. Finally, $F_R$ is input into the multi-scale dense counting regression head (MDCR), to obtain the density map and counting results. In summary:
$y_i = \Phi_{\mathrm{MDCR}}\Big(\Phi_{\mathrm{SCM}}\big(\Phi_{\mathrm{QE}}(X_i),\ \Phi_{\mathrm{PFA}}\big(\textstyle\bigcup_{l=1}^{n}\Phi_{\mathrm{EE}}(S_i^{l,k})\big),\ \Phi_{\mathrm{QE}}^{\,n-1}(X_i)\big)\Big), \quad k \in \{0, 1, \dots, K\},$
where k denotes the number of user-given shots, and i denotes the number of query images. In the next section, we will introduce these six modules in detail.

3.1. Visual Encoder

Our model’s encoder comprises two components: one following the Twins structure [25], and the other following the CNN structure. These two distinct encoders process two types of input images derived from the same source image: the former is responsible for extracting features from the query images, while the latter focuses on extracting features from the exemplars.
We represent the input image as $I_Q \in \mathbb{R}^{3 \times H \times W}$, where 3, H, and W denote its channel size, height, and width, respectively. The input exemplar is denoted as $I_E \in \mathbb{R}^{N \times 3 \times K \times K}$, where N, 3, K, and K represent its shot number, channel size, height, and width, respectively. In the following sections, we describe these two components separately.

3.1.1. Query Encoder

As transformer technology [5] continues to advance in natural language processing, an increasing number of researchers are exploring its applications in computer vision. The Vision Transformer (ViT) [26] was introduced to bring the self-attention mechanism to image processing at a manageable computational cost. Since its introduction, ViT has gradually extended its reach to various computer vision tasks, achieving outstanding results. ViT divides an image into fixed-size patches and embeds them into a sequence of 1D vectors, which serves as input to the self-attention mechanism. These sequences are then linearly transformed to produce the Q, K, and V matrices used in the self-attention operation. The self-attention calculation proceeds as follows:
$\mathrm{Att}(Q, K, V) = S\!\left(\dfrac{QK^{T}}{\sqrt{d}}\right)V,$
where $S(\cdot)$ denotes the softmax function, $Q$ denotes the query, $K$ denotes the key, $V$ denotes the value, and $d$ denotes the length of the embedding vector.
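For concreteness, the scaled dot-product operation above can be sketched in a few lines of PyTorch; the tensor shapes in the toy usage are illustrative assumptions, not the dimensions used in ACECount.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, num_tokens, d); d is the per-token embedding length
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # QK^T / sqrt(d)
    weights = F.softmax(scores, dim=-1)           # S(.) in the equation above
    return weights @ v                            # weighted sum of the values

# toy usage: 196 patch tokens of dimension 512; Q = K = V (projections omitted)
x = torch.randn(1, 196, 512)
out = scaled_dot_product_attention(x, x, x)       # (1, 196, 512)
```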
We utilize Twins-SVT [25], a pyramidal ViT variant, as the query encoder. Twins introduces a novel attention mechanism that alternates locally grouped self-attention (LSA) and global sub-sampled attention (GSA), stacking the two in multiple blocks. This configuration is referred to as spatially separable self-attention (SSSA).
To elaborate, in a given layer $l$ of Twins, the feature maps are first divided into sub-windows, each measuring $k_1 \times k_2$ pixels, and projected into tokens using a multi-layer perceptron (MLP). At this stage, information exchange only occurs within each sub-window, and the resulting small receptive fields could otherwise limit the network's performance. GSA therefore facilitates information flow between different sub-windows, overcoming the limitation of purely local processing. In the GSA stage, each sub-window produces a single representative through convolutional operations, aggregating the critical information from its corresponding sub-window. A self-attention calculation is then applied across the representatives of all the sub-windows.
This approach enables sub-windows to communicate with one another through their representatives, capturing global features. Formally, we represent SSSA as follows:
$\hat{Z}^{l} = \mathrm{LSA}\big(\mathrm{LayerNorm}(Z^{l-1})\big) + Z^{l-1}, \quad Z^{l} = \mathrm{MLP}\big(\mathrm{LayerNorm}(\hat{Z}^{l})\big) + \hat{Z}^{l}, \quad \tilde{Z}^{l} = \mathrm{GSA}\big(\mathrm{LayerNorm}(Z^{l})\big) + Z^{l}, \quad Z^{l} = \mathrm{MLP}\big(\mathrm{LayerNorm}(\tilde{Z}^{l})\big) + \tilde{Z}^{l},$
where $\mathrm{LSA}(\cdot)$ represents the locally grouped self-attention within each sub-window, $\mathrm{GSA}(\cdot)$ signifies the global sub-sampled attention obtained by interacting with the representative keys (generated by the sub-sampling function) from each sub-window, and finally, $Z \in \mathbb{R}^{k_1 \times k_2}$.
Twins capture fine-grained and short-distance information in the image through the attention operation within the specified sub-window, using LSA. They also capture the global and long-distance information of the image by fusing the attention from various sub-windows through GSA. This mode effectively captures image features while being less computationally and parametrically demanding than the standard self-attention operation, making it easy to deploy.
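A simplified PyTorch sketch of one SSSA block is given below. It mirrors the LSA/GSA equations with residual connections, but the window partitioning, the sub-sampling convolution, and all channel widths are simplifying assumptions rather than the official Twins-SVT implementation.

```python
import torch
import torch.nn as nn

class SSSABlock(nn.Module):
    """Simplified LSA + GSA block following the SSSA equations (not the official Twins-SVT code)."""
    def __init__(self, dim=256, heads=8, window=7):
        super().__init__()
        self.window = window
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.norm3, self.norm4 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.lsa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gsa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp1 = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.mlp2 = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        # one representative token per sub-window, produced by a strided convolution
        self.sub_sample = nn.Conv2d(dim, dim, kernel_size=window, stride=window)

    def forward(self, x):                      # x: (B, H*W, C), square map, H divisible by `window`
        B, N, C = x.shape
        H = W = int(N ** 0.5)
        w = self.window
        # LSA: self-attention restricted to k1 x k2 sub-windows
        z = self.norm1(x).reshape(B, H // w, w, W // w, w, C)
        z = z.permute(0, 1, 3, 2, 4, 5).reshape(B * (H // w) * (W // w), w * w, C)
        z, _ = self.lsa(z, z, z)
        z = z.reshape(B, H // w, W // w, w, w, C).permute(0, 1, 3, 2, 4, 5).reshape(B, N, C)
        x = x + z
        x = x + self.mlp1(self.norm2(x))
        # GSA: every pixel attends to one representative token per sub-window
        y = self.norm3(x)
        rep = self.sub_sample(y.transpose(1, 2).reshape(B, C, H, W)).flatten(2).transpose(1, 2)
        z, _ = self.gsa(y, rep, rep)
        x = x + z
        x = x + self.mlp2(self.norm4(x))
        return x

# toy usage: a 28 x 28 token map with 256 channels
tokens = torch.randn(2, 28 * 28, 256)
print(SSSABlock()(tokens).shape)               # torch.Size([2, 784, 256])
```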
We incorporate Twins-SVT Large with pre-trained weights from ImageNet, consisting of four stages with 11 layers of GSA and LSA. It is worth noting that the input image undergoes downsampling to $\tfrac{1}{32}$ of its original size, while the channel dimension expands to 1024. From this architecture, we extract the output feature maps $F_i$ from stages 1, 2, and 3, to be used as inputs for the PFA component. The resulting output is represented as follows:
$F_i = \Phi_{\mathrm{QE}}(X_i) \in \mathbb{R}^{M \times D}.$

3.1.2. Exemplars Encoder

In the exemplar encoder, when performing K-shot counting (K = 0, 1, 2, 3), we employ a standard convolutional structure for feature extraction from the exemplars. Specifically, the exemplar encoder comprises four 3 × 3 convolutional layers with ReLU activation and average pooling. Because the exemplar images are resized to fixed $3 \times K \times K$ dimensions, the object scale remains nearly unchanged. Consequently, we refrain from applying additional operations to the exemplars and instead focus on extracting features and mapping them onto high-dimensional feature maps using a few shallow CNN layers. The output feature map $F_S$ from the penultimate convolutional layer serves as the enhanced support feature input to the FAEM. In zero-shot counting (i.e., when no exemplar is provided), to meet the input requirements of the SCM part (K, Q, and V), we incorporate a conditional mechanism that skips this step and replaces the feature vector from the feature extractor with a learnable token, which then serves as the key and value inputs to the SCM:
$F_E = \Phi_{\mathrm{EE}}(S_i^k) \in \mathbb{R}^{M \times D}.$
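A minimal sketch of such an exemplar encoder is shown below, assuming four 3 × 3 conv + ReLU + average-pooling blocks as described; the channel widths and the way $F_E$ is pooled into tokens are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ExemplarEncoder(nn.Module):
    """Shallow CNN exemplar encoder sketch: four 3x3 conv layers with ReLU and average pooling.
    Channel widths are illustrative assumptions, not the paper's exact configuration."""
    def __init__(self, out_dim=512):
        super().__init__()
        chans = [3, 64, 128, 256, out_dim]
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(chans[i], chans[i + 1], 3, padding=1),
                          nn.ReLU(inplace=True),
                          nn.AvgPool2d(2))
            for i in range(4))

    def forward(self, exemplar):              # exemplar: (N_shots, 3, K, K), e.g. K = 64
        feats = []
        x = exemplar
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        f_support = feats[-2]                 # F_S: penultimate feature map, fed to the FAEM
        f_exemplar = x.flatten(2).mean(-1)    # F_E: (N_shots, D) tokens used as key/value in the SCM
        return f_exemplar, f_support
```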

3.2. Pyramid Feature Aggregation Module

PFA accepts the feature maps $F_1$, $F_2$, and $F_3$ from the final three stages of the Twins network. These maps are downsampled to $\tfrac{1}{8}$, $\tfrac{1}{16}$, and $\tfrac{1}{32}$ of the original image size, respectively. Additionally, PFA incorporates a top-down feature fusion operation.
As the Twins network becomes deeper, the downsampling factor increases, resulting in high-level features with richer semantic information but at the cost of losing fine-grained details present in low-level features. Simply upsampling with interpolation cannot effectively recover these lost details, including positional, textural, and color information. This limitation makes it challenging to accurately capture the target’s boundary and location solely based on the feature map output of the deep network. This issue is particularly problematic in tasks involving dense object counting.
Numerous approaches have been developed to address this challenge. For instance, the feature pyramid network (FPN) [27] was introduced, to enhance localization accuracy by integrating high-level semantic and low-level features through top-down pathways. PAN [27] further extended FPN, with a bottom-up pathway, to reduce the information path lengths for both low-level and top-level features, facilitating the propagation of accurate signals in low-level features. These architectures have found widespread use in object detection tasks (e.g., the YOLO series [28,29,30,31]), demonstrating the efficacy of multi-scale feature fusion in target detection. Consequently, many object counting approaches have incorporated pyramidal feature fusion modules. However, most of these methods merely perform basic feature concatenation between high-level and low-level features and do not fully exploit the multi-scale information derived from the encoder. This limitation hinders their effectiveness in counting small objects.
This design draws inspiration from the success of YOLOv6 v3.0 [28] in object detection. We have devised a multi-scale aggregation mechanism for integrating information across different scales, grounded in the use of SimSPPF [30] and RepBlock [30] as foundational components. The SimSPPF structure is illustrated in Figure 4b; it is an enhanced iteration of the SPP [29] module. It incorporates the concept of a spatial pyramid, seamlessly merging local and global features and substantially enhancing the feature map's information representation capacity. This effectively mitigates performance degradation, particularly when dealing with substantial size variations in target counting tasks.
Furthermore, RepBlock is employed within the network, featuring a multi-branch topology during training. During inference, this multi-branch structure is seamlessly fused into a single 3 × 3 convolutional layer. The architectural arrangement is illustrated in Figure 4a. Specifically, this module takes as its input the feature maps, $F_i$, from the query encoder, each at a different scale. To prevent excessive computational overhead, we reduce the channel dimensions of $F_i$ using a 1 × 1 convolutional layer. $F_3$ first enters the SimSPPF module, where the feature maps pass through pooling layers of varying kernel sizes before being amalgamated. This step mitigates image distortion stemming from cropping and scaling operations during data preprocessing, a critical consideration for small object localization and recognition.
The next step involves upsampling the feature maps, using a 3 × 3 convolutional kernel and bilinear interpolation, to maintain their spatial dimensions consistent with those of $F_2$. Subsequently, these up-sampled feature maps are concatenated in the channel dimension. Following the concatenation, the PFA passes through a sequence of layers, including RepBlock, convolutional layers (conv), and upsampling layers, sequentially. Finally, it is combined with the low-level feature, $F_1$. This process culminates in the generation of the final output of the PFA component, denoted as $F_Q$. To summarize briefly:
$F_Q = \mathrm{Concat}\big(F_i^{(s)}, F_{i-1}^{(s)}, \dots, F_{l+1}^{\delta(s)}\big), \quad i \in \{1, 2, 3\}.$
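The sketch below illustrates one way the top-down PFA fusion described above could be wired in PyTorch; the SimSPPF stand-in, the use of plain 3 × 3 convolutions in place of RepBlock, and all channel widths are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimSPPF(nn.Module):
    """Simplified SPPF-style block: serial max pooling, then concatenation (a sketch, not the
    exact SimSPPF from YOLOv6)."""
    def __init__(self, ch, k=5):
        super().__init__()
        self.reduce = nn.Conv2d(ch, ch // 2, 1)
        self.pool = nn.MaxPool2d(k, stride=1, padding=k // 2)
        self.fuse = nn.Conv2d(ch * 2, ch, 1)

    def forward(self, x):
        x = self.reduce(x)
        p1 = self.pool(x)
        p2 = self.pool(p1)
        p3 = self.pool(p2)
        return self.fuse(torch.cat([x, p1, p2, p3], dim=1))

class PFA(nn.Module):
    """Top-down aggregation of F1 (1/8), F2 (1/16), F3 (1/32); channel numbers are assumptions."""
    def __init__(self, c1, c2, c3, out_ch=256):
        super().__init__()
        self.r1, self.r2, self.r3 = (nn.Conv2d(c, out_ch, 1) for c in (c1, c2, c3))
        self.sppf = SimSPPF(out_ch)
        self.mix2 = nn.Conv2d(out_ch * 2, out_ch, 3, padding=1)   # stand-in for RepBlock
        self.mix1 = nn.Conv2d(out_ch * 2, out_ch, 3, padding=1)

    def forward(self, f1, f2, f3):
        p3 = self.sppf(self.r3(f3))
        p2 = self.mix2(torch.cat([self.r2(f2),
                                  F.interpolate(p3, size=f2.shape[-2:], mode='bilinear')], dim=1))
        p1 = self.mix1(torch.cat([self.r1(f1),
                                  F.interpolate(p2, size=f1.shape[-2:], mode='bilinear')], dim=1))
        return p1                                                  # F_Q at 1/8 resolution
```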

3.3. Feature Interaction Enhancement Module

As shown in Figure 5a, FSC is typically approached through two mainstream methods: the feature-based approach and the similarity-based approach. In a typical pipeline, query and exemplar features are compared pointwise to obtain a feature similarity map, which is then used to regress a density map via a trained regression head. Representative methods include GMN [1], CFOCNet [21], FamNet [20], and SAFECount [3].
However, it is important to note that the information contained in similarity maps is often much less comprehensive compared to that in raw features. This limitation makes it challenging to obtain detailed information about the target, hindering the network’s ability to distinguish precise object boundaries. Moreover, the feature-based representation method may lose spatial information from the exemplars, due to the presence of pooling layers, which can adversely impact localization.
In this paper, we propose a feature interaction enhancement module that seeks to combine the strengths of both approaches. Specifically, we enhance the original features of the query image, $F_Q$, by adjusting the feature weights based on the similarity information and the support features, $F_S$. These enhanced features are then used for density map regression. This module comprises the following two components.

3.3.1. Similarity Comparison Module

Lu et al. proposed in GMN [1] that the counting problem can be transformed into a matching problem by leveraging the self-similarity property inherent in images, which arises naturally in object counting tasks. This approach allows arbitrary objects to be counted in a class-agnostic manner.
For this paper, we employed a stack of standard transformer decoder layers to assess the self-similarity of images. Specifically, we treated the query image features as queries, and we used an MLP to transform the exemplar features into two distinct projections: key and value. The cross-attention mechanism in the transformer decoder computed the similarity between the queries and keys, comparing arbitrary regions in the image to the user-defined number of exemplar shots. We used a standard two-layer transformer decoder. Each pixel of the $\tfrac{1}{8}$-scale feature map from the input image was projected into a token sequence, which entered an embedding layer with a hidden dimension of 512. After passing through the SCM, the vector sequences were converted back into 2D feature maps:
$F_{\mathrm{SCM}} = \Phi_{\mathrm{SCM}}\!\left(\dfrac{(F_{\mathrm{PFA}}W^{Q})(F_{\mathrm{CNN}}W^{K})^{T}}{\sqrt{d}}\,(F_{\mathrm{CNN}}W^{V})\right) \in \mathbb{R}^{M \times D}.$
By comparing queries and keys in the similarity space, the model automatically selects the appropriate information from the features and, with the help of the features of the few shots given by the user, computes the similarity score $R$ for the user's regions of interest.
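A compact sketch of this cross-attention comparison is given below, using PyTorch's built-in transformer decoder as a stand-in for the exact SCM implementation; the hidden size of 512 follows the text, the remaining choices are assumptions.

```python
import torch
import torch.nn as nn

class SCM(nn.Module):
    """Sketch of the similarity comparison module: a 2-layer transformer decoder in which the
    flattened query-image features attend to the exemplar tokens."""
    def __init__(self, dim=512, heads=8, layers=2):
        super().__init__()
        decoder_layer = nn.TransformerDecoderLayer(d_model=dim, nhead=heads,
                                                   dim_feedforward=2048, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=layers)

    def forward(self, f_q, f_e):
        # f_q: (B, C, H, W) query features at 1/8 scale; f_e: (B, M, C) exemplar tokens
        B, C, H, W = f_q.shape
        tokens = f_q.flatten(2).transpose(1, 2)          # (B, H*W, C): one token per pixel
        sim = self.decoder(tgt=tokens, memory=f_e)       # cross-attention: Q from image, K/V from exemplars
        return sim.transpose(1, 2).reshape(B, C, H, W)   # back to a 2D similarity map R
```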

3.3.2. Feature Attention Enhancement Module

Channel attention involves compressing the feature map into a 1D representation along the spatial dimension and performing attention operations along its channel dimension. This module enables the model to prioritize regions of interest while suppressing attention in non-interesting regions. For instance, CBAM [32] employs a large-scale convolutional kernel to compute an attention map across both channel and spatial dimensions. It then multiplies the feature map with this attention map, to adaptively optimize the features. This approach allows for capturing long-range dependencies along one spatial direction while preserving precise location information along the other. The resulting feature maps are transformed into direction-aware and position-sensitive attention maps, which can be complementarily applied to the input feature maps, to enhance the representation of the object of interest.
More specifically, given the features $F_Q$, $F_E$, and $F_S$ obtained from the query image and exemplars, along with the feature similarity map $R$ generated by the SCM, our proposed FAEM aims to enhance the original features. FAEM initially processes the support features, $F_S$, through the GAM module. Leveraging spatial and channel attention mechanisms, the network emphasizes the salient objects in the image (i.e., the exemplars). The feature similarity map $R$ is then dimensionally reduced, using a 1 × 1 convolutional layer, to reduce computational complexity. Subsequently, the feature map attention region is reshaped, and the coordinate attention module derives the reshaped similarity map, denoted as $R'$. Following this, $R'$ is combined with the query feature map, $F_Q$, to enhance the relevant feature dimensions of the k-shot image provided by the user. $F_Q$, $F_S$, and $F_E$ are input into FAEM, and the attention distribution is adjusted, based on the weights of $F_Q$ in the spatial and channel dimensions, utilizing the two attention modules, CA and GAM, respectively. This component effectively blends the CNN and the attention mechanism, to make the network more focused on our task objectives.
The realization of this part can be divided into two steps:
Similar attention enhancement: In this step, we aggregate the support features, $F_S$, by incorporating coordinate attention (CA) [33] and the global attention mechanism (GAM) [34]. The structures are illustrated in Figure 5b,c. Essentially, relative to the exemplar features $F_E$ provided by the user, regions with higher similarity in $F_S$ should be assigned greater weights in both the spatial and channel dimensions. The spatial and channel attention mechanisms accomplish this weighted aggregation.
Original image feature enhancement: In this step, we incorporate the enhanced similarity map, $R'$, into the original features of the query image, $F_Q$, using a shallow convolutional layer. This process increases the weights of the feature-map regions that are close to the target class in the exemplars. The enhancement is accomplished through a straightforward matrix addition before entering the convolutional layer.
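The two steps above can be summarized in the following sketch, in which generic channel and spatial attention blocks stand in for GAM and CA; only the data flow (reweight $R$ with support-feature attention, then add $R'$ onto $F_Q$ before a shallow convolution) follows the description, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class FAEM(nn.Module):
    """Sketch of the feature attention enhancement module; generic attention blocks replace CA/GAM."""
    def __init__(self, dim=256):
        super().__init__()
        self.reduce = nn.Conv2d(dim, dim, 1)              # 1x1 reduction of the similarity map R
        self.channel_att = nn.Sequential(                 # channel gate derived from F_S (GAM/CA stand-in)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim, dim // 8, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim // 8, dim, 1), nn.Sigmoid())
        self.spatial_att = nn.Sequential(nn.Conv2d(dim, 1, 7, padding=3), nn.Sigmoid())
        self.fuse = nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, f_q, f_s, r):
        # f_q (query feature), f_s (support feature), r (SCM similarity map): all assumed to share `dim` channels
        r = self.reduce(r) * self.channel_att(f_s)    # step 1a: channel reweighting using F_S
        r_prime = r * self.spatial_att(r)             # step 1b: spatial reweighting -> R'
        return self.fuse(f_q + r_prime)               # step 2: add R' to F_Q, then a shallow conv layer
```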

3.4. Multi-Scale Dense Counting Regression Head

We have obtained global and multi-scale information through PFA and enhanced the features corresponding to exemplars using FAEM, which employs feature similarity and attention mechanisms. Next, we require a robust regression head, to accurately estimate the density map. The structure of this regression head is depicted in Figure 6a.
In general, a standard CNN followed by a pooling layer can be employed for counting tasks, and this architecture is often stacked multiple times, to generate a density map in the decoder part (e.g., SAFECount). However, this approach continuously reduces the spatial resolution of the feature map, leading to the loss of detailed information and a diminished ability to detect small targets.
To make better use of the information within the feature map, it is essential to expand the receptive field of the regression head. Directly using larger fixed-size convolutional kernels would significantly increase the computational demands of the network. Dilated convolutional (DConv) layers [35] provide an excellent way to enlarge the receptive field without increasing computational complexity. A common practice is to stack DConv layers together [35]. However, this can lead to the gridding effect, where, after passing through multiple convolutional layers, some pixels in the feature map are missed in subsequent convolutions, as illustrated in Figure 7b. Therefore, the dilation rates of DConv layers must be carefully chosen, to ensure that the convolution captures a larger receptive field while avoiding the gridding effect and the loss of long-range information.
In ACECount, ensuring the density map's resolution requires a final feature map whose size is $\tfrac{1}{8}$ of the input image's size. When dealing with class-agnostic counting, where target scale variations can be substantial and a diverse range of targets is encountered, a large receptive field becomes essential. To address this, we employ a dense ASPP structure with the smallest feasible dilation coefficients. Specifically, the multi-scale dense counting regression head (MDCR) module consists of three parallel DConv columns, denoted $C_i$ (i = 1, 2, 3), accompanied by a 1 × 1 convolutional branch functioning as a shortcut. The output of each of these dilated convolutions is passed through subsequent convolutional layers before being concatenated and processed by DConvs with varying dilation rates (R = 1, 2, 3). The overall structure of this module can be simplified to resemble Figure 6b, equivalent to a sequence of cascaded dilated convolutions. Subsequently, we reduce the feature map's channel count through two straightforward convolutional layers, resulting in a one-channel density heatmap. The number of targets is obtained by simply summing the heatmap values. The equivalent kernel of this spatial processing component is defined as follows:
$K = K_{3, d=1} + K_{3, d=2} + K_{3, d=3},$
where the subscript 3 denotes the convolution kernel size and $d$ denotes the dilation rate.
We denote this part as follows:
$y_i = \Phi_{\mathrm{MDCR}}(C_1, C_2, C_3, S) \in \mathbb{R}^{1 \times M \times D}.$
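A sketch of such a regression head is shown below: three parallel 3 × 3 dilated-convolution columns (dilation 1, 2, 3) plus a 1 × 1 shortcut, concatenated and reduced to a one-channel density map whose sum gives the count. The channel widths and the exact dense-connection pattern are simplifying assumptions.

```python
import torch
import torch.nn as nn

class MDCR(nn.Module):
    """Sketch of the multi-scale dense counting regression head."""
    def __init__(self, in_ch=256, mid=128):
        super().__init__()
        self.columns = nn.ModuleList(
            nn.Sequential(nn.Conv2d(in_ch, mid, 3, padding=d, dilation=d), nn.ReLU(inplace=True))
            for d in (1, 2, 3))
        self.shortcut = nn.Conv2d(in_ch, mid, 1)
        self.head = nn.Sequential(
            nn.Conv2d(mid * 4, mid, 1), nn.ReLU(inplace=True),    # 1x1 layer to curb channel growth
            nn.Conv2d(mid, 1, 1), nn.ReLU(inplace=True))          # one-channel density heatmap

    def forward(self, f_r):
        feats = [col(f_r) for col in self.columns] + [self.shortcut(f_r)]
        density = self.head(torch.cat(feats, dim=1))              # (B, 1, H/8, W/8)
        count = density.sum(dim=(1, 2, 3))                        # predicted count per image
        return density, count
```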

3.5. Zero-Shot Counting

In the context of zero-shot counting, where the training data either lack exemplar annotations or do not utilize them, we replace the exemplar features, $F_E$, and the support features, $F_S$, with two learnable tensors, $T_1$ and $T_2$. Specifically, $T_1$ is used as both the key and value inputs within the SCM component, while $T_2$ serves as the support-feature input for the FAEM. This approach enables the network to adapt seamlessly to different values of k in k-shot counting scenarios, thereby improving its performance in zero-shot counting tasks.
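A minimal sketch of this substitution is given below; the shapes of the two learnable tensors are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ZeroShotTokens(nn.Module):
    """Sketch of the zero-shot substitution: two learnable tensors stand in for the exemplar
    features F_E and the support features F_S (shapes are assumptions)."""
    def __init__(self, num_tokens=1, dim=512, support_ch=256, support_hw=8):
        super().__init__()
        self.t1 = nn.Parameter(torch.randn(1, num_tokens, dim))                     # replaces F_E (SCM key/value)
        self.t2 = nn.Parameter(torch.randn(1, support_ch, support_hw, support_hw))  # replaces F_S (FAEM input)

    def forward(self, batch_size):
        return self.t1.expand(batch_size, -1, -1), self.t2.expand(batch_size, -1, -1, -1)
```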

3.6. Loss Function

We use a loss function that is common in crowd counting, taken from DM-Count [16]; it is a weighted sum of a count loss, an optimal transport (OT) loss, and a total variation (TV) loss. Count loss: let $\|\cdot\|_1$ denote the $L_1$ norm of a vector, and let $\|z\|_1$ and $\|\hat{z}\|_1$ denote the predicted and ground-truth counts, respectively; the count loss requires these two values to be as close as possible. It is defined simply as the absolute value of their difference:
$\ell_c(z, \hat{z}) = \big|\, \|z\|_1 - \|\hat{z}\|_1 \,\big|.$
Optimal transport loss: $z$ and $\hat{z}$ are both unnormalized density functions, which can be converted into probability density functions by dividing each by its respective sum. The OT loss measures the similarity between the two density maps and, at the same time, provides an effective gradient for training the network to minimize the distribution gap between the predicted density map and the ground truth. It is defined as follows:
$\ell_{OT}(z, \hat{z}) = \mathcal{W}\!\left(\dfrac{z}{\|z\|_1}, \dfrac{\hat{z}}{\|\hat{z}\|_1}\right) = \left\langle \alpha^{*}, \dfrac{z}{\|z\|_1} \right\rangle + \left\langle \beta^{*}, \dfrac{\hat{z}}{\|\hat{z}\|_1} \right\rangle.$
Total variation loss: Wang et al. (2020) [16] pointed out that the OT loss approximates dense target regions well but produces inaccurate results for regions with low target density. The TV loss is therefore used to stabilize the supervision signal and increase the stability of the training process. It is defined as follows:
$\ell_{TV}(z, \hat{z}) = \left\| \dfrac{z}{\|z\|_1} - \dfrac{\hat{z}}{\|\hat{z}\|_1} \right\|_{TV} = \dfrac{1}{2} \left\| \dfrac{z}{\|z\|_1} - \dfrac{\hat{z}}{\|\hat{z}\|_1} \right\|_1.$
Finally, the DM loss function is a weighted sum of the counting loss, OT loss, and TV loss, which is expressed as
$\ell_{dm} = \ell_{c}(z, \hat{z}) + \lambda_{1}\,\ell_{OT}(z, \hat{z}) + \lambda_{2}\,\ell_{TV}(z, \hat{z}),$
where $z$ and $\hat{z}$ denote the estimated and ground-truth density maps, respectively, and $\lambda_1$ and $\lambda_2$ are adjustable hyperparameters.
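The count and TV terms are simple enough to sketch directly; the OT term requires the Sinkhorn-based solver from DM-Count and is only referenced here as an injected callable. The weights lam1 and lam2 are placeholders, not the paper's tuned values.

```python
import torch

def count_loss(pred, gt):
    # | ||z||_1 - ||z_hat||_1 | : absolute difference of the total counts (per-image tensors assumed)
    return (pred.sum() - gt.sum()).abs()

def tv_loss(pred, gt, eps=1e-8):
    # 0.5 * || z/||z||_1 - z_hat/||z_hat||_1 ||_1 on the normalized density maps
    p = pred / (pred.sum() + eps)
    g = gt / (gt.sum() + eps)
    return 0.5 * (p - g).abs().sum()

def dm_loss(pred, gt, ot_loss_fn, lam1=0.1, lam2=0.01):
    """Weighted sum of count, OT, and TV losses; `ot_loss_fn` stands in for the DM-Count OT term."""
    return count_loss(pred, gt) + lam1 * ot_loss_fn(pred, gt) + lam2 * tv_loss(pred, gt)
```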

4. Experiments

4.1. Dataset

We conducted experiments with our framework on five benchmark datasets: FSC-147 [20], CARPK [34], ShanghaiTech Part A [17], ShanghaiTech Part B [17], and UCF-QNRF [16]. These experiments included class-agnostic counting on FSC-147, class-specific crowd counting on ShanghaiTech and UCF-QNRF, and class-specific car counting on CARPK. These datasets vary in terms of counting classes, image resolution, number of objects, object density, and color space.
For the FSC-147 dataset during training, we used all the bounding box annotation information as exemplars in the 3-shot case. In the i-shot case (where i = 1, 2), i boxes were randomly selected from the three boxes as the exemplar input.
The ShanghaiTech and UCF-QNRF datasets do not provide bounding box annotations. In these cases, we trained in a zero-shot setting, using learnable tensors instead of exemplar input.
In the case of CARPK, which provides bounding box annotations, we obtained the point annotation information by computing the centroid coordinates of the bounding boxes, to generate the true value density map. Exemplars were obtained by randomly selecting the bounding boxes, and the remaining settings were kept consistent with the training of the FSC-147 dataset.
FSC-147 is one of the few available datasets for few-shot counting (FSC). It features a multi-category setup with 147 classes and 6135 images. Each image includes three bounding boxes representing exemplar instances of the object category to be counted. The dataset is split as follows: the training set comprises 89 classes and 3659 images, while the validation and test sets contain 1286 and 1190 images, respectively, drawn from additional disjoint sets of 29 classes each. The object count per image ranges from 7 to 3701, averaging 56 per image, and images containing over 1000 objects are relatively rare. The FSC-147 dataset presents two primary challenges. First, most prior algorithms were designed for counting a specific target class, whereas FSC-147 encompasses diverse categories and necessitates the ability to handle unseen categories during inference. Second, the dataset spans various scenarios, demanding algorithms to effectively filter out irrelevant context interference.
The ShanghaiTech dataset comprises two parts, A and B. Part A includes images collected from the internet, exhibiting a wide range of population density variations. The data distribution for Part A consists of 300 images in the training set and 128 images in the test set. By contrast, Part B contains images captured by surveillance cameras on Shanghai streets, featuring significant intrascene scale and density variations. Part B’s data distribution comprises 400 images in the training set and 316 images in the test set.
UCF-QNRF images were sourced from various websites and exhibit diversity in scenes and sizes. These images primarily contain objects at a small scale. The dataset consists of a total of 1535 images, with 1201 in the training set and 334 in the test set. No validation set is specified.
CARPK serves as a class-specific car counting benchmark commonly used to assess the generalizability of FSC network datasets. It includes 1448 images of cars captured by drones in four different parking lots, encompassing a total of 90,000 cars.

4.2. Implementation Detail

For data augmentation, we uniformly applied random cropping to each image. To be precise, the FSC-147 and ShanghaiTech images were randomly cropped to 256 × 256 pixels, while images from UCF-QNRF were cropped to 512 × 512 pixels. Randomized horizontal flipping operations were also performed on the datasets above.
In the network’s MDCR, the significant increase in feature map channels due to dense connectivity required strategies to manage model size and computational load. To tackle this, we used multiple 1 × 1 convolutional layers to reduce the number of feature channels in the intermediate stage, thereby effectively reducing the overall model’s parameter count.
During training, we froze the weights of the query encoder part of the network and only updated the parameters of the remaining components. After 50 epochs of training, the weights of the encoder were unfrozen. This strategy preserved the generic semantic information obtained from pre-training on ImageNet, which was loaded into the encoder part of the network. We employed the AdamW [36] optimizer with a batch size of 16, a well-suited choice for transformer-based models. The initial learning rate was set to $1 \times 10^{-5}$, and $L_2$ regularization of 0.0001 was employed to prevent overfitting. On the class-specific counting datasets, we initialized the network parameters with weights trained on the FSC-147 dataset and subsequently fine-tuned them on the corresponding CSC dataset, while keeping the weights of the backbone frozen.
We conducted our experiments within the PyTorch framework, employing an NVIDIA GeForce RTX 3090 environment.
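A sketch of this schedule is given below, assuming the model exposes a query_encoder attribute for the Twins backbone and that the L2 regularization is realized as AdamW weight decay.

```python
import torch

def configure_training(model):
    """Freeze the pre-trained backbone and build the optimizer (lr and weight decay from the text)."""
    for p in model.query_encoder.parameters():   # `query_encoder` is an assumed attribute name
        p.requires_grad = False
    return torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=1e-4)

def maybe_unfreeze(model, epoch):
    """Unfreeze the encoder weights after 50 epochs, as described in the training schedule."""
    if epoch == 50:
        for p in model.query_encoder.parameters():
            p.requires_grad = True
```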

4.3. Evaluation Metrics

Following previous work in object counting, we adopted two commonly used metrics for evaluation: the mean absolute error (MAE) and the root mean squared error (RMSE). They are defined as follows:
$\mathrm{MAE} = \dfrac{1}{N_I} \sum_{i=1}^{N_I} \left| C_i - C_i^{GT} \right|,$
$\mathrm{RMSE} = \sqrt{\dfrac{1}{N_I} \sum_{i=1}^{N_I} \left( C_i - C_i^{GT} \right)^{2}},$
where $C_i$ denotes the count predicted by ACECount for the i-th image, $C_i^{GT}$ denotes the ground-truth count, and $N_I$ indicates the total number of images.
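Both metrics reduce to a few lines of code; the example counts below are made up purely to show the call.

```python
import numpy as np

def mae_rmse(pred_counts, gt_counts):
    """MAE and RMSE over N_I images, matching the definitions above."""
    pred = np.asarray(pred_counts, dtype=float)
    gt = np.asarray(gt_counts, dtype=float)
    mae = np.abs(pred - gt).mean()
    rmse = np.sqrt(((pred - gt) ** 2).mean())
    return mae, rmse

# example with three (made-up) images
print(mae_rmse([12.4, 7.1, 101.0], [12, 8, 95]))
```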

4.4. Compared to Different Methods

4.4.1. Class-Agnostic Counting

Training Configuration on Class-Agnostic Counting. The dimensions of the query images and exemplar images were set to 256 × 256 and 64 × 64 for $H \times W$ and $H_E \times W_E$, respectively. As presented in Table 1, ACECount demonstrated strong performance in both zero-shot and few-shot counting, surpassing previous methods, particularly on the validation set. The visualization results are displayed in Figure 8.
Comparing Methods. We evaluated and compared the proposed ACECount model to existing CAC methods on the FSC-147 dataset: Under the few-shot training setting, we compared CFOCNet [16], FSOD [37], Counting-DETR [38], BMNet [2], and FamNet [20]. For BMNet [2], we used the variant BMNet+ [2] that implements both the core components, i.e., the self-similarity module and the dynamic similarity metric. Additionally, for more comparisons, we also tested SAFECount [3], which applies a similar design to our work, i.e., combining feature-based and similarity-based approaches. We also compared RepRPN-C [39] and RCC [40] under the zero-shot training setting, demonstrating that our method can perform well under zero-shot counting as well.
Performance on FSC-147. Table 1 lists the quantitative results for FSC-147. Specifically, ACECount demonstrated a clear advantage over methods (CFOCNet, FSOD, Counting-DETR, BMNet+, and FamNet) that are either based solely on features or rely on a single similarity metric. Compared to SAFECount, which is also based on similarity enhancement, ACECount was able to more accurately distinguish the boundary contours of the target objects, adapt to scale variations, and identify smaller targets. ACECount achieved an MAE of 14.93 and an RMSE of 50.11 on the validation set, and an MAE of 14.06 and an RMSE of 83.51 on the test set. It is important to note that SAFECount and BMNet+ already served as strong baselines for ACECount. The performance improvement was mainly attributed to our network's FIEM, which effectively leveraged the attention mechanism and combined similarity and raw feature information, to optimize the use of the encoder's initial features.
Table 1. Comparison with CAC algorithms on the FSC-147 dataset.

Method               Venue     Shots   Val MAE   Val MSE   Test MAE   Test MSE
RepRPN-C [39]        ACCV23    0       31.69     100.31    28.32      128.76
RCC [40]             arXiv22   0       20.39     64.62     21.64      103.47
ACECount (ours)      2023      0       17.92     68.31     16.16      105.23
CFOCNet [16]         WACV21    3       21.19     61.41     22.10      112.71
FSOD [37]            CVPR20    3       36.36     115.00    32.53      140.65
FamNet [20]          CVPR21    3       23.75     69.07     22.08      99.54
BMNet+ [2]           CVPR22    3       15.74     58.53     14.62      91.83
Counting-DETR [38]   ECCV22    3       -         -         16.79      123.56
SAFECount [3]        WACV23    3       15.28     47.20     14.32      83.54
ACECount (ours)      2023      3       14.93     50.11     14.06      83.51

4.4.2. Class-Specific Object Counting

Training Configuration on Class-Specific Counting. In our experiments on the crowd counting datasets, we utilized a zero-shot training configuration. We compared our method to two categories of competitors: single-class crowd counting methods and FSC methods. Conversely, for the vehicle counting dataset, we adopted a three-shot training configuration. Our results demonstrated that our method outperforms all FSC methods in crowd counting and matches the performance of state-of-the-art single-class crowd counting methods. Furthermore, our approach excels in vehicle counting. It is worth noting that our model was not originally designed for specific car counting or crowd counting tasks. The visualization results are displayed in Figure 9.
Performance on ShanghaiTech. Existing FSC approaches lack the ability to handle multi-scale targets and struggle to detect small-scale crowd targets when transferring from the FSC-147 dataset to crowd counting datasets. Thanks to our MDCR module, our network efficiently identifies and locates targets that occupy only a few pixels. ACECount outperforms all FSC methods on this CSC task and is on a par with state-of-the-art single-class crowd counting methods.
Performance on UCF-QNRF. The results are presented in Table 2. Our method consistently outperformed other CAC methods. This is attributable to the detailed information contained within our PFA, facilitating the detection of small objects. Furthermore, our MDCR excelled in capturing multi-scale features and global contextual information, enabling accurate crowd size regression. In comparison to class-specific crowd counting algorithms, our MAE and RMSE were surpassed only by the previous top-performing algorithms.
Performance on CARPK. The model, initially trained on the FSC-147 dataset, underwent fine-tuning on CARPK and was then compared to dedicated car counting models and other general counting models. The results are presented in Table 3.

4.5. Ablation Study

For this section, we performed comprehensive ablation experiments on the proposed network structure, using the FSC-147 dataset. These experiments aimed to demonstrate the effectiveness of our proposed modules and to assess each module’s contribution to the network. Additionally, we conducted extensive ablation studies, to select hyperparameters, determine the optimal size and number of regression head convolutions, investigate the impact of weight freezing during training, and evaluate the effectiveness of the core modules, SCM and FAEM.

4.5.1. Pyramid Feature Aggregation Module

In Table 4, we present the network's performance when it received features from different encoder stages, to assess the effectiveness of the pyramid feature aggregation (PFA) module. In the deeper layers of the encoder, the network tends to lose intricate details, retaining primarily the image's high-level semantic features, which can be detrimental to our ultimate goal. $F_1$, $F_2$, and $F_3$ were derived from the output features of the three stages of the backbone network. Leveraging features from all three stages simultaneously enabled the aggregation of comprehensive information across various scales, resulting in improved counting performance. The experimental findings highlighted that not only does the incorporation of multi-stage features impact network performance, but the organization of these features also plays a pivotal role. Our designed PFA module consistently outperformed both the straightforward concatenation of the three output features (as indicated in line 4) and feature fusion using the ASPP approach (line 5).

4.5.2. Multi-Scale Columns

As shown in Table 5, we assessed the impact of varying the number of columns in the dilated-convolution structure within the MDCR on counting performance. $C_1$, $C_2$, and $C_3$ denote the successive dilated convolution columns, as depicted in Figure 6a. By default, all the experiments included a shortcut path comprising a 1 × 1 convolution.
Intriguingly, this structural design primarily aimed to enhance generalization across diverse datasets. Notably, on the FSC dataset, employing a single $C_3$ column also yielded favorable results. However, a more effective approach involved aggregating all columns, leading to enhanced network performance. We achieved an MAE of 14.9 and an MSE of 50.1 on the validation set. These results compare favorably to the best outcomes of 16.7 and 62.1 obtained using only a single column of DConv layers, reflecting reductions of 1.8 and 12, respectively. This underscores the indispensability of the MDCR.
Simultaneously, we explored different stacking strategies for DConv layers. While stacking three identical DConv layers with dilation rates of 2 offered some performance improvement, it introduced the gridding issue discussed in Section 4.5.4, ultimately resulting in performance degradation compared to the approach outlined in this paper. Stacking DConvs with varying dilation rates in depth mitigated this problem, albeit with negligible improvements in the context of object counting tasks.

4.5.3. Freezing Backbone

As mentioned in the training details in Section 4.2, we froze the encoder weights at the start of training and unfroze them later. This aimed to preserve, as much as possible, the high-level semantic information of various categories learned from ImageNet and contained in the pre-trained weights, so that the network remains more general and can count effectively across different categories. We investigated the effect of freezing the weights on the network's performance, and the results are shown in Table 6. It can be seen intuitively that freezing the weights did bring a performance improvement to our network: the MAE at test time was reduced from 16.3 to 14.0.

4.5.4. Component Analysis of ACECount

We comprehensively assessed the network's designed structures on the FSC-147 dataset, to gauge each component's influence on the network's overall performance. We derived the counting structure variants by integrating the components: namely, PFA, SCM, FAEM, and MDCR. The findings are summarized in Table 7, revealing notable performance enhancements attributable to incorporating the PFA, SCM, FAEM, and MDCR structures. The PFA, SCM, and FAEM structures were the most influential in augmenting the model's performance. This demonstrates the usefulness of the core modules in our design for the target counting task.

5. Visualization in the FSC-147 Dataset

In Figure 10, we provide visualizations of ACECount’s performance in three-shot counting. ACECount stands out compared to methods solely reliant on features or a single similarity metric. It excels in the task of separating densely overlapping objects into individual entities, as demonstrated in Figure 10. This not only leads to more accurate counting results for such objects but also enables the network to precisely distinguish object boundaries, clarify overlapping object contours, and provide valuable information about target localization, including the number of targets, their density distributions, and precise positions. Additionally, ACECount proves effective for counting smaller and densely arranged targets, such as the flying birds and folded clothes in Figure 10, offering accurate counts even for objects that are challenging to discern with the naked eye.

6. Conclusions

In this paper, we present a streamlined pipeline based on the FSC approach, customized for the CAC task. Our pipeline comprises six crucial modules: a query encoder designed to effectively capture global contextual information using a pyramid vision transformer; an exemplar encoder for extracting exemplar features and support features through CNN layers; PFA for comprehensive utilization of coarse-to-fine information; SCM for comparing image similarity information through attention mechanisms; FAEM for enhancing raw features with spatial and channel attention mechanisms; and the regression head MDCR for achieving a multi-scale receptive field. Our approach demonstrates significant advantages across various popular benchmarks. Nevertheless, irrelevant background information can hinder the counting network's performance. Background elements resembling the target object may lead to erroneous identifications. To address this concern, we plan to incorporate relevant additional information (such as scale, texture, and color) into the counting network in future research. This enhancement will enable the network to extract semantic features of objects akin to the provided exemplars and to eliminate irrelevant background interference. Additionally, we aim to enhance the network and extend its application to weakly supervised learning, which has the potential to significantly reduce the laborious labeling of image data required by object counting algorithms. In summary, our approach provides a robust baseline for future research on FSC algorithms and has the potential for application in other counting tasks.

Author Contributions

Conceptualization, L.D. and Y.Y.; methodology, L.D. and Y.Y.; validation, L.D., Y.Y. and D.Z.; writing—original draft, Y.Y.; writing—review and editing, L.D. and Y.Y.; supervision, D.Z. and Y.H.; funding acquisition, L.D. and Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the China Postdoctoral Science Foundation, (2020M680979), by the PhD Research Startup Foundation project of Liaoning Province of China, (2021-BS-281), and by the funding project of Northeast Geological S&T Innovation Center of China Geological Survey (NO. QCJJ2022-24).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

We used publicly available datasets.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lu, E.; Xie, W.D.; Zisserman, A. Class-Agnostic Counting. In Lecture Notes in Computer Science, Proceedings of the 14th Asian Conference on Computer Vision (ACCV), Perth, Australia, 2–6 December 2018; Springer International Publishing Ag: Cham, Switzerland, 2019; Volume 11363, pp. 669–684. [Google Scholar] [CrossRef]
  2. Shi, M.; Lu, H.; Feng, C.; Liu, C.X.; Cao, Z.G. Represent, Compare, and Learn: A Similarity-Aware Framework for Class-Agnostic Counting. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 9519–9528. [Google Scholar] [CrossRef]
  3. You, Z.Y.; Yang, K.; Luo, W.H.; Lu, X.; Cui, L.; Le, X.Y. Few-shot Object Counting with Similarity-Aware Feature Enhancement. In Proceedings of the 23rd IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2–7 January 2023; pp. 6304–6313. [Google Scholar] [CrossRef]
  4. Huo, Y.; Gang, S.; Guan, C. FCIHMRT: Feature Cross-Layer Interaction Hybrid Method Based on Res2Net and Transformer for Remote Sensing Scene Classification. Electronics 2023, 12, 4362. [Google Scholar] [CrossRef]
  5. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017; Volume 30. [Google Scholar]
  6. Wang, Q.; Gao, J.Y.; Lin, W.; Li, X.L. NWPU-Crowd: A Large-Scale Benchmark for Crowd Counting and Localization. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 2141–2149. [Google Scholar] [CrossRef] [PubMed]
  7. Mundhenk, T.N.; Konjevod, G.; Sakla, W.A.; Boakye, K. A Large Contextual Dataset for Classification, Detection and Counting of Cars with Deep Learning. In Lecture Notes in Computer Science, Proceedings of the 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; Springer International Publishing Ag: Cham, Switzerland, 2016; Volume 9907, pp. 785–800. [Google Scholar] [CrossRef]
  8. Arteta, C.; Lempitsky, V.; Zisserman, A. Counting in the Wild. In Lecture Notes in Computer Science, Proceedings of the 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; Springer International Publishing Ag: Cham, Switzerland, 2016; Volume 9911, pp. 483–498. [Google Scholar] [CrossRef]
  9. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; pp. 886–893. [Google Scholar] [CrossRef]
  10. Gao, C.Q.; Liu, J.; Feng, Q.; Lv, J. People-flow counting in complex environments by combining depth and color information. Multimed. Tools Appl. 2016, 75, 9315–9331. [Google Scholar] [CrossRef]
  11. Dollar, P.; Wojek, C.; Schiele, B.; Perona, P. Pedestrian Detection: An Evaluation of the State of the Art. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 743–761. [Google Scholar] [CrossRef] [PubMed]
  12. Liu, J.; Gao, C.Q.; Meng, D.Y.; Hauptmann, A.G. DecideNet: Counting Varying Density Crowds through Attention Guided Detection and Density Estimation. In Proceedings of the 31st IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 5197–5206. [Google Scholar] [CrossRef]
  13. Liu, Y.T.; Shi, M.J.; Zhao, Q.J.; Wang, X.F. Point in, Box out: Beyond Counting Persons in Crowds. In Proceedings of the 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 6462–6471. [Google Scholar] [CrossRef]
  14. Lian, D.Z.; Li, J.; Zheng, J.; Luo, W.X.; Gao, S.H. Density Map Regression Guided Detection Network for RGB-D Crowd Counting and Localization. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 1821–1830. [Google Scholar] [CrossRef]
  15. Chan, A.B.; Liang, Z.S.J.; Vasconcelos, N. Privacy preserving crowd monitoring: Counting people without people models or tracking. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1766–1772. [Google Scholar] [CrossRef]
  16. Wang, B.; Liu, H.; Samaras, D.; Nguyen, M.H. Distribution matching for crowd counting. Adv. Neural Inf. Process. Syst. 2020, 33, 1595–1607. [Google Scholar]
  17. Zhang, Y.Y.; Zhou, D.S.; Chen, S.Q.; Gao, S.H.; Ma, Y. Single-Image Crowd Counting via Multi-Column Convolutional Neural Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 589–597. [Google Scholar] [CrossRef]
  18. Pham, V.Q.; Kozakaya, T.; Yamaguchi, O.; Okada, R. COUNT Forest: CO-voting Uncertain Number of Targets using Random Forest for Crowd Density Estimation. In Proceedings of the 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 3253–3261. [Google Scholar] [CrossRef]
  19. Sam, D.B.; Surya, S.; Babu, R.V. Switching Convolutional Neural Network for Crowd Counting. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4031–4039. [Google Scholar] [CrossRef]
  20. Ranjan, V.; Sharma, U.; Nguyen, T.; Hoai, M. Learning To Count Everything. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 3393–3402. [Google Scholar] [CrossRef]
  21. Yang, S.D.; Su, H.T.; Hsu, W.H.; Chen, W.C. Class-agnostic Few-shot Object Counting. In Proceedings of the 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2021; pp. 869–877. [Google Scholar] [CrossRef]
  22. Djukic, N.; Lukezic, A.; Zavrtanik, V.; Kristan, M. A Low-Shot Object Counting Network with Iterative Prototype Adaptation. arXiv 2023, arXiv:2211.08217. [Google Scholar]
  23. Liu, C.; Zhong, Y.; Zisserman, A.; Xie, W. CounTR: Transformer-based Generalised Visual Counting. arXiv 2023, arXiv:2208.13721. [Google Scholar]
  24. Lin, W.; Yang, K.; Ma, X.; Gao, J.; Liu, L.; Liu, S.; Hou, J.; Yi, S.; Chan, A.B. Scale-Prior Deformable Convolution for Exemplar-Guided Class-Agnostic Counting. 2022. Available online: http://visal.cs.cityu.edu.hk/static/pubs/conf/bmvc2022-spdcn.pdf (accessed on 18 May 2023).
  25. Chu, X.X.; Tian, Z.; Wang, Y.Q.; Zhang, B.; Ren, H.B.; Wei, X.L.; Xia, H.X.; Shen, C.H. Twins: Revisiting the Design of Spatial Attention in Vision Transformers. In Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS), Online, 6–14 December 2021. [Google Scholar]
  26. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16×16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  27. Lin, T.Y.; Dollar, P.; Girshick, R.; He, K.M.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 936–944. [Google Scholar] [CrossRef]
  28. Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO Series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar]
  29. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  30. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar]
  31. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar] [CrossRef]
  32. Woo, S.H.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Lecture Notes in Computer Science, Proceedings of the 15th European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; Springer International Publishing Ag: Cham, Switzerland, 2018; Volume 11211, pp. 3–19. [Google Scholar] [CrossRef]
  33. Hou, Q.B.; Zhou, D.Q.; Feng, J.S. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13708–13717. [Google Scholar] [CrossRef]
  34. Liu, Y.; Shao, Z.; Hoffmann, N. Global attention mechanism: Retain information to enhance channel-spatial interactions. arXiv 2021, arXiv:2112.05561. [Google Scholar]
  35. Li, Y.H.; Zhang, X.F.; Chen, D.M. CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1091–1100. [Google Scholar] [CrossRef]
  36. Loshchilov, I.; Hutter, F. Decoupled weight decay regularization. arXiv 2017, arXiv:1711.05101. [Google Scholar]
  37. Fan, Q.; Zhuo, W.; Tang, C.K.; Tai, Y.W. Few-Shot Object Detection with Attention-RPN and Multi-Relation Detector. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 4012–4021. [Google Scholar] [CrossRef]
  38. Nguyen, T.; Pham, C.; Nguyen, K.; Hoai, M. Few-Shot Object Counting and Detection. In Lecture Notes in Computer Science, Proceedings of the 17th European Conference on Computer Vision (ECCV), Tel Aviv, Israel, 23–27 October 2022; Springer International Publishing Ag: Cham, Switzerland, 2022; Volume 13680, pp. 348–365. [Google Scholar] [CrossRef]
  39. Ranjan, V.; Nguyen, M.H. Exemplar Free Class Agnostic Counting. In Lecture Notes in Computer Science, Proceedings of the 16th Asian Conference on Computer Vision (ACCV), Macao, China, 4–8 December 2022; Springer International Publishing Ag: Cham, Switzerland, 2023; Volume 13844, pp. 71–87. [Google Scholar] [CrossRef]
  40. Hobley, M.; Prisacariu, V. Learning to count anything: Reference-less class-agnostic counting with weak supervision. arXiv 2022, arXiv:2205.10203. [Google Scholar]
  41. Liu, W.Z.; Salzmann, M.; Fua, P. Context-Aware Crowd Counting. In Proceedings of the 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 5094–5103. [Google Scholar] [CrossRef]
  42. Wang, Q.; Gao, J.Y.; Lin, W.; Yuan, Y. Learning from Synthetic Data for Crowd Counting in the Wild. In Proceedings of the 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 8190–8199. [Google Scholar] [CrossRef]
  43. Meng, Y.D.; Zhang, H.R.; Zhao, Y.T.; Yang, X.Y.; Qian, X.S.; Huang, X.W.; Zheng, Y.L. Spatial Uncertainty-Aware Semi-Supervised Crowd Counting. In Proceedings of the 18th IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 15529–15539. [Google Scholar] [CrossRef]
  44. Sun, G.; Liu, Y.; Probst, T.; Paudel, D.P.; Popovic, N.; Van Gool, L. Boosting crowd counting with transformers. arXiv 2021, arXiv:2105.10926. [Google Scholar]
  45. Wan, J.; Liu, Z.Q.; Chan, A.B. A Generalized Loss Function for Crowd Counting and Localization. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 1974–1983. [Google Scholar] [CrossRef]
  46. Goldman, E.; Herzig, R.; Eisenschtat, A.; Goldberger, J.; Hassner, T. Precise detection in densely packed scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5227–5236. [Google Scholar]
Figure 1. Comparison between class-specific counting and class-agnostic counting: (a) a class-specific counter counts instances of only one class in both training and inference; (b) a class-agnostic counter learns how to count: trained on certain classes, it can count instances of previously unseen classes.
Figure 2. Comparison of existing FSC methods: (a) Feature-based approach: query features are directly combined with pooled exemplar features and used for density map regression. (b) Similarity-based approach: the raw features from the query image and the exemplars are compared to produce a feature similarity map, which is then used to regress the density map. (c) Enhanced similarity-based approach: the raw features are enhanced with the compared similarity features and the support features, and the resulting enhanced features are used to regress the density map.
Figure 3. Detailed illustration of our ACECount for object counting.
Figure 4. (a) overall structure of PFA; (b) structure of the SIMSPPF module.
Figure 5. (a) overall structure of FAEM; (b) structure of the coordinate attention module; (c) structure of the global attention mechanism.
Figure 6. Architecture of MDCR: (a) the architecture consists of a series of densely connected dilated convolutions; (b) the equivalent topology of MDCR, which consists of a stack of densely sampled DConv layers.
Figure 7. The receptive field of the final convolution kernel on the input feature map varies with the choice of dilation rates. In the visualization, blue squares denote the receptive field, and red dots mark the pixels actually involved in the convolution: (a) a fixed dilation rate of 2 in every layer produces the gridding effect; (b) dilation rates of 1, 2, and 3 avoid the gridding effect.
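To illustrate the dilation-rate choice compared in Figure 7, the following minimal PyTorch sketch builds the two configurations: three stacked 3 × 3 dilated convolutions with a fixed rate of 2 versus rates of 1, 2, and 3. The helper dilated_stack and its default channel width are assumptions for illustration, not part of the released model.

```python
import torch
import torch.nn as nn

def dilated_stack(rates, channels=64):
    """Stack 3x3 dilated convolutions with the given dilation rates.

    Padding equals the dilation rate, so spatial resolution is preserved
    while the effective receptive field grows with each layer.
    """
    layers = []
    for r in rates:
        layers += [nn.Conv2d(channels, channels, kernel_size=3,
                             padding=r, dilation=r),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

# A fixed rate of 2 in every layer samples a sparse grid (gridding effect).
gridding = dilated_stack([2, 2, 2])
# Rates 1, 2, 3 cover neighbouring pixels, so no gridding effect arises.
gridding_free = dilated_stack([1, 2, 3])

x = torch.randn(1, 64, 32, 32)
print(gridding(x).shape, gridding_free(x).shape)  # both torch.Size([1, 64, 32, 32])
```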
Figure 8. Visualization of FSC-147. The first and third columns are the input images and the second and fourth columns are the corresponding density maps.
Figure 9. Visualization results of ACECount used for object counting in the ShanghaiTech Part A dataset. The first and third columns are the input images and the second and fourth columns are the corresponding density maps.
Figure 10. Some qualitative results of ACECount at the three-shot counting setting.
Table 2. Comparison with CSC algorithms on class-specific crowd counting datasets.

Method                Part A            Part B            UCF-QNRF
                      MAE     MSE       MAE     MSE       MAE      MSE
CAN [41] ¹            62.3    100       7.8     12.2      107.0    183.0
SFCN [42]             59.7    95.7      7.4     11.8      102.0    171.0
SUA-Fully [43]        66.9    125.6     12.3    17.9      119.2    213.3
BCCT [44]             53.1    82.2      7.3     11.3      83.8     143.4
DM-Count [16]         59.7    95.7      7.4     11.8      85.6     148.3
GL [45]               61.3    95.4      7.3     11.1      84.3     147.5
GMN [1] ²             95.8    -         -       -         -        -
FamNet [20]           159.1   -         24.90   -         -        -
SAFECount [3]         73.70   -         9.98    -         -        -
ACECount (ours)       59.4    96.7      7.2     11.5      90.3     156.2

¹ Single-class crowd counting methods; ² few-shot counting methods.
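For completeness, the MAE and MSE values reported in Tables 2–7 can be computed from per-image predicted and ground-truth counts as in the minimal sketch below. We assume here the convention, common in the counting literature, that the reported MSE is the square root of the mean squared error; this convention is an assumption rather than a definition stated in this section.

```python
import numpy as np

def counting_metrics(pred_counts, gt_counts):
    """Mean absolute error and (root) mean squared error over a test set.

    Assumes the counting-literature convention that "MSE" denotes the
    square root of the mean squared error.
    """
    pred = np.asarray(pred_counts, dtype=float)
    gt = np.asarray(gt_counts, dtype=float)
    mae = np.mean(np.abs(pred - gt))
    mse = np.sqrt(np.mean((pred - gt) ** 2))
    return mae, mse

# Example: counts obtained by summing each predicted density map vs. annotated counts.
print(counting_metrics([12.4, 87.0, 301.5], [10, 90, 295]))
```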
Table 3. Comparison with the CSC algorithm on the class-specific car counting dataset.

Method             Venue       CARPK
                               MAE     MSE
LOCA [22]          arXiv23     9.9     12.5
FamNet [20]        CVPR21      18.1    33.6
PDEM [46]          CVPR19      6.7     8.5
BMNet [2]          CVPR22      9.6     14.8
GAM [1]            CVPR19      7.4     9.9
SPDCN [24]         BMVC22      10.0    14.1
BMNet+ [2]         CVPR22      5.7     7.8
ACECount (ours)    2023        5.6     7.5
Table 4. Feature aggregation in PFA.

Strategy                         VAL               TEST
                                 MAE     MSE       MAE     MSE
F1                               21.3    69.8      21.9    105.3
F2                               16.7    62.1      17.2    96.9
F3                               16.8    60.0      15.9    94.3
F1 + F2 + F3 (concatenation)     15.5    56.4      14.7    89.2
F1 + F2 + F3 (ASPP)              15.3    52.5      14.3    85.5
F1 + F2 + F3 (PFA)               14.9    50.1      14.0    83.5

F1 + F2 + F3 (concatenation): the three levels of features are directly concatenated. F1 + F2 + F3 (ASPP): the three levels of features are aggregated with an ASPP structure. F1 + F2 + F3 (PFA): the three levels of features are aggregated with the PFA used in ACECount.
Table 5. Multi-scale columns in MDCR.

Strategy                         VAL               TEST
                                 MAE     MSE       MAE     MSE
C1                               15.7    58.3      16.1    97.3
C2                               16.6    59.5      15.3    96.4
C3                               16.3    60.2      14.9    98.1
C1 + C2 + C3 (fixed R = 2)       15.8    56.2      16.0    94.5
C1 + C2 + C3 (R = 1, 2, 3)       15.5    50.6      15.2    86.3
C1 + C2 + C3 (MDCR)              14.9    50.1      14.0    83.5

C1 + C2 + C3 (fixed R = 2): three DConv layers with R = 2 stacked in depth. C1 + C2 + C3 (R = 1, 2, 3): three DConv layers with R = 1, 2, and 3, respectively, stacked in depth. C1 + C2 + C3 (MDCR): the structure used in ACECount.
Table 6. Freezing the backbone.

Strategy      VAL               TEST
              MAE     MSE       MAE     MSE
Unfreezing    17.8    69.4      16.3    102.9
Freezing      14.9    50.1      14.0    83.5
Table 7. Pipeline component analysis of ACECount.

SCM    FEAM    PFA    MDCR     VAL               TEST
                               MAE     MSE       MAE     MSE
✓      ×       ×      ×        25.2    70.3      26.1    106.5
✓      ✓       ×      ×        18.6    60.5      18.4    97.6
✓      ✓       ✓      ×        15.8    54.0      16.0    89.6
✓      ✓       ✓      ✓        14.9    50.1      14.0    83.5

A ✓ indicates that the structure is used, and × indicates that the structure is not used.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
