Article

Multiple Pedestrians and Vehicles Tracking in Aerial Imagery Using a Convolutional Neural Network

1 German Aerospace Center (DLR), Remote Sensing Technology Institute (IMF), 82234 Wessling, Germany
2 Department of Aerospace and Geodesy, Technical University of Munich, 80333 Munich, Germany
3 Department of Informatics, Technical University of Munich, 85748 Garching, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(10), 1953; https://doi.org/10.3390/rs13101953
Submission received: 6 April 2021 / Revised: 30 April 2021 / Accepted: 4 May 2021 / Published: 17 May 2021

Abstract: In this paper, we address various challenges in multi-pedestrian and vehicle tracking in high-resolution aerial imagery by intensive evaluation of a number of traditional and Deep Learning based Single- and Multi-Object Tracking methods. We also describe our proposed Deep Learning based Multi-Object Tracking method, AerialMPTNet, which fuses appearance, temporal, and graphical information using a Siamese Neural Network, a Long Short-Term Memory, and a Graph Convolutional Neural Network module for more accurate and stable tracking. Moreover, we investigate the influence of Squeeze-and-Excitation layers and Online Hard Example Mining on the performance of AerialMPTNet. To the best of our knowledge, we are the first to use these two for regression-based Multi-Object Tracking. Additionally, we study and compare the L1 and Huber loss functions. In our experiments, we extensively evaluate AerialMPTNet on three aerial Multi-Object Tracking datasets, namely the AerialMPT and the KIT AIS pedestrian and vehicle datasets. Qualitative and quantitative results show that AerialMPTNet outperforms all previous methods on the pedestrian datasets and achieves competitive results on the vehicle dataset. In addition, the Long Short-Term Memory and Graph Convolutional Neural Network modules enhance the tracking performance. Moreover, using Squeeze-and-Excitation and Online Hard Example Mining significantly helps in some cases while degrading the results in others. Furthermore, according to the results, L1 yields better results than the Huber loss for most of the scenarios. The presented results provide a deep insight into the challenges and opportunities of the aerial Multi-Object Tracking domain, paving the way for future research.


1. Introduction

Visual Object Tracking (VOT), that is, locating objects in video frames over time, is a dynamic field of research with a wide variety of practical applications, such as autonomous driving, robot-aided surgery, security, and safety. Recent advances in machine and deep learning techniques have drastically boosted the performance of VOT methods by solving long-standing issues such as modeling appearance feature changes and relocating lost objects [1,2,3]. Nevertheless, the performance of existing VOT methods is not always satisfactory due to hindrances such as heavy occlusions, differences in scale, background clutter, or high density in crowded scenes. Thus, more sophisticated VOT methods overcoming these challenges are highly demanded.
VOT methods can be categorized into Single-Object Tracking (SOT) and Multi-Object Tracking (MOT) methods, which track single and multiple objects throughout subsequent video frames, respectively. MOT scenarios are often more complex than SOT ones because the trackers must handle a larger number of objects in a reasonable time (ideally in real time). Most previous VOT works using traditional approaches such as Kalman and particle filters [4,5], Discriminative Correlation Filters (DCF) [6], or silhouette tracking [7] simplify the tracking procedure by constraining the tracking scenarios with, for example, stationary cameras, a limited number of objects, limited occlusions, or the absence of sudden background or object appearance changes. These methods usually use handcrafted feature representations (e.g., Histogram of Gradients (HOG) [8], color, position), and their target modeling is not dynamic [9]. In real-world scenarios, however, such constraints are often not applicable, and VOT methods based on these traditional approaches perform poorly.
The rise of Deep Learning (DL) has offered several advantages in object detection, segmentation, and classification [10,11,12]. Approaches based on DL have also been successfully applied to VOT problems, significantly enhancing performance, especially in unconstrained scenarios. Examples include the Convolutional Neural Network (CNN) [13,14], the Recurrent Neural Network (RNN) [15], the Siamese Neural Network (SNN) [16,17], the Generative Adversarial Network (GAN) [18], and several customized architectures [19].
Despite the considerable progress made for VOT in ground imagery, VOT has not been fully exploited in the remote sensing domain due to the limited volume of available images with sufficiently high resolution and level of detail. In recent years, the development of more advanced camera systems and the availability of very high-resolution aerial images have opened new opportunities for research and applications in the aerial VOT domain, ranging from the analysis of ecological systems to aerial surveillance [20,21].
Aerial imagery allows collecting very high-resolution data from wide open areas in a cost- and time-efficient manner. Performing MOT on such images (e.g., with Ground Sampling Distance (GSD) < 20 cm/pixel) allows us to track and monitor the movement behaviors of multiple small objects such as pedestrians and vehicles for numerous applications such as disaster management and predictive traffic and event monitoring. However, few works have addressed aerial MOT [22,23,24], and aerial MOT datasets are rare. The large number and small sizes of moving objects compared to ground imagery scenarios, together with large image sizes, moving cameras, multiple image scales, low frame rates, and various visibility levels and weather conditions, make MOT in aerial imagery especially complicated. Existing drone or ground surveillance datasets frequently used as MOT benchmarks, such as MOT16 and MOT17 [25], are very different from aerial MOT scenarios with respect to their image and object characteristics. For example, the objects are bigger and the scenes are less crowded, with the objects' appearance features usually being discriminative enough to distinguish them. Moreover, the videos have higher frame rates and better quality and contrast.
In this paper, we aim at investigating various existing challenges in the tracking of multiple pedestrians and vehicles in aerial imagery through intensive experiments with a number of traditional and DL-based SOT and MOT methods. This paper extends our recent work [26], in which we introduced a new MOT dataset, the so-called Aerial Multi-Pedestrian Tracking (AerialMPT) dataset, as well as a novel DL-based MOT method, the so-called AerialMPTNet, which fuses appearance, temporal, and graphical information for a more accurate MOT. In this paper, we also extensively evaluate the effectiveness of different parts of AerialMPTNet and compare it to traditional and state-of-the-art DL-based MOT methods. Additionally, we propose a MOT method inspired by the SORT method [27], the so-called Euclidean Online Tracking (EOT), which employs a GSD-adapted Euclidean distance for object association in consecutive frames.
We conduct our experiments on three aerial MOT datasets, namely AerialMPT and the KIT AIS (https://www.ipf.kit.edu/code.php, accessed on 10 May 2021) pedestrian and vehicle datasets. All image sequences were captured by an airborne platform during different flight campaigns of the German Aerospace Center (DLR) (https://www.dlr.de, accessed on 10 May 2021) and vary significantly in object density, movement patterns, and image size and quality. Figure 1 shows sample images from the AerialMPT dataset with the tracking results of our AerialMPTNet. The images were captured at different flight altitudes, and their GSD (reflecting the spatial size of a pixel) varies between 8 cm and 13 cm. The total number of objects per sequence ranges up to 609. Pedestrians in these datasets appear as small points, hardly exceeding an area of 4 × 4 pixels. Even for human experts, distinguishing multiple pedestrians based on their appearance is laborious and challenging. Vehicles appear as bigger objects and are easier to distinguish based on their appearance features. However, different vehicle sizes and fast movements, together with low frame rates (e.g., 2 fps) and occlusions by bridges, trees, or other vehicles, present challenges to vehicle tracking algorithms, as illustrated in Figure 2.
AerialMPTNet is an end-to-end trainable regression-based neural network comprising an SNN module which takes two image patches as inputs, a target and a search patch, cropped from a previous and a current frame, respectively. The object location is known in the target patch and should be predicted for the search patch. In order to overcome aerial MOT challenges such as objects with similar appearance features moving densely together, AerialMPTNet incorporates temporal and graphical information in addition to the appearance information provided by the SNN module. Our AerialMPTNet employs a Long Short-Term Memory (LSTM) for temporal information extraction and movement prediction, and a Graph Convolutional Neural Network (GCNN) for modeling the spatial and temporal relationships between adjacent objects (graphical information). AerialMPTNet outputs four values indicating the coordinates of the top-left and bottom-right corners of each object's bounding box in the search patch. In this paper, we also investigate the influence of Squeeze-and-Excitation (SE) and Online Hard Example Mining (OHEM) [28] on the tracking performance of AerialMPTNet. To the best of our knowledge, we are the first to apply adaptive weighting of convolutional channels by SE and to employ OHEM for the training of a DL-based tracking-by-regression method.
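For illustration, the following PyTorch sketch shows how such a fusion of appearance (Siamese encoder), temporal (LSTM over past displacements), and graphical (neighbor-offset) branches can feed a four-value bounding-box regressor. The layer sizes, the simplified graph layer, and the class name FusionTrackerSketch are illustrative assumptions, not the exact AerialMPTNet architecture.

```python
import torch
import torch.nn as nn

class FusionTrackerSketch(nn.Module):
    def __init__(self, feat_dim=256, hidden=64):
        super().__init__()
        # Shared-weight (Siamese) encoder applied to both patches
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 16, feat_dim), nn.ReLU())
        self.lstm = nn.LSTM(2, hidden, batch_first=True)     # past (dx, dy) steps
        self.gcn = nn.Linear(2, hidden)                       # neighbour offsets
        self.head = nn.Linear(2 * feat_dim + 2 * hidden, 4)   # x1, y1, x2, y2

    def forward(self, target, search, history, neighbours, adj):
        f_t = self.encoder(target)                   # appearance, previous frame
        f_s = self.encoder(search)                   # appearance, current frame
        _, (h, _) = self.lstm(history)               # temporal motion embedding
        g = torch.relu(adj @ self.gcn(neighbours)).mean(dim=1)  # graph embedding
        return self.head(torch.cat([f_t, f_s, h[-1], g], dim=1))

# Dummy batch of 8 tracks: two patches, 5-step motion history, 6 neighbours each
net = FusionTrackerSketch()
boxes = net(torch.rand(8, 3, 227, 227), torch.rand(8, 3, 227, 227),
            torch.rand(8, 5, 2), torch.rand(8, 6, 2), torch.rand(8, 6, 6))
print(boxes.shape)  # torch.Size([8, 4])
```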
According to the results, our AerialMPTNet outperforms all previous methods on the pedestrian datasets and achieves competitive results on the vehicle dataset. Furthermore, the LSTM and GCNN modules add value to the tracking performance. Moreover, while using SE and OHEM can significantly help in some scenarios, in other cases they may degrade the tracking results. In summary, the contributions of this paper over our previous work [26] are:
  • We apply OHEM and SE to a MOT task for the first time.
  • We propose EOT which outperforms tracking methods with Intersection over Union (IoU)-based association strategy.
  • We conduct an ablation study to evaluate the role of all different parts of AerialMPTNet.
  • We evaluate the role of loss functions in the tracking performance by comparing L 1 and Huber loss functions.
  • We evaluate and compare various MOT methods for pedestrian tracking in aerial imagery.
  • We conduct intensive qualitative and quantitative evaluations of AerialMPTNet on two aerial pedestrian and one aerial vehicle tracking datasets.
We believe that our paper can promote research on aerial MOT (esp. for pedestrians and vehicles) by providing a deep insight into its challenges and opportunities.
The rest of the paper is organized as follows: Section 2 presents an overview of related works; Section 3 introduces the datasets used in our experiments; Section 4 presents the metrics used for our quantitative evaluations; Section 5 provides a comprehensive study of previous traditional and DL-based tracking methods on the aerial MOT datasets, with Section 8.4 explaining our AerialMPTNet with all its configurations; Section 7 describes our experimental setups; Section 8 provides an extensive evaluation of our AerialMPTNet and compares it to the other methods; and Section 10 concludes our paper and gives ideas for future work.

2. Related Works

Visual object tracking is defined as locating one or multiple objects in videos or image sequences over time. The traditional tracking process comprises four phases: initialization, appearance modeling, motion modeling, and object finding. During initialization, the targets are detected manually or by an object detector. In the appearance modeling step, visual features of the region of interest are extracted by various learning-based methods for detecting the target objects. The variety of scales, rotations, shifts, and occlusions makes this step challenging. Image features play a key role in tracking algorithms. They can be mainly categorized into handcrafted and deep features. In recent years, research studies and applications have focused on developing and using deep features based on DNNs, which have been shown to incorporate multi-level information and to be more robust against appearance variations [29]. Nevertheless, DNNs require sufficiently large training datasets, which are not always available. Thus, for many applications, handcrafted features are still preferable. The motion modeling step aims at predicting the object movement in time and estimating the object locations in the next frames. This procedure effectively reduces the search space and, consequently, the computational cost. Widely used methods for motion modeling include the Kalman filter [30], Sequential Monte Carlo methods [31], and RNNs. In the last step, object locations are found as those closest to the locations estimated by the motion model.

2.1. Various Categorizations of VOT

Visual object tracking methods can be divided into SOT [32,33] and MOT [22,34] methods. While SOTs only track a single predetermined object throughout a video, even if multiple objects are present, MOTs track multiple objects at the same time. Thus, compared to SOTs, MOTs can face an exponential increase in complexity and runtime with the number of objects.
Object tracking methods also can be categorized into detection-based [35] and detection-free methods [36]. While the detection-based methods utilize object detectors to detect objects in each frame, the detection-free methods only need the initial object detection. Therefore, detection-free methods are usually faster than the detection-based ones; however, they are not able to detect new objects entering the scene and require manual initialization.
Object tracking methods can be further divided based on their training strategies using either online or offline learning strategy. The methods with an online learning strategy can learn about the tracked objects during runtime. Thus, they can track generic objects [37]. The methods with offline learning strategy are trained beforehand and are therefore faster during runtime [38].
Tracking methods can also be categorized into online and offline. Offline trackers take advantage of past and future frames, while online ones can only infer from past frames. Although having access to all frames can increase the performance of offline tracking methods, future frames are not available in real-world scenarios.
Most existing tracking approaches are based on a two-stage tracking-by-detection paradigm [39,40]. In the first stage, a set of target samples is generated around the previously estimated position using region proposal, random sampling, or similar methods. In the second stage, each target sample is either classified as background or as the target object. In one-stage-tracking, however, the model receives a search sample together with a target sample as two inputs and directly predicts a response map or object coordinates by a previously trained regressor [17,22].
Object tracking methods can further be categorized into traditional and DL-based ones. Traditional tracking methods mostly rely on Kalman and particle filters to estimate object locations, using velocity and location information to perform tracking [4,5,41]. Tracking methods relying only on such approaches have shown poor performance in unconstrained environments. Nevertheless, such filters can be advantageous in limiting the search space (decreasing complexity and computational cost) by predicting and propagating object movements to the following frames. A number of traditional tracking methods follow a tracking-by-detection paradigm based on template matching [42]. A given target patch models the appearance of the region of interest in the first frame. Matched regions are then found in the next frame using correlation, normalized cross-correlation, or sum-of-squared-distances methods [43,44]. Scale, illumination, and rotation changes can cause difficulties for these methods. More advanced tracking-by-detection methods rely on discriminative modeling, separating targets from their backgrounds within a specific search space. Various methods have been proposed for discriminative modeling, such as boosting methods and Support Vector Machines (SVMs) [45,46]. A series of traditional tracking algorithms, such as MOSSE and KCF [6,47], utilize correlation filters, which model the target's appearance by a set of filters trained on the images. In these methods, the target object is initially selected by cropping a small patch from the first frame centered at the object. For tracking, the filters are convolved with a search window in the next frame. The output response map is assumed to have a peak at the target's next location. As the correlation can be computed in the Fourier domain, such trackers achieve high frame rates.
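The following NumPy toy sketch illustrates this Fourier-domain correlation idea in the MOSSE style: a filter is learned from a template and a Gaussian target response, and the peak of the response map over a search window indicates the target displacement. The regularization constant and patch sizes are illustrative assumptions, not values from any of the cited trackers.

```python
import numpy as np

def gaussian_response(h, w, sigma=2.0):
    # 2D Gaussian centred on the patch centre (the desired filter output)
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma ** 2))

def train_filter(template, sigma=2.0, lam=1e-3):
    # Closed-form MOSSE-style filter in the Fourier domain
    F = np.fft.fft2(template)
    G = np.fft.fft2(gaussian_response(*template.shape, sigma))
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def track(filter_hat, search_window):
    # Correlate filter and search window; the response peak gives the
    # displacement of the target relative to the window centre.
    response = np.real(np.fft.ifft2(filter_hat * np.fft.fft2(search_window)))
    dy, dx = np.unravel_index(response.argmax(), response.shape)
    return dx, dy

template = np.random.rand(32, 32)              # patch centred on the target
H = train_filter(template)
print(track(H, np.roll(template, (3, 5), axis=(0, 1))))
```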
Recently, many research works and applications have focused on DL-based tracking methods. The great advantage of DL-based features over handcrafted ones such as HOG, raw pixel values, or grey-scale templates has been demonstrated previously for a variety of computer vision applications. These features are robust against appearance changes, occlusions, and dynamic environments. Examples of DL-based tracking methods include re-identification with appearance modeling and deep features [34], position regression mainly based on SNNs [16,17], path prediction based on RNN-like networks [48], and object detection with DNNs such as YOLO [49].

2.2. SOTs and MOTs

Among the various categorizations, in this section we adopt the SOT/MOT one for reviewing the existing object tracking methods. We believe that this is the fundamental categorization of tracking methods which significantly affects method design. In the following, we briefly introduce a few recent methods from both categories and experimentally discuss their strengths and limitations on aerial imagery in Section 5.

2.2.1. SOT Methods

Kalal et al. proposed Median Flow [50], which utilizes point and optical flow tracking. The inputs to the tracker are two consecutive images together with the initial bounding box of the target object. The tracker calculates a set of points from a rectangular grid within the bounding box. Each of these points is tracked by a Lucas-Kanade tracker generating a sparse motion flow. Afterwards, the framework evaluates the quality of the predictions and filters out the worst 50%. The remaining point predictions are used to calculate the new bounding box positions considering the displacement.
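A rough sketch of this idea using OpenCV's Lucas-Kanade optical flow is shown below: grid points are scored with the forward-backward error, the worst half is discarded, and the box is shifted by the median displacement. The grid size, the error criterion used for filtering, and the dummy frames are simplifying assumptions (the original Median Flow also updates scale and uses additional error measures).

```python
import cv2
import numpy as np

def median_flow_step(prev_gray, curr_gray, box):
    x, y, w, h = box
    # Rectangular grid of points inside the current bounding box
    gx, gy = np.meshgrid(np.linspace(x, x + w, 10), np.linspace(y, y + h, 10))
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(np.float32).reshape(-1, 1, 2)

    # Forward and backward Lucas-Kanade tracking
    fwd, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    bwd, _, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, fwd, None)
    fb_error = np.linalg.norm((pts - bwd).reshape(-1, 2), axis=1)

    keep = (fb_error <= np.median(fb_error)) & (st.ravel() == 1)  # drop worst 50%
    shift = np.median((fwd - pts).reshape(-1, 2)[keep], axis=0)
    return (x + shift[0], y + shift[1], w, h)    # scale update omitted here

# Dummy frames: smoothed noise and a shifted copy of it
prev = cv2.GaussianBlur(np.random.randint(0, 255, (200, 200), np.uint8), (7, 7), 0)
curr = np.roll(prev, (2, 3), axis=(0, 1))
print(median_flow_step(prev, curr, (50, 50, 20, 20)))
```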
MOSSE [6], KCF [47], and CSRT [51] are based upon DCFs. Bolme et al. [6] proposed a new type of correlation filter called Minimum Output Sum of Squared Errors (MOSSE), which aims at producing stable filters when initialized using only one frame and grey-scale templates. MOSSE is trained with a set of training images f_i and training outputs g_i, where g_i is generated from the ground truth as a 2D Gaussian centered on the target. This method can achieve state-of-the-art performance while running at high frame rates. Henriques et al. [47] replaced the grey-scale templates with HOG features and proposed the idea of the Kernelized Correlation Filter (KCF). KCF works with multiple channel-like correlation filters. Additionally, the authors proposed using non-linear regression functions, which are stronger than linear functions and provide non-linear filters that can be trained and evaluated as efficiently as linear correlation filters. Similar to KCF, dual correlation filters use multiple channels. However, they are based on linear kernels to reduce the computational complexity while maintaining almost the same performance as the non-linear kernels. Recently, Lukezic et al. [51] proposed to use channel and reliability concepts to improve tracking based on DCFs. In this method, channel-wise reliability scores weight the influence of the learned filters based on their quality to improve the localization performance. Furthermore, a spatial reliability map concentrates the filters on the relevant part of the object for tracking. This makes it possible to widen the search space and improves the tracking performance for non-rectangular objects.
As we stated before, the choice of appearance features plays a crucial role in object tracking. Most previous DCF-based works utilize handcrafted features such as HOG, grey-scale features, raw pixels, and color names or the deep features trained independently for other tasks. Wang et al. [32] proposed an end-to-end trainable network architecture able to learn convolutional features and perform the correlation-based tracking simultaneously. The authors encode a DCF as a correlation filter layer into the network, making it possible to backpropagate the weights through it. Since the calculations remain in the Fourier domain, the runtime complexity of the filter is not increased. The convolutional layers in front of the DCF encode the prior tracking knowledge learned during an offline training process. The DCF defines the network output as the probability heatmaps of object locations.
In the case of generic object tracking, the learning strategy is typically entirely online. However, online training of neural networks is slow due to backpropagation, leading to a high runtime complexity. Held et al. [17] therefore developed a regression-based tracking method, called GOTURN, based on an SNN, which uses an offline training approach helping the network learn the relationship between appearance and motion. This makes the tracking process significantly faster. The method utilizes the knowledge gained during offline training to track new, unknown objects online. The authors showed that, without online backpropagation, GOTURN can track generic objects at 100 fps. The inputs to the network are two image patches cropped from the previous and current frames, centered at the known object position in the previous frame. The size of the patches depends on the object bounding box size and can be controlled by a hyperparameter, which determines the amount of contextual information given to the network. The network output is the coordinates of the object in the current image patch, which are then transformed to image coordinates. GOTURN achieves state-of-the-art performance on common SOT benchmarks such as VOT 2014 (https://www.votchallenge.net/vot2014/, accessed on 10 May 2021).
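This patch-based setup can be summarized in a few lines; the snippet below is an illustrative sketch (with a hypothetical context factor of 2 and a placeholder regressor output), not the original GOTURN code.

```python
import numpy as np

def crop_with_context(frame, box, context=2.0):
    """Crop a window around `box`, enlarged by the context factor."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = (x2 - x1) * context, (y2 - y1) * context
    x0, y0 = int(max(cx - w / 2, 0)), int(max(cy - h / 2, 0))
    patch = frame[y0:int(y0 + h), x0:int(x0 + w)]
    return patch, (x0, y0)                       # offset needed to map back

def to_image_coords(box_in_patch, offset):
    x0, y0 = offset
    px1, py1, px2, py2 = box_in_patch
    return (px1 + x0, py1 + y0, px2 + x0, py2 + y0)

frame_prev = np.zeros((500, 500, 3), np.uint8)
frame_curr = np.zeros((500, 500, 3), np.uint8)
prev_box = (100, 100, 120, 120)
target_patch, _ = crop_with_context(frame_prev, prev_box)        # network input 1
search_patch, offset = crop_with_context(frame_curr, prev_box)   # network input 2
# A regressor would predict corners in patch coordinates, e.g. (12, 10, 31, 29):
print(to_image_coords((12, 10, 31, 29), offset))
```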

2.2.2. MOT Methods

Bewley et al. [27] proposed a simple multi-object tracking approach, called SORT, for online tracking applications. Bounding box position and size are the only values used for motion estimation and assigning the objects to their new positions in the next frame. In the first step, objects are detected using Faster R-CNN [12]. Subsequently, a linear constant velocity model approximates the movements of each object individually in consecutive frames. Afterwards, the algorithm compares the detected bounding boxes to the predicted ones based on IoU, resulting in a distance matrix. The Hungarian algorithm [52] then assigns each detected bounding box to a predicted (target) bounding box. Finally, the states of the assigned targets are updated using a Kalman filter. SORT runs with more than 250 Frames per Second (fps) with almost state-of-the-art accuracy. Nevertheless, occlusion scenarios and re-identification issues are not considered for this method, which makes it inappropriate for long-term tracking.
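A minimal sketch of the SORT association step is given below, assuming Kalman-predicted boxes are already available; the IoU threshold and the box values in the example are arbitrary.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    # Intersection over union of two (x1, y1, x2, y2) boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(predicted, detected, iou_threshold=0.3):
    # Cost matrix of (1 - IoU) solved by the Hungarian algorithm
    cost = np.array([[1.0 - iou(p, d) for d in detected] for p in predicted])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols)
            if 1.0 - cost[r, c] >= iou_threshold]   # reject weak matches

print(associate([(0, 0, 4, 4), (10, 10, 14, 14)],
                [(1, 1, 5, 5), (11, 10, 15, 14)]))   # -> [(0, 0), (1, 1)]
```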
Wojke et al. [34] extended SORT to DeepSORT and tackled the occlusion and re-identification challenges, keeping the track handling and Kalman filtering modules almost unaltered. The main improvement takes place in the assignment process, in which two additional metrics are used: (1) motion information based on the Mahalanobis distance between the detected and predicted bounding boxes, and (2) appearance information based on the cosine distance between the appearance features of a detected object and the already tracked object. The appearance features are computed by a deep neural network trained on a large person re-identification dataset [53]. A cascade strategy then determines object-to-track assignments. This strategy effectively encodes the probability spread in the association likelihood. DeepSORT performs poorly if the cascade strategy cannot match the detected and predicted bounding boxes.
Recently, Bergmann et al. [1] introduced Tracktor++, which is based on the Faster R-CNN object detection method. Faster R-CNN classifies region proposals into target and background and fits the selected bounding boxes to object contours by a regression head. The authors trained Faster R-CNN on the MOT17Det pedestrian dataset [25]. The first step is object detection by Faster R-CNN. The objects detected in the first frame are then initialized as tracks. Afterwards, the tracks are followed in the next frame by regressing their bounding boxes using the regression head. In this method, lost or deactivated tracks can be re-identified in the following frames using an SNN and a constant velocity motion model.

2.3. Tracking in Satellite and Aerial Imagery

The object tracking methods reviewed in the previous sections have been mainly developed for computer vision datasets and challenges. In this section, we focus on methods proposed for satellite and aerial imagery. Visual object tracking for targets such as pedestrians and vehicles in satellite and aerial imagery is a challenging task that has been addressed by only a few works, compared to the huge number addressing pedestrian and vehicle tracking in ground imagery [13,54]. Tracking in satellite and aerial imagery is much more complex. This is due to moving cameras, large image sizes, different scales, the large number of moving objects, the tiny sizes of the objects (e.g., 4 × 4 pixels for pedestrians, 30 × 15 pixels for vehicles), low frame rates, different visibility levels, and different atmospheric and weather conditions [25,55].

2.3.1. Tracking by Moving Object Detection

Most of the previous works in satellite and aerial object tracking are based on moving object detection [23,24,56]. Reilly et al. [23] proposed one of the earliest aerial object tracking approaches, focusing on vehicle tracking mainly on highways. They compensate camera motion by a correction method based on point correspondence. A median background image is then modeled from ten frames and subtracted from the original frame for motion detection, resulting in the moving object positions. All images are split into overlapping grids, with each grid defining an independent tracking problem. Objects are tracked using a bipartite graph, matching a set of label nodes and a set of target nodes. The Hungarian algorithm then solves the cost matrix to determine the assignments. The use of the grids allows tracking a large number of objects despite the O(n³) runtime complexity of the Hungarian algorithm.
Meng et al. [24] followed the same direction. They addressed the tracking of ships and grounded aircraft. Their method detects moving objects by calculating an Accumulative Difference Image (ADI) from frame to frame. Pixels with high values in the ADI are likely to be moving objects. Each target is afterwards modeled by extracting its spectral and spatial features, where the spectral features refer to the target probability density functions and the spatial features to the target geometric areas. Given the target model, matching candidates are found in the following frames via regional feature matching using a sliding window paradigm.
Tracking methods based on moving object detection are not applicable to our pedestrian and vehicle tracking scenarios. For instance, Reilly et al. [23] use a road orientation estimate to constrain the assignment problem. Such estimations, which may work for vehicles moving along predetermined paths (e.g., highways and streets), do not work for pedestrian tracking, where movement behaviors are much more diverse and complex (e.g., crowded situations and multiple crossings). In general, such methods perform poorly in unconstrained environments, are sensitive to illumination changes and atmospheric conditions (e.g., clouds, shadows, or fog), suffer from the parallax effect, and cannot handle small or static objects. Additionally, since finding the moving objects requires considering multiple frames, these methods cannot be used for real-time object tracking.

2.3.2. Tracking by Appearance Features

Methods based on appearance features overcome the issues of the tracking-by-moving-object-detection approaches [22,57,58,59,60], making it possible to detect small and static objects in single images. Butenuth et al. [57] deal with pedestrian tracking in aerial image sequences. They employ an iterative Bayesian tracking approach to track numerous pedestrians, where each pedestrian is described by its position, appearance features, and direction. A linear dynamic model then predicts future states. Each link between a prediction and a detection is weighted by evaluating the state similarity and associated with the direct link method described in [35]. Schmidt et al. [58] developed a tracking-by-detection framework based on Haar-like features. They use a Gentle AdaBoost classifier for object detection and an iterative Bayesian tracking approach similar to [57]. Additionally, they calculate the optical flow between consecutive frames to extract motion information. However, due to the difficulty of detecting small objects in aerial imagery, the performance of the method is degraded by a large number of false positives and negatives.
Bahmanyar et al. [22] proposed the Stack of Multiple Single Object Tracking CNNs (SMSOT-CNN), extending GOTURN, an SOT method developed by Held et al. [17], by stacking the GOTURN architecture to track multiple pedestrians and vehicles in aerial image sequences. SMSOT-CNN is the only previous DL-based work dealing with MOT in aerial imagery. SMSOT-CNN expands the GOTURN network by three additional convolutional layers to improve the tracker's performance in locating the object in the search area. In their architecture, each SOT-CNN is responsible for tracking one object individually, leading to a linear increase in tracking complexity with the number of objects. They evaluated their approach on the vehicle and pedestrian sets of the KIT AIS aerial image sequence dataset. Experimental results show that SMSOT-CNN significantly outperforms GOTURN. Nevertheless, SMSOT-CNN performs poorly in crowded situations and when objects share similar appearance features.
In Section 5, we experimentally investigate a set of the reviewed visual object tracking methods on three aerial object tracking datasets.

3. Datasets

In this section, we introduce the datasets used in our experiments, namely the KIT AIS (pedestrian and vehicle sets), the Aerial Multi-Pedestrian Tracking (AerialMPT) [26], and DLR's Aerial Crowd Dataset (DLR-ACD) [61]. All these datasets are the first of their kind and aim at promoting pedestrian and vehicle detection and tracking based on aerial imagery. The images of all these datasets have been acquired by the German Aerospace Center (DLR) using the 3K camera system, comprising a nadir-looking and two side-looking DSLR cameras, mounted on an airborne platform flying at different altitudes. The different flight altitudes and camera configurations allow capturing images with multiple spatial resolutions (GSDs) and viewing angles.
For the tracking datasets, since the camera is continuously moving, all images were orthorectified with a digital elevation model, co-registered, and geo-referenced with a GPS/IMU system in a post-processing step. Afterwards, images taken at the same time were fused into a single image and cropped to the region of interest. This process caused small errors visible in the frame alignments. The frame rate of all sequences is 2 Hz. The image sequences were captured during different flight campaigns and differ significantly in object density, movement patterns, quality, image size, viewing angle, and terrain. Furthermore, the sequences are composed of a varying number of frames, ranging from 4 to 47. The number of frames per sequence depends on the image overlap in flight direction and the camera configuration.

3.1. KIT AIS

The KIT AIS dataset was generated for two tasks, vehicle and pedestrian tracking. The data have been annotated manually by human experts and suffer from a few human errors. Vehicles are annotated by the smallest enclosing rectangle (i.e., bounding box) oriented in the direction of travel, while individual pedestrians are marked by point annotations on their heads. In our experiments, we used bounding boxes of 4 × 4 and 5 × 5 pixels for the pedestrians according to the GSDs of the images, which range from 12 to 17 cm. As objects may leave the scene or be occluded by other objects, the tracks are not labeled continuously in all cases. For the vehicle set, cars, trucks, and buses are annotated if they lie entirely within the image region with more than 2/3 of their bodies visible. In the pedestrian set, only pedestrians are labeled. Due to crowded scenarios or adverse atmospheric conditions in some frames, pedestrians can be barely visible. In these cases, the tracks have been estimated by the annotators as precisely as possible. Table 1 and Table 2 present the statistics of the pedestrian and vehicle sets of the KIT AIS dataset, respectively.
The KIT AIS pedestrian set is composed of 13 sequences with 2649 pedestrians (Pedest.), annotated by 32,760 annotation points (Anno.) throughout the frames (Table 1). The dataset is split into 7 training and 6 testing sequences with 104 and 85 frames (Fr.), respectively. The sequences have different lengths, ranging from 4 to 31 frames. The image sequences come from different flight campaigns over the Allianz Arena (Munich, Germany), the Rock am Ring concert (Nuremberg, Germany), and Karlsplatz (Munich, Germany).
The KIT AIS vehicle set comprises nine sequences with 464 vehicles annotated by 10,817 bounding boxes throughout 239 frames. It has no pre-defined train/test split. For our experiments, we split the dataset into five training and four testing sequences with 131 and 108 frames, respectively, similarly to [22]. According to Table 2, the lengths of the sequences vary between 14 and 47 frames. The image sequences have been acquired over a few highways, crossroads, and streets in Munich and Stuttgart, Germany. The dataset presents several tracking challenges such as lane changes, overtaking, and turning maneuvers, as well as partial and total occlusions by big objects (e.g., bridges). Figure 3 shows sample images from the KIT AIS vehicle dataset.

3.2. AerialMPT

The Aerial Multi-Pedestrian Tracking (AerialMPT) dataset [26] is newly introduced to the community and deals with the shortcomings of the KIT AIS dataset such as the poor image quality and limited diversity. AerialMPT consists of 14 sequences with 2528 pedestrians annotated by 44,740 annotation points throughout 307 frames (Table 3). Since the images have been acquired by a newer version of DLR's 3K camera system, their quality and contrast are much better than those of the KIT AIS dataset. Figure 4 compares a few sample images from the AerialMPT and KIT AIS datasets.
AerialMPT is split into 8 training and 6 testing sequences with 179 and 128 frames, respectively. The lengths of the sequences vary between 8 and 30 frames. The image sequences were selected from different crowd scenarios, ranging from pedestrians moving at mass events and fairs to sparser crowds in city centers. Figure 1 shows an image from the AerialMPT dataset with the overlaid annotations.

AerialMPT vs. KIT AIS

AerialMPT was generated in order to mitigate the limitations of the KIT AIS pedestrian dataset. In addition to the higher image quality, the minimum number of annotations per frame and the total number of annotations in AerialMPT are significantly larger than those of the KIT AIS dataset. All sequences in AerialMPT contain at least 50 pedestrians, while more than 20% of the sequences of KIT AIS include fewer than ten pedestrians. Based on our visual inspection, not only are the pedestrian movements in AerialMPT more complex and realistic, but the diversity of crowd densities is also greater than in KIT AIS. The sequences in AerialMPT differ in weather conditions and visibility, incorporating more diverse kinds of shadows compared to KIT AIS. Furthermore, the sequences of AerialMPT are longer on average, with 60% being longer than 20 frames (compared to less than 20% in KIT AIS). Further details on these datasets can be found in [26].

3.3. DLR-ACD

DLR-ACD, the first aerial crowd image dataset [61], comprises 33 large aerial RGB images with an average size of 3619 × 5226 pixels from different mass events and urban scenes containing crowds, such as sports events, city centers, open-air fairs, and festivals. The GSDs of the images vary between 4.5 and 15 cm/pixel. In DLR-ACD, 226,291 pedestrians have been manually labeled by point annotations, with the number of pedestrians ranging from 285 to 24,368 per image. In addition to its unique viewing angle, the large number of pedestrians in most of the images (>2 K) makes DLR-ACD stand out among the existing crowd datasets. Moreover, the crowd density can vary significantly within each image due to the large field of view. Figure 5 shows example images from the DLR-ACD dataset. For further details on this dataset, the interested reader is referred to [61].

4. Evaluation Metrics

In this section, we introduce the most important metrics used for our quantitative evaluations. We adopt widely used metrics from the MOT domain based on [25], which are listed in Table 4. In this table, ↑ and ↓ denote whether higher or lower values are better, respectively. The objective of MOT is finding the spatial positions of p objects as bounding boxes throughout an image sequence (object trajectories). Each bounding box is defined by the x and y coordinates of its top-left and bottom-right corners in each frame. Tracking performance is evaluated based on true positives (TP), correctly predicted object positions; false positives (FP), predictions of the position of another object instead of the target object's position; and false negatives (FN), where an object position is missed entirely. In our experiments, a prediction (tracklet) is considered a TP if the intersection over union (IoU) of the predicted and the corresponding ground truth bounding boxes is greater than 0.5. Moreover, an identity switch (IDS) occurs if an annotated object a is associated with a tracklet t although it was assigned to a different tracklet in the previous frame. The fragmentation metric gives the total number of times a trajectory is interrupted during tracking.
Among these metrics, the crucial ones are the Multiple-Object Tracker Accuracy (MOTA) and the Multiple-Object Tracker Precision (MOTP). MOTA represents the ability of trackers to follow the trajectories throughout the frames t, independently of the precision of the predictions:
MOTA = 1 - \frac{\sum_t \left( FN_t + FP_t + IDS_t \right)}{\sum_t GT_t}.
The Multiple-Object Tracker Accuracy Log (MOTAL) is similar to MOTA; however, ID switches are considered on a logarithmic scale.
MOTAL = 1 - \frac{\sum_t \left( FN_t + FP_t + \log_{10}(IDS_t + 1) \right)}{\sum_t GT_t}.
MOTP measures the performance of the trackers in precisely estimating object locations:
MOTP = \frac{\sum_{t,i} d_{t,i}}{\sum_t c_t},
where d_{t,i} is the distance between a matched object i and its ground truth annotation in frame t, and c_t is the number of matched objects in frame t.
Each tracklet can be considered as mostly tracked (MT), partially tracked (PT), or mostly lost (ML), based on how successfully an object is tracked during its whole lifetime. A tracklet is mostly lost if it is tracked for less than 20% of its lifetime and mostly tracked if it is tracked for more than 80% of its lifetime. Partially tracked applies to all remaining tracklets. We report MT, PT, and ML as percentages of the total number of tracks. The false acceptance rate (FAR) for an image sequence with f frames describes the average number of FPs per frame:
FAR = \frac{\sum_t FP_t}{f}.
In addition, we use recall and precision measures, defined as follows:
Rcll = \frac{\sum_t TP_t}{\sum_t (TP_t + FN_t)},
Prcn = \frac{\sum_t TP_t}{\sum_t (TP_t + FP_t)}.
Identification precision (IDP), identification recall (IDR), and IDF1 are similar to precision and recall; however, they take into account how long the tracker correctly identifies the targets. IDP and IDR are the ratios of computed and ground-truth detections that are correctly identified, respectively. IDF1 is calculated as the ratio of correctly identified detections over the average number of computed and ground-truth detections. IDF1 allows ranking different trackers based on a single scalar value. For further information on these metrics, the interested reader is referred to [62].
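As a compact reference, the hypothetical helper below computes the accumulation-based scores from per-frame counts, following the formulas above; the function name and the example numbers are made up for illustration.

```python
import math

def clear_mot_summary(tp, fp, fn, ids, gt, dist_sum, matched):
    """Per-frame count lists -> MOTA, MOTAL, MOTP, FAR, recall, precision."""
    mota = 1 - (sum(fn) + sum(fp) + sum(ids)) / sum(gt)
    motal = 1 - (sum(fn) + sum(fp)
                 + sum(math.log10(i + 1) for i in ids)) / sum(gt)
    motp = sum(dist_sum) / max(sum(matched), 1)   # distances over matched objects
    far = sum(fp) / len(fp)                       # average FPs per frame
    rcll = sum(tp) / (sum(tp) + sum(fn))
    prcn = sum(tp) / (sum(tp) + sum(fp))
    return dict(MOTA=mota, MOTAL=motal, MOTP=motp, FAR=far, Rcll=rcll, Prcn=prcn)

# Toy two-frame example
print(clear_mot_summary(tp=[40, 42], fp=[3, 2], fn=[5, 4], ids=[1, 0],
                        gt=[45, 46], dist_sum=[30.5, 28.0], matched=[40, 42]))
```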

5. Preliminary Experiments

This section empirically shows the existing challenges in aerial pedestrian tracking. We study the performance of a number of existing tracking methods, including KCF [47], MOSSE [6], CSRT [51], Median Flow [50], SORT, DeepSORT [34], Stacked-DCFNet [32], Tracktor++ [1], SMSOT-CNN [22], and Euclidean Online Tracking, on aerial data and show their strengths and limitations. Since only the KIT AIS pedestrian dataset was available to us in the early phase of our research, the experiments in this section were conducted on this dataset. However, our findings also hold for the AerialMPT dataset.
Tracking performance is usually correlated with detection accuracy for both detection-free and detection-based methods. As our main focus is on tracking performance, in most of our experiments we assume perfect detection results and use the ground truth data. While the detection-free methods are given only the object locations in the first frame, the detection-based methods are provided with the object locations in every frame. Therefore, for the detection-based methods, the most substantial measure is the number of ID switches, while for the other methods all metrics are considered in our evaluations.

5.1. From Single- to Multi-Object Tracking

Many tracking methods have been initially designed to track only single objects. However, according to [22], most of them can be extended to handle MOT. Tracking management is an essential function in MOT, which stores and exploits multiple active tracks at the same time in order to remove and initialize the tracks of objects leaving and entering the scene. For our experiments, we developed a tracking management module for extending the SOT methods to MOT. It unites memory management, including the assignment of unique track IDs and the storage of individual object positions, with track initialization, aging, and removal functionalities.
OpenCV provides several built-in object tracking algorithms. Among them, we investigate the KCF, MOSSE, CSRT, and Median Flow SOT methods. We extend them to MOT scenarios within the OpenCV framework and initialize the trackers with the ground truth bounding box positions.
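A minimal sketch of such a wrapper is shown below; it runs one OpenCV single-object tracker per target and handles ID assignment, aging, and removal. The class name, the choice of CSRT, and the max_age value are illustrative assumptions (the exact tracker factory functions, e.g., under cv2.legacy, depend on the installed OpenCV version and require the contrib modules).

```python
import cv2

class MultiTrackerManager:
    """Run one OpenCV SOT instance per object and manage track lifecycles."""

    def __init__(self, max_age=3):
        self.tracks, self.next_id, self.max_age = {}, 0, max_age

    def add(self, frame, box):                      # box = (x, y, w, h)
        tracker = cv2.TrackerCSRT_create()          # KCF/MOSSE/MedianFlow also possible
        tracker.init(frame, box)
        self.tracks[self.next_id] = {"tracker": tracker, "box": box, "age": 0}
        self.next_id += 1

    def update(self, frame):
        for tid in list(self.tracks):
            ok, box = self.tracks[tid]["tracker"].update(frame)
            if ok:
                self.tracks[tid].update(box=box, age=0)
            else:
                self.tracks[tid]["age"] += 1
                if self.tracks[tid]["age"] > self.max_age:
                    del self.tracks[tid]            # drop tracks lost too long
        return {tid: t["box"] for tid, t in self.tracks.items()}

# Usage: manager.add(first_frame, gt_box) per object, then manager.update(frame)
# once per subsequent frame.
```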
DCFNet [32] is also an SOT method based on a DCF. However, the DCF is implemented as part of a DNN and uses features extracted by a light-weight CNN. Therefore, DCFNet is a good choice to study whether deep features improve tracking performance compared to handcrafted ones. For our experiments, we take the PyTorch implementation (https://github.com/foolwood/DCFNet_pytorch, accessed on 10 May 2021) of DCFNet and modify its network structure to handle multi-object tracking; we refer to it as "Stacked-DCFNet". From the KIT AIS pedestrian training set, we crop a total of 20,666 image patches centered at every pedestrian. The patch size is the bounding box size multiplied by 10 in order to include some degree of contextual information. We then scale the patches to 125 × 125 pixels to match the network input size. Using these patches, we retrain the convolutional layers of the network for 50 epochs with the ADAM [63] optimizer, MSE loss, an initial learning rate of 0.01, and a batch size of 64. Moreover, we set the spatial bandwidth to 0.1 for both online tracking and offline training. Furthermore, in order to adapt it to MOT, we use our developed Python module. Multiple targets are given to the network within one batch. For each target object, the network receives two image patches, from the previous and current frames, centered on the known previous position of the object. The network output is a probability heatmap in which the highest value represents the most likely object location in the image patch of the current frame (search patch). If this value is below a certain threshold, we consider the object as lost. Furthermore, we propose a simple linear motion model and set the center point of the search patch to the position estimate of this model instead of the position of the object in the previous frame patch (as in the original work). Based on the latest movement v_t(x, y) of a target, we estimate its position as:
p_{est}(x, y) = p(x, y) + k \cdot v_t(x, y),
where k determines the influence of the last movement. For all methods, we remove objects if they leave the scene and their track ages are greater than 3 frames.
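The two pieces of this MOT adaptation can be sketched as follows; the lost_threshold value and the offset arithmetic of the toy example are illustrative assumptions, not values from the paper.

```python
import numpy as np

def search_center(prev_pos, last_velocity, k=1.0):
    # Centre the search patch on the linear motion estimate p_est = p + k * v_t
    return prev_pos + k * last_velocity

def locate_in_patch(response_map, patch_origin, lost_threshold=0.15):
    # Declare the target lost if the response peak is too weak,
    # otherwise return the peak position in image coordinates.
    if response_map.max() < lost_threshold:
        return None
    dy, dx = np.unravel_index(response_map.argmax(), response_map.shape)
    return patch_origin + np.array([dx, dy])

center = search_center(np.array([120.0, 80.0]), np.array([2.0, -1.0]))
new_pos = locate_in_patch(np.random.rand(125, 125), center - 62)  # 125 x 125 patch
```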
Table 5 and Table 6 show the overall and sequence-wise tracking results of these methods on the KIT AIS pedestrian dataset, respectively. The results in Table 5 indicate the poor performance of all these methods, with total MOTA scores varying between −85.8 and −55.9. The results of KCF and MOSSE are very similar. However, the use of HOG features and non-linear kernels in KCF improves MOTA by 0.9 and MOTP by 0.5 points, respectively, compared to MOSSE. Moreover, both methods mostly track only about 1% of the pedestrians on average. However, they have the first and second best MOTP values among the compared methods in Table 5. This indicates that although they lose track of many objects (partially or totally), their tracking localization is relatively precise. Moreover, according to the results, Stacked-DCFNet significantly outperforms the methods with handcrafted features, with a MOTA score of −37.3 (18.6 points higher than that of CSRT). The MT and ML rates also improve, with only 23.6% of all tracks mostly lost while 13.8% of the pedestrians are mostly tracked.
CSRT (which is also DCF-based) outperforms both prior methods significantly, reaching a total MOTA and MOTP of −55.9 and 78.4, respectively. The smaller MOTP value of CSRT indicates a slightly worse tracklet localization precision compared to KCF and MOSSE. Furthermore, it mostly tracks about 10% of the pedestrians on average, proving the effectiveness of the channel and reliability scores. According to the table, Median Flow achieves results comparable to CSRT, with total MOTA and MOTP scores of −63.8 and 77.7, respectively. Comparing the results of different sequences in Table 6 indicates that all algorithms perform significantly better on the "RaR_Snack_Zone_02" and "RaR_Snack_Zone_04" sequences. Based on visual inspection, we argue that this is due to their short length, resulting in fewer lost objects and ID switches. Comparing the performances on the longer sequences ("AA_Crossing_02", "AA_Walking_02", and "Munich02") demonstrates that Stacked-DCFNet performs much better than the other methods on these sequences, showing the ability of the method to track objects for a longer time.
Altogether, according to the results, we argue that the deep features outperform the handcrafted ones by a large margin.

5.2. Multi-Object Trackers

In this section, we study a number of MOT methods including SORT, DeepSORT, and Tracktor++. Additionally, we propose a new tracking algorithm called Euclidean Online Tracking (EOT) which uses the Euclidean distance for object matching.

5.2.1. DeepSORT and SORT

DeepSORT [34] is a MOT method combining deep appearance features with an IoU-based tracking strategy. For our experiments, we use the PyTorch implementation (https://github.com/ZQPei/deep_sort_pytorch, accessed on 10 May 2021) of DeepSORT and adapt it to the KIT AIS dataset by changing the bounding box size and IoU threshold, as well as fine-tuning the network on the training set of the KIT AIS dataset. As mentioned, we use the ground truth object locations and do not use DeepSORT's object detector. Table 7 and Table 8 show the tracking results of our experiments, in which Rcll, Prcn, FAR, MT, PT, ML, FN, FM, and MOTP are not relevant to our evaluations, as the ground truth is used instead of detection results. Therefore, the best values for these metrics are not highlighted for any of the methods in Table 7 or for the DeepSORT and SORT variants in Table 8.
In the first experiment, we employ DeepSORT with its original parameter settings. As the results show, this configuration is not suitable for tracking small objects (pedestrians) in aerial imagery. DeepSORT utilizes deep appearance features to associate objects to tracklets; however, for the first few frames, it relies on the IoU metric until enough appearance features are available. The original IoU threshold is 0.5. The standard DeepSORT uses a Kalman filter for each object to estimate its position in the next frame. However, due to small IoU overlaps between most predictions and detections, many tracks cannot be associated with any detection, making it impossible to use the deep features afterwards. The main cause of the small overlaps is the small size of the bounding boxes. For example, if the Kalman filter estimates the object position only 2 pixels off the detection's position, for a bounding box of 4 × 4 pixels the overlap would be below the threshold and, consequently, the tracklet and the object cannot be matched. These mismatches result in a large number of falsely initiated new tracks, leading to a total of 8627 ID switches, an average of 8.27 ID switches per person, and an average of 0.71 ID switches per detection.
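To make the overlap argument concrete (a worked computation assuming the 2-pixel offset lies along a single axis), the two 4 × 4 boxes intersect in a 2 × 4 region, so
IoU = \frac{2 \times 4}{4 \times 4 + 4 \times 4 - 2 \times 4} = \frac{8}{24} \approx 0.33 < 0.5.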
We tackle this problem by enlarging the bounding boxes by a factor of two in order to increase the IoU overlaps, increase the number of matched tracklets and detections, and enable the use of appearance features. According to Table 7, this configuration (DeepSORT-BBX2×) results in a 41.19% decrease in the total number of ID switches (from 8627 to 5073), a 56.38% decrease in the average number of ID switches per person (from 8.62 to 4.86), and a 59.15% decrease in the average number of ID switches per detection (from 0.71 to 0.42). We further analyze the impact of different IoU thresholds on the tracking performance. Figure 6 illustrates the number of ID switches for different IoU thresholds. It can be observed that by increasing the threshold (minimizing the required overlap for object matching), the number of ID switches decreases. The lowest number of ID switches (738) is achieved with an IoU threshold of 0.99, as can be seen in Table 7 for DeepSORT-IoU99. Based on these results, enlarging the bounding boxes and changing the IoU threshold significantly improves the tracking results of DeepSORT-BBX2×-IoU99 compared to the original settings of DeepSORT (ID switches by 91.44% and MOTA by 3.7 times). This confirms that the missing IoU overlap is the main issue with the standard DeepSORT.
After adapting the IoU object matching, the deep appearance features play a prominent role in the object tracking after the first few frames. Thus, fine-tuning DeepSORT's neural network on the training set of the KIT AIS pedestrian dataset can further improve the results (DeepSORT-BBX2×-IoU99-FT). Originally, the network was trained on a large person re-identification dataset, which is very different from our scenario, especially in viewing angle and object sizes, as the bounding boxes in aerial images are much smaller than in the person re-identification dataset (4 × 4 vs. 128 × 64 pixels). Scaling the bounding boxes of our aerial dataset to fit the network input size leads to considerable interpolation errors. For our experiments, we initialize the last re-identification layers from scratch and the rest of the network with the pre-trained weights and biases. We also change the number of classes to 610, representing the number of different pedestrians after cropping the images into patches of the bounding box size and ignoring patches located at the image border. Instead of scaling the patches to 128 × 64 pixels, we only scale them to 50 × 50 pixels. We train the classifier for 20 epochs with the SGD optimizer, a Cross-Entropy loss function, a batch size of 128, and an initial learning rate of 0.01. Moreover, we double the bounding box sizes for this experiment. The results in Table 7 show that the total number of ID switches only decreases from 738 to 734. This indicates that the deep appearance features of DeepSORT are not useful for our problem. While for a large object a small deviation of the bounding box position is tolerable (as the bounding box still mostly contains object-relevant areas), for our very small objects it can change the patch content significantly. The extracted features then mostly contain background information. Consequently, in the appearance matching step, the object features from its previous and currently estimated positions can differ significantly. Additionally, the appearance features of different pedestrians in aerial images are often not discriminative enough to distinguish them.
In order to better demonstrate this effect, we evaluate DeepSORT without any appearance features, also known as SORT. Table 7 shows the tracking results with original and doubled bounding box sizes and an IoU threshold of 0.99. According to the results, SORT outperforms the fine-tuned DeepSORT with 438 ID switches. Nevertheless, the number of ID switches is still high, given that we use the ground truth object positions. This could be due to the low frame rate of the dataset and the small sizes of the objects. Although enlarging the bounding boxes improves the performance significantly (60% and 56% better MOTA for DeepSORT and SORT, respectively), it leads to poor localization accuracy.

5.2.2. Tracktor++

Tracktor++ [1] is an MOT method based on deep features. It employs a Faster R-CNN to perform object detection and tracking through regression. We use its PyTorch implementation (https://github.com/phil-bergmann/tracking_wo_bnw, accessed on 10 May 2021) and adapt it to our aerial dataset. We tested Tracktor++ with the ground truth object positions instead of its detection module; however, it completely failed the tracking task with these settings. Faster R-CNN has been trained on datasets that are very different from our aerial dataset, for example, in viewing angle and in the number and size of the objects. Therefore, we fine-tune Faster R-CNN on the KIT AIS dataset. To this end, we had to adjust the training procedure to the specifications of our dataset.
We use Faster-RCNN with a ResNet50 backbone, pre-trained on the ImageNet dataset. We change the anchor sizes to {2, 3, 4, 5, 6} and the aspect ratios to {0.7, 1.0, 1.3}, enabling it to detect small objects. Additionally, we increase the maximum detections per image to 300, set the minimum size of an image to be rescaled to 400 pixels, the region proposal non-maximum suppression (NMS) threshold to 0.3, and the box predictor NMS threshold to 0.1. The NMS thresholds influence the amount of overlap for region proposals and box predictions. Instead of SGD, we use an ADAM optimizer with an initial learning rate of 0.0001 and a weight decay of 0.0005. Moreover, we decrease the learning rate every 40 epochs by a factor of 10 and set the number of classes to 2, corresponding to background and pedestrians. We also apply substantial online data augmentation including random flipping of every second image horizontally and vertically, color jitter, and random scaling in a range of 10%.
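The listed settings map onto the torchvision Faster R-CNN API roughly as follows; this is a hedged sketch of a plausible configuration, not the authors' training code, and the keyword for loading pretrained backbone weights differs across torchvision versions.

```python
import torch
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# ResNet50-FPN backbone (load ImageNet weights here in practice)
backbone = resnet_fpn_backbone(backbone_name="resnet50", weights=None)

# Small anchor sizes and the listed aspect ratios, repeated for the 5 FPN levels
anchors = AnchorGenerator(sizes=((2, 3, 4, 5, 6),) * 5,
                          aspect_ratios=((0.7, 1.0, 1.3),) * 5)

model = FasterRCNN(backbone,
                   num_classes=2,                  # background + pedestrian
                   rpn_anchor_generator=anchors,
                   min_size=400,                   # minimum rescaled image size
                   box_detections_per_img=300,     # max detections per image
                   rpn_nms_thresh=0.3,
                   box_nms_thresh=0.1)

# ADAM optimizer and a step decay by a factor of 10 every 40 epochs
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.1)
```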
The tracking results of Tracktor++ with the fine-tuned Faster-RCNN are presented in Table 7. The detection precision and recall of Faster-RCNN are 25% and 31%, respectively; this poor detection performance potentially propagates to the tracking step. According to the table, Tracktor++ only achieves an overall MOTA of 5.3 and 2188 ID switches even when we use the ground truth object positions. We assume that Tracktor++ struggles with the low frame rate of the dataset and the small object sizes.

5.2.3. SMSOT-CNN

SMSOT-CNN [22] is the first DL-based method for multi-object tracking in aerial imagery. It is an extension of GOTURN [17], an SOT regression-based method using CNNs to track generic objects at high speed. SMSOT-CNN adapts GOTURN to MOT scenarios with three additional convolutional layers and a tracking management module. The network receives two image patches from the previous and current frames, both centered at the object position in the previous frame. The size of the image patches (the amount of contextual information) is adjusted by a hyperparameter. The network regresses the object position in the coordinates of the current frame's image patch. SMSOT-CNN has been evaluated on the KIT AIS pedestrian dataset in [22], where the objects' first positions are given based on the ground truth data. The tracking results can be seen in Table 7. Due to the use of a deep network and the local search for the next position of the objects, the number of ID switches by SMSOT-CNN is 157, which is small relative to the other methods. Moreover, this algorithm achieves an overall MOTA and MOTP of −29.8 and 71.0, respectively. Based on our visual inspections, SMSOT-CNN has some difficulties in densely crowded situations where the objects share similar appearance features. In these cases, multiple similarly looking objects can be present in one image patch, resulting in ID switches and losing track of the target objects. Furthermore, the small sizes of the pedestrians make them similar to many non-pedestrian objects in the feature space, causing a large number of FPs and FNs.

5.2.4. Euclidean Online Tracking

Inspired by the tracking results of SORT and its simplicity, we propose EOT, which builds on the architecture of SORT for pedestrian tracking in aerial imagery. EOT uses a Kalman filter as in SORT. It then calculates the Euclidean distance between all predictions $(x_i, y_i)$ and detections $(x_j, y_j)$ and normalizes it with respect to the GSD of the frame to construct a cost matrix as follows:
$D_{i,j} = GSD \cdot \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}.$
After that, as in SORT, we use the Hungarian algorithm to look for the global minimum. However, if objects enter or leave the scene, the Hungarian algorithm can propagate an error to the whole prediction-detection matching procedure; therefore, we constrain the cost matrix so that all distances greater than a certain threshold are ignored and set to an infinite cost. We empirically set the threshold to $17 \cdot GSD$ pixels. Furthermore, only objects successfully tracked in the previous frame are considered for the matching process. According to Table 7, while the total MOTA score is competitive with the previously studied methods, EOT achieves the fewest ID switches (only 37). Compared to SORT, as EOT keeps better track of the objects, the deviations in the Kalman filter predictions are smaller. Therefore, the Euclidean distance is a better option than IoU for our aerial image sequences.
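The gating and assignment step can be sketched as follows; the function and variable names are illustrative, and a large finite cost stands in for the infinite cost mentioned above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_predictions_to_detections(predictions, detections, gsd, threshold):
    """Match Kalman predictions to detections via GSD-normalized Euclidean distance.
    predictions: (N, 2) array of predicted (x, y) centers; detections: (M, 2) array."""
    diff = predictions[:, None, :] - detections[None, :, :]
    cost = gsd * np.sqrt((diff ** 2).sum(axis=-1))         # D_{i,j}
    infeasible = 1e6
    # Gate the cost matrix: distances above the (empirically set) threshold are forbidden
    gated = np.where(cost > threshold, infeasible, cost)
    rows, cols = linear_sum_assignment(gated)               # Hungarian algorithm
    return [(r, c) for r, c in zip(rows, cols) if gated[r, c] < infeasible]
```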

5.3. Conclusion of the Experiments

In this section, we conclude our preliminary study. According to the results, our EOT is the best performing tracking method. Figure 7 illustrates a typical success case of our EOT method. We can observe that almost all pedestrians are tracked successfully, even though the sequence is crowded and people walk in different directions. Furthermore, many of the false positives and false negatives are caused by the limitations of the evaluation approach. In other words, while EOT tracks most of the objects, since the evaluation is constrained to a minimum overlap of 50% (4 pixels), correctly tracked objects with smaller overlaps are not counted.
Figure 8 shows a typical failure case of the Stacked-DCFNet method. In the first two frames, most of the objects are tracked correctly; however, after that, the diagonal line in the patch center is confused with the people walking across it. We assume that the line shares similar appearance features with the crossing people. Figure 9 demonstrates a successful tracking case by Stacked-DCFNet. People are not walking closely together and the background is more distinguishable from the people. Figure 10 illustrates another typical failure case of DCFNet. The image includes several people walking closely in different directions, introducing confusion into the tracking method due to the people’s similar appearance features. We closely investigate these failure cases in Figure 11. In this figure, we visualize the activation map of the last convolution layer of the network. Although the convolutional layers of Stacked-DCFNet are supposed to be trained only for people, the line and the people (considering their shadows) appear indistinguishable. Moreover, based on the features, different people cannot be discriminated. We also evaluated SMSOT-CNN and found that it shares similar failure and success cases with Stacked-DCFNet, as both take advantage of convolutional layers for extracting appearance features.
Altogether, the Euclidean distance paired with trajectory information in EOT works better than IoU for tracking in aerial imagery. However, detection-based trackers such as EOT require object detections in every frame. As shown for Tracktor++, the detection accuracy of object detectors for pedestrians in aerial images is very poor. Thus, detection-based methods are not appropriate for our scenarios. Moreover, the approaches which employ deep appearance features for re-identification share a similar problem with object detectors: their features discriminate poorly between similarly looking objects, leading to ID switches and losing track of objects. The tracking methods based on regression and correlation (e.g., Stacked-DCFNet and SMSOT-CNN) show, in general, better performance than the methods based on re-identification, because they track objects within local image patches, which prevents errors from propagating to the whole image. Furthermore, according to our investigations, the path taken by a pedestrian is influenced by three factors: (1) the pedestrian's path history, (2) the positions and movements of the surrounding people, and (3) the arrangement of the scene.
We conclude that both regression- and correlation-based tracking methods are good choices for our scenario. They can be improved by considering trajectory information and the pedestrians' movement relationships.

6. AerialMPTNet

In this section, we explain our proposed AerialMPTNet tracking algorithm with its different configurations. Parts of its architecture and configurations have been presented in [26].
As stated in Section 5, a pedestrian's movement trajectory is influenced by its movement history, its motion relationships to its neighbours, and the scene arrangement. The same holds for vehicles in traffic scenarios. Vehicles face additional constraints, such as moving along predetermined paths (e.g., streets, highways, railways) most of the time. Different objects have different motion characteristics such as speed and acceleration. For example, several studies have shown that the walking speed of pedestrians is strongly influenced by their age, gender, temporal variations, and distractions (e.g., cell phone usage), by whether the individual is moving in a group, and even by the size of the city where the event takes place [64,65]. Regarding road traffic, similar factors can influence driving behaviors and movement characteristics (e.g., cell phone usage, age, stress level, and fatigue) [66,67]. Furthermore, similarly to pedestrians, the maneuvers of a vehicle can directly affect the movements of neighbouring vehicles: for example, if a vehicle brakes, all the following vehicles must brake, too.
The understanding of individual motion patterns is crucial for tracking algorithms, especially when only limited visual information about the target objects is available. However, current regression-based tracking methods such as GOTURN and SMSOT-CNN do not incorporate movement histories or the relationships between adjacent objects. These networks locate the next position of an object by monitoring a search area in its immediate proximity. Thus, the contextual information provided to the network is limited. Additionally, during the training phase, the networks do not learn how to differentiate the targets from similarly looking objects within the search area. Thus, as discussed in Section 5, ID switches and lost object tracks frequently occur for these networks in crowded situations or when objects intersect.
In order to tackle the limitations of previous works, we propose to fuse visual features, track history, and the movement relationships of adjacent objects in an end-to-end fashion within a regression-based DNN, which we refer to as AerialMPTNet. Figure 12 shows an overview of the network architecture. AerialMPTNet takes advantage of a Siamese Neural Network (SNN) for visual features, a Long Short-Term Memory (LSTM) module for movement histories, and a GraphCNN for movement relationships. The network takes two local image patches cropped from two consecutive images (previous and current), called the target and search patch, in which the object location is known and has to be predicted, respectively. Both patches are centered at the object coordinates known from the previous frame. Their size (the degree of contextual information) is correlated with the size of the objects and is set to 227 × 227 pixels to be compatible with the network's input. Both patches are then given to the SNN module (retained from [22]), composed of two branches of five 2D convolutional, two local response normalization, and three max-pooling layers with shared weights. Afterwards, the two output features $Out_{SNN}$ are concatenated and given to three 2D convolutional layers and, finally, four fully connected layers regressing the object position in the search patch coordinates. We use ReLU activations for all these convolutional layers.
The network output is a vector of four values indicating the x and y coordinates of the top-left and bottom-right corners of the objects’ bounding boxes. These coordinates are then transformed into image coordinates. In our network, the LSTM module and the GraphCNN module use the object coordinates in the search patch and image domain, respectively.

6.1. Long Short-Term Memory Module

In order to encode movement histories and predict object trajectories, recent works mainly rely on LSTM- and RNN-based structures [68,69,70]. While these structures have mostly been used for individual objects, the large number of objects in our scenarios prevents us from applying them directly. Thus, we propose a structure which treats all objects with a single model and predicts movements (movement vectors) instead of positions.
In order to test our idea, we built an LSTM comprising two bidirectional LSTM layers with 64 dimensions, a dropout layer with p = 0.5 in between, and a linear layer which generates two-dimensional outputs representing the x and y values of the movement vector. The inputs of the LSTM module are two-dimensional movement vectors with dynamic lengths of up to five steps of the objects' movement histories. We applied this module to our pedestrian tracking datasets. The results of this experiment show that our LSTM module can predict the next movement vector of multiple pedestrians with a precision of about 3.6 pixels (0.43 m), which is acceptable for our scenarios. Therefore, training a single LSTM on multiple objects is sufficient for predicting the objects' movement vectors.
We embed a similar LSTM module into our network, as shown in Figure 12. For the training of the module, the network first generates a sequence of object movement vectors based on the object location predictions. In our experiments, each track has a dynamic history of up to its five last predictions. As tracks are not assumed to start at the same time, the length of each track history can differ. Thus, we use zero-padding to equalize the lengths of the track histories, allowing us to process them together as a batch. These sequences are fed into the first LSTM layer with a hidden size of 64. A dropout with p = 0.5 is then applied to the hidden state of the first LSTM layer, and the result is passed to the second LSTM layer. The output features of the second LSTM layer are fed into a linear layer of size 128. The 128-dimensional output of the LSTM module, $Out_{LSTM}$, is then concatenated with $Out_{SNN}$ and $Out_{Graph}$, the output of the GraphCNN module. The concatenation allows the network to predict object locations more precisely based on a fusion of appearance and movement features.
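A compact PyTorch sketch of such an LSTM module is given below; the bidirectional configuration follows the standalone experiment described above, and the class and variable names are illustrative rather than the original implementation.

```python
import torch
import torch.nn as nn

class TrackHistoryLSTM(nn.Module):
    """Encodes up to five zero-padded 2-D movement vectors into a 128-D feature."""
    def __init__(self, hidden=64, out_dim=128):
        super().__init__()
        self.lstm1 = nn.LSTM(input_size=2, hidden_size=hidden,
                             batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(p=0.5)
        self.lstm2 = nn.LSTM(input_size=2 * hidden, hidden_size=hidden,
                             batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, out_dim)

    def forward(self, movement_vectors):
        # movement_vectors: (batch, 5, 2), zero-padded for shorter histories
        x, _ = self.lstm1(movement_vectors)
        x = self.dropout(x)
        x, _ = self.lstm2(x)
        return self.fc(x[:, -1, :])   # last time step as the track embedding
```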

6.2. GraphCNN Module

The GraphCNN module consists of three 1D convolutional layers with 1 × 1 kernels and 32, 64, and 128 channels, respectively. We generate each object's adjacency graph based on the location predictions of all objects. To this end, the eight closest neighbors within a radius of 7.5 m from the object are considered and modeled as a directed graph by a set of vectors $v_i$ pointing from the neighbouring objects to the target object's position $(x, y)$. The resulting graph is represented as $[x, y, x_{v_1}, y_{v_1}, \ldots, x_{v_8}, y_{v_8}]$. If fewer than eight neighbors exist, we zero-pad the remaining vectors.
The GraphCNN module also uses historical information by considering the five previous graph configurations. Similarly to the LSTM module, we use zero-padding if fewer than five previous configurations are available. The resulting graph sequences are described by an 18 × 5 matrix, which is fed into the first convolutional layer. In our setup, the graph sequences of multiple objects are given to the network as a batch of matrices. The output of the last convolutional layer is passed through a global average pooling in order to generate the final 128-dimensional output of the module, $Out_{Graph}$, which is concatenated with $Out_{SNN}$ and $Out_{LSTM}$. The features of the GraphCNN module enable the network to better understand group movements.
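The module can be sketched as follows; the ReLU activations are an assumption, as the text specifies only the convolution sizes and the pooling.

```python
import torch.nn as nn

class NeighborGraphCNN(nn.Module):
    """Three 1x1 1D convolutions over a sequence of five 18-D neighbor-graph vectors."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(18, 32, kernel_size=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=1), nn.ReLU(),
            nn.Conv1d(64, out_dim, kernel_size=1), nn.ReLU(),
        )

    def forward(self, graph_seq):
        # graph_seq: (batch, 18, 5); each column is [x, y, x_v1, y_v1, ..., x_v8, y_v8]
        x = self.conv(graph_seq)      # (batch, 128, 5)
        return x.mean(dim=-1)         # global average pooling over the time axis
```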

6.3. Squeeze-and-Excitation Layers

During our preliminary experiments in Section 5, we observed a high variation in the quality of the activation maps produced by the convolutional layers of DCFNet and SMSOT-CNN. This variation shows the direct impact of single channels and their importance for the final result of the network. In order to account for this factor in our approach, we model the relative importance of single channels by Squeeze-and-Excitation (SE) layers [71].
CNNs extract image information by sliding spatial filters across the inputs of different layers. While the lower layers extract detailed features such as edges and corners, the higher layers extract more abstract structures such as object parts. In this process, each filter at each layer has a different relevance to the network output; however, all filters (channels) are usually weighted equally. Adding SE layers to a network helps weighting each channel adaptively based on its relevance. In an SE layer, each channel is squeezed to a single value by global average pooling [72], resulting in a vector with k entries. This vector is given to a fully connected layer, which reduces the size of the output vector by a certain ratio, followed by a ReLU activation function. The result is fed into a second fully connected layer that scales the vector back to its original size and applies a sigmoid activation afterwards. In the final step, each channel of the convolution block is multiplied by the corresponding result of the SE layer. This channel weighting step adds less than 1% to the overall computational cost. As can be seen in Figure 12, we add one SE layer after each branch of the SNN module and one SE layer after the fusion of $Out_{SNN}$, $Out_{LSTM}$, and $Out_{Graph}$.
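A minimal SE block in PyTorch could look as follows; the reduction ratio of 16 is a common default and an assumption here, as the text does not specify it.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: adaptively re-weights the channels of a feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        # x: (batch, channels, H, W)
        s = x.mean(dim=(2, 3))            # squeeze: global average pooling per channel
        s = torch.relu(self.fc1(s))       # excitation: bottleneck FC + ReLU
        s = torch.sigmoid(self.fc2(s))    # channel weights in (0, 1)
        return x * s[:, :, None, None]    # re-scale each channel
```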

6.4. Online Hard Example Mining

In the object detection domain, datasets usually contain a large number of easy cases compared to the cases which are challenging for the algorithms. Several strategies have been developed to account for this, such as sample-aware loss functions (e.g., Focal Loss [73]), where easy and hard samples are weighted based on their frequencies, and Online Hard Example Mining (OHEM) [28], which feeds hard examples back to the network if they were previously predicted incorrectly. Selecting and focusing on such hard examples can make the training more effective. OHEM has been explored for the object detection task [74,75]; however, its usage has not been investigated for object tracking. In the multi-object tracking domain, such strategies have rarely been used, although tracking datasets suffer from the same sample imbalance problem as detection datasets. To the best of our knowledge, none of the previous works on regression-based tracking used OHEM during the training process.
Thus, in order to deal with the sample imbalance problem of our datasets, we propose adapting and employing OHEM in our training process. To this end, if the tracker loses an object during training, we reset the object to its original starting position and starting frame, and feed it to the network again in the next iteration. If the tracker fails again, we ignore the sample by removing it from the batch.
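The bookkeeping behind this adaptation can be sketched as follows; the `TrackSample` structure and the way lost tracks are reported are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class TrackSample:
    track_id: int
    start_frame: int
    current_frame: int
    retried: bool = False   # True once the sample has been reset after a failure

def ohem_step(batch, lost_ids):
    """Advance successful tracks, restart lost ones once, drop them on a second failure."""
    next_batch = []
    for s in batch:
        if s.track_id not in lost_ids:
            s.current_frame += 1                       # tracked successfully
            next_batch.append(s)
        elif not s.retried:                            # hard example: retry from the start
            next_batch.append(TrackSample(s.track_id, s.start_frame,
                                          s.start_frame, retried=True))
        # failed twice: the sample is removed from the batch
    return next_batch
```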

7. Experimental Setup

For all of our experiments, we used PyTorch and one Nvidia Titan XP GPU. We trained all networks with an SGD optimizer and an initial learning rate of $10^{-6}$. For all training setups, unless indicated otherwise, we use the $L_1$ loss, $L(x, \hat{x}) = |x - \hat{x}|$, where $x$ and $\hat{x}$ represent the output of the network and the ground truth, respectively. The batch size in all our experiments is 150; however, during offline feedback training, the batch size can differ due to unsuccessful tracking cases and the subsequent removal of objects from the batch.
In our experiments, we consider SMSOT-CNN as the baseline network and compare the different parts of our approach to it. The original SMSOT-CNN is implemented in Caffe. In order to make it completely comparable to our approach, we re-implement it in PyTorch. For the training of SMSOT-CNN, we assign different fractions of the initial learning rate to each layer, as in the original Caffe implementation, which is inspired by the GOTURN implementation. In more detail, we assign the initial learning rate to each convolutional layer and a learning rate 10 times larger to the fully connected layers. Weights are initialized by Gaussians with different standard deviations, while biases are initialized by constant values (zero or one), as in the Caffe version. The training process of SMSOT-CNN is based on a so-called Example Generator. Provided with one target image with known object coordinates, it creates multiple examples by generating and shifting the search crop to simulate different kinds of movements. It is also possible to give the true target and search images. A hyperparameter, set to 10, controls the number of examples generated for each image. For pedestrian tracking, we use DLR-ACD to increase the number of available training samples. SMSOT-CNN is trained completely offline and learns to regress the object location based only on the previous location of the object.
For AerialMPTNet, we train the SNN module and the fully connected layers as in SMSOT-CNN. After that, these layers are initialized with the learnt weights, and the remaining layers are initialized with the standard PyTorch initialization. Moreover, we decay the learning rate by a factor of 0.1 every twenty thousand iterations and train AerialMPTNet in an end-to-end fashion by using feedback loops to integrate previous movement and relationship information between adjacent objects. In contrast to the training process of SMSOT-CNN, which is based on artificial movements created by the example generator, we train our networks on real tracks.
In the training process, a batch of 150 random tracks (i.e., objects from random sequences of the training set) is first selected, starting at a random time step between 0 and $t_{end} - 1$, where $t_{end}$ is the final time step of the track. We give the network the target and search patches for these objects. The network's goal is to regress each object position in the search patches consecutively until either the object is lost or the track ends. The target and search patches are generated based on the network predictions in consecutive frames. An object remains in the batch as long as the network tracks it successfully. If the ground truth object position lies outside of the predicted search area or the track reaches its end frame, we remove the object from the batch and replace it with a new, randomly selected object.
For each track and each time step, the network's prediction is stored and used by the LSTM and GraphCNN modules. For each object in the batch, the LSTM module is given the object's movement vectors from the latest time steps, up to a maximum of five, as explained in Section 6. This provides the network with an understanding of each object's movement characteristics through a prediction of the next movement. As a result, our network uses its predictions as feedback to improve its performance. Furthermore, we perform gradient clipping for the LSTM during training to prevent exploding gradients. The neighbor calculation of the GraphCNN module is also based on the network's prediction of each object's position, as mentioned in Section 6. During training, based on the network's prediction of the object position, we search for the nearest neighbors in the ground truth annotations of that frame; during the testing phase, however, we search for the nearest neighbors based on the network's predictions of the object positions.
For the pedestrian datasets, we set the context factor to 4; each object with a bounding box size of 4 × 4 pixels thus results in an image patch of 16 × 16 pixels. For vehicle tracking, however, due to the larger bounding box sizes, we reduce the context factor to 3. This helps avoiding multiple vehicles in a single image patch, which could cause track confusion.
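The patch generation can be sketched as below; the helper name and the border handling are simplifications, and the resizing to 227 × 227 pixels follows the network input size mentioned in Section 6.

```python
import cv2

def crop_context_patch(image, cx, cy, box_size, context_factor, out_size=227):
    """Crop a square patch of side context_factor * box_size around (cx, cy)
    and resize it to the network input size."""
    half = max(1, int(round(context_factor * box_size / 2)))
    h, w = image.shape[:2]
    x0, y0 = max(0, int(cx) - half), max(0, int(cy) - half)
    x1, y1 = min(w, int(cx) + half), min(h, int(cy) + half)
    patch = image[y0:y1, x0:x1]
    return cv2.resize(patch, (out_size, out_size))

# Pedestrians: 4x4 px boxes, context factor 4 -> 16x16 px patches
# Vehicles: larger boxes, context factor 3 to avoid multiple vehicles per patch
```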

8. Evaluation and Discussion

In this section, we evaluate different parts of our proposed AerialMPTNet on the KIT AIS and AerialMPT datasets through a set of ablation studies. Furthermore, we compare our results to the tracking methods discussed in Section 5. Table 9 reports the different network configurations for our ablation studies.

8.1. SMSOT-CNN (PyTorch)

The tracking results of our PyTorch SMSOT-CNN on the AerialMPT and KIT AIS pedestrian and vehicle datasets are presented in Table 10. Therein, SMSOT-CNN achieves MOTA and MOTP scores of −35.0 and 70.0 for the KIT AIS pedestrian dataset and 37.1 and 75.8 for the KIT AIS vehicle dataset, respectively. On the AerialMPT dataset, it achieves a MOTA and MOTP of −37.2 and 68.0, respectively. IDF is highest for the RaR_Snack_Zone and Pasing7 sequences (of the KIT AIS pedestrian and AerialMPT datasets, respectively), at about 63.1 and 57.7. This is due to the smaller number of people in these sequences, which lowers the probability of falsely tracking an ID and also affects other metrics such as IDP, IDR, FAR, MT, PT, and ML. Regarding FP, FN, and ID switches, Munich02 and Bauma3 show the highest numbers of wrong detections and ID switches; however, the performance on Bauma3 is comparable with that on the other sequences due to the lower noise in the dataset. A comparison of the results to [22] shows that our PyTorch implementation works rather similarly to the original Caffe version, with only 5.2 and 4.0 points smaller MOTA for the KIT AIS pedestrian and vehicle datasets, respectively. For the rest of our experiments, we consider the results of this implementation of SMSOT-CNN as the baseline for our evaluations.

8.2. AerialMPTNet (LSTM Only)

In this step, we evaluate the influence of the LSTM module on the tracking performance of our AerialMPTNet. Table 11 reports the tracking results of AerialMPTNet$_{LSTM}$ on our experimental datasets. We use the pre-trained weights of SMSOT-CNN to initialize the convolutional weights and biases. For the KIT AIS pedestrian dataset, we evaluate the effect of freezing these weights during the training of the LSTM module. The tracking results with frozen and trainable convolutional weights in Table 11 show that the latter improves the MOTA and MOTP values by 8.2 and 0.5, respectively. Moreover, the network trained with trainable weights tracks 6.9% more objects mostly during their lifetimes (MT). This increase in performance holds for all sequences, with different numbers of frames and objects, with regard to IDF, IDP, IDR, MT, ML, FP, and FN. However, with trainable weights, the number of ID switches (IDS) increases from 231 to 270, which we attribute to the small size of the dataset and the large number of trainable weights. Visual inspections show that, although the network with trainable weights can track objects for a longer time, it loses track of them through ID switches when they enter crowded areas. Based on these comparisons, we argue that the features computed by the SNN require a certain degree of fine-tuning in order to work jointly with the LSTM module, which explains why the training with trainable weights outperforms the setting employing frozen weights. Thus, for the rest of our experiments, we use trainable weights. Consequently, Table 11 shows only the results with trainable weights for the AerialMPT and KIT AIS vehicle datasets.
Table 12 presents the overall performance of the different tracking methods on the KIT AIS and AerialMPT datasets. According to the table, AerialMPTNet$_{LSTM}$ outperforms SMSOT-CNN with a significantly larger MOTA on all experimental datasets. In particular, based on Table 10 and Table 11, the main improvements occur for complex sequences such as the “AA_Walking_02” and “Munich02” sequences of the KIT AIS pedestrian dataset, with a 20.8 and 23.8 points larger MOTA, respectively.
On the AerialMPT dataset, the most complex sequences are “Bauma3” and “Bauma6”, presenting overcrowded scenarios with many intersecting pedestrians. According to the results, the LSTM module does not noticeably help the performance in these cases. In such complex sequences, the trajectory information of the LSTM module is not enough to distinguish pedestrians and track them within the crowds. Furthermore, the increase in the number of mostly and partially tracked objects (MT and PT) and the decrease in the number of mostly lost ones (ML) indicate that the LSTM module helps AerialMPTNet track the objects for a longer time. This, however, causes a larger number of ID switches, as discussed before. On the KIT AIS vehicle dataset, although the results show a significant improvement of AerialMPTNet$_{LSTM}$ over SMSOT-CNN, the performance gains are minor compared to the pedestrian datasets. This could be due to the more distinguishable appearance features of the vehicles, leading to a good performance even when relying solely on the SNN module.

8.3. AerialMPTNet (GCNN Only)

In this step, we focus on modeling the movement relationships between adjacent objects by AerialMPTNet$_{GCNN}$. As described in Table 9, we only consider the SNN and GCNN modules and train the network on our experimental datasets. The tracking results on the test sequences of the datasets are shown in Table 13, and the comparisons to the other methods are provided in Table 12. Adding the GCNN module increases the performance significantly compared to SMSOT-CNN: MOTA improves by 11.8, 12.0, and 5.7 points on the AerialMPT, KIT AIS pedestrian, and KIT AIS vehicle datasets, respectively. The MT, PT, and ML values also improve for the pedestrian datasets, whereas only MT is enhanced on the vehicle dataset. IDF, IDP, and IDR improve on all three datasets, indicating that the GCNN helps when objects are close to each other and that keeping track of each object as a graph node is effective. Altogether, these results indicate that the relational information is more important for pedestrians than for vehicles. Moreover, according to Table 13, as in the LSTM results, the use of the GCNN helps more for complex sequences. For example, the MOTA on the “AA_Walking_02” and “Munich02” sequences increases by 13.9 and 20.5, respectively; however, it decreases by 12.1 and 14.8 on “AA_Crossing_02” and “RaR_Snack_Zone_02”, respectively. This could be due to the negative impact of the large amount of zero padding in the less crowded sequences with fewer adjacent objects. Compared to AerialMPTNet$_{LSTM}$, AerialMPTNet$_{GCNN}$ performs slightly better on the AerialMPT dataset, while on the other two datasets it performs worse by a narrow margin. We assume that, due to the higher crowd densities in the AerialMPT dataset, the relationships between adjacent objects are more critical than their movement histories.

8.4. AerialMPTNet

In this step, we evaluate the complete AerialMPTNet, fusing the SNN, LSTM, and GCNN modules. Table 14 presents the tracking results of AerialMPTNet on the test sets of our experimental datasets, and Table 12 compares its overall performance to the other tracking methods.
According to the results, AerialMPTNet outperforms AerialMPTNet$_{LSTM}$ and AerialMPTNet$_{GCNN}$ on both pedestrian datasets. However, this is not the case for the vehicle dataset. This is due to the main idea behind the development of the network: since AerialMPTNet was initially designed for pedestrian tracking, it needs to be further adapted to the domain-specific challenges posed by vehicle tracking. For example, the distance threshold for modeling the relationships of adjacent objects (in the GCNN), which considers objects within a distance of 50 pixels from the target object, might miss many neighbouring vehicles, as the distances between vehicles are usually larger than those between pedestrians. Finally, AerialMPTNet achieves better tracking results than SMSOT-CNN on all three datasets.

8.4.1. Pedestrian Tracking

In more detail, AerialMPTNet yields the best MOTA among the studied methods on the “AA_Walking_02”, “Munich02”, and “RaR_Snack_Zone_02” sequences of the KIT AIS pedestrian dataset (−16.8, −34.5, and 38.9, respectively). These sequences are the most complex ones in this dataset with respect to length and number of objects, which can significantly influence the MOTA value. Longer sequences and higher numbers of objects usually cause the MOTA value to decrease, as it is more probable that the tracking methods lose track of the objects or confuse their IDs. Figure 13 illustrates the tracking results on two frames of the “AA_Walking_02” sequence of the KIT AIS pedestrian dataset for AerialMPTNet and SMSOT-CNN. Comparing the predictions and ground truth points demonstrates that SMSOT-CNN loses track of a considerably higher number of pedestrians between these two frames. While SMSOT-CNN's predictions get stuck at the diagonal background lines due to their appearance similarity to the pedestrians, AerialMPTNet easily handles this situation thanks to the LSTM module.
We also visualized a cropped part of four frames from the “AA_Crossing_02” sequence of the KIT AIS pedestrian dataset in Figure 14. As in the previous example, AerialMPTNet clearly outperforms SMSOT-CNN on the tracking of the pedestrians crossing the background lines.
On the AerialMPT dataset, AerialMPTNet achieves the best MOTA scores among all studied methods in this paper on the “Bauma3”, “Bauma6”, and “Witt” sequences (−32.0, −28.4, −65.9), which contain the most complex scenarios regarding crowd density, pedestrian movements, variety of the GSDs, and complexity of the terrain. However, in contrast to the KIT AIS pedestrian dataset, the MOTA scores are not correlated with the sequence lengths, indicating the impact of other complexities on the tracking results and the better distribution of complexities over the sequences of the AerialMPT dataset as compared to the KIT AIS pedestrian dataset.
Figure 15 exemplifies the role of the LSTM module in enhancing the tracking performance of AerialMPTNet. This figure shows an intersection of two pedestrians in cropped patches from four frames of the “Pasing8” sequence of the AerialMPT dataset. According to the results, SMSOT-CNN (bottom row) loses one of the pedestrians after their intersection, leading to an ID switch. However, AerialMPTNet (top row) tracks both pedestrians correctly, mainly relying on the pedestrians' movement histories (their movement directions) provided by the LSTM module.
Figure 16 illustrates a case in which the advantage of the GCNN module can be clearly observed. The images are cropped from four frames of the “Karlsplatz” sequence of the AerialMPT dataset. It can be seen that SMSOT-CNN has difficulties in tracking the pedestrians in such crowded scenarios, where the pedestrians move in various directions. However, AerialMPTNet can handle this scenario mainly based on the pedestrian relationship models provided by the GCNN module.
In addition, there are sequences where both methods reach their limits and perform poorly. Figure 17 illustrates the tracking results of AerialMPTNet (top row) and SMSOT-CNN (bottom row) on two frames of the “Witt” sequence of the AerialMPT dataset. Comparing the predictions with the ground truth object tracks reveals the large number of objects lost by both methods. According to Table 10 and Table 14, despite the small number of frames in the “Witt” sequence, the MOTA scores are low for both methods (−68.6 and −65.9). Further investigations show that these poor performances are caused by the non-adaptive search window size: in the “Witt” sequence, pedestrians move out of the search window and are consequently lost by the tracker. In order to solve this issue, the GSD of the frames as well as the pedestrian velocities should be considered when determining the search window size.
In order to show the complexity of the pedestrian tracking task in the AerialMPT dataset, we report the tracking results of AerialMPTNet on the frames 18 and 10 of the “Munich02” and “Bauma3” sequences, respectively, in Figure 1.

8.4.2. Vehicle Tracking

According to Table 12, AerialMPTNet also outperforms SMSOT-CNN on the KIT AIS vehicle dataset, although the performance increase is smaller compared to the pedestrian tracking results. The results on the different sequences in Table 10 and Table 14 show that both methods perform poorly on the “MunichCrossroad02” sequence. Figure 18 visualizes the challenges that the tracking methods face in this sequence. For the visualization, we selected an early and a late frame to demonstrate the strong camera movements and changes in the viewing angle, which affect the scene arrangement and object appearances. In addition, vehicles are partly or completely occluded by shadows and other objects such as trees. Finally, at this crossroad the movement patterns of the vehicles are complex.
In Figure 19, we compare the performances of AerialMPTNet and SMSOT-CNN on the “MunichCrossroad02” sequence. AerialMPTNet tracks a few vehicles better than SMSOT-CNN, such as the ones located densely at the traffic lights; on the other hand, it loses track of a few vehicles which are tracked correctly by SMSOT-CNN. These failures could be solved by a parameter adjustment in our AerialMPTNet.
In Figure 20 we compare performances on the “MunichStreet04” sequence. In this example, AerialMPTNet tracks the long vehicle much better than SMSOT-CNN.
Based on Table 10 and Table 14, SMSOT-CNN outperforms our AerialMPTNet on the “MunichStreet02” sequence. In Figure 21, we exemplify the problems of our AerialMPTNet in this sequence. A background object (in the middle of the scene) is recognized as a vehicle in frame 7, while the vehicle of interest is lost. A similar failure happens at the intersection. This is due to the parameter configuration of AerialMPTNet. As mentioned before, our method was initially designed for pedestrian tracking, taking into account the characteristics and challenges of that task. Thus, we believe that such issues can be solved by further investigation and parameter tuning.

8.4.3. Localization Preciseness

In order to evaluate the preciseness of the object locations predicted by AerialMPTNet with respect to SMSOT-CNN, we vary the overlap criterion (IoU threshold) of the evaluation metrics for the Prcn, MOTA, MT, and ML metrics in Figure 22.
According to the plots, the performance of both methods decreases with increasing IoU threshold, which requires more overlap between the predicted and ground truth bounding boxes (more precise localization). For all presented metrics, the preciseness of our AerialMPTNet surpasses that of SMSOT-CNN. However, for the vehicle dataset, the performance increase of AerialMPTNet over SMSOT-CNN is lower than for the pedestrian datasets.

8.5. AerialMPTNet (with Squeeze-and-Excitation Layers)

In this step, we evaluate the improvement achieved by adding SE layers to our AerialMPTNet, as described in Section 6.3. We train the network on our three experimental datasets and report the tracking results in Table 12. Using the SE layers in AerialMPTNet$_{SE}$ degrades the results marginally for most of the metrics on the KIT AIS pedestrian and vehicle datasets as compared to AerialMPTNet. For the vehicle dataset, the SE layers improve the numbers of mostly lost (ML) and partially tracked (PT) vehicles by 0.9% and 3.9%, respectively. On the AerialMPT dataset, however, the network behaves differently: AerialMPTNet$_{SE}$ outperforms AerialMPTNet for most of the metrics, with the SE layers improving MOTA and MOTP by 2 and 0.1 points, respectively. Moreover, the number of mostly tracked (MT) pedestrians increases by 1.7%. These inconsistent behaviours could be due to the different image quality and contrast of the datasets. Since the images of the AerialMPT dataset have a higher quality, the adaptive channel weighting is more meaningful there.

8.6. Training with OHEM

We evaluate the influence of Online Hard Example Mining (OHEM) on the training of our AerialMPTNet, as described in Section 6.4. The results are compared to those of AerialMPTNet with its standard training procedure in Table 12. The use of OHEM in the training procedure reduces the performance marginally on both pedestrian datasets. For example, MOTA decreases by 5 and 1.7 points for the KIT AIS pedestrian and AerialMPT datasets, respectively. For the KIT AIS vehicle dataset, however, the results show small improvements: for instance, MOTA rises by 1.8 points and the number of mostly tracked objects increases by 1.4%. We argue that pedestrian movement is highly complex and, therefore, feeding similar situations to the tracker multiple times via OHEM does not help the performance. For vehicles, however, which mostly move along straight paths, OHEM can improve the training by retrying the failure cases. This is the first experiment on the benefits of OHEM in regression-based tracking; further experiments have to be conducted in order to better understand the underlying reasons.

8.7. Huber Loss Function

We assess the effect of the loss function on the tracking performance by using the Huber loss [76] instead of the traditional $L_1$ loss function. The Huber loss is a mixture of the $L_1$ and $L_2$ losses, both commonly used for regression problems, and combines their strengths. The $L_1$ loss measures the Mean Absolute Error (MAE) between the output of the network $x$ and the ground truth $\hat{x}$:
$L_1(x, \hat{x}) = \sum_i |x_i - \hat{x}_i|.$
The $L_2$ loss calculates the Mean Squared Error (MSE) between the network output and the ground truth value:
$L_2(x, \hat{x}) = \sum_i (x_i - \hat{x}_i)^2.$
The $L_1$ loss is less affected by outliers than the $L_2$ loss. The Huber loss acts as the MSE when the error is small and as the MAE when the error is large:
$L_H(x, \hat{x}) = \sum_i z_i, \quad z_i = \begin{cases} 0.5\,(x_i - \hat{x}_i)^2 & \text{if } |x_i - \hat{x}_i| < 1, \\ |x_i - \hat{x}_i| - 0.5 & \text{otherwise.} \end{cases}$
The Huber loss is more robust to outliers than $L_2$ and mitigates the difficulty of $L_1$ in reaching the minimum at the end of the training.
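In PyTorch, the Huber loss with a threshold of 1 corresponds, up to the reduction (sum vs. mean), to SmoothL1Loss; the following snippet with arbitrary example values contrasts it with the $L_1$ loss used by default in our experiments.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()                   # mean absolute error
huber = nn.SmoothL1Loss(beta=1.0)  # 0.5*e^2 for |e| < 1, |e| - 0.5 otherwise

pred = torch.tensor([0.2, 3.0])    # example network outputs
target = torch.tensor([0.0, 0.0])  # example ground truth

print(l1(pred, target).item())     # all errors contribute linearly
print(huber(pred, target).item())  # small errors are squared, large errors stay linear
```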
Table 15 compares the results obtained with the $L_1$ and Huber loss functions. The model trained with the $L_1$ loss generally outperforms the one trained with the Huber loss on all three datasets. There are a few metrics for which the Huber loss shows an improvement over $L_1$, such as MT on the vehicle dataset or IDS on the AerialMPT dataset; however, these improvements are marginal. Altogether, we conclude that the $L_1$ loss is the better option for our method in these tracking scenarios.

9. Comparing AerialMPTNet to Other Methods

In this section, we compare the results of our AerialMPTNet with a set of traditional methods, including KCF, Median Flow, CSRT, and MOSSE, as well as DL-based methods such as Tracktor++, Stacked-DCFNet, and SMSOT-CNN. Table 12 reports the results of the different tracking methods on the KIT AIS and AerialMPT datasets. In general, the DL-based methods outperform the traditional ones, with MOTA scores varying between −16.2 and −48.8 rather than between −55.9 and −85.8. The percentages of mostly tracked and mostly lost objects vary between 0.8% and 9.6% for the DL-based methods, while they lie between 36.5% and 78.3% for the traditional ones.

9.1. Pedestrian Tracking

Among the traditional methods, CSRT is the best performing one on the AerialMPT and KIT AIS pedestrian datasets, with MOTA values of −55.9 and −64.6. CSRT mostly tracks 9.6% and 2.9% of the pedestrians, while it mostly loses 39.4% and 59.3% of the objects in these datasets. The DL-based methods, apart from Tracktor++, mostly track far more pedestrians (>13.8%) and mostly lose far fewer pedestrians (<23.6%) than the traditional methods. The poor performance of Tracktor++ is due to its limitations in working with small objects. AerialMPTNet outperforms all other methods according to most of the adopted figures of merit on the pedestrian datasets, with significantly larger MOTA values (−16.2 and −23.4) and competitive MOTP values (69.6 and 69.7). It mostly tracks 5.9% and 4.6% more pedestrians and mostly loses 5.2% and 6.8% fewer pedestrians than the best performing previous method, SMSOT-CNN, on the KIT AIS pedestrian and AerialMPT datasets, respectively.

9.2. Vehicle Tracking

As Table 12 demonstrates, the DL-based methods and CSRT significantly outperform KCF, Median Flow, and MOSSE, with an average MOTA value of 42.9 versus −30.9. The DL-based methods and CSRT are also better with respect to the numbers of mostly tracked and mostly lost vehicles, varying between 30.0% and 69.1% and between 22.6% and 12.6%, respectively. These values for KCF, MOSSE, and Median Flow lie between 19.6% and 32.2% and between 50.4% and 27.8%. Among the DL-based methods, Stacked-DCFNet shows the best performance in terms of MOTA and MOTP, outperforming AerialMPTNet by 4.6 and 5.7 points, respectively. While the number of vehicles mostly tracked by Stacked-DCFNet is 2.6% larger than for AerialMPTNet, it mostly loses 3.1% more vehicles. The performance of Tracktor++ increases significantly compared to the pedestrian scenarios, due to the ability of its object detector to detect vehicles; Tracktor++ achieves a competitive MOTA of 37.1 without any ground truth initialization. The best performing method in terms of MOTA, MT, and ML is CSRT, which outperforms all other methods with a MOTA of 51.1 and a MOTP of 80.7.
We rank the studied tracking methods based on their MOTA and MOTP values in Figure 23, with the diagrams offering a clear overview of their performance. AerialMPTNet appears to be the best method in terms of MOTA for both pedestrian datasets and achieves competitive MOTP values. Median Flow, for example, achieves a very high MOTP value; however, because of the low number of matched track-object pairs after the first frame, it is not able to track many objects. Hence, the MOTP value alone is not a good performance indicator. For the KIT AIS vehicle dataset, AerialMPTNet shows a worse performance than the other methods according to the MOTA and MOTP values, whereas CSRT and Stacked-DCFNet perform favorably for vehicle tracking.

10. Conclusions and Future Works

In this paper, we investigate the challenges posed by the tracking of pedestrians and vehicles in aerial imagery by applying a number of traditional and DL-based SOT and MOT methods to three aerial MOT datasets. We also describe our proposed DL-based aerial MOT method, the so-called AerialMPTNet. Our network fuses appearance, temporal, and graphical information for a more accurate and stable tracking by employing an SNN, an LSTM, and a GCNN module. The influence of SE and OHEM on the performance of AerialMPTNet is investigated, as well as the impact of adopting an $L_1$ rather than a Huber loss function. An extensive qualitative and quantitative evaluation shows that the proposed AerialMPTNet outperforms both traditional and state-of-the-art DL-based MOT methods on the pedestrian datasets and achieves competitive results on the vehicle dataset. On the one hand, it is verified that the LSTM and GCNN modules enhance the tracking performance; on the other hand, the use of SE and OHEM helps significantly only in some cases, while degrading the tracking results in others. The comparison of the $L_1$ and Huber losses shows that $L_1$ is the better option for most of the scenarios in our experimental datasets.
We believe that the present paper can promote research on aerial MOT by providing a deep insight into its challenges and opportunities, and pave the path for future work in this domain. In the future, within the framework of AerialMPTNet, the search area size could be adapted to the image GSDs and to the object velocities and accelerations. Additionally, the SNN module could be modified in order to improve the appearance feature extraction. The training process of most DL-based tracking methods relies on common loss functions which do not correlate with tracking evaluation metrics such as MOTA and MOTP, as these metrics are usually not differentiable. Recently, differentiable proxies of MOTA and MOTP have been proposed [77], which could also be investigated for aerial MOT scenarios.

Author Contributions

S.M.A., M.K. and R.B. designed the algorithm. S.M.A. and M.K. prepared the data. M.K. performed the experiments. S.M.A., M.K. and R.B. analyzed the results. R.B. wrote the manuscript. S.M.A., M.K., R.B. and P.R. revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bergmann, P.; Meinhardt, T.; Leal-Taixe, L. Tracking without bells and whistles. In Proceedings of the IEEE International Conference on Computer Vision (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 941–951.
  2. Xiang, Y.; Alahi, A.; Savarese, S. Learning to track: Online multi-object tracking by decision making. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 13–16 December 2015; pp. 4705–4713.
  3. Bertinetto, L.; Valmadre, J.; Henriques, J.F.; Vedaldi, A.; Torr, P.H. Fully-convolutional siamese networks for object tracking. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016; pp. 850–865.
  4. Cuevas, E.V.; Zaldivar, D.; Rojas, R. Kalman Filter for Vision Tracking; Technical Report; Freie Universität Berlin: Berlin, Germany, 2005.
  5. Cuevas, E.; Zaldivar, D.; Rojas, R. Particle filter in vision tracking. e-Gnosis 2007, 5, 1–11.
  6. Bolme, D.S.; Beveridge, J.R.; Draper, B.A.; Lui, Y.M. Visual object tracking using adaptive correlation filters. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 2544–2550.
  7. Boudoukh, G.; Leichter, I.; Rivlin, E. Visual tracking of object silhouettes. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3625–3628.
  8. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–26 June 2005; pp. 886–893.
  9. Marvasti-Zadeh, S.M.; Cheng, L.; Ghanei-Yakhdan, H.; Kasaei, S. Deep Learning for Visual Tracking: A Comprehensive Survey. arXiv 2019, arXiv:1912.00535.
  10. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
  11. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2818–2826.
  12. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. arXiv 2015, arXiv:1506.01497.
  13. Wang, L.; Ouyang, W.; Wang, X.; Lu, H. Visual tracking with fully convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 13–16 December 2015; pp. 3119–3127.
  14. Zhang, K.; Liu, Q.; Wu, Y.; Yang, M.H. Robust visual tracking via convolutional networks without training. IEEE Trans. Image Process. 2016, 25, 1779–1792.
  15. Kim, H.I.; Park, R.H. Residual LSTM attention network for object tracking. IEEE Signal Process. Lett. 2018, 25, 1029–1033.
  16. Li, B.; Yan, J.; Wu, W.; Zhu, Z.; Hu, X. High performance visual tracking with siamese region proposal network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8971–8980.
  17. Held, D.; Thrun, S.; Savarese, S. Learning to track at 100 fps with deep regression networks. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016; pp. 749–765.
  18. Song, Y.; Ma, C.; Wu, X.; Gong, L.; Bao, L.; Zuo, W.; Shen, C.; Lau, R.W.; Yang, M.H. Vital: Visual tracking via adversarial learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8990–8999.
  19. Zhang, D.; Maei, H.; Wang, X.; Wang, Y.F. Deep reinforcement learning for visual object tracking in videos. arXiv 2017, arXiv:1701.08936.
  20. U.S. Government Printing Office. Remote Sensing Data: Applications and Benefits; Technical Report; Subcommittee on Space and Aeronautics, Committee on Science and Technology, Serial No. 110-91. 2008. Available online: https://www.govinfo.gov/content/pkg/CHRG-110hhrg41573/html/CHRG-110hhrg41573.html (accessed on 2 January 2020).
  21. Everaerts, J. The use of unmanned aerial vehicles (UAVs) for remote sensing and mapping. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 1187–1192.
  22. Bahmanyar, R.; Azimi, S.M.; Reinartz, P. Multiple vehicle and people tracking in aerial imagery using stack of micro single-object-tracking CNNs. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 163–170.
  23. Reilly, V.; Idrees, H.; Shah, M. Detection and tracking of large number of targets in wide area surveillance. In Proceedings of the European Conference on Computer Vision, Crete, Greece, 5–11 September 2010; pp. 186–199.
  24. Meng, L.; Kerekes, J.P. Object tracking using high resolution satellite imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 146–152.
  25. Milan, A.; Leal-Taixé, L.; Reid, I.; Roth, S.; Schindler, K. MOT16: A benchmark for multi-object tracking. arXiv 2016, arXiv:1603.00831.
  26. Kraus, M.; Azimi, S.M.; Ercelik, E.; Bahmanyar, R.; Reinartz, P.; Knoll, A. AerialMPTNet: Multi-Pedestrian Tracking in Aerial Imagery Using Temporal and Graphical Features. In Proceedings of the International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2020.
  27. Bewley, A.; Ge, Z.; Ott, L.; Ramos, F.; Upcroft, B. Simple online and realtime tracking. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3464–3468.
  28. Shrivastava, A.; Gupta, A.; Girshick, R. Training Region-Based Object Detectors with Online Hard Example Mining. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 761–769.
  29. Fiaz, M.; Mahmood, A.; Javed, S.; Jung, S.K. Handcrafted and deep trackers: Recent visual object tracking approaches and trends. Acm Comput. Surv. 2019, 52, 1–44.
  30. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45.
  31. Mihaylova, L.; Carmi, A.Y.; Septier, F.; Gning, A.; Pang, S.K.; Godsill, S. Overview of Bayesian sequential Monte Carlo methods for group and extended object tracking. Digit. Signal Process. 2014, 25, 1–16.
  32. Wang, Q.; Gao, J.; Xing, J.; Zhang, M.; Hu, W. DCFNet: Discriminant Correlation Filters Network for Visual Tracking. arXiv 2017, arXiv:1704.04057.
  33. Ma, C.; Huang, J.B.; Yang, X.; Yang, M.H. Hierarchical convolutional features for visual tracking. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 13–16 December 2015; pp. 3074–3082.
  34. Wojke, N.; Bewley, A.; Paulus, D. Simple online and realtime tracking with a deep association metric. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3645–3649.
  35. Huang, C.; Wu, B.; Nevatia, R. Robust object tracking by hierarchical association of detection responses. In Proceedings of the European Conference on Computer Vision, Marseille, France, 12–18 October 2008; pp. 788–801.
  36. Lu, X.; Ma, C.; Ni, B.; Yang, X.; Reid, I.; Yang, M.H. Deep Regression Tracking with Shrinkage Loss. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; p. 17.
  37. Wang, L.; Ouyang, W.; Wang, X.; Lu, H. Stct: Sequentially training convolutional networks for visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 1373–1381.
  38. Huang, C.; Lucey, S.; Ramanan, D. Learning policies for adaptive tracking with deep feature cascades. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 105–114.
  39. Chahyati, D.; Fanany, M.I.; Arymurthy, A.M. Tracking people by detection using CNN features. Procedia Comput. Sci. 2017, 124, 167–172.
  40. Zhang, Y.; Wang, J.; Yang, X. Real-time vehicle detection and tracking in video based on faster R-CNN. J. Phys. Conf. Ser. IOP Publ. 2017, 887, 012068.
  41. Okuma, K.; Taleghani, A.; De Freitas, N.; Little, J.J.; Lowe, D.G. A boosted particle filter: Multitarget detection and tracking. In Proceedings of the European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004; pp. 28–39.
  42. Brunelli, R. Template Matching Techniques in Computer Vision: Theory and Practice; John Wiley & Sons: Hoboken, NJ, USA, 2009.
  43. Hager, G.D.; Belhumeur, P.N. Real-time tracking of image regions with changes in geometry and illumination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 18–20 June 1996; pp. 403–410.
  44. Briechle, K.; Hanebeck, U.D. Template matching using fast normalized cross correlation. In Optical Pattern Recognition XII; International Society for Optics and Photonics: Bellingham, WA, USA, 2001; Volume 4387, pp. 95–102.
  45. Avidan, S. Ensemble tracking. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 261–271.
  46. Hare, S.; Golodetz, S.; Saffari, A.; Vineet, V.; Cheng, M.M.; Hicks, S.L.; Torr, P.H. Struck: Structured output tracking with kernels. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 2096–2109.
  47. Henriques, J.F.; Caseiro, R.; Martins, P.; Batista, J. High-speed tracking with kernelized correlation filters. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 37, 583–596.
  48. Sadeghian, A.; Alahi, A.; Savarese, S. Tracking the untrackable: Learning to track multiple cues with long-term dependencies. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 300–311.
  49. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788.
  50. Kalal, Z.; Mikolajczyk, K.; Matas, J. Forward-backward error: Automatic detection of tracking failures. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2756–2759.
  51. Lukezic, A.; Vojir, T.; Cehovin Zajc, L.; Matas, J.; Kristan, M. Discriminative correlation filter with channel and spatial reliability. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Venice, Italy, 22–29 October 2017; pp. 6309–6318.
  52. Kuhn, H.W. The Hungarian method for the assignment problem. Nav. Res. Logist. Q. 1955, 2, 83–97.
  53. Zheng, L.; Bie, Z.; Sun, Y.; Wang, J.; Su, C.; Wang, S.; Tian, Q. Mars: A video benchmark for large-scale person re-identification. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 868–884.
  54. Yokoyama, M.; Poggio, T. A contour-based moving object detection and tracking. In Proceedings of the 2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, Breckenridge, CO, USA, 7 January 2005; pp. 271–276.
  55. Jadhav, A.; Mukherjee, P.; Kaushik, V.; Lall, B. Aerial multi-object tracking by detection using deep association networks. arXiv 2019, arXiv:1909.01547.
  56. Benedek, C.; Szirányi, T.; Kato, Z.; Zerubia, J. Detection of object motion regions in aerial image pairs with a multilayer Markovian model. IEEE Trans. Image Process. 2009, 18, 2303–2315.
  57. Butenuth, M.; Burkert, F.; Schmidt, F.; Hinz, S.; Hartmann, D.; Kneidl, A.; Borrmann, A.; Sirmacek, B. Integrating pedestrian simulation, tracking and event detection for crowd analysis. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW), Barcelona, Spain, 8–13 November 2011; pp. 150–157.
  57. Butenuth, M.; Burkert, F.; Schmidt, F.; Hinz, S.; Hartmann, D.; Kneidl, A.; Borrmann, A.; Sirmacek, B. Integrating pedestrian simulation, tracking and event detection for crowd analysis. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW), Barcelona, Spain, 8–13 November 2011; pp. 150–157. [Google Scholar]
  58. Schmidt, F.; Hinz, S. A Scheme for the Detection and Tracking of People Tuned for Aerial Image Sequences. In Proceedings of the ISPRS conference on Photogrammetric Image Analysis (PIA), Munich, Germany, 5–7 October 2011; Volume 6952, pp. 257–270. [Google Scholar]
  59. Liu, K.; Mattyus, G. Fast multiclass vehicle detection on aerial images. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1938–1942. [Google Scholar]
  60. Qi, S.; Ma, J.; Lin, J.; Li, Y.; Tian, J. Unsupervised ship detection based on saliency and S-HOG descriptor from optical satellite images. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1451–1455. [Google Scholar]
  61. Bahmanyar, R.; Vig, E.; Reinartz, P. MRCNet: Crowd Counting and Density Map Estimation in Aerial and Ground Imagery. In Proceedings of the BMVC’s Workshop on Object Detection and Recognition for Security Screenin (BMVC-ODRSS), Cardiff, UK, 9–12 September 2019. [Google Scholar]
  62. Ristani, E.; Solera, F.; Zou, R.; Cucchiara, R.; Tomasi, C. Performance measures and a data set for multi-target, multi-camera tracking. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 17–35. [Google Scholar]
  63. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  64. Rastogi, R.; Thaniarasu, I.; Chandra, S. Design implications of walking speed for pedestrian facilities. J. Transp. Eng. 2011, 137, 687–696. [Google Scholar] [CrossRef]
  65. Finnis, K.; Walton, D. Field observations of factors influencing walking speeds. Ergonomics 2006, 51, 827–842. [Google Scholar] [CrossRef]
  66. Strayer, D.L.; Drew, F.A. Profiles in driver distraction: Effects of cell phone conversations on younger and older drivers. Hum. Factors 2004, 46, 640–649. [Google Scholar] [CrossRef] [PubMed]
  67. Rakha, H.; El-Shawarby, I.; Setti, J.R. Characterizing driver behavior on signalized intersection approaches at the onset of a yellow-phase trigger. IEEE Trans. Intell. Transp. Syst. 2007, 8, 630–640. [Google Scholar] [CrossRef]
  68. Alahi, A.; Goel, K.; Ramanathan, V.; Robicquet, A.; Fei-Fei, L.; Savarese, S. Social LSTM: Human trajectory prediction in crowded spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 961–971. [Google Scholar]
  69. Xue, H.; Huynh, D.Q.; Reynolds, M. SS-LSTM: A hierarchical LSTM model for pedestrian trajectory prediction. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Stateline, NV, USA, 12–14 March 2018; pp. 1186–1194. [Google Scholar]
  70. Vemula, A.; Muelling, K.; Oh, J. Social attention: Modeling attention in human crowds. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbone, Australia, 21–25 May 2018; pp. 1–7. [Google Scholar]
  71. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141. [Google Scholar]
  72. Lin, M.; Chen, Q.; Yan, S. Network in network. arXiv 2013, arXiv:1312.4400. [Google Scholar]
  73. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  74. Lin, C.T.; Chen, S.P.; Santoso, P.S.; Lin, H.J.; Lai, S.H. Real-Time Single-Stage Vehicle Detector Optimized by Multi-Stage Image-Based Online Hard Example Mining. IEEE Trans. Veh. Technol. 2019, 69, 1505–1518. [Google Scholar] [CrossRef]
  75. Koga, Y.; Miyazaki, H.; Shibasaki, R. A CNN-based method of vehicle detection from aerial images using hard example mining. Remote Sens. 2018, 10, 124. [Google Scholar] [CrossRef] [Green Version]
  76. Huber, P.J. Robust estimation of a location parameter. In Breakthroughs in Statistics; Springer: New York, NY, USA, 1992; pp. 492–518. [Google Scholar]
  77. Xu, Y.; Osep, A.; Ban, Y.; Horaud, R.; Leal-Taixé, L.; Alameda-Pineda, X. How To Train Your Deep Multi-Object Tracker. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
Figure 1. Multi-Pedestrian tracking results of AerialMPTNet on the frame 18 of the “Munich02” (left) and frame 10 of the “Bauma3” (right) sequences of the AerialMPT dataset. Different pedestrians are depicted in different colors with the corresponding trajectories.
Figure 2. Illustrations of some challenges in aerial MOT datasets. The examples are from the KIT AIS pedestrian (a), AerialMPT (b), and KIT AIS vehicle (c,d) datasets. (a) Multiple pedestrians that are hard to distinguish due to their similar appearance and low image contrast. (b) Multiple pedestrians at a trade fair walking closely together, with occlusions, shadows, and strong background colors. (c) Multiple vehicles at a stop light, where the shadow on the right-hand side can be problematic. (d) Multiple vehicles, some of them occluded by trees.
Figure 3. Sample images from the KIT AIS vehicle dataset acquired at different locations in Munich and Stuttgart, Germany.
Figure 4. Sample images from the AerialMPT and KIT AIS datasets. “Bauma3”, “Witt”, “Pasing1” are from AerialMPT. “Entrance_01”, “Walking_02”, and “Munich02” are from KIT AIS.
Figure 5. Example images of the DLR-ACD dataset. The images are from (a) an open-air festival and (b) a music concert.
Figure 6. ID Switches versus IoU thresholds in DeepSORT. From left to right: total, average per person, and average per detection ID Switches.
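The ID switch counts in Figure 6 (and the metric curves in Figure 22) depend on the IoU threshold used to match predicted and ground truth boxes. As a reminder, a minimal sketch of the standard IoU computation for axis-aligned boxes in (x1, y1, x2, y2) format is given below; the exact matching logic of the evaluation scripts may differ in details such as tie-breaking via the Hungarian algorithm [52].

    # Minimal IoU between two axis-aligned boxes given as (x1, y1, x2, y2).
    def iou(box_a, box_b):
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0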
Figure 7. A success case processed by Stacked-DCFNet on the sequence “Munich02”. The tracking results and ground truth are depicted in green and black, respectively.
Figure 8. A failure case by Stacked-DCFNet on the sequence “AA_Walking_02”. The tracking results and ground truth are depicted in green and black, respectively.
Figure 9. A success case by Stacked-DCFNet on the sequence “AA_Crossing_02”. The tracking results and ground truth are depicted in green and black, respectively.
Figure 10. A failure case by Stacked-DCFNet on the test sequence “RaR_Snack_Zone_04”. The tracking results and the ground truth are depicted in green and black, respectively.
Figure 11. (a) An input image patch to the last convolutional layer of Stacked-DCFNet and (b) its corresponding activation map.
Figure 12. Overview of AerialMPTNet’s architecture, comprising an SNN, an LSTM, and a GraphCNN module. The inputs are two consecutive images cropped and centered on a target object, while the output is the object location in search-crop coordinates.
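To make the fusion in Figure 12 concrete, the sketch below outlines how a Siamese appearance branch, an LSTM over the track’s past offsets, and a simple graph layer over neighbour positions could feed a single box-regression head. This is an illustrative PyTorch sketch only, not the authors’ implementation; all layer sizes, pooling choices, and input shapes are assumptions.

    import torch
    import torch.nn as nn

    class TrackerSketch(nn.Module):
        def __init__(self, feat_dim=256):
            super().__init__()
            # Shared convolutional backbone applied to both crops (Siamese part).
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(64 * 4 * 4, feat_dim), nn.ReLU(),
            )
            # LSTM over the track's previous (dx, dy) offsets.
            self.lstm = nn.LSTM(input_size=2, hidden_size=64, batch_first=True)
            # One graph-convolution-like layer over neighbour coordinates.
            self.gcn_lin = nn.Linear(2, 64)
            # Fusion head: appearance (2 crops) + motion + graph -> box (x, y, w, h).
            self.head = nn.Sequential(
                nn.Linear(2 * feat_dim + 64 + 64, 256), nn.ReLU(),
                nn.Linear(256, 4),
            )

        def forward(self, target_crop, search_crop, past_offsets, neighbor_xy, adj):
            # target_crop, search_crop: (B, 3, H, W); past_offsets: (B, T, 2)
            # neighbor_xy: (B, N, 2); adj: (B, N, N) row-normalised adjacency.
            f_t = self.backbone(target_crop)
            f_s = self.backbone(search_crop)
            _, (h_n, _) = self.lstm(past_offsets)                       # motion feature
            g = torch.relu(self.gcn_lin(torch.bmm(adj, neighbor_xy)))   # (B, N, 64)
            g = g.mean(dim=1)                                           # pool neighbours
            fused = torch.cat([f_t, f_s, h_n[-1], g], dim=1)
            return self.head(fused)                                     # box in search-crop coords

In this sketch the adjacency matrix is expected to be row-normalised so that the batched matrix product acts as a neighbourhood average before the linear projection.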
Figure 13. Tracking results by AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 8 and 14 of the “AA_Walking_02” sequence of the KIT AIS pedestrian dataset. The predictions and ground truth are depicted in blue and white, respectively.
Figure 14. Tracking results by AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 4, 6, 8, and 10 of the “AA_Crossing_02” sequence of the KIT AIS pedestrian dataset. The predictions and ground truth are depicted in blue and white, respectively.
Figure 15. Tracking results by AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 11, 13, 15, and 17 of the “Pasing8” sequence of the AerialMPT dataset. The predictions and ground truth are depicted in blue and white, respectively.
Figure 16. Tracking results by AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 21, 23, 25, and 27 of the “Karlsplatz” sequence of the AerialMPT dataset. The predictions and ground truth are depicted in blue and white, respectively.
Figure 17. Tracking results by AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 3 and 6 of the “Witt” sequence of the AerialMPT dataset. The predictions and ground truth are depicted in blue and white, respectively.
Figure 18. Tracking results by AerialMPTNet on the frames 4 and 31 of the “MunichCrossroad02” sequence of the KIT AIS vehicle dataset. The predictions and ground truth bounding boxes are depicted in blue and white, respectively. Several hindrances such as changing viewing angle, shadows, and occlusions (e.g., by trees) are visible.
Figure 19. Tracking results by AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 2 and 8 of the “MunichCrossroad02” sequence of the KIT AIS vehicle dataset. The predictions and ground truth bounding boxes are depicted in blue and white, respectively.
Figure 20. Tracking results by AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 20 and 29 of the “MunichStreet04” sequence of the KIT AIS vehicle dataset. The predictions and ground truth bounding boxes are depicted in blue and white, respectively.
Figure 21. Tracking results by AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 1 and 7 of the “MunichStreet02” sequence of the KIT AIS vehicle dataset. The predictions and ground truth bounding boxes are depicted in blue and white, respectively.
Figure 22. Comparison of the Prcn, MOTA, MT, and ML of AerialMPTNet and SMSOT-CNN on the KIT AIS pedestrian (first row), AerialMPT (second row), and KIT AIS vehicle (third row) datasets under varying IoU thresholds of the evaluation metrics.
Figure 23. Ranking the tracking methods based on their MOTA and MOTP values on the (a) KIT AIS pedestrian, (b) AerialMPT, and (c) KIT AIS vehicle datasets.
Table 1. Statistics of the KIT AIS pedestrian dataset.
Seq. | Image Size | #Fr. | #Pedest. | #Anno. | #Anno./Fr. | GSD
Train
AA_Crossing_01 | 309 × 487 | 18 | 164 | 2618 | 145.4 | 15.0
AA_Easy_01 | 161 × 168 | 14 | 8 | 112 | 8.0 | 15.0
AA_Easy_02 | 338 × 507 | 12 | 16 | 185 | 15.4 | 15.0
AA_Easy_Entrance | 165 × 125 | 19 | 83 | 1105 | 58.3 | 15.0
AA_Walking_01 | 227 × 297 | 13 | 40 | 445 | 34.2 | 15.0
Munich01 | 509 × 579 | 24 | 100 | 1308 | 54.5 | 12.0
RaR_Snack_Zone_01 | 443 × 535 | 4 | 237 | 930 | 232.5 | 15.0
Total | – | 104 | 633 | 6703 | 64.4 | –
Test
AA_Crossing_02 | 322 × 537 | 13 | 94 | 1135 | 87.3 | 15.0
AA_Entrance_01 | 835 × 798 | 16 | 973 | 14,031 | 876.9 | 15.0
AA_Walking_02 | 516 × 445 | 17 | 188 | 2671 | 157.1 | 15.0
Munich02 | 702 × 790 | 31 | 230 | 6125 | 197.6 | 12.0
RaR_Snack_Zone_02 | 509 × 474 | 4 | 220 | 865 | 216.2 | 15.0
RaR_Snack_Zone_04 | 669 × 542 | 4 | 311 | 1230 | 307.5 | 15.0
Total | – | 85 | 2016 | 26,057 | 306.5 | –
Table 2. Statistics of the KIT AIS vehicle dataset.
Seq. | Image Size | #Fr. | #Vehic. | #Anno. | #Anno./Fr. | GSD
Train
MunichAutobahn01 | 633 × 988 | 16 | 16 | 161 | 10.1 | 15.0
MunichCrossroad01 | 684 × 547 | 20 | 30 | 509 | 25.5 | 12.0
MunichStreet01 | 1764 × 430 | 25 | 57 | 1338 | 53.5 | 12.0
MunichStreet03 | 1771 × 422 | 47 | 88 | 3071 | 65.3 | 12.0
StuttgartAutobahn01 | 767 × 669 | 23 | 43 | 764 | 33.2 | 17.0
Total | – | 131 | 234 | 5843 | 44.6 | –
Test
MunichCrossroad02 | 895 × 1036 | 45 | 66 | 2155 | 47.9 | 12.0
MunichStreet02 | 1284 × 377 | 20 | 47 | 746 | 37.3 | 12.0
MunichStreet04 | 1284 × 388 | 29 | 68 | 1519 | 52.4 | 12.0
StuttgartCrossroad01 | 724 × 708 | 14 | 49 | 554 | 39.6 | 17.0
Total | – | 108 | 230 | 4974 | 46.1 | –
Table 3. Statistics of the AerialMPT dataset.
Seq. | Image Size | #Fr. | #Pedest. | #Anno. | #Anno./Fr. | GSD
Train
Bauma1 | 462 × 306 | 19 | 270 | 4448 | 234.1 | 11.5
Bauma2 | 310 × 249 | 29 | 148 | 3627 | 125.1 | 11.5
Bauma4 | 281 × 243 | 22 | 127 | 2399 | 109.1 | 11.5
Bauma5 | 281 × 243 | 17 | 94 | 1410 | 82.9 | 11.5
Marienplatz | 316 × 355 | 30 | 215 | 5158 | 171.9 | 10.5
Pasing1L | 614 × 366 | 28 | 100 | 2327 | 83.1 | 10.5
Pasing1R | 667 × 220 | 16 | 86 | 1196 | 74.7 | 10.5
OAC | 186 × 163 | 18 | 92 | 1287 | 71.5 | 8.0
Total | – | 179 | 1132 | 21,852 | 122.1 | –
Test
Bauma3 | 611 × 552 | 16 | 609 | 8788 | 549.2 | 11.5
Bauma6 | 310 × 249 | 26 | 270 | 5314 | 204.4 | 11.5
Karlsplatz | 283 × 275 | 27 | 146 | 3374 | 125.0 | 10.0
Pasing7 | 667 × 220 | 24 | 103 | 2064 | 86.0 | 10.5
Pasing8 | 614 × 366 | 27 | 83 | 1932 | 71.6 | 10.5
Witt | 353 × 1202 | 8 | 185 | 1416 | 177.0 | 13.0
Total | – | 128 | 1396 | 22,888 | 178.8 | –
Table 4. Description of the metrics used for quantitative evaluations.
Metric | Description
IDF1 | ID F1-Score
IDP | ID Global Min-Cost Precision
IDR | ID Global Min-Cost Recall
Rcll | Recall
Prcn | Precision
FAR | False Acceptance Rate
MT | Ratio of Mostly Tracked Objects
PT | Ratio of Partially Tracked Objects
ML | Ratio of Mostly Lost Objects
FP | False Positives
FN | False Negatives
IDS | Number of Identity Switches
FM | Number of Fragmented Tracks
MOTA | Multiple Object Tracker Accuracy
MOTP | Multiple Object Tracker Precision
MOTAL | Multiple Object Tracker Accuracy Log
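For reference, MOTA and MOTP in Table 4 follow the standard CLEAR MOT definitions (the evaluation implementation may differ in minor details). With FN_t, FP_t, and IDS_t the per-frame false negatives, false positives, and identity switches, GT_t the number of ground truth objects in frame t, d_{i,t} the overlap of matched pair i in frame t, and c_t the number of matches in frame t:

\mathrm{MOTA} = 1 - \frac{\sum_t \left( \mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDS}_t \right)}{\sum_t \mathrm{GT}_t}, \qquad \mathrm{MOTP} = \frac{\sum_{i,t} d_{i,t}}{\sum_t c_t}

MOTAL replaces the raw identity switch count by a logarithmic term, e.g., \log_{10}(\mathrm{IDS}_t + 1), so that a large number of switches is penalized less severely.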
Table 5. Results of KCF, MOSSE, CSRT, Median Flow, and Stacked-DCFNet on the KIT AIS pedestrian dataset. The first and second best values are highlighted.
Methods | IDF1↑ | IDP↑ | IDR↑ | Rcll↑ | Prcn↑ | FAR↓ | MT%↑ | PT%↑ | ML%↓ | FP↓ | FN↓ | IDS↓ | FM↓ | MOTA↑ | MOTP↑ | MOTAL↑
KCF | 9.0 | 8.8 | 9.3 | 10.3 | 9.8 | 165.6 | 1.1 | 53.8 | 45.1 | 11,426 | 10,782 | 32 | 116 | −84.9 | 87.2 | −84.7
MOSSE | 9.1 | 8.9 | 9.3 | 10.5 | 10.0 | 163.8 | 0.8 | 54.0 | 45.2 | 11,303 | 10,765 | 31 | 133 | −85.8 | 86.7 | −83.5
CSRT | 16.0 | 16.9 | 15.2 | 17.5 | 19.4 | 126.5 | 9.6 | 51.0 | 39.4 | 8732 | 9924 | 91 | 254 | −55.9 | 78.4 | −55.1
Median Flow | 18.5 | 18.3 | 18.8 | 19.5 | 19.0 | 144.7 | 7.7 | 55.8 | 36.5 | 9986 | 9678 | 30 | 161 | −63.8 | 77.7 | −63.5
Stacked-DCFNet | 30.0 | 30.2 | 30.9 | 33.1 | 32.3 | 120.5 | 13.8 | 62.6 | 23.6 | 8316 | 8051 | 139 | 651 | −37.3 | 71.6 | −36.1
Table 6. Results of KCF, MOSSE, CSRT, Median Flow, and Stacked-DCFNet on different sequences of KIT AIS pedestrian dataset. The first and second best values of each method on the sequences are highlighted.
Sequences | # Imgs | GT | IDF1↑ | IDP↑ | IDR↑ | Rcll↑ | Prcn↑ | FAR↓ | MT%↑ | PT%↑ | ML%↓ | FP↓ | FN↓ | IDS↓ | FM↓ | MOTA↑ | MOTP↑ | MOTAL↑
KCF
AA_Crossing_0213948.18.18.09.19.278.11.16.492.51015103208−80.497.3−80.4
AA_Walking_02171886.56.36.77.87.3154.91.610.687.826332463314−90.996.9−90.8
Munich02312304.34.14.45.65.2201.70.93.995.2625457812975−97.062.2−96.5
RaR_Snack_Zone_02422029.329.129.529.829.5154.51.898.20.061860708−41.695.1−41.6
RaR_Snack_Zone_04431125.825.725.926.926.8226.50.399.70.0906899011−46.797.9−46.7
MOSSE
AA_Crossing_0213948.08.17.99.19.278.11.15.393.61015103209−80.496.9-80.4
AA_Walking_02171886.66.46.78.07.6151.81.610.188.325802458220−88.795.7−88.6
Munich02312304.34.24.55.75.4199.70.94.394.8619057752978−95.861.9−95.4
RaR_Snack_Zone_02422029.429.229.630.430.0153.20.599.50.0613602014−40.594.9−40.5
RaR_Snack_Zone_04431125.825.725.927.026.8226.20.399.70.0905898012−46.697.5−46.6
CSRT
AA_Crossing_02139412.913.212.515.115.969.51.130.968.09049641029−65.584.6−64.7
AA_Walking_02171889.210.08.51112.9116.92.715.481.918723781241−63.988.0−63.5
Munich02312309.29.98.710.912.5151.41.814.383.94696545566137−66.861.2−65.8
RaR_Snack_Zone_02422043.242.042.543.843.3124.217.382.70.0497486016−13.687.9−13.6
RaR_Snack_Zone_04431145.645.545.047.947.6162.016.783.30.0648641331−5.085.2−4.8
Median Flow
AA_Crossing_02139427.327.327.428.528.362.81.168.130.8817812449−43.974.9−43.6
AA_Walking_021718810.09.910.011.111.0141.11.621.377.123982374816−79.086.3−78.7
Munich02312309.29.09.49.99.5186.41.38.790.0577855171053−84.664.7−84.4
RaR_Snack_Zone_02422051.751.452.052.852.2104.78.691.40.04194082144.283.74.3
RaR_Snack_Zone_04431153.153.053.353.953.6143.517.482.60.05745676296.783.07.2
Stacked-DCFNet
AA_Crossing_02139441.942.441.342.743.947.812.858.528.76216501571−13.374.7-12.1
AA_Walking_021718831.431.631.232.332.7104.35.945.748.41773180923184−35.074.1−34.2
Munich023123021.220.621.925.023.6160.41.750.048.34974459197322−57.760.5−56.2
RaR_Snack_Zone_02422051.852.351.352.453.499.022.374.53.23964124356.184.06.5
RaR_Snack_Zone_04431151.852.651.052.153.7138.021.974.93.25525890397.283.67.2
Table 7. Results of DeepSORT, SORT, Tracktor++, and SMSOT-CNN on the KIT AIS pedestrian dataset. The first and second best values are highlighted.
Methods | IDF1↑ | IDP↑ | IDR↑ | Rcll↑ | Prcn↑ | FAR↓ | MT%↑ | PT%↑ | ML%↓ | FP↓ | FN↓ | IDS↓ | FM↓ | MOTA↑ | MOTP↑ | MOTAL↑
DeepSORT10.09.810.2100.095.87.6100.00.00.052308627923.981.198.6
DeepSORT-BBX 2 × 38.436.939.9100.092.613.9100.00.00.095805073949.978.792.0
DeepSORT-IoU 99 43.340.844.098.391.116.799.80.20.01152205400918955.473.788.7
DeepSORT-BBX 2 × -IoU 99 82.180.783.699.496.07.399.80.20.0502757387089.174.795.2
DeepSORT-BBX 2 × -IoU 99 -FT82.481.083.899.496.07.199.80.20.0493717346889.274.795.3
SORT-IoU 99 42.941.844.298.793.412.299.80.20.0840151380514160.173.691.7
SORT-BBX 2 × -IoU 99 86.585.587.299.698.13.399.80.20.0231464384894.174.797.7
Tracktor++13.727.39.228.585.013.244.242.6604859321887255.30.1
SMSOT-CNN34.033.234.938.236.4116.425.052.522.580287427157614−29.871.0−28.5
EOT-D 17 85.284.985.586.586.024.580.219.60.21692161937107472.269.372.5
Table 8. Results of DeepSORT, SORT, Tracktor++, and SMSOT-CNN on the KIT AIS pedestrian dataset. The first and second best values of each method on the sequences are highlighted.
Sequences | # Imgs | GT | IDF1↑ | IDP↑ | IDR↑ | Rcll↑ | Prcn↑ | FAR↓ | MT%↑ | PT%↑ | ML%↓ | FP↓ | FN↓ | IDS↓ | FM↓ | MOTA↑ | MOTP↑ | MOTAL↑
DeepSORT
AA_Crossing_0213943.13.13.1100.0100.00.0100.00.00.000940117.299.799.7
AA_Walking_02171887.77.77.8100.098.91.7100.00.00.02902145518.699.098.8
Munich02312309.18.89.4100.092.815.4100.00.00.047804681115.864.092.1
RaR_Snack_Zone_02422021.020.921.2100.098.72.7100.00.00.0110351258.298.198.4
RaR_Snack_Zone_04431117.917.918.0100.099.61.2100.00.00.050510058.198.699.4
DeepSORT-BBX 2 ×
AA_Crossing_02139434.834.535.1100.098.41.4100.00.00.0180566148.594.398.2
AA_Walking_021718846.646.047.1100.098.83.6100.00.00.06101073557.593.197.6
Munich023123029.527.631.5100.087.727.7100.00.00.085902989137.263.985.9
RaR_Snack_Zone_02422052.251.952.5100.098.92.5100.00.00.0100203275.495.798.6
RaR_Snack_Zone_04431161.261.061.5100.099.22.5100.00.00.0100242079.594.499.0
DeepSORT-IoU 99
AA_Crossing_02139455.054.455.699.096.92.8100.00.00.036113471065.383.695.6
AA_Walking_021718863.462.564.399.196.36.1100.00.00.0103235572574.482.095.2
Munich023123024.222.825.897.285.831.899.60.40.0985170273715136.562.981.1
RaR_Snack_Zone_02422057.757.358.2100.098.53.2100.00.00.0130177278.090.498.2
RaR_Snack_Zone_04431169.168.769.599.998.83.799.70.30.0151191183.287.298.5
DeepSORT-BBX 2 × -IoU 99
AA_Crossing_02139493.892.595.299.896.92.8100.00.00.036245293.885.096.5
AA_Walking_021718888.784.493.499.790.017.3100.00.00.02958421287.086.488.6
Munich023123073.170.975.398.993.214.2100.00.00.0441675655682.562.991.7
RaR_Snack_Zone_02422090.189.990.499.899.21.799.10.90.07237494.787.998.8
RaR_Snack_Zone_04431190.290.190.3100.099.80.7100.00.00.03049095.888.499.6
DeepSORT-BBX 2 × -IoU 99 -FT
AA_Crossing_02139493.192.793.4100.099.30.6100.00.00.08043195.585.199.2
AA_Walking_021718893.192.493.799.898.42.5100.00.00.043642996.686.598.1
Munich023123073.371.275.599.093.313.9100.00.00.0432635635482.762.991.9
RaR_Snack_Zone_02422090.189.990.499.899.21.799.10.90.07237494.787.998.8
RaR_Snack_Zone_04431190.290.190.3100.099.80.7100.00.00.03049095.888.499.6
SORT-IoU 99
AA_Crossing_02139455.955.456.599.197.25.5100.00.00.03310343966.083.596.0
AA_Walking_021718864.063.264.999.396.75.3100.00.00.090195502175.382.095.8
Munich023123024.623.625.898.089.722.299.60.40.0689122254410845.262.886.7
RaR_Snack_Zone_02422057.757.358.2100.098.53.2100.00.00.0130177278.090.498.2
RaR_Snack_Zone_04431169.168.769.599.998.83.799.70.30.0151191183.287.298.5
SORT-BBX 2 × -IoU 99
AA_Crossing_02139493.192.793.4100.099.30.6100.00.00.08045195.385.099.1
AA_Walking_021718894.593.995.199.398.62.2100.00.00.037230697.486.598.5
Munich023123080.479.681.399.397.25.7100.00.00.0176422843791.863.096.4
RaR_Snack_Zone_02422090.590.290.899.899.21.799.10.90.07234495.087.998.8
RaR_Snack_Zone_04431190.590.490.7100.099.80.7100.00.00.03045096.188.499.6
Tracktor++
AA_Crossing_02139412.719.69.448.2100.020.151.128.8058843210710.10.13
AA_Walking_021718810.727.56.723.295.83.243.153.72720504261546.30.13
Munich02312307.816.75.122.774.52.241.356.67464736965412−0.80.08
RaR_Snack_Zone_02422033.854.524.540.289.517.745.536.8415171342720.00.09
RaR_Snack_Zone_04431132.550.224.042.989.822.244.133.7607022312519.30.06
SMSOT-CNN
AA_Crossing_02139449.949.750.152.151.642.624.552.123.455454411712.368.83.2
AA_Walking_021718830.730.231.333.832.7109.615.538.945.61864176734140−32.768.0−36.0
Munich023123023.622.724.528.826.7156.38.638.353.148464363105316−52.168.4−50.4
RaR_Snack_Zone_02422061.661.461.864.463.978.537.362.30.431430823927.977.928.0
RaR_Snack_Zone_04431161.261.161.363.863.6112.534.464.61.045044554826.876.727.2
EOT-D 17
AA_Crossing_02139494.494.494.495.395.24.191.58.50.0545343490.273.890.5
AA_Walking_021718894.694.095.196.995.86.796.82.70.511482106392.376.692.6
Munich023123076.075.876.277.076.546.644.354.80.9144614091593053.160.453.4
RaR_Snack_Zone_02422095.094.995.196.596.38.087.712.30.0323031692.577.692.8
RaR_Snack_Zone_04431195.295.195.296.396.311.576.223.80.0464553192.278.692.5
Table 9. Different network configurations.
Name | SNN | LSTM | GCNN | SE Layers | OHEM
SMSOT-CNN | 🗸 | × | × | × | ×
AerialMPTNet_LSTM | 🗸 | 🗸 | × | × | ×
AerialMPTNet_GCNN | 🗸 | × | 🗸 | × | ×
AerialMPTNet | 🗸 | 🗸 | 🗸 | × | ×
AerialMPTNet_SE | 🗸 | 🗸 | 🗸 | 🗸 | ×
AerialMPTNet_OHEM | 🗸 | 🗸 | 🗸 | × | 🗸
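A compact way to read Table 9 is as a set of feature flags on top of the base Siamese network. The hypothetical Python configuration below simply mirrors the table (the class and flag names are illustrative, not taken from the authors' code); the SNN is always enabled.

    from dataclasses import dataclass

    @dataclass
    class TrackerConfig:
        # The Siamese network (SNN) is always present; only the extras are toggled.
        use_lstm: bool = False
        use_gcnn: bool = False
        use_se_layers: bool = False
        use_ohem: bool = False

    CONFIGS = {
        "SMSOT-CNN":         TrackerConfig(),
        "AerialMPTNet_LSTM": TrackerConfig(use_lstm=True),
        "AerialMPTNet_GCNN": TrackerConfig(use_gcnn=True),
        "AerialMPTNet":      TrackerConfig(use_lstm=True, use_gcnn=True),
        "AerialMPTNet_SE":   TrackerConfig(use_lstm=True, use_gcnn=True, use_se_layers=True),
        "AerialMPTNet_OHEM": TrackerConfig(use_lstm=True, use_gcnn=True, use_ohem=True),
    }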
Table 10. SMSOT-CNN on the KIT AIS and AerialMPT datasets.
Sequences | # Imgs | GT | IDF1↑ | IDP↑ | IDR↑ | Rcll↑ | Prcn↑ | FAR↓ | MT%↑ | PT%↑ | ML%↓ | FP↓ | FN↓ | IDS↓ | FM↓ | MOTA↑ | MOTP↑ | MOTAL↑
KIT AIS Pedestrian Dataset
AA_Crossing_02139449.449.249.651.751.342.9222.460.617.055854815881.266.82.4
AA_Walking_021718829.629.030.231.930.6113.769.145.745.21934182025139−41.565.7−40.6
Munich023120.723019.921.524.522.6165.453.544.352.25129462591271−60.767.1−59.3
RaR_Snack_Zone_02422063.162.963.464.263.779.035.063.61.431631013927.578.227.6
RaR_Snack_Zone_04431163.563.363.765.364.9108.535.064.01.043442734829.876.730.0
Overall69104332.531.733.435.733.9121.3222.256.021.883717730135585−35.070.0−33.9
AerialMPT Dataset
Bauma31660929.328.630.034.633.0385.699.947.143.061715748200458−37.969.1−35.7
Bauma62627030.828.633.337.732.3161.2312.257.430.441923311115302−43.467.7−41.2
Karlsplatz2714630.729.432.233.830.894.936.958.234.9256322332695−42.967.9−42.2
Pasing72410357.754.561.361.955.143.4235.954.49.71042786713611.167.611.4
Pasing8278333.532.634.435.133.350.308.454.237.4135812531082−35.767.0−35.2
Witt818515.815.715.916.416.2150.381.120.578.41203118419−68.661.5−68.6
Overall128139632.030.733.436.633.6129.1310.747.741.616,52914,5153591082−37.268.0−35.6
KIT AIS Vehicle Dataset
MunichStreet02204787.485.090.190.585.35.8087.28.54.3116711774.880.674.9
StuttgartCrossroad01144967.363.671.574.966.614.8657.130.612.320813931736.875.337.3
MunichCrossroad02456650.649.551.753.551.324.3845.527.327.21097100117411.969.42.6
MunichStreet04296883.582.484.785.883.68.8376.514.78.825621561568.679.768.9
Overall10823068.066.469.771.367.915.5365.720.413.916771426278037.175.837.6
Table 11. AerialMPTNet_LSTM on the KIT AIS and AerialMPT datasets. The best overall values of the two configurations on the KIT AIS pedestrian dataset are highlighted.
Sequences | # Imgs | GT | IDF1↑ | IDP↑ | IDR↑ | Rcll↑ | Prcn↑ | FAR↓ | MT%↑ | PT%↑ | ML%↓ | FP↓ | FN↓ | IDS↓ | FM↓ | MOTA↑ | MOTP↑ | MOTAL↑
KIT AIS Pedestrian Dataset—Frozen Weights
AA_Crossing_02139442.041.842.244.844.548.9213.859.626.66366261399−12.368.4−11.3
AA_Walking_021718834.734.035.437.235.8104.948.055.336.71784167822227−30.467.4−29.7
Munich023123026.025.126.933.130.8146.816.157.836.145514098191463−44.367.8−41.2
RaR_Snack_Zone_02422057.156.957.359.058.690.2529.169.51.436135514217.172.917.2
RaR_Snack_Zone_04431164.764.464.966.365.9105.2539.658.81.642141545231.773.832.0
Overall69104335.534.636.340.438.5112.3622.060.317.777537172231883−26.069.3−24.1
KIT AIS Pedestrian Dataset—Trainable Weights
AA_Crossing_02139447.149.947.349.649.244.7723.448.927.75825721191−2.668.2−1.8
AA_Walking_021718839.839.240.541.940.596.4718.646.834.61640155331215−20.767.2−19.6
Munich023123029.628.630.837.134.5139.108.359.632.143123852221506−36.967.1−33.3
RaR_Snack_Zone_02422063.062.863.264.964.477.5037.360.02.731030443128.672.228.9
RaR_Snack_Zone_04431167.667.567.869.168.896.5046.050.83.238638034337.573.337.7
Overall69104339.738.840.644.642.6104.7828.953.817.372306661270886−17.868.8−15.5
AerialMPT Dataset
Bauma31660928.327.729.034.633.0386.008.451.240.461765745246608−38.571.0−35.7
Bauma62627033.231.235.539.334.5152.3513.058.528.539613225135387−37.870.1−35.3
Karlsplatz2714648.447.050.051.448.268.8924.755.519.81860164116140−4.269.7−3.8
Pasing72410361.058.563.664.359.238.0835.956.37.8914737512719.870.520.0
Pasing8278341.340.642.142.741.443.7818.150.631.311821108490−18.769.4−18.6
Witt818515.615.515.717.317.1148.752.723.873.511901171324−66.961.1−66.8
Overall128139635.734.537.040.537.7119.4012.849.837.415,28313,6274091376−28.170.1−26.3
KIT AIS Vehicle Dataset
MunichStreet02204781.979.984.084.980.67.6074.510.614.91521134363.979.664.4
StuttgartCrossroad01144965.962.469.972.765.015.5059.226.514.321715121133.276.233.5
MunichCrossroad02456657.756.059.560.656.921.9348.533.318.2987850224313.769.414.7
MunichStreet04296888.788.389.189.989.05.7986.87.45.81681532378.779.878.8
Overall10823071.669.873.474.570.914.1167.419.613.015241267306043.375.743.9
Table 12. Overall Performances of Different Tracking Methods on the KIT AIS and AerialMPT Datasets. The first and second best values on each dataset are highlighted.
Methods | IDF1↑ | IDP↑ | IDR↑ | Rcll↑ | Prcn↑ | FAR↓ | MT%↑ | PT%↑ | ML%↓ | FP↓ | FN↓ | IDS↓ | FM↓ | MOTA↑ | MOTP↑ | MOTAL↑
KIT AIS Pedestrian Dataset
KCF9.08.89.310.39.8165.61.153.845.111,42610,78232116−84.987.2−84.7
Median Flow18.518.318.819.519.0144.77.755.836.59986967830161−63.877.7−63.5
CSRT16.016.915.217.519.4126.59.651.039.48732992491254−55.978.4−55.1
MOSSE9.18.99.310.510.0163.80.854.045.211,30310,76531133−85.886.7−83.5
Tracktor++6.69.05.210.818.781.71.128.470.5564810,723648367−41.540.5
Stacked-DCFNet30.030.230.933.132.3120.513.862.623.683168051139651−37.371.6−36.1
SMSOT-CNN32.531.733.435.733.9121.322.256.021.883717730135585−35.070.0−33.9
AerialMPTNet L S T M (Ours)39.738.840.644.642.6104.828.953.817.372306661270886−17.868.8−15.5
AerialMPTNet G C N N (Ours)37.536.738.442.040.0109.525.355.319.475556980259814−23.069.6−20.9
AerialMPTNet (Ours)40.639.741.545.143.2103.428.155.316.671386597236897−16.269.6−14.2
AerialMPTNet S E (Ours)38.337.539.142.841.1107.227.454.518.173956876250818−20.769.9−18.7
AerialMPTNet O H E M (Ours)38.637.739.442.740.9107.726.155.818.174356889254854−21.269.5−19.1
AerialMPT Dataset
KCF11.911.512.313.412.5167.23.717.079.321,40719,82086212−80.577.2−80.1
Median Flow12.212.012.413.112.7162.01.720.278.120,73219,88346144−77.777.8−77.5
CSRT16.916.617.120.319.7148.52.937.859.319,01118,235426668−64.674.6−62.7
MOSSE12.111.712.413.712.9165.73.817.978.321,20419,74985194−79.380.0−78.9
Tracktor++4.08.83.15.08.793.00.17.692.311,90721,752399345−48.840.3
Stacked-DCFNet28.027.628.531.430.4128.39.444.246.416,42215,712322944−41.872.3−40.4
SMSOT-CNN32.030.733.436.633.6129.110.747.741.616,52914,5153591082−37.268.0−35.6
AerialMPTNet L S T M (Ours)35.734.537.040.537.7119.412.849.837.415,28313,6274091376−28.170.1−26.3
AerialMPTNet G C N N (Ours)37.035.738.342.039.1117.015.646.038.414,98313,2794331229−25.469.7−23.5
AerialMPTNet (Ours)37.836.539.343.140.0115.515.349.934.814,78213,0224361269−23.469.7−21.5
AerialMPTNet S E (Ours)38.937.540.444.140.9113.817.048.134.914,56812,7994301212−21.469.8−19.6
AerialMPTNet O H E M (Ours)37.235.838.742.439.3117.316.046.837.215,01613,1814301284−25.169.8−23.2
KIT AIS Vehicle Dataset
KCF41.339.043.945.640.430.927.033.539.5333927085396−22.672.3−21.6
Median Flow42.039.544.946.340.831.032.240.027.8334826692347−21.482.0−21.0
CSRT76.772.181.983.173.114.172.621.75.71520841214652.180.752.5
MOSSE29.027.430.832.428.836.819.630.050.4397733645681−48.775.0−47.6
Tracktor++55.366.647.257.380.76.330.047.422.6681212532320437.177.4
Stacked-DCFNet73.871.276.677.271.814.069.115.215.71512113393946.682.046.8
SMSOT-CNN68.066.469.771.367.915.565.720.413.916771426278037.175.837.6
AerialMPTNet L S T M (Ours)71.669.873.474.570.914.167.419.613.015241267306043.375.743.9
AerialMPTNet G C N N (Ours)71.169.472.974.170.614.267.018.714.315361289225842.875.943.2
AerialMPTNet (Ours)70.068.371.873.970.314.466.520.912.615561299296742.076.342.6
AerialMPTNet S E (Ours)70.068.471.773.269.814.663.524.811.715741334238441.175.641.5
AerialMPTNet O H E M (Ours)71.770.073.474.671.213.967.019.613.415051262276643.875.544.3
Table 13. AerialMPTNet_GCNN on the KIT AIS and AerialMPT datasets.
Sequences | # Imgs | GT | IDF1↑ | IDP↑ | IDR↑ | Rcll↑ | Prcn↑ | FAR↓ | MT%↑ | PT%↑ | ML%↓ | FP↓ | FN↓ | IDS↓ | FM↓ | MOTA↑ | MOTP↑ | MOTAL↑
KIT AIS Pedestrian Dataset
AA_Crossing_02139443.543.343.745.545.148.418.151.130.86296191190−10.968.5−10.1
AA_Walking_021718835.835.336.238.237.2101.314.947.937.21723165035204−27.668.1−26.3
Munich023123029.12830.235.532.9142.98.353.937.844313951204434−40.268.1−36.9
RaR_Snack_Zone_02422055.255.055.456.956.594.728.269.52.337937334112.773.313.0
RaR_Snack_Zone_04431167.26767.368.568.298.244.452.13.539338764536.173.936.5
Overall69104337.536.738.442.040.0109.525.355.319.475556980259814−23.069.6−20.9
AerialMPT Dataset
Bauma31660929.628.930.436.534.7376.711.348.340.460285581276550−35.270.0−32.1
Bauma62627036.734.439.343.738.2144.220.450.429.237502994126329−29.370.6−26.9
Karlsplatz2714643.772.345.246.443.475.615.863.021.22042180925145−14.968.5−14.2
Pasing72410368.666.071.471.666.131.551.539.88.775685749634.771.034.9
Pasing8278341.240.442.142.741.044.018.151.830.111881108294−18.968.2−18.9
Witt818514.114.014.215.315.1152.41.619.578.912191200015−70.860.8−70.8
Overall128139637.035.738.342.039.1117.115.646.038.414,98313,2794331229−25.469.7−23.5
KIT AIS Vehicle Dataset
MunichStreet02204782.680.584.785.481.17.476.66.417.01481094365.079.565.5
StuttgartCrossroad01144970.066.573.876.769.113.665.322.412.319012921142.175.742.3
MunichCrossroad02456656.354.758.059.456.022.344.034.821.21005876144112.170.012.7
MunichStreet04296887.386.887.888.587.46.783.88.87.41931752375.679.775.7
Overall10823071.169.472.974.170.614.267.018.714.315361289225842.875.943.2
Table 14. AerialMPTNet on the KIT AIS and AerialMPT datasets.
Sequences | # Imgs | GT | IDF1↑ | IDP↑ | IDR↑ | Rcll↑ | Prcn↑ | FAR↓ | MT%↑ | PT%↑ | ML%↓ | FP↓ | FN↓ | IDS↓ | FM↓ | MOTA↑ | MOTP↑ | MOTAL↑
KIT AIS Pedestrian Dataset
AA_Crossing_02139446.745.646.949.348.845.123.451.125.55865761292−3.469.7−2.5
AA_Walking_021718841.440.842.143.742.393.617.051.631.41591150425231−16.868.5−15.9
Munich023123031.230.232.337.835.3136.810.455.733.942403808192498−34.567.6−31.4
RaR_Snack_Zone_02422059.058.859.260.960.586.033.265.01.8344333843420.773.421.1
RaR_Snack_Zone_04431168.568.368.669.869.594.245.751.82.537737134238.974.239.1
Overall69104340.639.741.545.143.2103.428.155.316.671386597236897−16.269.6−14.2
AerialMPT Dataset
Bauma31660631.230.432.038.236.3368.111.651.736.758905435277582−32.070.8−28.9
Bauma62627037.234.839.944.238.6143.717.058.124.937362964123333−28.470.2−26.1
Karlsplatz2714645.644.247.148.645.672.419.961.618.51954173325153−10.067.4−9.3
Pasing72410367.664.870.771.365.332.649.543.76.878259359333.170.733.3
Pasing8278339.738.740.841.339.245.815.755.428.912381134283−22.968.9−22.8
Witt818516.015.916.117.917.6147.72.724.373.011821163425−65.960.1−65.7
Overall128139637.836.539.343.140.0115.515.349.934.814,78213,0224361269−23.469.7−21.5
KIT AIS Vehicle Dataset
MunichStreet02204783.281.185.486.382.007.176.610.612.71411024366.980.167.3
StuttgartCrossroad01144968.465.072.275.367.814.1461.226.512.319813711639.476.339.5
MunichCrossroad02456654.552.956.358.554.922.943.937.918.2103389520459.670.110.5
MunichStreet04296886.586.087.089.188.06.385.37.47.31841654376.880.277.0
Overall10823070.068.371.873.970.314.466.520.912.615561299296742.076.342.6
Table 15. Comparison of AerialMPTNet trained with the L1 and Huber losses.
Loss | IDF1↑ | IDP↑ | IDR↑ | Rcll↑ | Prcn↑ | FAR↓ | GT | MT%↑ | PT%↑ | ML%↓ | FP↓ | FN↓ | IDS↓ | FM↓ | MOTA↑ | MOTP↑ | MOTAL↑
KIT AIS Pedestrian Dataset
L1 | 40.6 | 39.7 | 41.5 | 45.1 | 43.2 | 103.45 | 1043 | 28.1 | 55.3 | 16.6 | 7138 | 6597 | 236 | 897 | −16.2 | 69.6 | −14.2
Huber | 38.8 | 37.9 | 39.7 | 43.1 | 41.1 | 107.42 | 1043 | 25.0 | 56.5 | 18.5 | 7412 | 6845 | 212 | 866 | −20.3 | 69.4 | −18.6
AerialMPT Dataset
L1 | 37.8 | 36.5 | 39.3 | 43.1 | 40.0 | 115.48 | 1396 | 15.3 | 49.9 | 34.8 | 14,782 | 13,022 | 436 | 1269 | −23.4 | 69.7 | −21.5
Huber | 38.0 | 36.7 | 39.5 | 43.0 | 39.9 | 115.70 | 1396 | 15.6 | 48.4 | 36.0 | 14,809 | 13,051 | 415 | 1196 | −23.5 | 69.9 | −21.7
KIT AIS Vehicle Dataset
L1 | 70.0 | 68.3 | 71.8 | 73.9 | 70.3 | 14.41 | 230 | 66.5 | 20.9 | 12.6 | 1556 | 1299 | 29 | 67 | 42.0 | 76.3 | 42.6
Huber | 67.2 | 65.5 | 69.0 | 70.6 | 67.1 | 15.98 | 230 | 67.0 | 17.4 | 15.6 | 1726 | 1461 | 34 | 65 | 35.2 | 76.1 | 35.9
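For completeness, the two regression losses compared in Table 15 can be written as below. This is a generic sketch applied to the network’s four box-regression outputs; the threshold delta = 1.0 is an assumption, not necessarily the value used in the experiments.

    import torch

    def l1_loss(pred, target):
        # Mean absolute error over all regression outputs.
        return (pred - target).abs().mean()

    def huber_loss(pred, target, delta=1.0):
        # Quadratic for residuals below delta, linear beyond it (Huber [76]).
        err = (pred - target).abs()
        quad = 0.5 * err ** 2
        lin = delta * (err - 0.5 * delta)
        return torch.where(err < delta, quad, lin).mean()

With delta = 1, the Huber loss behaves quadratically near zero and linearly for large residuals, which makes it smoother than L1 around small errors while still limiting the influence of outliers.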
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
