Article

Graph Based Multi-Layer K-Means++ (G-MLKM) for Sensory Pattern Analysis in Constrained Spaces

1 Department of Electrical and Computer Engineering, University of Texas at San Antonio, San Antonio, TX 78249, USA
2 EnviroCal Inc., Houston, TX 77084, USA
* Author to whom correspondence should be addressed.
Sensors 2021, 21(6), 2069; https://doi.org/10.3390/s21062069
Submission received: 3 March 2021 / Revised: 13 March 2021 / Accepted: 14 March 2021 / Published: 16 March 2021
(This article belongs to the Section Physical Sensors)

Abstract

In this paper, we focus on developing a novel unsupervised machine learning algorithm, named graph based multi-layer k-means++ (G-MLKM), to solve the data-target association problem when targets move on a constrained space and minimal information of the targets can be obtained by sensors. Instead of employing the traditional data-target association methods that are based on statistical probabilities, the G-MLKM solves the problem via data clustering. We first develop the multi-layer k-means++ (MLKM) method for data-target association at a local space given a simplified constrained space situation. Then a p-dual graph is proposed to represent the general constrained space when local spaces are interconnected. Based on the p-dual graph and graph theory, we then generalize MLKM to G-MLKM by first understanding local data-target association, extracting cross-local data-target association mathematically, and then analyzing the data association at intersections of that space. To exclude potential data-target association errors that disobey physical rules, we also develop error correction mechanisms to further improve the accuracy. Numerous simulation examples are conducted to demonstrate the performance of G-MLKM, which yields an average data-target association accuracy of 92.2%.

1. Introduction

Associating data with the right target in a multi-target environment is an important task in many research areas, such as object tracking [1], surveillance [2,3], and situational awareness [4]. Image sensors can be used to acquire rich information related to each target, which significantly simplifies the data-target association problem. For example, video cameras in a multi-target tracking mission can provide colors and shapes of targets as extra features in the association process [5]. However, considering the costs, security issues, and special environments (e.g., ocean tracking [6], military spying), a simple, reliable, and low-cost sensor network is often a preferred option [7]. Consequently, the data-target association problem needs to be further studied, especially in cases when the gathered data are cluttered and contain limited information related to the targets.
The existing approaches for data-target association generally consist of three procedures [8]: (i) Measurement collection–preparation before the data association process, such as object identification in video frames, radar signal processing, or raw sensor data accumulation; (ii) measurement prediction–predict the potential future measurements based on historical data, which yields an area (validation gate) that narrows down the search space; and (iii) optimal measurement selection–select the optimal measurement that matches the historical data according to a criterion (which varies across approaches) and update the history dataset. With the same procedures but different choices of the optimal measurement criterion, many data-target association techniques have already been developed. Among them, the well-known techniques include the global nearest neighbor standard filter (Global NNSF) [9], the joint probabilistic data association filter (JPDAF) [10,11,12,13], and multiple hypothesis tracking (MHT) [14].
The Global NNSF approach attempts to find the maximum likelihood estimate related to the possible measurements (non-Bayesian) at each scan (which measures the states of all targets simultaneously). For nearest neighbor correspondences, there is always a finite chance that the association is incorrect [15]. In addition, the Global NNSF assumes a fixed number of targets and cannot adjust the target number during the data association process. A different well-known technique for data association is the JPDAF, which computes association probabilities (weights) and updates each track with the weighted average of all validated measurements. Similar to the Global NNSF, the JPDAF cannot be applied in scenarios with target birth and death [1]. The most successful algorithm based on this data-oriented view is the MHT [16], which takes a delayed decision strategy by maintaining and propagating a subset of hypotheses in the hope that future data will disambiguate present decisions [1]. MHT is capable of associating noisy observations and is robust to a dynamic number of targets during the association process. The main disadvantage of MHT is its computational complexity, as the number of hypotheses increases exponentially over time.
There are other approaches available for data association, for example, Markov chain Monte Carlo data association (MCMCDA) [5,17]. MCMCDA takes the data-oriented, combinatorial optimization approach to the data association problem but avoids the enumeration of tracks by applying a sampling method called Markov chain Monte Carlo (MCMC) [17], which likewise implements statistical probabilities in the optimal measurement selection procedure. In this paper, we assume an object generates at most a single detection in each sensor scan, namely, a point-target assumption. Hence, approaches handling multiple detections per object per time step, i.e., extended-target tracking [18], are not discussed here. Data association in extended object tracking problems typically uses data clustering techniques, such as k-means [19], to address the extended-target issue by specifying which measurements come from the same source. The corresponding association problems can then be simplified to point-target tracking problems. For example, the authors in [20] proposed a clustering procedure that takes into account the uncertainty and imprecision of similarity measures by using a geometric fuzzy representation, which shows the potential of applying clustering algorithms to the data association problem.
The main contribution of this paper is the development of an efficient unsupervised machine learning algorithm, called graph based multi-layer k-means++ (G-MLKM). The proposed G-MLKM differs from the existing data-target association methods in three aspects. First, in contrast to previously developed data association approaches that estimate the potential measurement from historical data for each target and select an optimal one from validated measurements based on statistical probabilities, G-MLKM solves the data-target association problem from a data clustering perspective. Second, the previous approaches are mainly developed for sensors that are capable of obtaining information from a multi-dimensional environment, such as radars, sonars, and video cameras, whereas G-MLKM is designed for sensors that only provide limited information. Related research on tracking targets with binary proximity sensors can be found in [7], although its objective is limited to target counting, while G-MLKM can associate data with targets. Third, G-MLKM can address the case in which targets move in a constrained space, which requires dealing with data separation and merging.
The remainder of this paper is structured as follows. The data association problem in a constrained space and the corresponding tasks are described in Section 2. In Section 3, the multi-layer k-means++ (MLKM) method is developed for data-target association in a local space given a simplified constrained space situation. The graph based multi-layer k-means++ (G-MLKM) algorithm is then developed in Section 4 for general constrained spaces. Simulation examples are provided in Section 5. Section 6 provides a brief summary of the work presented in this paper.

2. Problem Formulation

In this paper, we consider the problem of data-target association when multiple targets move across a road network. Here, a road network is a set of connected road segments, along which low-cost sensors are spatially distributed. The sensors are used to collect information about targets, specifically the velocity of each target and the corresponding measurement time. We assume (1) there are no false alarms in the sensor measurements, and (2) a target's velocity does not change rapidly between two adjacent sensors. The collected information about a target is normally disassociated from the target itself, meaning that the target from which the information was captured cannot be directly identified using the information. Hence, data-target association is necessary.
Figure 1 shows one road network example that consists of 6 road segments. Without loss of generality, let the total number of road segments in a road network be denoted as $L$. The road segments are denoted as $R_1, R_2, \ldots, R_L$, respectively. The length of road segment $R_i$ is denoted as $D_i$ for $i = 1, 2, \ldots, L$. To simplify the discussion, we assume the road segments are for one-way traffic, i.e., targets cannot change their moving directions within one road segment. However, when a road segment allows bidirectional traffic, we can separate it into two unidirectional road segments and the proposed approach directly applies. Let $S_i = \{S_{i1}, S_{i2}, \ldots, S_{iN_i}\}$ be a set of $N_i$ sensors placed along the direction of road segment $R_i$. In other words, for sensor $S_{ij} \in S_i$, the larger the subscript $j$, the farther the sensor is located from the starting point of road segment $R_i$. We denote the distance between sensor $S_{ij}$ and the starting point of road segment $R_i$ as $d_{ij}$. Hence, the position set for sensors in $R_i$ relative to the starting point can be denoted as $P_i = \{d_{i1}, d_{i2}, \ldots, d_{iN_i}\}$, where $0 \leq d_{i1} < d_{i2} < \cdots < d_{iN_i} \leq D_i$.
For each sensor $S_{ij}$, its measurements are collected and stored in chronological order. The collection is denoted as a column vector $X_{ij}$, such that $X_{ij} = [x_{ij}^1, x_{ij}^2, \ldots, x_{ij}^{m_{ij}}]'$, where $i \in \{1, 2, \ldots, L\}$, $j \in \{1, 2, \ldots, N_i\}$, the prime symbol represents the transpose of a vector, $m_{ij}$ is the total number of measurements in $X_{ij}$, and $x_{ij}^n$, $n \in \{1, 2, \ldots, m_{ij}\}$, denotes an individual measurement in $X_{ij}$. In particular, $x_{ij}^n = [v_{ij}^n, t_{ij}^n]$ stores the measured velocity $v_{ij}^n$ when one target passed by sensor $S_{ij}$ at time $t_{ij}^n$. As the elements in $X_{ij}$ are stored in chronological order, the recorded times satisfy $t_{ij}^1 < t_{ij}^2 < \cdots < t_{ij}^{m_{ij}}$, so measurements can be distinguished by the superscript $n$. All the measurement vectors recorded by sensors located in the same road segment $R_i$ are stored in a matrix $X_i$, such that $X_i = [\bar X_{i1}, \bar X_{i2}, \ldots, \bar X_{iN_i}]$, where $X_i \in \mathbb{R}^{m_i \times N_i}$, $m_i = \max_j \{m_{ij}\}$, and each column of the matrix is defined as $\bar X_{ij} = [X_{ij}', 0_{1 \times (m_i - m_{ij})}]'$. If $m_{ij} = m_i$, $\bar X_{ij} = X_{ij}$. The appended all-zero entries in $\bar X_{ij}$ unify the lengths of the columns of $X_i$, considering that missed detections may happen or targets may remain (or stop) inside the road network during a given data collection period.
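As a concrete illustration, the zero-padding that assembles $X_i$ from per-sensor measurement lists can be sketched as follows (a minimal sketch; the tuple layout and helper name are ours, not the authors' implementation):

```python
def build_segment_matrix(columns):
    """Stack per-sensor measurement lists (already in chronological order)
    into a rectangular matrix, zero-padding shorter columns so every
    column has length m_i = max_j m_ij."""
    m = max(len(col) for col in columns)
    return [[col[r] if r < len(col) else 0 for col in columns]
            for r in range(m)]

# Example: sensor 1 recorded 3 measurements, sensor 2 only 2 (missed detection),
# so the second column is padded with one zero entry.
X = build_segment_matrix([[(12.0, 0.1), (11.5, 0.9), (10.8, 1.7)],
                          [(12.1, 0.6), (11.4, 1.4)]])
```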
The road network collects $X_i$, $i = 1, \ldots, L$, which only include information about targets' velocities and the corresponding measurement times. In order to solve data-target association based on the $L$ matrices, three tasks need to be accomplished. The first task (Task 1) is to cluster $X_i$ into $m_i$ groups for each road segment. Denote the data grouping result for each road segment as a new matrix $T_i$, such that:

$$T_i = [\bar T_{i1}, \bar T_{i2}, \ldots, \bar T_{im_i}], \quad i = 1, 2, \ldots, L, \tag{1}$$

where $\bar T_{iz}$, $z = 1, 2, \ldots, m_i$, is a row vector consisting of $N_i$ measurements associated with the same target, defined as:

$$\bar T_{iz} = [\tau_{iz}^1, \tau_{iz}^2, \ldots, \tau_{iz}^{N_i}], \tag{2}$$

where $\tau_{iz}^u$ is an entry of $\bar X_{iu}$ for $u = 1, 2, \ldots, N_i$. Then a new row vector $T_{iz}$ is obtained from $\bar T_{iz}$ by excluding all zero elements.
The second task (Task 2) is to link the trajectories of targets at road intersections by pairing sensors $S_{i1}$/$S_{iN_i}$ from multiple road segments that are connected geometrically. In particular, let $OT_{ints}$ denote the index set of road segments that have outgoing targets related to one intersection $ints$, and $IT_{ints}$ denote the index set of road segments that have ingoing targets related to the same intersection. Since the road segments are unidirectional, the two index sets have no overlaps, i.e., $OT_{ints} \cap IT_{ints} = \emptyset$. In particular, only a dataset that has a subscript of $iN_i$ (according to the unidirectional road segment setting) can be a candidate for $OT_{ints}$. Similarly, only a dataset that has a subscript of $i1$ can be a candidate for $IT_{ints}$. Therefore, datasets that belong to targets moving towards the intersection $ints$ are denoted as:

$$Q_I^{ints} = \{x_{iN_i}^{k_O} \mid x_{iN_i}^{k_O} \in X_{iN_i}, \; i \in OT_{ints}\}, \tag{3}$$

while datasets that belong to targets leaving the intersection $ints$ are denoted as:

$$Q_O^{ints} = \{x_{i1}^{k_I} \mid x_{i1}^{k_I} \in X_{i1}, \; i \in IT_{ints}\}, \tag{4}$$

where $k_O \in \{1, 2, \ldots, m_{iN_i}\}$, $k_I \in \{1, 2, \ldots, m_{i1}\}$, $OT_{ints} \subseteq \{1, 2, \ldots, L\}$, and $IT_{ints} \subseteq \{1, 2, \ldots, L\}$. Since targets may stop in the intersection or the data collection process may terminate before targets exit the intersection, the total number of targets heading into an intersection $ints$ is always greater than or equal to the number of targets leaving the same intersection, i.e., $|Q_I^{ints}| \geq |Q_O^{ints}|$. For simplicity of notation, denote $|Q_I^{ints}|$ and $|Q_O^{ints}|$ as $n_I$ and $n_O$. Then we can calculate $n_I$ and $n_O$ via:

$$n_I = \sum_{i \in OT_{ints}} m_{iN_i} \quad \text{and} \quad n_O = \sum_{i \in IT_{ints}} m_{i1}. \tag{5}$$

The pairing task for intersection $ints$ can be denoted as a mapping function $f$, such that:

$$f(x_{iN_i}^k) \mapsto x_{l1}, \quad k \in \{1, 2, \ldots, n_I\}, \tag{6}$$

where $x_{iN_i}^k \in Q_I^{ints}$ and $x_{l1} \in Q_O^{ints} \cup \{\underbrace{0, 0, \ldots, 0}_{n_I - n_O}\}$. In particular, the function $f$ for intersection $ints$ can be represented as a permutation matrix $G_{ints} \in \mathbb{R}^{n_I \times n_I}$.
The last task (Task 3) is to merge data groups on the road network when loops may exist, i.e., targets may pass the same road segment several times. Hence, multiple data association groups may belong to the same target. The merged results can be denoted as $L$ symmetric matrices $G_{R_i} \in \mathbb{R}^{m_i \times m_i}$, one for each road segment $R_i$. If targets only pass road segment $R_i$ once, $G_{R_i}$ is an identity matrix.
In this paper, we propose a new unsupervised machine learning algorithm to solve data-target association for the collected $L$ matrices. In particular, the algorithm first creates a new clustering structure for data grouping in each matrix (associated with each road segment), and then leverages graph theory and clustering algorithms to link the matrices from different road segments at each intersection. Finally, the entire dataset can be analyzed and associated properly with the targets. The output of this algorithm is a detailed trajectory for each target with the captured velocities along the road segments. In the next two sections, the new data-target association algorithm is explained in detail. We begin the discussion with the special case in which the road network consists of a single road segment.

3. MLKM for a Single Road Segment

In this section, we consider the special case when $L = 1$, i.e., the road network only consists of one road segment, $R_1$. In this special case, there are neither intersections nor loops in the road network. Therefore, the tasks in identifying data-target associations reduce to clustering $X_1$ into $m_1$ groups (Task 1) only. One example of a matrix $X_1 \in \mathbb{R}^{10 \times 9}$ is shown in Figure 2, which plots the measurements for 10 different targets captured by nine equally spaced sensors on road segment $R_1$.

3.1. K-means++ Clustering and Deep Neural Network

K-means [19] and k-means++ [21] are perhaps the most common methods for data clustering. For a set of data points in an $N$-dimensional space, the two algorithms perform clustering by grouping the points that are closest to optimally placed centroids. From the machine learning perspective, k-means learns where to optimally place a pre-defined number of centroids such that the cost function, defined as $\Phi_Y(C) = \sum_{y \in Y} d^2(y, C)$, is minimized, where $d(y, C) = \min_{c_i \in C} \|y - c_i\|$ represents the distance between a measurement $y \in Y$ and its closest centroid, and $C = \{c_1, \ldots, c_k\}$ represents the set of centroids. The cost function is thus the sum of squared Euclidean distances from all data points to their closest centroids. The cost function and optimization algorithm are the same for k-means and k-means++; the only difference is that k-means++ places the initial guesses for the centroids in regions of data concentration, which improves the running time of Lloyd's algorithm and the quality of the final solution [21].
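To make the cost function and the seeding step concrete, the sketch below implements $\Phi_Y(C)$ and the k-means++ seeding rule in plain Python (illustrative only; the point format and random source are our assumptions, and a full implementation would run Lloyd iterations after seeding):

```python
import random

def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def cost(points, centroids):
    """Phi_Y(C): sum of squared distances from each point to its closest centroid."""
    return sum(min(dist2(p, c) for c in centroids) for p in points)

def kmeanspp_init(points, k, rng=None):
    """k-means++ seeding: the first centroid is chosen uniformly at random;
    each subsequent centroid is drawn with probability proportional to the
    squared distance to the nearest centroid chosen so far."""
    rng = rng or random.Random(0)
    centroids = [rng.choice(points)]
    while len(centroids) < k:
        d2 = [min(dist2(p, c) for c in centroids) for p in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centroids.append(p)
                break
    return centroids
```

Because the seeding weights favor points far from existing centroids, well-separated clusters tend to each receive a seed, which is what improves Lloyd's algorithm in practice.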
A much more complex boundary may exist between two data groups. Therefore, we also examine the potential performance of a deep neural network (DNN) [22] in the data association process, which is known for its capability of recognizing underlying patterns and defining better decision boundaries among data samples. For the purpose of evaluating the supervised DNN's capabilities, a slight modification of the problem is considered. Instead of a completely unlabeled dataset $X_1$, part of the measurements are pre-labeled, i.e., data-target relations for part of the measurements are known. In addition, we extend the measurement's dimensions to further include $vt$, $v^2 t$, and $vt^2$ as extra features so that the inner structure of the DNN can be simpler. Table 1 presents the detailed settings of the DNN framework.

3.2. K-means++ with Data Preprocessing

While a DNN can potentially provide better performance for the data association problem, it demands labeled datasets for training. In real scenarios, however, a training dataset may not be available. In contrast, k-means++ can cluster data samples without the need for a labeled dataset. This unsupervised property of k-means++ enables a wider application domain. Hence, k-means++ is more practical for the task of clustering $X_1$ into $m_1$ groups. Moreover, when the dataset $X_1$ is small and sparse, k-means++ performs well on the task of data-target association.
However, when the measurements are distributed along the time axis and velocity profiles are close, k-means++ tends to place the centroids in positions where data from different targets overlap and hence causes inaccurate data-target pairing. This happens because k-means uses the Euclidean distance to determine to which centroid a data sample $(v, t)$ belongs, i.e.,

$$\arg\min_{(v_i^*, t_i^*) \in C} (v - v_i^*)^2 + (t - t_i^*)^2, \tag{7}$$

where $C$ is the set of centroids. When data samples are distributed along the time axis, the time difference becomes the determining factor for the grouping results.
One natural way to balance the two components (time difference and velocity difference) in (7) is to preprocess $X_1$ before applying k-means++. The idea of preprocessing is similar to principal component analysis [23], which projects data onto a main axis. The preprocessed data sample is denoted as $\hat x_{1j}^n = [v_{1j}^n, \hat t_{1j}^n]$, where $\hat t_{1j}^n$ is given by:

$$\hat t_{1j}^n = t_{1j}^n - \frac{d_{1j} - d^*}{v_{1j}^n}, \tag{8}$$

where $j \in \{1, \ldots, N_1\}$, $n \in \{1, \ldots, m_{1j}\}$, $d_{1j}$ is the position of sensor $S_{1j}$ with respect to the starting point of road segment $R_1$, and $d^*$ is the reference point for the projection. In other words, $\hat t_{1j}^n$ is the expected starting time under a constant velocity ($v_{1j}^n$) model given the current time $t_{1j}^n$. Figure 3 shows the preprocessed result for the dataset in Figure 2. In this example, the reference point $d^*$ is selected as the starting point, and we can see clusters for each target have formed after data preprocessing.
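The projection $\hat t = t - (d - d^*)/v$ is simple to state in code. The sketch below assumes measurements arrive as (v, t) pairs and uses the segment's starting point as the default reference (both assumptions for illustration):

```python
def preprocess(measurements, d_sensor, d_ref=0.0):
    """Project each (v, t) measurement to the expected time at the reference
    point d_ref under a constant-velocity model:
    t_hat = t - (d_sensor - d_ref) / v."""
    return [(v, t - (d_sensor - d_ref) / v) for (v, t) in measurements]

# One target moving at 10 m/s passes sensors at d = 0 m and d = 100 m;
# both measurements project back to the same expected start time t_hat = 0.
a = preprocess([(10.0, 0.0)], d_sensor=0.0)    # -> [(10.0, 0.0)]
b = preprocess([(10.0, 10.0)], d_sensor=100.0) # -> [(10.0, 0.0)]
```

Measurements from one target thus collapse into a tight cluster in the $(v, \hat t)$ plane, which is exactly what makes the subsequent k-means++ step effective.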

3.3. Multi-Layer K-means++

Through the preprocessing procedure, data can be roughly separated for different targets, which provides dense and grouped subsets. The boundaries between two groups, however, may still be too complex for k-means++ to define, especially when $X_1$ is a large dataset and the grouped subsets are close to each other. Inspired by the DNN's capability of defining classification boundaries via a multi-layer structure and a back-propagation philosophy, we propose a new multi-layer k-means++ (MLKM) method that integrates the DNN's multi-layer structure with the clustering capabilities of k-means++ to overcome the complex boundary challenge.
The proposed MLKM algorithm is performed via three layers: (i) Data segmentation and clustering–the dataset is sequentially partitioned into smaller groups for the purpose of creating sparse data samples for k-means++; (ii) error detection and correction–check the clustered data by searching for errors with predefined rules and re-cluster the data using nearest neighbor concepts [24] if an error is found. Note that k-means++ associates data with the closest optimally placed centroid based on the Euclidean distance between data point and centroid, which is a scalar quantity; and (iii) cluster matching–match the clusters across segments by projecting the cluster centroids of all segments to the cluster centroid of the first segment and again grouping them based on k-means++. A detailed explanation of these three layers is given as follows.

3.3.1. Layer 1 (Data Segmentation & Clustering)

Without loss of generality, we assume that there are $K$ sensors per segment. The dataset $X_1 \in \mathbb{R}^{m_1 \times N_1}$ ($m_1$ and $N_1$ are the maximum number of measurements and the total number of sensors in sensor set $S_1$, respectively) is sequentially partitioned into $E$ segments, such that:

$$E = \begin{cases} N_1 / K, & N_1 \,\%\, K = 0, \\ \lfloor N_1 / K \rfloor + 1, & \text{otherwise}. \end{cases}$$

In other words, when $N_1 \,\%\, K \neq 0$, the last segment contains measurements from fewer than $K$ sensors. For simplicity of presentation, we assume $N_1 \,\%\, K = 0$ in the remainder of this paper; when $N_1 \,\%\, K \neq 0$, we can add extra artificial sensors with all-zero measurements. The data segments are then defined as $X_1^e = \bigcup_{j=(e-1)K+1}^{eK} \bar X_{1j}$, where $e = 1, 2, \ldots, E$. The k-means++ algorithm is then applied to each $X_1^e$ after excluding all zero elements. By aggregating the clustering results, we obtain a set of centroids for each $X_1^e$, $e = 1, 2, \ldots, E$, defined as $C_1^e = \{c_{11}^e, c_{12}^e, \ldots, c_{1m_1}^e\}$, and the measurements associated with each centroid $c_{1k}^e$ are represented as $T_{1k}^e$, where $k \in \{1, 2, \ldots, m_1\}$.
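The segmentation rule can be sketched as follows (a hypothetical helper; sensor indices are 0-based here purely for convenience):

```python
def segment_sensors(N1, K):
    """Partition sensor indices 0..N1-1 into E consecutive segments of K
    sensors each; the last segment is shorter when N1 % K != 0."""
    E = N1 // K + (0 if N1 % K == 0 else 1)
    return [list(range(e * K, min((e + 1) * K, N1))) for e in range(E)]
```

For example, 9 sensors with K = 3 yield three full segments, while 10 sensors yield a fourth segment containing a single sensor (which would be padded with artificial all-zero sensors under the assumption above).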

3.3.2. Layer 2 (Error Detection & Correction)

The first layer seeks to associate data within each data segment. Since the clustering standard used in k-means++ is a scalar quantity while the actual measurements are vectors, there are potential data association errors in $T_{1k}^e$. Hence an additional layer performing error detection and correction is needed. Error detection verifies logic rules to determine whether wrong data associations appear in $T_{1k}^e$. Error correction then re-associates the data in the identified wrong associations. To avoid repeating the same wrong re-association, the global nearest neighbor standard is chosen as the re-association technique instead of k-means++, given the assumption that a target's velocity does not change rapidly between two adjacent sensors.
We propose the following logic rules for error detection:
  • $|T_{1k}^e| > K$;
  • $\exists n_1 \neq n_2$, $x_{1l}^{n_1} \in T_{1k}^e$, $x_{1j}^{n_2} \in T_{1k}^e$, such that $l = j$;
  • $\exists l < j$, $x_{1l}^{n_1} \neq 0 \in T_{1k}^e$, $x_{1j}^{n_2} \in T_{1k}^e$, such that $t_{1l}^{n_1} \geq t_{1j}^{n_2}$,
where $|T_{1k}^e|$ denotes the cardinality of $T_{1k}^e$. The first rule fires when more than $K$ measurements appear in $T_{1k}^e$. The second rule fires when more than one measurement from the same sensor is associated with one target in $T_{1k}^e$. The third rule fires when a target is recorded at a later time by an earlier sensor. If one or more rules are satisfied, the corresponding $T_{1k}^e$ is considered an erroneous data association and is stored in $Y_1^{e*}$, where $Y_1^{e*}$ collects the wrong data associations in $X_1^e$.
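A direct translation of the three rules can be sketched as follows (clusters are represented here as (sensor_index, time) pairs, which is our simplification of the full measurement vectors):

```python
def violates_rules(cluster, K):
    """Return True if a cluster of (sensor_index, time) pairs breaks any rule:
    (1) more than K measurements; (2) two measurements from the same sensor;
    (3) an earlier sensor records the target at a later (or equal) time."""
    if len(cluster) > K:                          # rule 1
        return True
    sensors = [s for s, _ in cluster]
    if len(sensors) != len(set(sensors)):         # rule 2
        return True
    for s1, t1 in cluster:                        # rule 3
        for s2, t2 in cluster:
            if s1 < s2 and t1 >= t2:
                return True
    return False
```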
The error correction re-associates data in $Y_1^{e*}$ so as to break all the logic rules listed above. We propose to use the global nearest neighbor approach. Specifically, elements in $Y_1^{e*}$ that belong to measurements of sensor $S_{1\ell}$ are selected sequentially and evaluated against every measurement in $Y_1^{e*}$ that belongs to sensor $S_{1(\ell+1)}$ to obtain the best match. The evaluation is accomplished via the following optimization process:

$$\arg\min_{\kappa} \left| t_{1(\ell+1)}^{\kappa} - t_{1\ell}^{n} - \frac{d_{1(\ell+1)} - d_{1\ell}}{v_{1\ell}^{n}} \right|, \quad \text{s.t.} \;\; x_{1(\ell+1)}^{\kappa} \in Y_1^{e*}.$$
With this procedure, all $T_{1k}^e$ are updated with the corrected clusters and all $c_{1k}^e$ are re-calculated based on the updated $T_{1k}^e$. The corrected set of centroids $C_1^e$ is updated for all segments and grouped into $C_1 = \{C_1^1, C_1^2, \ldots, C_1^E\}$. The position of the centroid set $C_1^e$ is defined as:

$$d_1^e = \frac{1}{K} \sum_{j=(e-1)K+1}^{eK} d_{1j}. \tag{9}$$
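The global-nearest-neighbor re-association step picks, for each measurement at one sensor, the candidate at the next sensor whose arrival time best fits a constant-velocity prediction. A minimal sketch (the (v, t) candidate format is our assumption):

```python
def best_match(meas, candidates, d_gap):
    """Given meas = (v, t) at the upstream sensor, return the candidate
    (v', t') at the next sensor minimizing |t' - t - d_gap / v|, i.e. the
    constant-velocity arrival-time residual over the inter-sensor gap d_gap."""
    v, t = meas
    return min(candidates, key=lambda m: abs(m[1] - t - d_gap / v))

# A target at 10 m/s leaving the upstream sensor at t = 0 should reach a
# sensor 50 m downstream near t = 5, so the first candidate wins.
picked = best_match((10.0, 0.0), [(9.8, 5.1), (10.2, 7.9)], d_gap=50.0)
```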

3.3.3. Layer 3 (Cluster Matching)

Through the preceding two layers, data-target association can be accomplished for each data segment $X_1^e$ independently. However, the target associations are uncorrelated across data segments. In particular, the unsupervised k-means++ only groups data samples that belong to the same target while the clusters of each target are anonymous. Hence, it is still unclear how to associate the clusters among different segments.
In Layer 3, we project $C_1^e$, $e = 1, \ldots, E$, using the preprocessing technique stated in Section 3.2. More precisely, the time component of $c_{1k}^e \in C_1^e$ is preprocessed as:

$$\hat t_{1k}^e = t_{1k}^e - \frac{d_1^e - d_{11}}{v_{1k}^e}, \quad e \in \{1, \ldots, E\}, \; k \in \{1, \ldots, m_1\},$$

where $c_{1k}^e = [v_{1k}^e, t_{1k}^e]$, and $d_1^e$ is the position of centroid set $C_1^e$ defined in (9). Then k-means++ is applied to the preprocessed $C_1$ to find the clusters that group cluster centroids across data segments. Accordingly, the associated measurements $T_{1k}^e$ with respect to each centroid are merged into $T_{1k}$, which provides the complete data-target association result for the entire road segment.
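Layer 3 can be sketched as follows: project every segment's centroids to a common reference position and link projected centroids that land close together. The nearest-neighbor linking below is a simplified stand-in for re-running k-means++ on the projected centroids:

```python
def match_centroids(centroid_sets, seg_positions, d_ref):
    """centroid_sets[e] is a list of (v, t) centroids for segment e;
    seg_positions[e] is that segment's centroid-set position d_1^e.
    Each centroid is projected to t_hat = t - (d - d_ref) / v, then every
    later-segment centroid is chained to the closest first-segment centroid."""
    def project(cs, d):
        return [(v, t - (d - d_ref) / v) for (v, t) in cs]

    base = project(centroid_sets[0], seg_positions[0])
    chains = [[c] for c in centroid_sets[0]]
    for e in range(1, len(centroid_sets)):
        projected = project(centroid_sets[e], seg_positions[e])
        for (v, th), c in zip(projected, centroid_sets[e]):
            k = min(range(len(base)),
                    key=lambda i: (base[i][0] - v) ** 2 + (base[i][1] - th) ** 2)
            chains[k].append(c)
    return chains
```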
Note that the proposed MLKM method may not apply directly to the case when $L > 1$ (i.e., more than one road segment). Therefore, we propose a more general method, named G-MLKM, to solve the general data-target association problem for a general road network in the next section.

4. G-MLKM for a General Road Network

In this section, we consider the general case when the road network consists of multiple road segments. To solve the data-target association problem, we propose a new graph-based multi-layer k-means++ (G-MLKM) algorithm. In particular, G-MLKM uses graph theory to represent the road network as a graph, and then links data from different road segments at each intersection of the road network by analyzing the graph structure. The data-target association problem for a general road network is then solved by merging the clustering results at intersections with the MLKM results on each road segment.
We first briefly introduce graph theory and the representation of road networks using graphs as preliminaries. Then the procedures for G-MLKM are explained in detail. In particular, we begin with a new graph representation for the road network. Then the procedures for linking measurements at intersections (Task 2) are described. After that, we unify the results on road segments and intersections, and complete the data merging task (Task 3).

4.1. Preliminaries

4.1.1. Graph Theory

For a system of $L$ connected agents, its network topology can be modeled as a directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{v_1, v_2, \ldots, v_L\}$ and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ are, respectively, the set of agents and the set of edges that connect the agents. An edge $(v_i, v_j)$ in $\mathcal{E}$ means that agent $v_j$ can access the state information of agent $v_i$, but not necessarily vice versa [25]. The adjacency matrix $A \in \mathbb{R}^{L \times L}$ of the directed graph $\mathcal{G}$ is defined as $A = [a_{ij}]$, where $a_{ij} = 1$ if $(v_i, v_j) \in \mathcal{E}$ and $a_{ij} = 0$ otherwise.
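As a small illustration, the adjacency matrix of a directed graph can be built as follows (a toy helper with 0-based agent indices, not taken from the paper):

```python
def adjacency(n, edges):
    """A[i][j] = 1 iff there is a directed edge (v_i, v_j)."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = 1
    return A

# Three agents with edges v0 -> v1 and v1 -> v2:
A = adjacency(3, [(0, 1), (1, 2)])
```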

4.1.2. Graph Representation of Road Networks

There are two main strategies to represent road networks using a graph, namely the primal graph and the dual graph [26]. In a primal graph representation, road intersections or end points are represented by agents and road segments by edges [27], while in a dual graph representation, road segments are represented by agents and an edge exists if two roads intersect each other [28]. Compared with a primal graph, a dual graph focuses more on the topological relationships among road segments. As the data-target association for each road segment can be solved by the MLKM method, the focus here is to cluster data at each intersection. As a consequence, the dual graph is a better option. However, geometric properties such as road length are neglected by a dual graph. Hence, some further modification of the dual graph is needed.

4.2. G-MLKM Algorithm

In this subsection, we provide the detailed procedures for the G-MLKM algorithm, which is composed of the following three steps.

4.2.1. Modified Graph Representation for Road Networks

Considering the cases when targets may stop in a road segment or the data collection process may terminate before targets pass through a road segment, the total number of measurements collected by sensor $S_{iN_i}$ (located near the ending point of road segment $R_i$) may be less than that collected by sensor $S_{i1}$ (located near the starting point of road segment $R_i$). If the entire road segment is abstracted as one single agent, this inequality of measurements within the road segment may create issues for the subsequent data-target association process. Here, we modify the dual graph by incorporating the primal graph for the representation of the road segment. In other words, we propose to replace each road segment node in the dual graph by two agents with one directed edge connecting them, where the direction of the edge is determined by the traffic direction. In particular, we use the sensor nodes $S_{i1}$ and $S_{iN_i}$ as the two agents. We may neglect the edge between $S_{i1}$ and $S_{iN_i}$ because we focus on data-target association at intersections, while the data-target association within a road segment can be accomplished by the MLKM method without knowledge of the graph. Moreover, the connection between $S_{i1}$ and $S_{iN_i}$ is unidirectional when the traffic is unidirectional. We call the new graph the "p-dual graph", i.e., prime-based dual graph. An example of how to derive the p-dual graph is shown in Figure 4, where the original six agents in the dual graph are replaced by 12 agents and the edges between $S_{i1}$ and $S_{iN_i}$ are removed in the p-dual graph.
For a general road network with $L$ road segments, the agent set of the new p-dual graph is given by $V^* = \{S_{11}, S_{1N_1}, S_{21}, \ldots, S_{L1}, S_{LN_L}\}$ with the corresponding adjacency matrix $A^* \in \mathbb{R}^{2L \times 2L}$ given by:

$$A^* = [a_{ij}^*], \quad a_{ij}^* = \begin{bmatrix} 0 & 0 \\ a_{ij} & 0 \end{bmatrix}, \tag{10}$$

where each entry $a_{ij}$ of the dual graph's adjacency matrix is expanded into a $2 \times 2$ block.
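Under the node ordering $V^* = \{S_{11}, S_{1N_1}, S_{21}, \ldots\}$, the block expansion amounts to placing each $a_{ij}$ at the row of segment $i$'s exit sensor and the column of segment $j$'s entry sensor. A sketch (0-based indexing is our convention):

```python
def p_dual_adjacency(A):
    """Expand an L x L dual-graph adjacency matrix into the 2L x 2L p-dual
    matrix by replacing each entry a_ij with the 2x2 block [[0, 0], [a_ij, 0]],
    i.e. a_ij lands at row 2*i + 1 (exit sensor of segment i) and
    column 2*j (entry sensor of segment j)."""
    L = len(A)
    B = [[0] * (2 * L) for _ in range(2 * L)]
    for i in range(L):
        for j in range(L):
            B[2 * i + 1][2 * j] = A[i][j]
    return B
```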

4.2.2. Graph Analysis for Data Pairing at Intersections

From $A^*$ defined in (10), we can observe that the adjacency matrix $A^*$ has $L$ columns and $L$ rows that are all zeros. Hence, the sparse matrix $A^*$ can be further analyzed and decomposed to extract subgraphs related to different intersections. Then the task of linking the trajectories of targets at road intersections can be equivalently solved by pairing measurements of sensors $S_{i1}$/$S_{iN_i}$ from road segments in the subgraphs, which is further decomposed into the following three procedures.
Algorithm 1. Subgraph Extraction

Input: b_{ij} ∈ A*
Output: (OT_ints, IT_ints), ints ∈ {a, b, c, ...}
Idx_row = Idx_col = {1, 2, ..., |A*|}
i = 0
for ints in {a, b, c, ...} do
    OT_ints = IT_ints = ∅
    if |Idx_row| ≥ 1 then
        procedure Increment(i)
            i = i + 1
            if i ∈ Idx_row then
                return i
            else
                Increment(i)
        procedure Recursion(i)
            if Σ_{j ∈ Idx_col} b_{ij} ≥ 1 then
                OT_ints = OT_ints ∪ {i}
                procedure Extract(i)
                    for j in Idx_col do
                        if b_{ij} ≠ 0 then
                            IT_ints = IT_ints ∪ {j}
                    Idx_col = Idx_col \ IT_ints
                    Idx_row = Idx_row \ {i}
                    for j ∈ IT_ints do
                        if Σ_{l ∈ Idx_row} b_{lj} ≥ 1 then
                            for l in Idx_row do
                                if b_{lj} ≠ 0 then
                                    OT_ints = OT_ints ∪ {l}
                    if Idx_row ∩ OT_ints ≠ ∅ then
                        Extract(l ∈ (Idx_row ∩ OT_ints))
                    else
                        return (OT_ints, IT_ints)
            else
                Idx_row = Idx_row \ {i}
                i = i + 1
                Recursion(i)
    else
        break

i. Subgraph Extraction

The first procedure is to extract subgraphs from A*. Let the letters in the alphabet {a, b, c, ...} denote the names of the different intersections. The subgraph extraction procedure begins with intersection a, followed by b, c, and so on. For any intersection ints, the subgraph extraction is conducted by cross-searching the non-zero entries of the matrix A* in an alternating row and column pattern. The indices of rows and columns containing non-zero entries, which indicate the agents and edges included in that subgraph, are stored in the sets OT_ints and IT_ints, respectively. More precisely, OT_ints denotes the index set of road segments that have outgoing targets related to intersection ints, and IT_ints denotes the index set of road segments that have ingoing targets related to the same intersection. The index storing steps are OT_ints = OT_ints ∪ {i} and IT_ints = IT_ints ∪ {j}, where i and j are the corresponding row and column indices, respectively. The iterative search terminates and returns (OT_ints, IT_ints) when there are no more non-zero elements in the recorded row and column indices. Algorithm 1 gives the pseudocode for the subgraph extraction procedure. The extracted results are denoted (OT_ints, IT_ints), where ints ∈ {a, b, c, ...}.
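The cross-search described above can be sketched as follows. This is a simplified, iterative set-based variant of Algorithm 1 rather than a transcription of its recursive pseudocode (the function name `extract_subgraphs` is ours): each subgraph's row set OT and column set IT are grown alternately until no new non-zero entry links in, which is the same fixed point the recursion reaches:

```python
import numpy as np

def extract_subgraphs(A_star):
    """Group rows (road segments with outgoing targets, OT) and
    columns (road segments with ingoing targets, IT) of the sparse
    p-dual adjacency matrix into per-intersection subgraphs by
    alternately expanding each set along non-zero entries."""
    A = np.asarray(A_star)
    remaining_rows = {i for i in range(A.shape[0]) if A[i].any()}
    subgraphs = []
    while remaining_rows:
        OT = {remaining_rows.pop()}   # seed a new intersection
        IT = set()
        changed = True
        while changed:
            changed = False
            # columns reachable from the current rows
            new_IT = {j for i in OT for j in np.flatnonzero(A[i])} - IT
            if new_IT:
                IT |= new_IT
                changed = True
            # rows reachable from the current columns
            new_OT = {i for j in IT for i in np.flatnonzero(A[:, j])} - OT
            if new_OT:
                OT |= new_OT
                changed = True
        remaining_rows -= OT
        subgraphs.append((sorted(OT), sorted(IT)))
    return subgraphs
```

For the p-dual graph of Figure 4 this yields three (OT, IT) pairs, one per intersection a, b, and c.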

ii. Data Preprocessing at Intersections

Given that the subgraph describing an intersection, ints, is available from the preceding subgraph extraction procedure, the datasets X_{i1}/X_{iN_i} subject to the pairing task for the corresponding intersection can be pinpointed. In particular, (3) and (4) define the dataset for intersection ints as an incoming dataset Q_I^{ints} and an outgoing dataset Q_O^{ints}, respectively. Since we assume that (1) there is no false alarm in the measurements, and (2) a target's velocity does not change rapidly between two adjacent sensors, data pairing at intersections may be interpreted as data clustering. A suitable machine learning technique for data clustering is k-means++. However, the sensors S_{i1}/S_{iN_i} from different road segments are not guaranteed to be located near each other at a road intersection, which may produce a relatively large time difference between the two sensors' measurements for one target. Hence, before applying k-means++, data preprocessing on Q_I^{ints} and Q_O^{ints} is necessary.
Based on the preprocessing definition in (8), we here propose a new data preprocessing technique that first selects a virtual reference at the center of the intersection ints and then recomputes t̂_{ij}^k by projecting each element in IT_ints and OT_ints onto the virtual reference as:

$$\hat{t}_{ij}^{k} = \begin{cases} t_{i1}^{k} - \dfrac{r + d_{i1}}{v_{i1}^{k}}, & i \in IT_{ints} \ \& \ j = 1, \\[6pt] t_{iN_i}^{k} + \dfrac{D_i - d_{iN_i} + r}{v_{iN_i}^{k}}, & i \in OT_{ints} \ \& \ j = N_i, \end{cases}$$

where k ∈ {1, 2, ..., m_{ij}} and r is the radius of the intersection circle centered at the virtual reference. An example of locating the virtual reference is shown in Figure 5, where the intersection consists of three road segments denoted as R_i, R_j, and R_k.
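A minimal sketch of this projection follows (the helper name `project_to_intersection` and its argument layout are ours). An incoming measurement taken at S_{i1}, a distance d_{i1} past the segment start, is shifted backward by the travel time over (r + d_{i1}); an outgoing measurement taken at S_{iN_i}, at position d_{iN_i} on a segment of length D_i, is shifted forward by the travel time over the remaining distance (D_i − d_{iN_i} + r):

```python
def project_to_intersection(t, v, r, d, D=None, incoming=True):
    """Project a sensor measurement (time t, speed v) onto the virtual
    reference at the intersection center.
    incoming=True : measurement from S_i1 at distance d = d_i1 from the
                    segment start; the target passed the center earlier.
    incoming=False: measurement from S_iN_i at position d = d_iN_i on a
                    segment of length D; the target reaches the center
                    later (D is required in this case)."""
    if incoming:
        return t - (r + d) / v
    return t + (D - d + r) / v
```

For example, a target measured at t = 10 with speed 2 by a sensor 3 units past the start of an ingoing segment, with intersection radius 1, is projected back to t̂ = 10 − 4/2 = 8.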

iii. Data Pairing at Intersections and Error Correction

Denote the preprocessed datasets for Q I i n t s and Q O i n t s as Q ^ I i n t s and Q ^ O i n t s . Then k-means++ can be applied to the preprocessed intersection datasets { Q ^ I i n t s , Q ^ O i n t s } for data pairing. Similar to the development of MLKM for the case of one road segment, errors may arise when conducting the data pairing/clustering. Error detection and correction are needed to further improve accuracy.
For an intersection ints, the cardinalities of the preprocessed Q̂_I^{ints} and Q̂_O^{ints} remain the same as those of Q_I^{ints} and Q_O^{ints}. As defined in (5), |Q̂_I^{ints}| = n_I and |Q̂_O^{ints}| = n_O, where n_I ≥ n_O. The set of centroids is denoted C_ints = {c_ints^1, c_ints^2, ..., c_ints^{n_I}}, and the measurements associated with each centroid c_ints^j, j ∈ {1, 2, ..., n_I}, are given as Y_ints^j. The error correction is similar to Layer 2 in the MLKM method described in Section 3.3.2 and defines three logic rules for error detection:
  • |Y_ints^j| > 2;
  • |Y_ints^j ∩ Q̂_I^{ints}| ≠ 1;
  • ∃ x_{iN_i} ∈ Y_ints^j and x_{l1} ∈ Y_ints^j such that t_{l1} < t_{iN_i},
where |Y_ints^j| is the cardinality of Y_ints^j. The first rule flags the case where more than two measurements are associated in Y_ints^j; this indicates an error because each target has at most two measurements at one intersection. The second rule flags the case where either none or more than one measurement in Y_ints^j comes from the incoming dataset Q̂_I^{ints}. The third rule flags the case where the outgoing measurement in Y_ints^j is recorded earlier than the incoming measurement. If one or more rules are triggered, the corresponding Y_ints^j is considered an erroneous data association and is stored in Y_ints. The error correction re-associates the data in Y_ints so that none of the three logic rules above is triggered. To achieve this goal, we separate Y_ints into two subsets, Y_ints^I and Y_ints^O, given by:
$$\mathcal{Y}_{ints}^{I} = \{ x_{iN_i} \mid x_{iN_i} \in \mathcal{Y}_{ints} \}, \qquad \mathcal{Y}_{ints}^{O} = \{ x_{l1} \mid x_{l1} \in \mathcal{Y}_{ints} \},$$

where Y_ints^I and Y_ints^O store all measurements x_{iN_i} and x_{l1} in Y_ints, respectively. Re-associating the data in Y_ints then becomes a linear assignment problem [29] between Y_ints^I and Y_ints^O. The optimal pairing between Y_ints^I and Y_ints^O is found when the matching score reaches its minimum, i.e., by solving the optimization problem argmin_M ||M Y_ints^I − Y_ints^O||, where Y_ints^I ∈ R^{m_I × 1} and Y_ints^O ∈ R^{m_O × 1} are column vectors converted from the subsets Y_ints^I and Y_ints^O, respectively, and M ∈ R^{m_O × m_I} is a binary matrix in which each row sums to 1. After the error correction is accomplished, all Y_ints^j are updated to complete Task 2. Furthermore, a permutation matrix G_ints ∈ R^{n_I × n_I} can be created to record the pairing relationship between the incoming dataset Q_I^{ints} and the outgoing dataset Q_O^{ints} for each intersection.
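Because the erroneous set Y_ints at one intersection is small, the assignment can even be brute-forced; the sketch below (function name `reassign_errors` is ours) enumerates the role of the binary matrix M directly, matching each outgoing time to a distinct incoming time so that the total score Σ|t_out − t_in| is minimized. For larger sets, the Hungarian algorithm [29] would be used instead:

```python
from itertools import permutations

def reassign_errors(Y_in, Y_out):
    """Brute-force the linear assignment between erroneous incoming
    times Y_in and outgoing times Y_out: each outgoing measurement is
    matched to a distinct incoming one minimizing the total matching
    score sum |t_out - t_in| (assumes len(Y_in) >= len(Y_out)).
    Returns sorted (incoming index, outgoing index) pairs."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(Y_in)), len(Y_out)):
        cost = sum(abs(Y_out[j] - Y_in[i]) for j, i in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return sorted((i, j) for j, i in enumerate(best))
```

For instance, incoming times [1.0, 5.0] and outgoing times [5.2, 1.1] pair as (1.0 ↔ 1.1) and (5.0 ↔ 5.2), undoing a swap that the clustering step might have produced.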

4.2.3. Group Merging in the Road Network

K-means++ clustering on the preprocessed dataset at each intersection solves the task of linking the trajectories of targets at road intersections (Task 2) while the proposed MLKM method solves the task of data associations for each road segment (Task 1). If the clustering results at all intersections are combined with the MLKM results on all road segments, trajectory awareness for each target in the road network is achieved. This is valid for situations when targets only pass the same road segment once. However, when targets pass the same road segment and intersection multiple times, one target can be assigned to multiple associated data groups on the road segment. To determine the connections among all associated data groups, an extra task (Task 3) for merging data groups in the road network is needed. Given that the datasets at intersections are extracted from the L matrices collected from all road segments, clusters at the intersections can be classified based on the data groups for all road segments. Therefore, the task of determining the connections among the associated data groups in the road network can be focused on connections of T ¯ i z defined in (2) for each road segment.
Let the symmetric matrix G_{R_i} ∈ R^{m_i × m_i} denote the connections among the m_i association groups in road segment R_i, given by G_{R_i} = [b_{pq}] ∈ R^{m_i × m_i}, where:

$$b_{pq} = b_{qp} = \begin{cases} 1, & \text{if } \bar{T}_i^p \text{ and } \bar{T}_i^q \text{ belong to the same target}, \\ 0, & \text{otherwise}. \end{cases}$$
To determine the entries in G_{R_i}, depth-first search (DFS) [30] is implemented to detect cycles in the adjacency matrix A. If no cycle exists, the non-diagonal entries are set to 0 and hence G_{R_i} is an identity matrix. Otherwise, further analysis of the connections among data groups at each road segment is carried out sequentially in the following three steps.
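The cycle test can be implemented with a standard three-color DFS (this sketch and the name `has_cycle` are ours; the paper only cites DFS [30]): a back edge to a vertex still on the recursion stack certifies a directed cycle, i.e., a target can revisit a road segment:

```python
def has_cycle(A):
    """Detect a directed cycle in the dual-graph adjacency matrix A
    using depth-first search with three-color marking
    (0 = unvisited, 1 = on the DFS stack, 2 = finished)."""
    n = len(A)
    color = [0] * n

    def dfs(u):
        color[u] = 1
        for v in range(n):
            if A[u][v]:
                if color[v] == 1:        # back edge -> cycle found
                    return True
                if color[v] == 0 and dfs(v):
                    return True
        color[u] = 2
        return False

    return any(color[u] == 0 and dfs(u) for u in range(n))
```

If `has_cycle(A)` is false, every G_{R_i} is simply an identity matrix and the three steps below can be skipped.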

i. Node Analysis on Dual Graph

The analysis starts by identifying road segments that have only outgoing flow, i.e., source nodes in the graph. The source nodes can be identified from the adjacency matrix A by checking the sum of each column. In particular, road segment R_i is a source node when the sum of the ith column of A satisfies Σ_{l=1}^{L} a_{li} = 0, where a_{li} is the (l, i)th entry of the adjacency matrix, representing the edge (R_l, R_i).
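This column-sum test is a one-liner; the following sketch (name `source_nodes` is ours) returns the indices of all source segments:

```python
import numpy as np

def source_nodes(A):
    """Return the road segments with only outgoing flow: segment R_i
    is a source node of the dual graph when the i-th column of A sums
    to zero, i.e., no edge (R_l, R_i) enters it."""
    A = np.asarray(A)
    return [i for i in range(A.shape[1]) if A[:, i].sum() == 0]
```

In the Figure 1 prototype, where R_1 is the only entrance of the network, `source_nodes` would return the index of R_1.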

ii. Trajectory Flow for Data Groups from Source Nodes

If road segment R_i is a source node, the m_i data groups in R_i resulting from the MLKM method are considered to be m_i unique targets, whose trajectories are then traced through the road network. In particular, if T̄_i^z ∩ X_{iN_i} = ∅, the target associated with data group T̄_i^z does not contain any measurement from sensor S_{iN_i}, which corresponds to the case where the target stops in the road segment or the data collection terminates before the target can reach sensor S_{iN_i}. The trajectory tracking for this target is then complete. Otherwise, the permutation matrix G_ints of the intersection that involves sensor S_{iN_i} is used to pinpoint the trajectory of the same target through the intersection, along with its data group T̄_l^z in the subsequent node or sink node R_l toward which it is heading. The trajectory tracking of the same target on the new road segment continues until the target stops or leaves the road network. The same process is used for tracing the flow of the other targets.

iii. Matrix Description of Intermediate Nodes

After the trajectories of all targets from the source road segments have been confirmed, the data points for each target on different road segments can be merged. More precisely, an entry (p, q) in G_{R_l} assigned the value 1 means that data groups T̄_l^p and T̄_l^q belong to one target. Consequently, the corresponding matrix G_{R_l} can be determined.

5. Simulation

In this section, the performance of the proposed G-MLKM algorithm is evaluated. We first introduce the testing dataset generation process. The performance of the MLKM method on one road segment is then evaluated and compared with k-means++ and DNN, followed by an evaluation of the complete G-MLKM algorithm. A detailed example of the G-MLKM output shows how the matrices G_ints and G_{R_i} are created for data pairing at intersections and group merging.

5.1. Testing Data Generation

In order to obtain a quantitative performance evaluation of the data association techniques, labeled data is needed to obtain the percentage of true association between targets and their measurements. One convenient way to have accurate labeled dataset for data-target association is to generate it artificially. Let the generated testing dataset from the road network be M t = { T 1 , T 2 , , T L } , where T i R m i × m i has the same data structure as T i defined in (1). In particular, each element in T i is a data group that belongs to one target. Moreover, for any T i collected from road segments that have both incoming and outgoing flows, multiple rows may belong to the same target.
We utilize the road network structure shown in Figure 1 as a prototype for testing data generation. N_S sensors are assumed to be equally distributed on each road segment, where the length of the road segment is N_S × d. The position set for the sensors is selected as P_i = {d, 2d, ..., N_S d} with respect to the starting point of road segment R_i. The intersections are considered to have the same radius of d/2. Hence, the distance between any two adjacent sensors is d. To further simplify the data generation process, we assume road segment R_1 is the only entrance of the road network during the data collection period, with N_A incoming targets, and that targets choose among the valid heading directions at each intersection with equal probability. Each target is assumed to move with a constant velocity perturbed at each sensor by Gaussian noise, such that v_{ij} = v_0 + N(μ, σ), where v_{ij} is the velocity measurement at sensor S_{ij} and v_0 is the velocity measurement at the previous sensor. The corresponding time measurement is calculated as t_{ij} = t_0 + d / v_{ij}, where t_0 is the time measurement at the previous sensor. The initial velocity and time for the N_A targets are uniformly selected from the ranges (v_min, v_max) and (t_min, t_max), respectively (refer to Table 2). The testing dataset generation process stops when all targets have moved out of the road network.
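A minimal sketch of this generator for a single road segment follows (the function name `generate_segment_data` and the return layout are ours; the paper describes the process only in prose). It applies the velocity random walk and the travel-time update t_{ij} = t_0 + d/v_{ij} described above:

```python
import numpy as np

def generate_segment_data(n_targets, n_sensors, d, v_range, t_range,
                          mu=0.0, sigma=1.0, seed=0):
    """Generate labeled (time, velocity) measurements for one road
    segment with n_sensors sensors spaced d apart. Each target keeps a
    roughly constant speed perturbed by Gaussian noise N(mu, sigma) at
    every sensor; the time advances by the travel time d / v between
    adjacent sensors. Returns an array of shape
    (n_targets, n_sensors, 2) holding (t, v) per sensor, with row k
    labeling target k."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(*t_range, size=n_targets)   # entry times
    v = rng.uniform(*v_range, size=n_targets)   # entry speeds
    data = np.empty((n_targets, n_sensors, 2))
    for j in range(n_sensors):
        # perturb speed, keeping it positive, then advance time by d/v
        v = np.clip(v + rng.normal(mu, sigma, size=n_targets), 1e-3, None)
        t = t + d / v
        data[:, j, 0], data[:, j, 1] = t, v
    return data
```

Because each row of the output corresponds to one target, the ground-truth labeling needed for the accuracy metric below is available by construction.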
With the generated testing datasets, we evaluate the performance of the data-target association techniques by calculating the data association accuracy, defined as the ratio between the number of correctly classified data (M_cr) and the total number of data (M_t), i.e., numel(M_cr)/numel(M_t) × 100%, where numel(M) returns the number of elements in M. As multiple testing datasets are generated, the reported statistics include the minimum (left, blue bar), average (middle, orange bar), and maximum (right, yellow bar) accuracies.

5.2. MLKM Performance and Comparisons

Before evaluating the overall accuracy of the proposed G-MLKM algorithm, the MLKM method is evaluated on the dataset collected in road segment R_1 and compared with two other common data clustering machine learning techniques, namely k-means++ and DNN.

5.2.1. K-means++

The first set of simulations evaluates the performance of k-means++ based on two criteria: (i) unprocessed vs. preprocessed data, and (ii) different values of N_A and N_S. When the values of N_A and N_S increase, more data points are introduced into the dataset, leading to more overlap among these data points. Figure 6 and Figure 7 show the performance of k-means++ using the parameters listed in Table 2.
As can be observed, a higher accuracy is achieved using the preprocessed data than using the unprocessed data; compare the average, maximum, and minimum accuracies of the two cases in Figure 6. Using the raw data, the measurements associated with a specific target are sparse along the time axis, while the velocity measurements from the same sensor are closely grouped along the velocity axis. These conditions contribute to incorrect clustering of the data. The preprocessing technique reduces the distance between measurements related to the same target, thereby reducing the effect of the velocity measurements on the clustering.
A lower accuracy is obtained for large values of N_A and N_S, as can be observed by comparing the average, maximum, and minimum accuracies for different N_A and N_S in Figure 6 and Figure 7. As with the unprocessed data, a large number of sensors/targets increases the density of measurement points. This concentration of measurements increases the probability that k-means/k-means++ clusters the data incorrectly (even with preprocessing).

5.2.2. DNN

K-means++ fails to correctly cluster data when measurements overlap. A deep neural network (DNN) is used as an alternative approach because it has been shown to uncover patterns well in large-dataset classification. One necessary condition for a DNN is the availability of labeled datasets for training; to meet this requirement, it is assumed that labeled data is available.
The results for DNN are obtained using N A = 50 targets and N S = 50 sensors. Assuming that a portion of the data association has already been identified, the objective is to train a neural network to label the unidentified measurements. The number of ‘training’ sensors that provide labeled information and ‘testing’ sensors that provide unlabeled information are provided in Table 3. The accuracy is obtained for various proportions of ‘training’ sensors to ‘testing’ sensors. Table 3 also shows the accuracy obtained for different dataset configuration.
It can be observed that the training accuracy is high while the testing accuracy is low when the training dataset is relatively small. However, when the training dataset is relatively large, the testing performance increases significantly (up to 91%). A high training accuracy paired with a low testing accuracy means that the DNN suffers from overfitting due to the small size of the training dataset. Given this comparison, DNN is applicable when a large portion of labeled training data is available for classifying a relatively small amount of measurements.

5.2.3. MLKM

K-means++ does not provide good accuracy for a large number of measurements but performs well when clustering small amounts of data. DNN can cluster large datasets but requires a large training dataset. MLKM combines the multi-layer back-propagation error correction inspired by DNN with the clustering capabilities of k-means++. The DNN-inspired error correction significantly improves the performance of MLKM by preventing clustering errors in layer 1 from propagating to the cluster association in layer 3.
The results for the MLKM method are obtained using N_A = 50 targets and N_S = 20 sensors. In addition, the time and velocity parameters are set to (t_min, t_max) = U(10, 30) and (v_min, v_max) = N(50, 40), respectively. Figure 8 shows the performance of the MLKM method with and without error correction, as well as the results of the standard k-means++ method with preprocessing.
It can be observed that a higher accuracy is achieved using MLKM than using k-means++. Figure 8 shows the average, maximum, and minimum accuracies for both methods. The error correction performed in layer 2 improves the average accuracy of MLKM by approximately 7% (MLKM w/ EC 91.65%; MLKM w/o EC 84.3%).

5.3. G-MLKM Overall Performance

The results for the G-MLKM method are obtained using N_A = 20 targets and N_S = 10 sensors. In addition, the time and velocity parameters are set to (t_min, t_max) = U(0, 40) and (v_min, v_max) = U(10, 50), respectively. Figure 9 shows the performance of the G-MLKM algorithm with and without error correction.
It can be observed that a higher accuracy is achieved using G-MLKM with error correction than without it. Figure 9 shows the average, maximum, and minimum accuracies for both cases. The second error correction performed in the algorithm improves the average accuracy of G-MLKM by approximately 11% (G-MLKM w/ EC 92.2%; G-MLKM w/o EC 81%).

5.4. Matrix Output of the G-MLKM Algorithm

The proposed G-MLKM algorithm produces multiple permutation matrices G_ints (their number determined by the structure of the road network) and L symmetric matrices G_{R_i} to represent the data cluster classification results at intersections and on road segments, respectively. A detailed example illustrates the use of this G-MLKM matrix output.
Suppose 5 targets (denoted N_1, N_2, N_3, N_4, and N_5) travel through the road network shown in Figure 1 during a certain period. The trajectory ground truth is listed in Table 4. In particular, road segment R_1 has three data groups denoted {1, 2, 3}, R_2 has six data groups denoted {1, 2, 3, 4, 5, 6}, R_3 has five data groups denoted {1, 2, 3, 4, 5}, R_4 has three data groups denoted {1, 2, 3}, R_5 has one data group denoted {1}, and R_6 has two data groups denoted {1, 2}.
Take target N_1 as an example: it travels through road segments R_1 and R_2, then heads to road segment R_5. After that, it continues through road segments R_4 and R_2, and finally leaves the road network through road segment R_3. The connections among the associated data groups related to target N_1 are represented as {1_1, 1_2, 6_2, 1_5, 3_4, 5_3}, where the subscript indicates the road segment; that is, data group 1 in road segment R_1, data groups 1 and 6 in road segment R_2, data group 1 in road segment R_5, data group 3 in road segment R_4, and data group 5 in road segment R_3 all belong to the measurements extracted from target N_1.
As road segment R_2 has two data groups belonging to one target, the ideal matrix G_{R_2} ∈ R^{6×6} should be:

$$G_{R_2} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 1 \end{bmatrix},$$
with respect to its data groups {1, 2, 3, 4, 5, 6}. For each other road segment, the corresponding matrix G_{R_i} is an identity matrix sized by its own data groups: G_{R_1} = I_{3×3}, G_{R_3} = I_{5×5}, G_{R_4} = I_{3×3}, G_{R_5} = I_{1×1}, and G_{R_6} = I_{2×2}.
Let the intersection formed by road segments R_6, R_5, and R_4 be denoted as a. The incoming dataset Q_I^a ∈ R^{3×1} can be stored in the sequence {1_5, 1_6, 2_6} and the outgoing dataset Q_O^a ∈ R^{3×1} in the sequence {1_4, 2_4, 3_4}. Therefore, the permutation matrix G_a may be determined as:

$$G_a = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}.$$

Similarly, for the intersection formed by road segments R_1, R_2, and R_4 (named intersection b), the matrix G_b may be determined as:

$$G_b = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix},$$

with Q_I^b ∈ R^{6×1} stored in the sequence {1_1, 2_1, 3_1, 1_4, 2_4, 3_4} and the outgoing dataset Q_O^b ∈ R^{6×1} in the sequence {1_2, 2_2, 3_2, 4_2, 5_2, 6_2}. For the intersection formed by road segments R_2, R_3, and R_5 (named intersection c), G_c may be determined as:

$$G_c = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix},$$

with Q_I^c ∈ R^{6×1} stored in the sequence {1_2, 2_2, 3_2, 4_2, 5_2, 6_2} and the outgoing dataset Q_O^c ∈ R^{6×1} in the sequence {1_3, 2_3, 3_3, 4_3, 5_3, 1_5}.
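Reading the group pairing off a permutation matrix is straightforward. The sketch below (the helper name `pair_groups` and the convention that rows index the incoming sequence and columns the outgoing sequence are our assumptions; the paper does not fix this orientation) decodes any G_ints into a dictionary mapping incoming data groups to outgoing data groups:

```python
def pair_groups(G, incoming, outgoing):
    """Decode a permutation matrix G into a group-to-group mapping.
    Assumed convention: G[i][j] = 1 links the i-th element of the
    incoming sequence to the j-th element of the outgoing sequence."""
    pairs = {}
    for i, row in enumerate(G):
        for j, entry in enumerate(row):
            if entry:
                pairs[incoming[i]] = outgoing[j]
    return pairs
```

With this convention, applying it to G_a with incoming sequence (1_5, 1_6, 2_6) and outgoing sequence (1_4, 2_4, 3_4) maps 1_5 to 3_4, consistent with target N_1's ground-truth connection {..., 1_5, 3_4, ...} above.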
With these matrices determined, the output result from G-MLKM can be clearly presented.

6. Conclusions and Future Work

This paper studied data pattern recognition for multiple targets in a constrained space, where the data were the minimal information provided by spatially distributed sensors. In contrast to existing methods that rely on probabilistic hypothesis estimation, we proposed a machine learning approach for the data correlation analysis. Two common data clustering algorithms, namely k-means++ and the deep neural network, were first analyzed for data association in a simplified constrained space. The MLKM method was then proposed by leveraging the structural advantage of DNN and the unsupervised clustering capability of k-means++. After that, graph theory was introduced to extend the scope of MLKM to a general constrained space. In particular, we proposed a p-dual graph for data association at intersections and merged the results from local spaces and intersections through the dual graph of the constrained space. Simulation studies were provided to demonstrate the performance of the MLKM method and the proposed G-MLKM. Our future work will focus on relaxing the assumptions made in this paper so that G-MLKM can handle scenarios with false alarms.
Some interesting future work includes experimental verification of the proposed new approach in real-world environments and the consideration of constraints such as packet dropout, communication limitations, and other quality of service (QoS) parameters in sensor networks.

Author Contributions

Methodology, F.T., R.S., J.V., and Y.C.; Writing, F.T., R.S., and J.V.; Editing, Y.C. and F.T.; Data Analysis and Visualization, F.T. and R.S.; Supervision, Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by Office of Naval Research under grants N00014-17-1-2613 and N00014-19-1-2278.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Vo, B.N.; Mallick, M.; Bar-shalom, Y.; Coraluppi, S.; Osborne, R.; Mahler, R.; Vo, B.T. Multitarget tracking. In Wiley Encyclopedia of Electrical and Electronics Engineering; Wiley: Hoboken, NJ, USA, 2015. [Google Scholar]
  2. Benfold, B.; Reid, I. Stable multi-target tracking in real-time surveillance video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 3457–3464. [Google Scholar]
  3. Haritaoglu, I.; Harwood, D.; Davis, L.S. W4: Real-time surveillance of people and their activities. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 809–830. [Google Scholar] [CrossRef] [Green Version]
  4. Endsley, M.R. Toward a theory of situation awareness in dynamic systems. Hum. Factors 1995, 37, 32–64. [Google Scholar] [CrossRef]
  5. Pasula, H.; Russell, S.; Ostland, M.; Ritov, Y. Tracking many objects with many sensors. In Proceedings of the International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 31 July–6 August 1999; Volume 99, pp. 1160–1171. [Google Scholar]
  6. Fortmann, T.; Bar-Shalom, Y.; Scheffe, M. Sonar tracking of multiple targets using joint probabilistic data association. IEEE J. Ocean. Eng. 1983, 8, 173–184. [Google Scholar] [CrossRef] [Green Version]
  7. Singh, J.; Madhow, U.; Kumar, R.; Suri, S.; Cagley, R. Tracking multiple targets using binary proximity sensors. In Proceedings of the International Conference on Information Processing in Sensor Networks, Cambridge, MA, USA, 25–27 April 2007; pp. 529–538. [Google Scholar]
  8. Grisetti, G. Robotics 2 Data Association. 2010. Available online: http://ais.informatik.uni-freiburg.de/teaching/ws09/robotics2/pdfs/rob2-11-dataassociation.pdf (accessed on 1 December 2019).
  9. Konstantinova, P.; Udvarev, A.; Semerdjiev, T. A study of a target tracking algorithm using global nearest neighbor approach. In Proceedings of the International Conference on Computer Systems and Technologies, Portland, OR, USA, 2–7 April 2003; pp. 290–295. [Google Scholar]
  10. Bar-Shalom, Y.; Willett, P.K.; Tian, X. Tracking and Data Fusion; YBS Publishing: Storrs, CT, USA, 2011. [Google Scholar]
  11. Ma, H.; Ng, B.W.H. Distributive JPDAF for multi-target tracking in wireless sensor networks. In Proceedings of the IEEE Region 10 Conference, Hong Kong, China, 14–17 November 2006; pp. 1–4. [Google Scholar]
  12. Kim, H.; Chun, J. JPDAS Multi-Target Tracking Algorithm for Cluster Bombs Tracking. In Proceedings of the 2016 Progress in Electromagnetic Research Symposium, Shanghai, China, 8–11 August 2016; pp. 2552–2557. [Google Scholar]
  13. Yuhuan, W.; Jinkuan, W.; Bin, W. A modified multi-target tracking algorithm based on joint probability data association and Gaussian particle filter. In Proceedings of the World Congress on Intelligent Control and Automation, Shenyang, China, 29 June–4 July 2014; pp. 2500–2504. [Google Scholar]
  14. Blackman, S.S. Multiple hypothesis tracking for multiple target tracking. IEEE Aerosp. Electron. Syst. Mag. 2004, 19, 309–332. [Google Scholar] [CrossRef]
  15. Cox, I.J. A review of statistical data association techniques for motion correspondence. Int. J. Comput. Vis. 1993, 10, 53–66. [Google Scholar] [CrossRef]
  16. Reid, D. An algorithm for tracking multiple targets. IEEE Trans. Autom. Control 1979, 24, 843–854. [Google Scholar] [CrossRef]
  17. Oh, S.; Russell, S.; Sastry, S. Markov chain Monte Carlo data association for general multiple-target tracking problems. In Proceedings of the 2004 43rd IEEE Conference on Decision and Control (CDC) (IEEE Cat. No. 04CH37601), Nassau, Bahamas, 14–17 December 2004; Volume 1, pp. 735–742. [Google Scholar]
  18. Granström, K.; Baum, M.; Reuter, S. Extended Object Tracking: Introduction, Overview, and Applications. J. Adv. Inf. Fusion 2017, 12, 139–174. [Google Scholar]
  19. Kanungo, T.; Mount, D.M.; Netanyahu, N.S.; Piatko, C.D.; Silverman, R.; Wu, A.Y. An efficient k-means clustering algorithm: Analysis and implementation. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 881–892. [Google Scholar] [CrossRef]
  20. Postorino, M.N.; Versaci, M. A geometric fuzzy-based approach for airport clustering. Adv. Fuzzy Syst. 2014, 2014, 201243. [Google Scholar] [CrossRef] [Green Version]
  21. Arthur, D.; Vassilvitskii, S. K-means++: The Advantages of Careful Seeding. In Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, New Orleans, LA, USA, 7–9 January 2007; pp. 1027–1035. [Google Scholar]
  22. Hinton, G.E. How Neural Networks Learn From Experience. Cogn. Model. 2002, 267, 181–195. [Google Scholar] [CrossRef] [PubMed]
  23. Einasto, M.; Liivamägi, L.; Saar, E.; Einasto, J.; Tempel, E.; Tago, E.; Martínez, V. SDSS DR7 superclusters-Principal component analysis. Astron. Astrophys. 2011, 535, A36. [Google Scholar] [CrossRef] [Green Version]
  24. Arya, S.; Mount, D.M.; Netanyahu, N.; Silverman, R.; Wu, A.Y. An optimal algorithm for approximate nearest neighbor searching in fixed dimensions. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, Arlington, TX, USA, 23–25 January 1994; pp. 573–582. [Google Scholar]
  25. Cao, Y.; Yu, W.; Ren, W.; Chen, G. An overview of recent progress in the study of distributed multi-agent coordination. IEEE Trans. Ind. Informatics 2013, 9, 427–438. [Google Scholar] [CrossRef] [Green Version]
  26. Zhao, P.; Jia, T.; Qin, K.; Shan, J.; Jiao, C. Statistical analysis on the evolution of OpenStreetMap road networks in Beijing. Phys. A Stat. Mech. Its Appl. 2015, 420, 59–72. [Google Scholar] [CrossRef]
  27. Porta, S.; Crucitti, P.; Latora, V. The network analysis of urban streets: A primal approach. Environ. Plan. B Plan. Des. 2006, 33, 705–725. [Google Scholar] [CrossRef] [Green Version]
  28. Porta, S.; Crucitti, P.; Latora, V. The network analysis of urban streets: A dual approach. Phys. A Stat. Mech. Its Appl. 2006, 369, 853–866. [Google Scholar] [CrossRef] [Green Version]
  29. Kuhn, H.W. The Hungarian method for the assignment problem. Nav. Res. Logist. (NRL) 1955, 2, 83–97. [Google Scholar] [CrossRef] [Green Version]
  30. Tarjan, R. Depth-first search and linear graph algorithms. SIAM J. Comput. 1972, 1, 146–160. [Google Scholar] [CrossRef]
Figure 1. An example road network. R i represents the road segment. S i j represents the jth sensor on the ith road segment.
Figure 2. Example of X_1 for a single road segment.
Figure 3. Example of preprocessed X_1 for a single road segment.
Figure 4. (a) Dual graph representation for the road network in Figure 1, with the nodes and arrows representing, respectively, the agents and directed edges. (b) P-dual graph representation for the road network in Figure 1, where two sensor nodes represent one road segment and the edge between the two sensor nodes is ignored. In this example, there exist three subgraphs, denoted a, b, and c.
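The subgraphs a, b, and c of the p-dual graph in Figure 4 can be recovered with an ordinary connected-components search over the sensor nodes, for example via depth-first search. The sketch below is illustrative only: the edge list is hypothetical, since the real edges come from the road network topology in Figure 1.

```python
from collections import defaultdict

def components(edges):
    """Connected components of an undirected graph via iterative DFS."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            comp.add(node)
            stack.extend(adj[node])
        comps.append(comp)
    return comps

# Hypothetical sensor-node edges; the actual ones are derived from Figure 1.
edges = [("S11", "S21"), ("S21", "S31"), ("S41", "S51"), ("S61", "S62")]
groups = components(edges)  # three subgraphs for this edge list
```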
Figure 5. An intersection consisting of three road segments, denoted R_i, R_j, and R_k. The virtual reference for data preprocessing is at the center of the intersection, at a radius r from each road segment's ending point.
Figure 6. K-means++ accuracy for Simulation 1 parameters on unprocessed (UP) and preprocessed (P) data.
Figure 6. K-means++ accuracy for Simulation 1 parameters on unprocessed (UP) and preprocessed (P) data.
Sensors 21 02069 g006
Figure 7. K-means++ accuracy for Simulation 2 parameters on unprocessed (UP) and preprocessed (P) data.
Figure 8. K-means++ for preprocessed data (P:K-means++), multi-layer k-means++ (MLKM) without error correction (MLKM w/o EC) and MLKM with error correction (MLKM w/ EC).
Figure 9. Accuracy obtained for graph based multi-layer k-means++ (G-MLKM) w/ EC and w/o EC.
Table 1. Deep neural network (DNN) configuration parameters.
Framework | Definition
Cost Function | Softmax
Activation Function | ReLU
Optimizer | Adam Optimizer
Number of Hidden Layers | 2
Number of Neurons | 8
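The configuration in Table 1 corresponds to a small feed-forward network: two hidden layers of 8 neurons with ReLU activations and a softmax output, trained with the Adam optimizer. The forward pass can be sketched as below; this is an illustrative reconstruction, not the authors' code, and the input and output dimensions (2 and 2) are assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Two hidden layers of 8 neurons each (Table 1); input/output sizes are assumed.
sizes = [2, 8, 8, 2]
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    h = x
    for W, b in list(zip(weights, biases))[:-1]:
        h = relu(h @ W + b)  # ReLU on the hidden layers
    # Softmax output layer; in training, its cross-entropy loss would be
    # minimized with the Adam optimizer per Table 1.
    return softmax(h @ weights[-1] + biases[-1])

probs = forward(np.array([[0.5, -0.3]]))  # class probabilities, shape (1, 2)
```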
Table 2. Simulation parameters.
Parameter | Simulation 1 | Simulation 2
N_A | 10 | 50
N_S | 10 | 20
(v_min, v_max) | U(10, 50) | U(10, 50)
(t_min, t_max) | U(0, 40) | U(0, 40)
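Synthetic data matching Table 2 can be generated along the following lines. The model here is an assumption for illustration: N_A constant-speed targets with speeds drawn from U(v_min, v_max) and entry times from U(t_min, t_max), passing N_S evenly spaced sensors on a single road segment; the road length of 1000 is likewise a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_targets, n_sensors, v_range=(10, 50), t_range=(0, 40), road_len=1000.0):
    """Return an (n_targets, n_sensors) array of sensor passing times.

    Assumed model: constant-speed targets entering the segment at uniform
    random times; sensors evenly spaced along the road.
    """
    speeds = rng.uniform(*v_range, size=n_targets)
    entries = rng.uniform(*t_range, size=n_targets)
    sensor_pos = np.linspace(0.0, road_len, n_sensors)
    # times[i, j]: time at which target i reaches sensor j
    return entries[:, None] + sensor_pos[None, :] / speeds[:, None]

times = simulate(10, 10)  # Simulation 1 parameters (N_A = N_S = 10)
```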
Table 3. DNN with different training and testing datasets.
Train Sensors | Test Sensors | Train Accuracy | Test Accuracy
20 | 30 | 98% | 68%
25 | 25 | 97.8% | 68%
30 | 20 | 99% | 72%
40 | 10 | 98.6% | 84.4%
45 | 5 | 98.9% | 91.6%
Table 4. Ground truth for five target trajectories. The notation A_i denotes the associated data group A in road segment R_i.
Target | Trajectory | Representation
N_1 | R_1 → R_2 → R_5 → R_4 → R_2 → R_3 | {1_1, 1_2, 1_5, 3_4, 6_2, 5_3}
N_2 | R_1 → R_2 → R_3 | {2_1, 2_2, 1_3}
N_3 | R_1 → R_2 → R_3 | {3_1, 3_2, 2_3}
N_4 | R_6 → R_4 → R_2 → R_3 | {1_6, 1_4, 4_2, 3_3}
N_5 | R_6 → R_4 → R_2 → R_3 | {2_6, 2_4, 5_2, 4_3}
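The Table 4 ground truth can be encoded as lists of (data group, road segment) pairs, and its internal consistency checked mechanically: each trajectory must carry exactly one data group per visited road segment, in route order. This is a minimal sketch of that encoding, not part of the authors' pipeline.

```python
# Ground truth from Table 4: each entry is (data group, road segment).
trajectories = {
    "N1": [(1, 1), (1, 2), (1, 5), (3, 4), (6, 2), (5, 3)],
    "N2": [(2, 1), (2, 2), (1, 3)],
    "N3": [(3, 1), (3, 2), (2, 3)],
    "N4": [(1, 6), (1, 4), (4, 2), (3, 3)],
    "N5": [(2, 6), (2, 4), (5, 2), (4, 3)],
}

# Routes (road-segment sequences) from the Trajectory column of Table 4.
routes = {
    "N1": [1, 2, 5, 4, 2, 3],
    "N2": [1, 2, 3],
    "N3": [1, 2, 3],
    "N4": [6, 4, 2, 3],
    "N5": [6, 4, 2, 3],
}

# One data group per visited road segment, in the same order as the route.
consistent = all(
    [road for _, road in trajectories[t]] == routes[t] for t in trajectories
)
```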
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Tao, F.; Suresh, R.; Votion, J.; Cao, Y. Graph Based Multi-Layer K-Means++ (G-MLKM) for Sensory Pattern Analysis in Constrained Spaces. Sensors 2021, 21, 2069. https://doi.org/10.3390/s21062069