Article

Deformable Object Matching Algorithm Using Fast Agglomerative Binary Search Tree Clustering

Department of Electronic Engineering, Inha University, Incheon 22212, Korea
*
Authors to whom correspondence should be addressed.
Symmetry 2017, 9(2), 25; https://doi.org/10.3390/sym9020025
Submission received: 7 November 2016 / Revised: 5 January 2017 / Accepted: 4 February 2017 / Published: 10 February 2017
(This article belongs to the Special Issue Symmetry in Complex Networks II)

Abstract

Deformable objects have changeable shapes and require a matching approach different from that used for rigid objects. This paper proposes a fast and robust deformable object matching algorithm. First, in the extraction method, robust feature points are selected from the extracted feature points using a statistical characteristic. Next, in the matching method, matching pairs are composed by matching the feature points of the two images. Rapid clustering is then performed with the BST (Binary Search Tree) method using the geometric similarity between the matching pairs. Finally, the matching of the two images is determined after verifying the suitability of the composed clusters. An experiment with five image sets containing deformable objects confirmed the superior robustness and independence of the proposed algorithm while demonstrating up to 60 times faster matching speed compared to conventional deformable object matching algorithms.

1. Introduction

Humans can recognize and identify objects through vision. Human vision is fast and robust, and it is the most powerful perceptual function for acquiring information. Vision is an ability humans have from birth, and human performance is far better than that of a computer. Computers can outperform humans in tasks that are difficult for human eyes, such as precision measurement, but in recognizing and identifying objects their ability is still worse than that of humans. Therefore, research to provide computers with human-level visual ability, known as computer vision, is currently active. Computer vision studies address the recognition of faces, objects, and gestures from videos or images.
In image recognition, computer vision is divided into the extraction method, which belongs to low-level vision, and the matching method, which belongs to high-level vision. The typical algorithms of the extraction method include D. Lowe’s SIFT (Scale-Invariant Feature Transform) [1], which is robust to size and angle change, H. Bay’s SURF (Speeded Up Robust Features) [2], which is faster than SIFT, J. Matas’s region-based MSER (Maximally Stable Extremal Regions) [3], and K. Mikolajczyk’s Harris affine detector [4], which is robust to affine changes. The matching method is divided into a step for composing matching pairs between all the feature points of two images, and a step for performing geometric verification between the matching pairs. In particular, the geometric verification step is the final step in image recognition, and it is very important because, even if many matching pairs are composed, two images may be determined to be mutually different images if geometric verification fails. A typical algorithm for geometric verification is RANSAC [5].
In recent years, image recognition using deep learning has become popular [6]. Deep learning differs from conventional computer vision algorithms (divided into low- and high-level vision): it enables a computer to learn by itself using neural networks, without separate feature extraction and matching methods, and it is leading to unparalleled levels of accuracy in image recognition. However, deep learning has not yet been used widely for matching arbitrary objects because it requires a large amount of data. With only a small amount of data in a database, it is still difficult to achieve reasonably good recognition performance using deep learning. In addition, to detect unique objects, neural networks have to become much deeper, and deeper networks require high computational power. Thus, we still need computer vision technology that uses low-level and high-level vision for image recognition.
A representative technology that uses image recognition is content-based image retrieval, which was established as the MPEG-7 standard. Recently, MPEG-7 established the CDVS (Compact Descriptors for Visual Search) standard [7], which supports fast content-based image retrieval on mobile devices. Content-based image retrieval is a technology that retrieves an image by extracting robust features even if various deformations in brightness, rotation, affine transformation, and size occur in the image. On the other hand, most matching algorithms perform retrieval by targeting images with rigid objects [8,9,10]. Object types also include deformable objects; typical examples are clothes, packs, and bags. For rigid objects, the object shape does not change, but for deformable objects, the object shape can change in various ways. Because of this difference, conventional rigid object matching algorithms that are robust for images with rigid objects are not suitable for matching images with deformable objects. Therefore, developing a matching algorithm that is robust for images containing deformable objects has become an important issue.
The three aspects of an excellent matching algorithm are robustness, independence, and fast matching [11]. Robustness means that two images containing the same object must be determined to be identical even if the object is deformed. Independence means that two images containing mutually different objects must be determined to be different. Finally, fast matching means that the matching is performed rapidly; without it, an algorithm may not be appropriate for applications that require fast image retrieval. The most significant weakness of conventional deformable object matching algorithms is slow matching.
In this paper, these three aspects are considered to propose an optimal algorithm for matching two images with deformable objects. The remainder of this paper is organized as follows. Section 2 introduces related work on image matching. In Section 3, the proposed algorithm is described, divided into extraction and matching methods. In Section 4, the experiment is described, and its results on five image sets with various deformable objects are presented and analyzed. Section 5 evaluates the proposed algorithm and concludes the paper.

2. Related Works

This section introduces well-known feature descriptors developed recently. In the past few years, a number of feature descriptors using binary features have been developed. These feature descriptors, which offer fast feature extraction and low computational complexity, are suitable for real-time image matching. This section also introduces the conventional deformable object matching algorithms, which use matching methods different from those of rigid object matching algorithms.

2.1. Recent Feature Descriptors

In recent years, binary feature descriptors such as BRIEF (Binary Robust Independent Elementary Features) [12], BRISK (Binary Robust Invariant Scalable Keypoints) [13], FREAK (Fast Retina Keypoint) [14], SYBA (Synthetic Basis) [15], and TreeBASIS [16] have been reported. BRIEF uses a binary string that results from intensity comparisons at randomly pre-determined pixel locations, and descriptor similarity is evaluated using the Hamming distance. It trades robustness and independence for fast processing speed and is sensitive to image distortions and transformations. BRISK is a 512-bit binary descriptor using a FAST-based detector. It relies on easily configurable circular sampling patterns from which it computes a binary descriptor, and it uses the distance ratio of the two nearest neighbors to improve the accuracy of detecting corresponding keypoint pairs. BRISK requires more computation and more storage space than BRIEF. FREAK improves upon the sampling pattern and the method of pair selection that BRISK uses; its features are much more concentrated near the keypoint.
SYBA uses a number of synthetic basis images to measure the similarity between a small image region surrounding a detected feature point and the randomly generated synthetic basis images. The TreeBASIS descriptor uses a binary vocabulary tree that is computed using basis dictionary images and a test set of feature region images. It provides improvements in descriptor size, computation time, matching speed, and accuracy.

2.2. The Conventional Deformable Object Matching Algorithms

The feature-based deformable object matching algorithms include transformation model-based [17], mesh-based [18], cluster-based [19], and graph-based [20] algorithms. The transformation model-based and mesh-based algorithms require high complexity and are not suitable for various deformations of objects. The graph-based algorithms have fast processing speed but relatively poor performance. A representative conventional deformable object matching algorithm is the ACC (Agglomerative Correspondence Clustering) algorithm [21], which uses a clustering method: it calculates the dissimilarity between clusters using an adaptive partial linkage model in the framework of hierarchical agglomerative clustering. The IACC (Improved ACC) algorithm [22] adds a feature selection method for selecting robust features. These two algorithms show good performance for deformable objects but high complexity in the clustering process. The matching speed becomes slower with higher complexity, and an algorithm with slow matching speed cannot be called a good matching algorithm.

3. Proposed Algorithm

This section discusses the proposed algorithm. This section is divided into two subsections: the first discusses the extraction method, and the second discusses the matching method. Figure 1 shows the flow chart of the proposed algorithm, consisting of the extraction part (feature extraction and feature selection) and the matching part (the rest).

3.1. Extraction Method

3.1.1. Feature Extraction

There exist methods for extracting global features and local features from images. A global feature is unsuitable for an image with deformable objects because such a feature is extracted from the entire image, and the various deformations of a deformable object cannot be defined with a single feature. On the other hand, a local feature is suitable for an image with deformable objects because features are defined for each local region. Furthermore, local features are suitable for clustering because additional information on position, scale, and orientation is stored. In this study, SIFT [1], a typical algorithm for local features, was used. The feature F(·) stored through SIFT is expressed as Equation (1).
$F(i) = \{\, p_i, s_i, o_i, f_i \,\}, \quad (1 \le i \le N)$  (1)
where N is the number of extracted feature points, and every feature point has four components. Here, pi is the feature point’s position, si is the scale, oi is the orientation, and fi is a feature vector with 128 dimensions.
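As an illustration of this feature structure, the short sketch below extracts SIFT keypoints with OpenCV and stores the four components of Equation (1) for each point, together with the distance to the image center used later in feature selection. It is a minimal sketch under stated assumptions (opencv-python with SIFT available, a hypothetical image path), not the authors' implementation.
  # Minimal sketch (Python, opencv-python >= 4.4 assumed): SIFT features as in Equation (1).
  import cv2
  import numpy as np

  img = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)    # hypothetical image path
  sift = cv2.SIFT_create()
  keypoints, descriptors = sift.detectAndCompute(img, None)  # descriptors: N x 128 array

  h, w = img.shape
  center = np.array([w / 2.0, h / 2.0])

  features = []
  for kp, f in zip(keypoints, descriptors):
      features.append({
          "p": np.array(kp.pt),                           # position p_i
          "s": kp.size,                                   # scale s_i
          "o": kp.angle,                                   # orientation o_i (degrees)
          "f": f,                                          # 128-dimensional feature vector f_i
          "c": np.linalg.norm(np.array(kp.pt) - center),   # distance to the image center c_i
      })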

3.1.2. Feature Selection

Mismatching and higher complexity can occur if all extracted feature points are used directly for matching, because some of the feature points may be outliers. Therefore, a process is required that selects the robust feature points belonging to the inliers. Feature selection is the process of selecting robust feature points, among the extracted features, for composing matching pairs. In general, when the feature points matched between two images are compared, the statistical characteristics of the feature points belonging to the outliers differ from those of the feature points belonging to the inliers [23]. Therefore, the inlier statistical characteristic can be used to distinguish inlier points from outlier points. To obtain the inlier statistical characteristic, the position (pi), scale (si), orientation (oi), and distance from the image center (ci) are learned from various image sets [24,25]. When a large value (ei) is produced by substituting pi, si, oi, and ci into the learned inlier statistical characteristic ISC(·), the probability of belonging to the inlier region is high. The following pseudocode shows the process of selecting NS feature points from a total of N feature points using ISC(·); if NS is larger than N, NS becomes N. We use NS = 300. Figure 2b gives an example of using feature selection; compared with Figure 2a, where it is not used, some of the outlier points are removed. Removing the outlier feature points lowers the complexity, so the features become more robust and the matching speed becomes faster.
Feature selection
  E = {ø}, i = 0
  repeat
    i = i + 1
    ei = ISC(pi, si, oi, ci)
    Insert ei into E
    E, ranked in descending order
  until i == N
  E = {e1, e2, e3, …, eNS, …, eN}
  Select NS feature points from the N feature points.
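As a concrete illustration of this selection step, the sketch below ranks the features extracted in the previous sketch by an ISC score and keeps the top NS of them. The isc() function here is only a placeholder (it simply favors points near the image center); the real ISC(·) is the statistical model learned from training image sets [24,25].
  # Sketch of feature selection; isc() is a placeholder for the learned model ISC(.).
  def isc(p, s, o, c):
      # Placeholder score (assumption): favor feature points closer to the image center.
      return -c

  def select_features(features, n_s=300):
      # Rank all feature points by their ISC score in descending order and keep the top n_s.
      ranked = sorted(features, key=lambda f: isc(f["p"], f["s"], f["o"], f["c"]), reverse=True)
      n_s = min(n_s, len(ranked))      # if NS is larger than N, NS becomes N
      return ranked[:n_s]

  selected = select_features(features, n_s=300)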

3.2. Matching Method

3.2.1. Composing a Matching Pair

To compose a matching pair, the feature points extracted from two images are compared [26]. The formula used here is the Euclidean distance, as expressed in Equation (2).
$Euclid(F_R(i), F_Q(j)) = \sqrt{\sum_{k=1}^{128} \left( F_R(i)_k - F_Q(j)_k \right)^2}$  (2)
Equation (2) computes the Euclidean distance between FR(i), the i-th feature vector of the reference image, and FQ(j), the j-th feature vector of the query image. If Euclid(·) is smaller than an arbitrary threshold, the feature points R(i) and Q(j) are composed as a matching pair. One feature point can compose up to a maximum of k matching pairs using the k-NN method. The NM matching pairs composed in this manner undergo the overlap checking process expressed as Equation (3).
$ovlp[i,j] = \begin{cases} 1, & \text{if } m_i \text{ and } m_j \text{ are overlapping,} \\ 0, & \text{otherwise,} \end{cases} \quad (1 \le i, j \le N_M)$  (3)
A matching pair (mk) is composed of two feature points matched between the two images; that is, mk consists of one feature point from the reference image and one from the query image. In Equation (3), mk represents the respective positions of the two feature points, mk = (pRk, pQk), where pRk is the position of the feature point extracted from the reference image and pQk is the position of the feature point extracted from the query image. When the i-th matching pair (mi) and the j-th matching pair (mj) are compared, if pRi matches pRj, or pQi matches pQj, they are determined to be overlapped, and one is assigned to ovlp[i,j]. With this equation, one or zero is assigned to every ovlp[i,j], and finally an overlap matrix of size NM × NM with ovlp[i,j] for all i, j as its elements is generated. In Figure 3, the circles represent feature points and the lines represent matching pairs; the dotted lines are overlapped matching pairs and the solid lines are non-overlapped matching pairs. The generated overlap matrix is used in the clustering process.
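A minimal sketch of this step is given below, continuing the feature structure from the earlier sketches. The brute-force loops, the descriptor-distance threshold, and k = 2 are assumptions chosen for clarity rather than speed.
  # Sketch of matching-pair composition (Equation (2)) and the overlap matrix (Equation (3)).
  import numpy as np

  def compose_matching_pairs(ref_feats, qry_feats, dist_thresh=250.0, k=2):
      qry_desc = np.array([f["f"] for f in qry_feats], dtype=np.float32)
      pairs = []                                               # each pair: (reference index, query index)
      for i, rf in enumerate(ref_feats):
          dists = np.linalg.norm(qry_desc - rf["f"], axis=1)   # Euclid(F_R(i), F_Q(j)) for all j
          for j in np.argsort(dists)[:k]:                      # up to k nearest neighbours (k-NN)
              if dists[j] < dist_thresh:
                  pairs.append((i, int(j)))
      return pairs

  def overlap_matrix(pairs):
      n_m = len(pairs)
      ovlp = np.zeros((n_m, n_m), dtype=np.uint8)
      for a in range(n_m):
          for b in range(n_m):
              if a != b and (pairs[a][0] == pairs[b][0] or pairs[a][1] == pairs[b][1]):
                  ovlp[a, b] = 1                               # m_a and m_b share a feature point
      return ovlp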

3.2.2. Making a Symmetric Similarity Matrix

With a deformable object, various deformations may occur because its shape can change. Therefore, it is difficult to evaluate the matching of images with deformable objects using conventional geometric verification. In the typical conventional geometric verification, RANSAC [5], a transform matrix is generated from the composed matching pairs, and inliers and outliers are distinguished. A deformable object, however, cannot be defined with a single transform matrix.
Figure 4a presents two images with rigid objects, which are related by one transform matrix (T1): the rigid object in the reference image is transformed geometrically by T1 into the query image. On the other hand, Figure 4b shows two images with deformable objects, which involve many transform matrices (T2, T3, and T4); the deformable object of the reference image is transformed geometrically by T2, T3, and T4 into the query image. Therefore, because a deformable object cannot be defined with one transform matrix, a new approach is required that generates many transform matrices over small regions. The method used here is to make a symmetric similarity matrix. The symmetric similarity matrix consists of the similarities between transform matrices composed in point units; in other words, it is composed of the geometric similarity between all matching pairs.
To find the geometric similarity between matching pairs, first, a transform matrix is obtained for each matching pair. The transform matrix used here is a homography matrix [27]. Because a homography matrix uses the projective transform, it is suitable for obtaining geometric similarity. To compose a homography matrix, the position (pi), scale (si), and orientation (oi) of a feature point are used, and the matrix is composed using the WGC (Weak Geometric Consistency) [28] method. Using the homography matrix (Hk) composed this way, the geometric similarity (dgs) between two matching pairs is found using the Pairwise-WGC [29] method, as expressed in Equation (4).
$d_{gs}(m_i, m_j) = \frac{1}{2}\left( \left| p_Q^{\,j} - H_i\, p_R^{\,j} \right| + \left| p_Q^{\,i} - H_j\, p_R^{\,i} \right| \right), \quad (1 \le i, j \le N_M)$  (4)
The two matching pairs to be compared are given as mi = (pRi, pQi, Hi) and mj = (pRj, pQj, Hj). |·| denotes the Euclidean distance, and dgs(mi, mj) is small if Hi and Hj are similar. When the geometric similarity is obtained between every pair of matching pairs, a symmetric similarity matrix of size NM × NM with dgs(mi, mj) as its elements is composed, as shown in Figure 5. The symmetric similarity matrix has zero diagonal elements.
$d_{gs}(m_i, m_j) = sim(i, j)$  (5)
As written in Equation (5), each element of the symmetric similarity matrix represents the geometric similarity (dgs) between the matching pairs mi and mj, i.e., the similarity sim(i, j) between i and j. Here, the indices i and j become the minimum units for clustering.
Simply composing a symmetric similarity matrix does not mean a new geometric verification. The new geometric verification intended here refers to everything, from using the composed symmetric similarity matrix, to finally performing the cluster verification after undergoing the clustering process.
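The sketch below shows one way such a symmetric similarity matrix can be built. For brevity, each per-pair transform is composed as a similarity transform from the position, scale, and orientation of the two matched feature points; this is a simplification of the WGC-style homography described above, not the authors' exact formulation.
  # Sketch of the symmetric similarity matrix of Equations (4) and (5).
  import numpy as np

  def pair_transform(p_r, p_q, s_r, s_q, o_r, o_q):
      # Similarity transform mapping the reference point onto the query point (assumption).
      sigma = s_q / s_r                                     # scale ratio
      theta = np.deg2rad(o_q - o_r)                         # orientation difference
      rot = sigma * np.array([[np.cos(theta), -np.sin(theta)],
                              [np.sin(theta),  np.cos(theta)]])
      h = np.eye(3)
      h[:2, :2] = rot
      h[:2, 2] = np.asarray(p_q) - rot @ np.asarray(p_r)    # translation so that H p_R = p_Q
      return h

  def apply_h(h, p):
      v = h @ np.array([p[0], p[1], 1.0])
      return v[:2] / v[2]

  def similarity_matrix(pairs, ref_feats, qry_feats):
      n_m = len(pairs)
      h = [pair_transform(ref_feats[r]["p"], qry_feats[q]["p"],
                          ref_feats[r]["s"], qry_feats[q]["s"],
                          ref_feats[r]["o"], qry_feats[q]["o"]) for r, q in pairs]
      sim = np.zeros((n_m, n_m))
      for i in range(n_m):
          for j in range(n_m):
              if i == j:
                  continue                                  # diagonal elements stay zero
              p_r_i, p_q_i = ref_feats[pairs[i][0]]["p"], qry_feats[pairs[i][1]]["p"]
              p_r_j, p_q_j = ref_feats[pairs[j][0]]["p"], qry_feats[pairs[j][1]]["p"]
              sim[i, j] = 0.5 * (np.linalg.norm(p_q_j - apply_h(h[i], p_r_j)) +
                                 np.linalg.norm(p_q_i - apply_h(h[j], p_r_i)))  # d_gs(m_i, m_j)
      return sim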

3.2.3. Agglomerative BST (Binary Search Tree) Clustering

For clustering, it is common to agglomerate clusters hierarchically by identifying the similarities between them. The methods for identifying the similarity between clusters include AGNES with the single-link, complete-link, and average-link methods [30]. In the ACC and IACC algorithms [21,22], clustering is performed adaptively using the adaptive partial linkage method. These clustering methods, however, have a major limitation in that the speed decreases as the number of clusters increases. In general, when the number of initial clusters is n, hierarchical clustering has a complexity of O(n³) because the similarity between clusters needs to be calculated and updated. Here, updating means obtaining a new similarity between an agglomerated cluster and the remaining clusters. The complexity of the similarity calculation between clusters can be reduced by using the symmetric similarity matrix obtained earlier, but an additional calculation is still required for the update. In this paper, an algorithm is proposed that reduces the complexity by simplifying conventional agglomerative hierarchical clustering: the update process, which accounts for a large proportion of the complexity, is omitted, and clustering is performed by constructing a BST (Binary Search Tree) [31] from the basic clusters obtained from the symmetric similarity matrix.
The following pseudocode shows the BST clustering process in detail. In the initialization part, Ntree is the number of generated binary trees (BTt), and BTt represents the t-th binary tree. The BST clustering process is performed a maximum of Nbc times, where Nbc is the number of sim(i, j) elements in the upper triangular part of the symmetric similarity matrix, excluding the diagonal elements, i.e., Nbc = (NM × NM − NM)/2. In the BST clustering process, first, the i and j with the minimum similarity are found in the symmetric similarity matrix (because the matrix is symmetric, they are searched only for i > j). BST clustering is terminated if this similarity is larger than the given similarity threshold δs. Next, the element of the overlap matrix indexed by i and j is checked; if ovlp[i, j] is one, the cluster is not formed, because a feature point whose position overlaps with another cannot be considered a robust feature.
Agglomerative BST Clustering
  Ntree = 0, k = 0, BTt = {ø}, sumS = 0  // Initialization
  /*  BST clustering  */
   repeat
     k = k + 1
     // Find i, j
     {i, j} = argmin over i > j of sim(i, j) in the symmetric similarity matrix
     if sim(i,j) > δs then {break}
     // Overlap check
     if ovlp[i,j] then {sim(i,j) = ∞, continue}
     // Using BST, searching & inserting
     chk = 0, t = 0
     repeat
       if {i,j} ∈ BTt then {chk = 1, break}
       else if i ∈ BTt then {Insert j into BTt, chk = 1, break}
       else if j ∈ BTt then {Insert i into BTt, chk = 1, break}
       else {t = t + 1}
     until t == Ntree
     // Make new BTt
     if chk == 0 and sim(i,j) < thres(δs, sumS) then {
       Make BTt and Insert i, j into BTt
       Ntree = Ntree + 1
       sumS += sim(i,j)
     }
     sim(i,j) = ∞
   until k == Nbc
   If any one of the nodes in BTt (0 ≤ t ≤ Ntree) is the same, merge them.
   The remaining BTt become clusters Ct (0 ≤ t ≤ Ncluster).
In the next part, searching and inserting i and j are performed using the BST. This process is performed a maximum of Ntree times and is terminated as soon as a node is found in some BTt. There are three cases of nodes found in BTt. The first is the case where both i and j are found; because all pertinent nodes already exist, the process is terminated without insertion. The second is the case where only i is found; here, j is inserted as a new leaf node in BTt, and the process is terminated. Finally, in the case where only j is found, i is inserted as a new leaf node, and the process is terminated. Figure 6 gives an example of the searching and inserting process of BTt. For example, when i = 8 and j = 35, Figure 6a shows that node 8 of BT0 is found; this is the case where only i is found. As shown in Figure 6b, j = 35 is inserted as a new leaf node in BT0 because j is not found in BT0.
$thres(\delta_s, sumS) = \delta_s \, sumS / (N_{tree} + 1)$  (6)
A new BTt is generated when t = 0 or when the search finds nothing. To generate a new BTt, an additional threshold is required. The root node (first node) is important when generating binary trees; if the root node is incorrect, the binary tree generated from it can produce large errors, and the additional threshold makes the root node more robust. As written in Equation (6), it is an adaptive threshold that depends on both the similarity threshold (δs) and the mean of the root nodes' similarities, sumS/(Ntree + 1); because sim(i, j) increases as more BTt are generated, the threshold must also increase. In the newly generated BTt, i and j are inserted as nodes. Next, sim(i, j) is set to ∞, the new i and j with the minimum similarity value are found again, and the clustering is repeated a maximum of Nbc times. Finally, it is checked whether the generated binary trees should be merged: if any node of two generated binary trees is the same, they are merged. Whether merged or not, all remaining BTt become clusters Ct of basic clusters. For example, in BT5 of Figure 6, all nodes form a basic cluster, C5 = {7, 6, 60, 42, 28, 44}. The clusters Ct generated this way finally undergo cluster verification.
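A compact sketch of this clustering loop is given below. Python sets stand in for the binary search trees, since only membership search and insertion matter at this level of detail, and thres() follows the literal reading of Equation (6); both choices are assumptions rather than the authors' implementation.
  # Sketch of agglomerative BST clustering over the similarity and overlap matrices.
  import numpy as np

  def thres(delta_s, sum_s, n_tree):
      # Adaptive threshold of Equation (6); the exact form is an assumption.
      return delta_s * sum_s / (n_tree + 1)

  def bst_clustering(sim, ovlp, delta_s):
      n_m = sim.shape[0]
      work = np.where(np.tri(n_m, k=-1, dtype=bool), sim, np.inf)  # keep only i > j entries
      trees = []                      # each set stands in for one binary search tree BTt
      sum_s = 0.0
      n_bc = (n_m * n_m - n_m) // 2
      for _ in range(n_bc):
          i, j = np.unravel_index(np.argmin(work), work.shape)
          s = work[i, j]
          if not np.isfinite(s) or s > delta_s:
              break                   # stop once the minimum similarity exceeds δs
          work[i, j] = np.inf         # mark this element as processed
          if ovlp[i, j]:
              continue                # overlapped matching pairs are not clustered
          placed = False
          for t in trees:
              if i in t or j in t:
                  t.update((i, j))    # search hit: insert the missing index
                  placed = True
                  break
          if not placed and (not trees or s < thres(delta_s, sum_s, len(trees))):
              trees.append({i, j})    # make a new tree with i and j as its first nodes
              sum_s += s
      # Merge trees that share a node; the remaining sets become the clusters Ct.
      # (A full implementation would repeat the merge until no two clusters overlap.)
      clusters = []
      for t in trees:
          for c in clusters:
              if c & t:
                  c |= t
                  break
          else:
              clusters.append(set(t))
      return clusters
With the matrices from the previous sketches, clusters = bst_clustering(sim, ovlp, delta_s=30) yields the candidate clusters that are then passed to cluster verification.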

3.2.4. Cluster Verification

Finally, in the matching method, the cluster verification step determines the suitability of the clusters Ct obtained as described earlier. This step is required because, even if a cluster is agglomerated by the geometric similarity between the basic clusters, there is still the possibility of error. In particular, when the cluster area is too small, the possibility of error is high. Figure 7 gives examples of mismatching results obtained without cluster verification, where the cluster area is too small compared to the entire image area.
Cluster Verification
  cluster Ct, t = 0
  areaimg1 = entire reference image(=img1) area
  areaimg2 = entire query image(=img2) area
  repeat
    {cvimg1, cvimg2} = find each convex-hull in Ct
    ratioimg1 = (calculate area of cvimg1)/areaimg1
    ratioimg2 = (calculate area of cvimg2)/areaimg2
    qmin = min(ratioimg1, ratioimg2)
    qratio = qmin/max(ratioimg1, ratioimg2)
    qsize = the number of elements in Ct
    if qmin > τmin and qratio > τratio and qsize > τsize then {Ct is TRUE}
    t = t + 1
  until t == Ncluster
The previous pseudocode shows the proposed cluster verification step. Cluster verification obtains the determination criteria based on the ratio between the entire image area and the cluster area. The cluster area is calculated by obtaining a convex hull from the positions of the feature points; the feature points are obtained from the indices that correspond to the elements of cluster Ct. Using the ratios obtained from both the reference and query images, the minimum value qmin and the ratio qratio of the minimum to the maximum value are obtained. As another criterion, qsize, the number of elements of Ct, is obtained. These three determination criteria are compared with the respective thresholds τmin, τratio, and τsize, and when all of them are larger than their thresholds, the pertinent cluster Ct is determined to be suitable. If at least one of the clusters Ct is determined to be suitable, the two images are finally determined to match.
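The sketch below illustrates the verification of a single cluster, using OpenCV to compute the convex-hull areas. The data structures follow the earlier sketches, and the threshold values are those reported in Section 4.1; this is a simplified sketch, not the authors' implementation.
  # Sketch of cluster verification based on convex-hull area ratios.
  import cv2
  import numpy as np

  def hull_area_ratio(points, image_area):
      pts = np.asarray(points, dtype=np.float32).reshape(-1, 1, 2)
      if len(pts) < 3:
          return 0.0                                   # fewer than three points span no area
      hull = cv2.convexHull(pts)
      return float(cv2.contourArea(hull)) / image_area

  def verify_cluster(cluster, pairs, ref_feats, qry_feats, ref_area, qry_area,
                     tau_min=0.001, tau_ratio=0.5, tau_size=3):
      ref_pts = [ref_feats[pairs[k][0]]["p"] for k in cluster]
      qry_pts = [qry_feats[pairs[k][1]]["p"] for k in cluster]
      r1 = hull_area_ratio(ref_pts, ref_area)          # ratio_img1
      r2 = hull_area_ratio(qry_pts, qry_area)          # ratio_img2
      q_min = min(r1, r2)
      q_ratio = q_min / max(r1, r2) if max(r1, r2) > 0 else 0.0
      q_size = len(cluster)
      return q_min > tau_min and q_ratio > tau_ratio and q_size > tau_size

  # Two images are declared a match if at least one cluster passes verification:
  # is_match = any(verify_cluster(c, pairs, ref_feats, qry_feats, ref_area, qry_area) for c in clusters)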

4. Experiment

4.1. Experiment Conditions

To evaluate the matching performance, an experiment was performed with five types of image sets. Two of them contain actual deformable objects, and the other three contain images that were deformed artificially using TPS (Thin-Plate-Spline) warping. As shown in Figure 8, the image sets with actual deformable objects consist of clothes and snack packs, which are commonly encountered in real life. For the image sets that use TPS, Stanford University's SMVS standard images [32], some of ImageNet's natural images (flowers, trees, and leaves) [33], and Oxford University's building images [34] were used. In each image set, the reference images were chosen so that a feature representative of the object appears at the front. The query images consist of images of clothes worn by a person in various poses; images of snack packs, where various deformations arise from the contents inside; and SMVS, IN-N (ImageNet Natural), and Oxbuild (Oxford buildings) images warped around several arbitrary points using TPS. Table 1 lists the composition of the five image sets. The annotations consist of images, matching pairs of images, and non-matching pairs of images.
To measure the performance of the proposed algorithm, the TPR (True Positive Rate) in Equation (7) and the FPR (False Positive Rate) in Equation (8) were used. TPR measures robustness; a larger value indicates better performance. FPR measures independence; a smaller value indicates better performance. TPR was obtained from the matching pairs of images in Table 1, and FPR was obtained from the non-matching pairs of images. The accuracy defined in Equation (9) combines TPR and FPR for an objective comparison. Finally, the matching time was measured to evaluate the matching speed.
The proposed algorithm uses SIFT [1] for feature extraction, as do the comparison algorithms ACC [21], IACC [22], and RANSAC [5]; this allows the matching methods to be compared under the same conditions. In addition, SIFT showed better performance than other feature descriptors such as SURF and BRISK in our experiment, which is consistent with other findings [35,36] for images with various deformations. Although SIFT is slower at extracting features, it was determined to be an appropriate choice of feature descriptor.
Here, the experiment was performed by applying all the major parameters required for feature extraction in SIFT. The thresholds for cluster verification were fixed at τmin = 0.001, τratio = 0.5, and τsize = 3.
$TPR = \frac{TP}{TP + FN} = \frac{TP}{P}$  (7)
$FPR = \frac{FP}{FP + TN} = \frac{FP}{N}$  (8)
$Accuracy = \frac{TP + TN}{P + N}$  (9)
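For completeness, the three metrics can be computed directly from the confusion-matrix counts over the annotated image pairs, as in the small sketch below.
  # Sketch of the evaluation metrics in Equations (7)-(9).
  def evaluation_metrics(tp, fn, fp, tn):
      p = tp + fn                      # number of matching pairs of images
      n = fp + tn                      # number of non-matching pairs of images
      tpr = tp / p                     # Equation (7): robustness
      fpr = fp / n                     # Equation (8): independence
      accuracy = (tp + tn) / (p + n)   # Equation (9)
      return tpr, fpr, accuracy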
For the performance tests, we used an Intel Core i5-2500 (quad core) CPU with a clock speed of 3.3 GHz and 8 GB of RAM running Windows 7 (64-bit). All algorithms were implemented in C++.

4.2. Experiment Results

Table 2 presents the average computational time and memory storage required to build and use the binary trees. Compared with the non-binary-tree case, the relative cost of the binary trees decreases as δs increases; when δs is 30 or above, the binary-tree case is faster than the non-binary-tree case. Since the average memory required to build the binary trees occupies only a small part of the whole memory, using binary trees is the better option.
Figure 9 presents the top three accuracy values (A1, A2, A3) for each algorithm using Equation (9). These are the results of the experiments on the clothes, snack packs, SMVS (using TPS), IN-N (using TPS), and Oxbuild (using TPS) image sets. In the case of RANSAC, the accuracies were very low because it is not an algorithm suitable for images with deformable objects. The other algorithms showed better performance, with the proposed algorithm showing the best performance. Figure 10 presents the recall vs. precision curves obtained by varying the similarity threshold (δs) in each image set. The proposed algorithm outperformed the other algorithms, especially at high recall values.
Table 3, Table 4, Table 5, Table 6 and Table 7 list the matching times for each image set. Here, the matching time means the average matching time between two images, and the unit is ms (milliseconds). The matching time was obtained by changing the value of the threshold δs, which is a common parameter of the three algorithms (δs = 1, 10, 20, 30, 40, and 50). When δs decreases, TPR and FPR become lower; when δs becomes larger, TPR and FPR become higher. For each algorithm, "match" and "n_match" are reported: "match" is the average matching time for the matching pairs of images, and "n_match" is the average matching time for the non-matching pairs of images. As δs becomes larger, the matching time increases, and the matching time for "match" is longer than for "n_match". "n_match" is faster because relatively fewer matching pairs are composed from the feature points, and few or no clusters are composed. A comparison of the algorithms showed that the matching time of the proposed algorithm was shorter than those of the other algorithms. In particular, for "match", it was approximately 10–70 times faster than the ACC algorithm and approximately 2–10 times faster than the IACC algorithm. Although there was some difference depending on the image set, the proposed algorithm's matching time was the fastest.
Table 8 summarizes the final results. The values in the table are the TPR (Equation (7)), FPR (Equation (8)), Accuracy (Equation (9)), and time (matching time) at the δs where the accuracy of each algorithm is highest. Here, "time" is the total average matching time combining "match" and "n_match" from Table 3, Table 4, Table 5, Table 6 and Table 7. A comprehensive examination of the results confirms that the proposed algorithm is superior to the other algorithms.
Figure 11 presents examples of matching results using the proposed algorithm, where the red convex hulls indicate suitable clusters.

5. Conclusions

In this paper, a new matching algorithm between images with deformable objects was proposed. A matching algorithm can be called good if the three aspects of robustness, independence, and fast matching are all excellent. Among these aspects, slow matching is the most significant weakness of conventional deformable object matching algorithms. To resolve this weakness, the speed was dramatically improved by reducing the complexity using feature selection and BST (Binary Search Tree) clustering. The matching results are reliable because the suitability of the composed clusters is determined by the cluster verification step.
The experiment was performed using image sets with various deformable characteristics. As a result, while showing better TPR and FPR performance, compared to conventional algorithms, the proposed algorithm achieves 2–60 times faster matching speed than the conventional algorithms. Fast matching is a very important characteristic because image matching is used for content-based image retrieval. Therefore, the algorithm proposed in this paper can be used more effectively than the conventional algorithms in deformable object-contained image retrieval.

Acknowledgments

We would like to thank the anonymous reviewers for their generous review. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2010-0020163) and the Ministry of Science, ICT & Future Planning (2015R1C1A1A01055914).

Author Contributions

Jaehyup Jeong and Insu Won provided the main idea of this paper, designed the overall architecture of the proposed algorithm and wrote the paper; Jaehyup Jeong and Hunjun Yang conducted the test data collection and designed the experiments; and Bowon Lee and Dongseok Jeong supervised the work and revised the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  2. Bay, H.; Tuytelaars, T.; Van Gool, L. Surf: Speeded up robust features. In Proceedings of the European Conference on Computer Vision (ECCV), Graz, Austria, 7–13 May 2006; pp. 404–417.
  3. Matas, J.; Chum, O.; Urban, M.; Pajdla, T. Robust wide-baseline stereo from maximally stable extremal regions. Image Vis. Comput. 2004, 22, 761–767. [Google Scholar] [CrossRef]
  4. Mikolajczyk, K.; Schmid, C. Scale & affine invariant interest point detectors. Int. J. Comput. Vis. 2004, 60, 63–86. [Google Scholar]
  5. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  6. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Stateline, NV, USA, 3–8 December 2012; pp. 1097–1105.
  7. Duan, L.Y.; Lin, J.; Chen, J.; Huang, T.; Gao, W. Compact Descriptors for Visual Search. IEEE Multimed. 2014, 21, 30–41. [Google Scholar] [CrossRef]
  8. Chen, D.M.; Tsai, S.S.; Chandrasekhar, V.; Takacs, G. Inverted Index Compression for Scalable Image Matching. In Proceedings of the IEEE 2010 Data Compression Conference, Snowbird, UT, USA, 24–26 March 2010; p. 525.
  9. Chum, O.; Matas, J.; Kittler, J. Locally optimized RANSAC. Pattern Recognit. 2003, 2781, 236–243. [Google Scholar]
  10. Li, Y.; Snavely, N.; Huttenlocher, D.P. Location recognition using prioritized feature matching. In Proceedings of the European Conference on Computer Vision, Heraklion, Greece, 5–11 September 2010; pp. 791–804.
  11. Na, S.; Oh, W.; Jeong, D. A Frame-Based Video Signature Method for Very Quick Video Identification and Location. ETRI J. 2013, 35, 281–291. [Google Scholar] [CrossRef]
  12. Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary Robust Independent Elementary Features. In Proceedings of the European Conference on Computer Vision (ECCV), Heraklion, Greece, 5–11 September 2010; pp. 778–792.
  13. Leutenegger, S.; Chli, M.; Siegwart, R. BRISK: Binary Robust Invariant Scalable Keypoints. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2548–2555.
  14. Alahi, A.; Ortiz, R.; Vandergheynst, P. FREAK: Fast Retina Keypoint. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 510–517.
  15. Desai, A.; Lee, D.J.; Ventura, D. Matching Affine Features with the SYBA Feature Descriptor. In Proceedings of the Advances in Visual Computing, Las Vegas, NV, USA, 8–10 December 2014; pp. 448–457.
  16. Fowers, S.G.; Desai, A.; Lee, D.J.; Ventura, D.; Wilde, D.K. An efficient tree-based feature descriptor and matching algorithm. AIAA J. Aerosp. Inf. Syst. 2014, 11, 596–606. [Google Scholar] [CrossRef]
  17. Tran, Q.H.; Chin, T.J.; Carneiro, G.; Brown, M.S.; Suter, D. In defence of RANSAC for outlier rejection in deformable registration. In Proceedings of the European Conference on Computer Vision (ECCV), Firenze, Italy, 7–13 October 2012; pp. 274–287.
  18. Pilet, J.; Lepetit, V.; Fua, P. Real-time nonrigid surface detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 822–828.
  19. Kettani, O.; Ramdani, F.; Tadili, B. An Agglomerative Clustering Method for Large Data Sets. Int. J. Comput. Appl. 2014, 92, 1–7. [Google Scholar] [CrossRef]
  20. Zhou, F.; Torre, F.D. Factorized graph matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 127–134.
  21. Cho, M.; Lee, J.; Lee, K.M. Feature correspondence and deformable object matching via agglomerative correspondence clustering. In Proceedings of the IEEE International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 1280–1287.
  22. Yang, H.; Won, I.; Jeong, D. On the Improvement of Deformable Object Matching. In Proceedings of the Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV), Okinawa, Japan, 2–5 February 2014; pp. 279–282.
  23. Francini, G.; Lepsøy, S.; Balestri, M. Selection of local features for visual search. Signal Process. Image Commun. 2013, 28, 311–322. [Google Scholar] [CrossRef]
  24. Tsai, S.S.; Chen, D.; Takacs, G.; Chandrasekhar, V.; Vedantham, R.; Grzeszczuk, R.; Girod, B. Fast geometric re-ranking for image-based retrieval. In Proceedings of the IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 1029–1032.
  25. Lepsøy, S.; Francini, G.; Cordara, G.; Gusmão, P.P.B. Statistical modelling of outliers for fast visual search. In Proceedings of the IEEE International Conference on Multimedia and Expo, Barcelona, Spain, 11–15 July 2011; pp. 1–6.
  26. Won, I.; Jeong, J.; Yang, H.; Kwon, J.; Jeong, D. Adaptive Image Matching Using Discrimination of Deformable Objects. Symmetry 2016, 8, 68. [Google Scholar] [CrossRef]
  27. Chum, O.; Pajdla, T.; Sturm, P. The Geometric Error for Homographies. Comput. Vis. Image Underst. 2005, 97, 86–102. [Google Scholar] [CrossRef]
  28. Jegou, H.; Douze, M.; Schmid, C. Hamming embedding and weak geometric consistency for large scale image search. In Proceedings of the European Conference on Computer Vision, Marseille, France, 12–18 October 2008; pp. 304–317.
  29. Xie, H.; Gao, K.; Zhang, Y.; Li, J.; Liu, Y. Pairwise weak geometric consistency for large scale image search. In Proceedings of the ACM International Conference on Multimedia Retrieval, Trento, Italy, 18–20 April 2011; pp. 42–50.
  30. Theodoridis, S.; Koutroumbas, K. Pattern Recognition, 3rd ed.; Academic Press: Cambridge, MA, USA, 2006; pp. 541–587. [Google Scholar]
  31. Cormen, T.H.; Leiserson, C.E.; Rivest, R.L.; Stein, C. Introduction to Algorithms, 3rd ed.; MIT Press: Cambridge, MA, USA; McGraw-Hill: New York, NY, USA, 2009; pp. 286–307. [Google Scholar]
  32. Chandrasekhar, V.R.; Chen, D.M.; Tsai, S.S.; Cheung, N.; Chen, H.; Takacs, G.; Reznik, Y.; Vedantham, R.; Grzeszczuk, R.; Bach, J. The Stanford mobile visual search data set. In Proceedings of the ACM Conference on Multimedia Systems, San Jose, CA, USA, 23–25 February 2011; pp. 117–122.
  33. Deng, J.; Dong, W.; Socher, R. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
  34. Philbin, J.; Chum, O.; Isard, M.; Sivic, J.; Zisserman, A. Object retrieval with large vocabularies and fast spatial matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Minneapolis, MN, USA, 17–22 June 2007.
  35. Khan, N.; McCane, B.; Mills, S. Better than SIFT? Mach. Vis. Appl. 2015, 26, 819–836. [Google Scholar] [CrossRef]
  36. Kashif, M.; Deserno, T.M.; Haak, D.; Jonas, S. Feature description with SIFT, SURF, BRIEF, BRISK, or FREAK? A general question answered for bone age assessment. Comput. Biol. Med. 2016, 68, 67–75. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flowchart of the proposed algorithm.
Figure 2. Example of the feature points in an image: (a) feature points using only SIFT; and (b) the feature points using feature selection.
Figure 3. Example of matching pairs that overlap or not.
Figure 4. Comparison example of a transform matrix (Ti): (a) rigid object in the images; and (b) deformable object in the images.
Figure 5. Example of a symmetric similarity matrix (NM = 5).
Figure 6. Example of a binary search tree (t = 5). The circles in blue indicate the nodes in BTt and the oval in purple indicates the two candidate nodes {i = 8, j = 35}. (a) Node 8 is found in BT0 (red dotted arrow and circle); (b) Node 35 is inserted as a new leaf node in BT0 (red solid arrow and red number in the circle).
Figure 7. Examples of mismatching results without using cluster verification.
Figure 8. Examples of reference and query (deformable) images: (a) clothes; (b) snack packs; (c) SMVS (using TPS); (d) IN-Natural (using TPS); and (e) Oxbuild (using TPS).
Figure 9. Accuracy of the proposed and other algorithms.
Figure 10. Recall vs. Precision curve of the proposed and other algorithms.
Figure 11. Examples of matching results using the proposed algorithm.
Table 1. Configuration of image sets.

  Image Set            Annotations
  Clothes              1250 images; 996 matching pairs of images; 4233 non-matching pairs of images
  Snack packs          400 images; 300 matching pairs of images; 3000 non-matching pairs of images
  SMVS (using TPS)     20,400 images; 6576 matching pairs of images; 7805 non-matching pairs of images
  IN-N (using TPS)     1246 images; 623 matching pairs of images; 5598 non-matching pairs of images
  Oxbuild (using TPS)  5063 images; 5063 matching pairs of images; 20,252 non-matching pairs of images
Table 2. Computational time and memory storage required for the binary trees.

  δs   Non-Binary Tree      Use of Binary Trees
       Average Time (ms)    Average Time (ms)   Average Memory (MB)
  1    0.004                0.005               0.257
  10   0.236                0.266               3.876
  20   0.753                0.776               5.935
  30   1.501                1.439               7.472
  40   2.548                2.366               8.747
  50   3.784                3.366               9.784
Table 3. Matching time (ms) on the "clothes" image set.

  δs   ACC                  IACC                 Proposed
       match     n_match    match     n_match    match    n_match
  1    269.60    31.90      57.31     4.39       10.21    4.11
  10   777.11    41.78      284.79    6.17       13.08    4.16
  20   1113.03   64.06      436.48    8.53       18.80    4.20
  30   1227.30   81.64      514.15    10.64      26.52    4.27
  40   1334.33   100.00     561.21    12.36      29.65    4.34
  50   1365.15   121.29     584.29    13.61      35.32    4.48
Table 4. Matching time (ms) on the "snack packs" image set.

  δs   ACC                  IACC                 Proposed
       match     n_match    match     n_match    match    n_match
  1    62.61     6.03       10.27     5.01       7.64     4.98
  10   204.05    6.43       21.05     5.08       8.11     4.97
  20   231.66    6.64       23.86     5.17       8.78     4.98
  30   244.62    6.80       25.71     5.24       9.38     5.03
  40   252.61    6.98       26.74     5.29       10.08    5.01
  50   257.75    7.09       27.59     5.32       10.49    4.99
Table 5. Matching time (ms) on the "SMVS (using TPS)" image set.

  δs   ACC                  IACC                 Proposed
       match     n_match    match     n_match    match    n_match
  1    127.33    10.16      14.43     3.86       6.50     3.38
  10   843.84    42.93      105.06    7.53       8.98     3.59
  20   1063.17   69.73      142.06    10.68      10.88    3.63
  30   1155.25   87.01      157.00    12.99      13.07    3.81
  40   1189.76   98.73      164.42    13.53      15.42    3.95
  50   1212.29   105.59     170.44    14.31      18.57    4.25
Table 6. Matching time (ms) on the "IN-N (using TPS)" image set.

  δs   ACC                  IACC                 Proposed
       match     n_match    match     n_match    match    n_match
  1    62.16     9.26       10.47     3.55       7.77     3.61
  10   671.48    31.29      121.12    5.94       10.86    3.66
  20   938.00    62.39      198.80    8.75       13.72    3.71
  30   1072.40   85.72      240.55    10.60      16.87    3.76
  40   1158.80   102.34     261.71    11.59      20.04    3.88
  50   1208.94   113.75     280.67    12.63      22.93    3.87
Table 7. Matching time (ms) on the "Oxbuild (using TPS)" image set.

  δs   ACC                  IACC                 Proposed
       match     n_match    match     n_match    match    n_match
  1    115.44    22.67      34.61     9.68       21.09    7.14
  10   1102.45   96.69      283.84    16.29      28.76    7.37
  20   1518.45   177.52     405.74    23.83      32.11    7.42
  30   1740.47   241.87     455.43    29.62      37.32    7.66
  40   1826.40   278.72     486.04    32.35      44.66    7.85
  50   1907.35   309.01     501.31    35.28      52.92    8.09
Table 8. Experiment results (TPR, FPR, Accuracy, and time (ms)).

  Image Set             Result      RANSAC   ACC      IACC     Proposed
  Clothes               TPR         0.319    0.701    0.689    0.807
                        FPR         0.401    0.012    0.009    0.010
                        Accuracy    0.546    0.934    0.933    0.955
                        time (ms)   71.98    358.22   126.21   14.77
  Snack packs           TPR         0.317    0.773    0.777    0.847
                        FPR         0.436    0.003    0.005    0.004
                        Accuracy    0.541    0.976    0.975    0.983
                        time (ms)   28.77    28.89    7.31     5.47
  SMVS (using TPS)      TPR         0.983    0.923    0.923    0.948
                        FPR         0.750    0.034    0.021    0.023
                        Accuracy    0.585    0.946    0.954    0.963
                        time (ms)   39.23    611.61   85.70    10.80
  IN-N (using TPS)      TPR         0.852    0.669    0.659    0.775
                        FPR         0.740    0.006    0.004    0.007
                        Accuracy    0.566    0.961    0.962    0.971
                        time (ms)   37.06    198.18   34.95    4.71
  Oxbuild (using TPS)   TPR         0.832    0.753    0.775    0.830
                        FPR         0.858    0.012    0.011    0.011
                        Accuracy    0.494    0.941    0.946    0.957
                        time (ms)   69.45    539.59   114.79   13.59
