
Identifying All Matches of a Rigid Object in an Input Image Using Visible Triangles

by
Abdullah N. Arslan
Department of Computer Science, East Texas A & M University, Commerce, TX 75428, USA
Mathematics 2025, 13(6), 925; https://doi.org/10.3390/math13060925
Submission received: 13 January 2025 / Revised: 26 February 2025 / Accepted: 7 March 2025 / Published: 11 March 2025
(This article belongs to the Special Issue Dynamic Programming)

Abstract
It has been suggested that for objects identifiable by their corners, every triangle formed by these corner points can serve as a reference for detecting the remaining corner points. This approach enables effective rigid object detection, including partial matches. However, when there are many corner points, the implementation becomes impractical due to excessive memory requirements. To overcome this, we propose a new algorithm that leverages Delaunay triangulation, considering only the triangles it generates, thereby reducing the complexity of the original approach. Our algorithm is significantly faster and requires significantly less memory, offering a viable solution for large problem instances. Moreover, it identifies all matches of a queried object in an image whenever visible triangles of the object are present. A triangle formed by an object’s vertices is considered visible if a matching triangle is detected and no vertices from any other object lie within its circumcircle. Recent AI-based methods have revolutionized rigid object matching, providing impressive accuracy with deep learning techniques. However, these methods require extensive training and specialized hardware such as GPUs. In contrast, our approach requires no training or specialized hardware, making it a lightweight and efficient solution that maintains strong matching capabilities without the overhead of AI-based methods. Our study of the geometric features, combined with Delaunay triangulation, offers new mathematical insights.

1. Introduction

Rigid object identification in 2D involves recognizing and matching objects under transformations such as translation, rotation, and scaling while preserving the object’s geometric properties. This problem is fundamental in computer vision and has applications in fields ranging from robotics to medical imaging. Early approaches relied on corner detection techniques, such as the Harris Corner Detector [1], to identify robust features in an image. Traditional methods, such as the Viola–Jones algorithm [2], used the boosted cascades of simple features for object detection. Advances in feature-based methods, particularly the Scale-Invariant Feature Transform (SIFT) [3] and the Speeded-Up Robust Features (SURF) [4], revolutionized the field by enabling robust matching under varying scales and orientations. Shape-based approaches, such as the Shape Context method [5], provided additional tools for identifying objects using their contours and spatial relationships. These methods often rely on principles from computational geometry, including the Hausdorff distance [6], to compare and match geometric configurations. Geometric-based descriptors have also been explored for facial expression recognition, demonstrating their effectiveness in capturing structural features [7]. Textbooks such as Multiple View Geometry in Computer Vision [8] and Computer Vision: Algorithms and Applications [9] offer comprehensive overviews of these techniques and their integration into modern vision systems. Together, these methods and references form the foundation for accurate and efficient rigid object identification in 2D.
Rigid object detection and identification in complex scenarios have undergone significant advancements, particularly with the integration of deep learning techniques. The field saw a transformative shift with deep learning-based methods, beginning with the YOLO series [10], which reformulated object detection as a single regression problem, enabling real-time performance. RetinaNet [11] improved accuracy by introducing focal loss to address class imbalance, while Detection Transformer (DETR) [12] leveraged transformer architectures to capture global relationships within images, simplifying detection pipelines. Specialized methods further refined rigid object detection, including viewpoint adaptation techniques [13], which enhanced robustness across varying perspectives, and rigidity-aware detection for 6D object pose estimation [14], which improved detection accuracy in cluttered scenes. Additionally, application-specific advancements, such as rigid object tracking for augmented reality [15], explored dense tracking methods to enhance the performance of Augmented Reality (AR) devices. These developments illustrate how deep learning has revolutionized rigid object detection, achieving improved accuracy, efficiency, and adaptability across diverse applications.
In this work, we adopt a traditional approach by leveraging geometric features and introducing a new algorithm. Our motivation is as follows: While recent AI-driven methods have significantly advanced rigid object matching with impressive accuracy using deep learning, these techniques require extensive training and specialized hardware, such as GPUs. In contrast, our method eliminates the need for training or dedicated hardware, providing a lightweight and efficient solution that maintains strong matching performance without the computational overhead of AI-based approaches. Furthermore, our algorithm offers new insights into the concept of object identification, potentially inspiring further advancements in this domain.
A triangle-based geometric feature has been introduced as a valuable tool for object detection, utilizing the corners of rigid objects within images [16]. By employing a reference triangle and calculating distances to corners, other corner positions can be predicted and verified for object detection, even in partially occluded images. However, the original algorithm’s complexity increases significantly with the number of corner points, leading to high time and space requirements. To overcome this, we propose an alternative that uses the results of Delaunay triangulation as input, enhancing time and space efficiency while preserving accuracy.
For the method proposed in [16], the key step is to locate a triangle of the query Q in the searched image I. The locations of all other corners can then be predicted using the distances from this triangle. The method in [16] considers all possible triangles as reference candidates. We propose a new approach that utilizes triangles obtained from Delaunay triangulations of both Q and I. The performance of the new method depends on successfully locating a triangle of Q within I; the correct detection of each instance is guaranteed if a certain condition, defined below, is met. The new method improves both time and space efficiency. In particular, the bottleneck due to the space requirement is removed.
For a given set of vertices (corners), when no four distinct corner points are cocircular, the set of triangles obtained from a Delaunay triangulation is unique. This property has been well established and applied in computer vision [17] for tracing correspondences in video. The precision of our method does not rely on a unique Delaunay triangulation. It suffices for the Delaunay triangulations of Q and I to contain at least one pair of similar triangles corresponding to each instance of Q in I. For this condition to hold, the instance of Q in I must include a triangle from the Delaunay triangulation of I that is similar to a triangle in the Delaunay triangulation of Q and does not enclose any point from an overlapping object, including points from another instance of Q. We refer to such a triangle as a visible triangle of Q in the image I.
The outline of the paper is as follows: In Section 2, we briefly summarize the key ideas of the object detection method presented in [16]. Next, in the same section, we introduce a new algorithm that leverages the same geometric feature while incorporating the results of Delaunay triangulation for more efficient object detection. In Section 3, we present our test results obtained using the new algorithm. We discuss implementation details and the advantages of our new algorithm in Section 4. Finally, we provide our concluding remarks in Section 5.

2. Methods

Arslan [16] introduces a geometric feature that associates objects with their identifiable corners. Given any triangle as a reference, the coordinates of every other corner point can be derived uniquely from its distances to the triangle's vertices. The detection process predicts all corner points by matching triangles, checking all possible triangles.
We summarize the geometric feature, the definitions, and the algorithm in [16]. These provide the foundation for the new ideas and the algorithm we propose in this work.
The key observation in this approach is that the Euclidean distances from the corners of a triangle uniquely identify a point in 2D. Figure 1 illustrates a query object and the search image in which this object is being searched. Only the corner points are extracted and considered. The images are repeated in the figure beneath the original images, now also displaying the detected corner points. All possible triangles and their pairwise matches are considered in object detection. In the case shown in Figure 1, one triangle (among many possible ones) is shaded, serving as a reference. A different corner point is selected to illustrate that it can be identified by the distances to the corners (vertices) of the triangle in the query object; these distances are shown by the dashed lines in the figure. Similarly, the corresponding point in the search image can be identified by using a matching triangle in the search image. There are two instances of the query object in the search image, labeled 1 and 2, and the corresponding point is identified in both. The described process for one corner point applies similarly to all other corner points. Collectively, all corners, and hence both instances 1 and 2, are identified in this example.
Let $\triangle(i,j,k)$ denote the triangle whose vertices are indexed by i, j, k and whose internal angles are $\angle ijk$, $\angle jki$, $\angle kij$ in clockwise order. For any two triangles $t_1$ and $t_2$, let $t_1 \sim t_2$ indicate that $t_1$ and $t_2$ are similar (i.e., they have the same internal angles and proportional side lengths).
A triangle feature descriptor of Q with triangle $\triangle(i,j,k)$ is defined as $D_{Q,i,j,k} = (i, j, k, \angle ijk, \angle jki, \angle kij, |ij|, P)$, where $P = \{(dist(i,x), dist(j,x), dist(k,x)) \mid x \in Q \setminus \{i,j,k\}\}$ and $dist$ denotes the Euclidean distance between two points. We use $D_{Q,i,j,k}.P$ to refer to the set P in this definition.
A triangle feature descriptor is a local feature descriptor for Q that predicts and yields a match for Q. A set is defined for the predicted and verified points of a potential match of Q in I: for $\triangle(a,b,c) \subseteq I$ and $\triangle(i,j,k) \subseteq Q$, let $S_{D_{Q,i,j,k},I,a,b,c}$ denote the set of points predicted in I based on $\triangle(a,b,c)$ and the triangle feature descriptor $D_{Q,i,j,k}$ for Q. More precisely, this can be expressed as follows:
Definition 1.
For $\triangle(a,b,c) \subseteq I$ and $\triangle(i,j,k) \subseteq Q$ such that $\triangle(a,b,c) \sim \triangle(i,j,k)$, $S_{D_{Q,i,j,k},I,a,b,c} = \{x' \mid x'$ is identified by $(r \cdot dist(i,x), r \cdot dist(j,x), r \cdot dist(k,x))$ for some $(dist(i,x), dist(j,x), dist(k,x)) \in D_{Q,i,j,k}.P$, $r = |ab|/|ij|\} \cup \{a,b,c\}$. For $\triangle(a,b,c) \not\sim \triangle(i,j,k)$, $S_{D_{Q,i,j,k},I,a,b,c} = \emptyset$.
In other words, $S_{D_{Q,i,j,k},I,a,b,c}$ is a set of predicted points, each of which corresponds to a point in Q for a potential match of Q in I. If the calculated ratio $|S_{D_{Q,i,j,k},I,a,b,c} \cap I| / |Q|$ is greater than or equal to a given threshold ratio H in $(0, 1]$, then the subset $S_{D_{Q,i,j,k},I,a,b,c} \cap I$ is considered an instance of Q in I.
The objective is to find all instances of Q in I, which can be described as follows: $M(Q, I, H) = \{(a, b, c, i, j, k) \mid \triangle(a,b,c) \subseteq I, \triangle(i,j,k) \subseteq Q$ such that $|S_{D_{Q,i,j,k},I,a,b,c} \cap I| / |Q| \geq H\}$.
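To make the descriptor concrete, the following self-contained C++ sketch builds the distance-triplet set $D_{Q,i,j,k}.P$; the type and function names are our own illustrative choices, not those of the published implementation.

#include <array>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

double distPt(const Pt& a, const Pt& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Builds the set P of D_{Q,i,j,k}: one distance triplet
// (dist(i,x), dist(j,x), dist(k,x)) per point x in Q \ {i, j, k}.
std::vector<std::array<double, 3>> buildP(const std::vector<Pt>& Q,
                                          int i, int j, int k) {
    std::vector<std::array<double, 3>> P;
    for (int x = 0; x < (int)Q.size(); ++x) {
        if (x == i || x == j || x == k) continue;
        P.push_back({distPt(Q[x], Q[i]), distPt(Q[x], Q[j]), distPt(Q[x], Q[k])});
    }
    return P;
}

Each triplet in P later identifies one predicted point relative to a matching reference triangle, after scaling by the ratio of the matched side lengths.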
We summarize Algorithm 1 for the object detection problem. It takes as input a set Q of corner points for the query object and a set I containing corner points belonging to objects in the searched image. The threshold H is a parameter in $(0, 1]$ used to distinguish significant matches from insignificant findings.
Algorithm 1: Algorithm for finding all matches of Q in I, summarized from [16].
(Pseudocode presented as a figure in the published article; its steps are described line by line below.)
In Lines 1–3, the algorithm calculates all internal angles $(\alpha, \beta, \gamma)$ for the triangles in Q and I and assigns them to bins $A(\alpha,\beta,\gamma)$, which are created using discretization parameters. These bins contain triangles from Q and I, referred to as $A(\alpha,\beta,\gamma).Q$ and $A(\alpha,\beta,\gamma).I$, respectively.
In Lines 4–9, the algorithm iterates over all discrete angle values. For each set of angles, all matching pairs of triangles in the Cartesian product of $A(\alpha,\beta,\gamma).Q$ and $A(\alpha,\beta,\gamma).I$ are considered. The corresponding geometric feature, as defined in Definition 1, is then computed in Line 5. Based on this feature, the predicted position of each corner point in I is determined. If a sufficient number of point matches are found in Line 6, a match for Q in I is confirmed and reported in Line 7.
The total time complexity of Algorithm 1 is $\Theta(N^3 + C^3 + N^2 \tilde{N})$, where N is the number of corner points, C is the angle discretization parameter, and $\tilde{N}$ is the number of similar triangle pairs in Q and I. We note that the resulting time complexity is sensitive to the output size. The values of C and N are predefined for the algorithm, and the maximum value of $\tilde{N}$ depends on C and N. Consequently, the time requirement can be adjusted by tuning the parameters C and N. The parameter C controls the granularity of angles, where $\pi/C$ represents the size of an angular bin within which all angles are treated as equivalent. The parameter N sets an upper bound on the number of points considered in the computation. In the experiments reported, the typical values are C = 17, with N = 10 for the query and N = 40 for the searched image. The value of $\tilde{N}$ is often on the order of several thousand. However, the actual time complexity, $\Theta(N^3 + C^3 + N^2 \tilde{N})$, can be significant, as $\tilde{N}$ may grow as large as $\Omega(N^6)$. To see this, note that there can be $O(N^3)$ similar triangles in both Q and I, implying $\tilde{N} = \Omega(N^6)$ in the worst case. We encountered such instances during our tests.
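As a concrete illustration under the typical settings above (the value of $\tilde{N}$ here is an assumption for the sake of arithmetic): with N = 40 and C = 17, there are on the order of $N^3 = 64{,}000$ ordered vertex triples and $C^3 = 4913$ angle bins; if $\tilde{N} \approx 5000$ similar pairs arise, the dominant term is $N^2 \tilde{N} \approx 8 \times 10^6$ operations, and it grows rapidly as $\tilde{N}$ approaches its $\Omega(N^6)$ worst case.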
A careful study of the space requirement of Algorithm 1 shows that its space complexity is $\Theta(C^3 N^3)$, as there can be $\Theta(N^3)$ triangles from Q and I in each of the $\Theta(C^3)$ bins. While this complexity can be mitigated by imposing limits on corner-laden objects, its practicality diminishes when dealing with images containing numerous objects with many corners, exacerbating both time and space complexities. In particular, the space requirement can become prohibitive. Although in many cases these parameters are not very large, the space complexity remains the most limiting factor when a high number of corners needs to be detected and processed in input images. It is not surprising that we encountered instances where Algorithm 1 failed to complete due to insufficient memory.
To address these challenges, our paper proposes a novel approach that leverages Delaunay triangulation, considering only the triangles generated by the Delaunay triangulation, unlike in [16]. The number of triangles considered is linear with respect to the number of corner points, offering a substantial reduction in complexity. We demonstrate that focusing solely on the triangles within the Delaunay triangulation suffices for many practical cases in which a certain condition holds. The result is a more feasible and efficient object recognition algorithm, as validated by our empirical results. There are examples where our new method produces correct results efficiently, while Algorithm 1 cannot be executed due to insufficient memory.
Another contribution of our new algorithm is its efficient handling of rigid object transformations by considering all vertex permutations of all triangles in Q. This approach does not impact the asymptotic time or space complexity of the algorithm, as the number of considered triangles remains small relative to the input size.
For a given set of discrete points P, a Delaunay triangulation, denoted by $Del(P)$ [18], is a triangulation in which no point in P lies inside the circumcircle of any triangle in $Del(P)$: for any triangle in $Del(P)$, its circumcircle does not contain any other point in P. $Del(P)$ satisfies several interesting properties; for example, for any P, the minimum angle among all the triangles in $Del(P)$ is maximized. The Delaunay triangulation of N points has $O(N)$ (approximately $2N - 4$) triangles.
Figure 2 illustrates an example problem instance in which a query object and a search image contain two matches to this query, labeled as 1 and 2. The Delaunay triangulations for both the query and the image are shown beneath them. Algorithm 1 examines all possible triangles and their pairwise matches to identify all matches for the query. In contrast, the newly proposed algorithm focuses only on triangles within the Delaunay triangulations and their pairwise matches. In this case, the query matches are successfully identified as long as at least one correct triangle is found and used as a reference for each query match. The Euclidean distances from the corners of a triangle uniquely identify a point in 2D. In Figure 2, after aligning each common pair of triangles indicated by the dashed lines, we calculate the Euclidean distances from the query’s vertices to the corners of their reference triangle. Using these distances, along with the scaling factor between the matching triangles, we determine the positions of all other vertices in a potential match within the search image. In Figure 2, each of the two shaded triangles yields instances labeled as 1 and 2 of the query in the search image.
For a set of points in the plane, the Delaunay triangulation is unique if and only if no four points are cocircular, meaning that the circumcircle passing through any three points does not contain any other points from the set. In such cases, the Delaunay triangulation is well-defined and unique.
Let Q be the set of corner points for a query object and I be the set of corner points obtained from an input image.
A triangle t is a visible triangle of Q in I if $t \in Del(I)$ and there exists a matching triangle in $Del(Q)$. Two triangles are considered a match if they are similar (i.e., if they have the same angles). A triangle t is a yielding triangle for an instance of Q in I if t is a visible triangle of Q in I and the vertices predicted and verified using the geometric feature, with t as a reference, yield an instance of Q in I. In Figure 2, two triangles in $Del(I)$ are highlighted with shaded interiors. The dashed lines connect these triangles to their corresponding matching triangles in $Del(Q)$. The highlighted triangles in $Del(I)$ are yielding triangles. It is important to note that other yielding triangles exist, but only two are shown for simplicity of illustration.
If an instance of Q in I contains all vertices of Q and does not overlap with any other object, then all triangles of $Del(Q)$ can be found as visible triangles in I. In our proposed method, any of these triangles will yield a correct match. However, even if the Delaunay triangulation for Q is not unique, there can still be a visible triangle in an instance of Q in I. A necessary and sufficient condition for detecting this instance is the existence of a yielding triangle from $Del(I)$ whose circumcircle does not encompass any point belonging to another (overlapping) object, including another instance of Q. This assumption is reasonable, as the absence of such a visible triangle precludes the possibility of detecting object instances within the given image. Furthermore, without this assumption, it is unclear whether any method could reliably determine or suggest the presence of an object instance.
We note that, in Q, every triangle formed by three vertices whose circumcircle does not contain any other vertex will be part of $Del(Q)$, according to the definition of Delaunay triangulation. For a given triangle in $Del(Q)$, if a similar triangle is detected in $Del(I)$, it may indicate a local feature that could lead to a match with Q in I. At least some local features must be preserved in images when a match exists. More formally, a matching pair of triangles in $Del(I)$ and $Del(Q)$ can serve as a potential reference for an instance of Q in I.
The concepts of visible and yielding triangles are used solely for analyzing the performance of our proposed method. The existence of a yielding triangle in $Del(I)$ for an instance of Q in I implies that we find this instance if we examine all matching pairs of triangles in $Del(Q)$ and $Del(I)$ as references. Our proposed algorithm is based on comparing triangles in $Del(Q)$ and $Del(I)$ and determining whether they yield matches. By using a visible triangle in I and a matching triangle in Q as a reference, the proposed method can predict all other corners of the queried object for a potential instance of Q in I. Under certain practical assumptions, this approach identifies the same matches while requiring significantly less time and space. In some cases, the proposed method produces results even when Algorithm 1 fails to run due to its excessive memory requirements.
Let $Q_1$ and $Q_2$ be two rigid objects in 2D, represented as sets of points. We consider transformations on rigid objects [19]. We say that $Q_1$ and $Q_2$ are geometrically isomorphic if and only if there exists a rigid transformation T such that $Q_2 = T(Q_1)$, where T is one of the following: translation, a shift of all points in $Q_1$ by a fixed vector v; rotation, a rotation of all points in $Q_1$ around a fixed point (e.g., the origin) by an angle $\theta$; reflection, a mirroring of all points in $Q_1$ across a line of reflection; or a combination of these, for example, a reflection followed by a translation. These transformations preserve the geometry of rigid objects, ensuring that the distances and angles between all points remain unchanged. We also extend T to include the transformation resize, which scales $Q_1$ by a positive real factor r. After applying operations in T to $Q_1$, the angles and distance ratios are preserved: for any three points $p_1, p_2, p_3 \in Q_1$ and the corresponding points $p'_1, p'_2, p'_3 \in Q_2$, the angles between the vectors in $Q_1$ are preserved between the corresponding vectors in $Q_2$; and for any two points $p_1, p_2 \in Q_1$ with corresponding points $p'_1, p'_2 \in Q_2$, the ratio of the distance between $p_1$ and $p_2$ to the distance between $p'_1$ and $p'_2$ is the same for all pairs of points in the same object.
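In matrix form, every transformation in T can be written as a similarity transformation (a standard formulation, stated here for reference): $T(p) = r R p + v$, where $R$ is a $2 \times 2$ orthogonal matrix ($R^\top R = I$, with $\det R = 1$ for rotations and $\det R = -1$ for reflections), $v$ is a translation vector, and $r > 0$ is the scale factor. Consequently, $\|T(p_1) - T(p_2)\| = r\,\|p_1 - p_2\|$ for all points $p_1, p_2$, which is precisely why all angles and all distance ratios are preserved.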
If $Q_1$ and $Q_2$ are geometrically isomorphic, then $Del(Q_1)$ and $Del(Q_2)$ contain the same triangles, with corresponding vertices, identical angles, and proportional side lengths.
Definition 2.
Compute $\bigcup_{f \in T} M(f(Q), I, H)$, where $f(Q)$ denotes applying the transformation f from $T = \{translation, rotation, reflection, resizing, composition\}$ to Q.
The formulation in Definition 2 considers all possible transformations of Q. Any triangle in Q can be chosen as a reference. Applying a transformation $f \in T$ to Q is equivalent to transforming the reference triangle and computing the corresponding points to match with respect to this transformed triangle. We compute this using the set of triangles obtained from the Delaunay triangulation and their possible transformations. Algorithm 1 utilizes the geometric feature defined in Definition 1 and implements an approach that is invariant to both scaling and rotation. Scaling invariance is achieved by computing the scaling factor between two matching triangles and uniformly applying this factor to all distances.
To achieve invariance under all transformations in T, we consider all permutations of a matching triangle $\triangle(i,j,k)$ in $Del(Q)$ with a given triangle $\triangle(a,b,c)$ in $Del(I)$ whenever a new pair of matching triangles is identified.
Applying T to all possible triangles in Q collectively results in the same effect as applying T directly to Q. A rigid transformation T composed of translation, rotation, and reflection preserves distances and angles. When T is applied to the set of points Q, it moves each point in Q while maintaining the geometric relationships among them; the resulting set of points is T(Q). If we consider all possible triangles formed by the points in Q, these triangles are subsets of Q. Applying T to a triangle means applying T to its three vertices, producing a new triangle that has undergone the same transformation as the individual points. When T is applied to every triangle $\triangle(i,j,k)$ in Q, each triangle undergoes the same rigid transformation, resulting in the transformed set $\{T(\triangle(i,j,k))\}$ of triangles. The collective effect of applying T to all possible triangles in Q is therefore equivalent to transforming Q as a whole: since the triangles are subsets of Q, transforming all triangles corresponds to transforming all points in Q. Thus, $\{T(\triangle(i,j,k)) \mid \triangle(i,j,k) \subseteq Q\}$ is equivalent to $T(Q)$, and the transformation of the points in Q inherently transforms all the triangles formed by those points, preserving the geometric structure.
Our observation is that any transformation based on a reference triangle can be applied to the entire Q using this reference triangle and the geometric feature in Definition 1, as all angles and length ratios are preserved under these transformations. Comparing Q and $f(Q)$ for $f \in T$ is equivalent to comparing Q and the portion of Q associated with the reference triangle g found in I after applying f to g. This comparison can also be achieved by accounting for all possible configurations of g.
For a given $\{i,j,k\}$, let $\pi_{i,j,k}$ be the permutation set $\{(i,j,k), (i,k,j), (j,i,k), (j,k,i), (k,i,j), (k,j,i)\}$. We note that for any rigid transformation $f \in T$ and any $\{i,j,k\}$ with $\triangle(i,j,k) \in Del(Q)$, $f(\triangle(i,j,k))$ corresponds to a member of $\pi_{i,j,k}$. Based on this, we reformulate the objective in Definition 2 as follows:
Definition 3.
Compute $\{(a, b, c, i', j', k') \mid \triangle(a,b,c) \in Del(I), (i',j',k') \in \pi_{i,j,k}, \triangle(i,j,k) \in Del(Q)$ such that $|S_{D_{Q,i',j',k'},I,a,b,c} \cap I| / |Q| \geq H\}$.
We perform preprocessing before searching for a rigid object in an image. To extract the corner points Q and I from the query and search input images, respectively, we use the Shi–Tomasi Corner Detector [20] via the OpenCV function cv.goodFeaturesToTrack() (we used Python version 3.11). This detector applies a slight modification to the Harris Corner Detector and produces better results. It is based on computing the minimum eigenvalue of the structure tensor (second-moment matrix). The dominant operations involve computing image gradients and eigenvalue calculations, both of which run in linear time; thus, its time and space complexities are $O(M)$, where M is the number of pixels. From the detected corner points, we use the Delaunay triangulation implementation in Python [21] to generate Delaunay triangulations. This implementation is based on an incremental convex hull algorithm for triangulation. For N input points in 2D, the Delaunay triangulation is computed in expected time $O(N \log N)$ and in worst-case time $O(N^2)$ using $O(N)$ space. Corner detection is a crucial step in both Algorithm 1 and our newly proposed Algorithm 2. Although Delaunay triangulation is an additional preprocessing step for Algorithm 2, its computational cost is not significant in the overall complexity of our approach. We propose an object matching algorithm that takes as input the Delaunay triangulations $Del(Q)$ and $Del(I)$, generated from Q and I, respectively. A sketch of this preprocessing pipeline is given below.
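Our implementation performs this preprocessing in Python; as a sketch, the same pipeline can also be expressed entirely in C++ with OpenCV, using cv::goodFeaturesToTrack for Shi–Tomasi corners and cv::Subdiv2D for the Delaunay triangulation. The parameter values below (maximum corners, quality level, minimum distance) are illustrative assumptions rather than our experimental settings.

#include <opencv2/opencv.hpp>
#include <vector>

// Detects Shi-Tomasi corner points in a grayscale image and returns the
// triangles of their Delaunay triangulation (as coordinate six-tuples).
std::vector<cv::Vec6f> cornersAndDelaunay(const cv::Mat& gray,
                                          std::vector<cv::Point2f>& corners) {
    // Shi-Tomasi corners: at most 40 strongest, quality 0.01, min distance 10 px
    cv::goodFeaturesToTrack(gray, corners, 40, 0.01, 10);

    // Insert the corners into a planar subdivision covering the image
    cv::Subdiv2D subdiv(cv::Rect(0, 0, gray.cols, gray.rows));
    for (const cv::Point2f& p : corners) subdiv.insert(p);

    // Extract triangles, dropping those that touch Subdiv2D's virtual
    // outer vertices (their coordinates fall outside the image bounds)
    std::vector<cv::Vec6f> tris, del;
    subdiv.getTriangleList(tris);
    cv::Rect2f bounds(0.0f, 0.0f, (float)gray.cols, (float)gray.rows);
    for (const cv::Vec6f& t : tris) {
        cv::Point2f a(t[0], t[1]), b(t[2], t[3]), c(t[4], t[5]);
        if (bounds.contains(a) && bounds.contains(b) && bounds.contains(c))
            del.push_back(t);
    }
    return del;
}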
By applying Delaunay triangulation, we obtain a list of vertices along with their neighboring vertices. Using this information, we construct a list of triangles, generating $Del(Q)$ and $Del(I)$. As in Algorithm 1, we assume that both $|Q|$ and $|I|$ are $O(N)$, and therefore, $|Del(Q)|$ and $|Del(I)|$ are also $O(N)$.
Algorithm 2 takes Q, I, $Del(Q)$, and $Del(I)$ as input. It finds and returns all matches of Q in I based on the assumption that $Del(Q)$ defines the object Q. Every match of $Del(Q)$ found in $Del(I)$ corresponds to an instance of Q in I.
The input threshold H is a parameter in $(0, 1]$ that is used to separate significant matches from other insignificant partial matches.
The loop in Lines 1–10 considers each triangle $\triangle(a,b,c)$ in $Del(I)$ and compares it with all permutations $(i',j',k')$ of every triangle $\triangle(i,j,k)$ in $Del(Q)$ in the loop in Lines 2–9. Line 4 computes the set $S_{D_{Q,i',j',k'},I,a,b,c}$ of predicted points in I, as described in Definition 1, using the dynamic programming approach that was also used in Algorithm 1. This computation is performed by first matching the two triangles, $\triangle(a,b,c)$ and $\triangle(i',j',k')$, and then extending the match by one verified point at a time. The position of each new point in I is predicted based on the triangle feature descriptor and subsequently verified. This iterative process is a key aspect that characterizes our algorithm as a dynamic programming algorithm.
Algorithm 2: Algorithm for finding all matches of Q in I.
(Pseudocode presented as a figure in the published article; its steps are described line by line in the surrounding text.)
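For concreteness, the loop structure described above can be sketched in self-contained C++ as follows. All identifiers are our own illustrative choices; the tolerance handling and the linear scan of I are simplifications (a grid or k-d tree lookup gives the $O(|Q|)$ per-pair cost assumed in the complexity analysis below), and the point-prediction step repeats the derivation given in Appendix A.

#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

struct Pt { double x, y; };
using Tri = std::array<int, 3>;  // vertex indices into a point set

static double d(const Pt& a, const Pt& b) { return std::hypot(a.x - b.x, a.y - b.y); }

// Internal angle at vertex a of triangle (a, b, c), by the law of cosines.
static double angleAt(const Pt& a, const Pt& b, const Pt& c) {
    double ab = d(a, b), ac = d(a, c), bc = d(b, c);
    double cosv = (ab * ab + ac * ac - bc * bc) / (2.0 * ab * ac);
    return std::acos(std::max(-1.0, std::min(1.0, cosv)));
}

// Predicts the point at distances (d0, d1, d2) from a, b, c by solving the
// linearized circle-intersection system (same derivation as Appendix A).
static Pt predict(const Pt& a, const Pt& b, const Pt& c,
                  double d0, double d1, double d2) {
    double A = 2 * (b.x - a.x), B = 2 * (b.y - a.y);
    double C = d0 * d0 - d1 * d1 - a.x * a.x + b.x * b.x - a.y * a.y + b.y * b.y;
    double D = 2 * (c.x - a.x), E = 2 * (c.y - a.y);
    double F = d0 * d0 - d2 * d2 - a.x * a.x + c.x * c.x - a.y * a.y + c.y * c.y;
    double den = B * D - E * A;  // nonzero for non-degenerate triangles
    return {(B * F - E * C) / den, (D * C - A * F) / den};
}

// Sketch of Algorithm 2's loop structure (Lines 1-10).
void findMatches(const std::vector<Pt>& Q, const std::vector<Pt>& I,
                 const std::vector<Tri>& delQ, const std::vector<Tri>& delI,
                 double H, double angTol, double posTol) {
    for (const Tri& t : delI) {                               // Lines 1-10
        Pt a = I[t[0]], b = I[t[1]], c = I[t[2]];
        for (Tri s : delQ) {                                  // Lines 2-9
            std::sort(s.begin(), s.end());
            do {                                              // Lines 3-8: all 6 orderings
                Pt i = Q[s[0]], j = Q[s[1]], k = Q[s[2]];
                if (std::fabs(angleAt(a, b, c) - angleAt(i, j, k)) > angTol ||
                    std::fabs(angleAt(b, c, a) - angleAt(j, k, i)) > angTol)
                    continue;                                 // triangles not similar
                double r = d(a, b) / d(i, j);                 // scale ratio
                int matched = 3;                              // the aligned corners
                for (int x = 0; x < (int)Q.size(); ++x) {     // Line 4: predict and verify
                    if (x == s[0] || x == s[1] || x == s[2]) continue;
                    Pt xp = predict(a, b, c, r * d(i, Q[x]),
                                    r * d(j, Q[x]), r * d(k, Q[x]));
                    for (const Pt& y : I)
                        if (d(xp, y) <= r * posTol) { ++matched; break; }  // tolerance scaled by r
                }
                if ((double)matched / (double)Q.size() >= H)  // Line 5
                    std::printf("match with %d of %zu corners\n",
                                matched, Q.size());           // Line 6
            } while (std::next_permutation(s.begin(), s.end()));
        }
    }
}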
From the similar triangles $\triangle(i,j,k)$ and $\triangle(a,b,c)$, the scale ratio is computed as $r = |ab|/|ij|$. For each point $x \in Q$, the predicted coordinates $x'$ are determined using the scaled distance triplet $(r \cdot dist(i,x), r \cdot dist(j,x), r \cdot dist(k,x))$ along with the vertices of $\triangle(a,b,c)$. This is achieved by finding the intersection of three circles centered at a, b, c, with respective radii $r \cdot dist(i,x)$, $r \cdot dist(j,x)$, and $r \cdot dist(k,x)$, which amounts to creating a system of three quadratic equations and solving it. In this work, we improved the solution of the resulting system of equations by developing a different sequence of intermediate calculations, as shown in Appendix A, making the solutions more numerically robust compared to those in Algorithm 1. For equality tests involving real numbers, we consistently applied tolerance values throughout our implementation.
The coordinates of each predicted point $x'$ are computed with an error tolerance, adjusted dynamically based on the scaling factor, for each pair of similar triangles. Each computed $x'$ is then checked against I to determine whether a point of I lies within the tolerance distance. If such a point is found, it is considered a corresponding match for x in a potential alignment. Initially, every similar triangle pair contributes three point matches due to the alignment of their corner points.
Line 5 first evaluates the total number of matched points, including additional $(x, x')$ pairs beyond the triangle corners, and verifies whether the condition $|S_{D_{Q,i,j,k},I,a,b,c} \cap I| / |Q| \geq H$ is satisfied. We note that since verification is done in I, the computed set is described as the intersection of $S_{D_{Q,i,j,k},I,a,b,c}$ and I. If the tested condition holds, Line 6 reports this set as a new match. It is easy to see that all elements of the objective set in Definition 3 will be reported in this manner.
The loop in Lines 1–10 iterates $O(|Del(I)|) = O(N)$ times. Similarly, the loop in Lines 2–9 iterates $O(|Del(Q)|) = O(N)$ times. The loop in Lines 3–8 iterates six times. In Line 4, solving a system of three quadratic equations for each point requires constant time; therefore, Line 4, as well as Lines 5 and 6, each run in time $O(|Q|) = O(N)$. Thus, we conclude that the overall time complexity of the algorithm is $O(N^3)$. The space requirement of the algorithm is $O(N)$, since the number of triangles considered from Q and I is $O(N)$, and the pairwise comparisons and matching-related computations are performed using $O(N)$ space. This represents a significant improvement over Algorithm 1, which has a space complexity of $\Theta(C^3 N^3)$, where C is a discretization factor for angles (e.g., C = 17).
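To put these bounds in perspective (reusing the typical settings above as an assumption): with C = 17 and N = 40, the $\Theta(C^3 N^3)$ space bound of Algorithm 1 admits on the order of $4913 \times 64{,}000 \approx 3.1 \times 10^8$ stored triangle entries, whereas Algorithm 2 stores only the $O(N)$ (approximately $2N - 4 = 76$) Delaunay triangles per point set.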
Post-processing of this set can be performed to report all instances of Q in I. We follow the same objective as in [16]; however, unlike Algorithm 1, the new algorithm does not consider every possible triangle from Q and I. Provided that a triangle in $Del(Q)$ is visible in I and included in $Del(I)$, the new algorithm will not miss any instance of Q in I. This holds true for all practical purposes. For this to fail, objects would need to overlap at every triangle in Q formed by adjacent neighbors. Under such conditions, it is unclear whether an instance of Q exists in I at all.

3. Results

The new algorithm we propose improves the time requirement to $\Theta(N^3)$ and the space requirement to $\Theta(N)$.
We also study the accuracy of our algorithm. Let R be the subset of points in I that corresponds to an instance of Q in I. We consider the question of whether there exists a yielding triangle in $Del(I)$ for this instance. To analyze this, suppose that R matches a sufficiently large subset of $T(Q)$. Let t be a triangle in $Del(Q)$. If the circumcircle defined by the corner points of t does not contain any other points, then t will be part of every Delaunay triangulation of the set Q. Therefore, in most cases, different Delaunay triangulations of Q and of any transformed version $T(Q)$ that appears in I will share many triangles. The probability that two such triangulations have no triangles in common is negligible; in other words, they will "almost certainly" share a triangle that is a yielding triangle. If any of these triangles is visible in $R \subseteq I$, it will yield a match.
For a query object Q, triangles in small components of the object can serve as effective identifiers. For example, in the 2D image of an arrow, the triangular head of the arrow can define such a component. Experimental results reported in Figure 3 and Figure 4 illustrate this. The execution times shown under the images were obtained by running Algorithms 1 and 2 on a computer with a 2.3 GHz AMD EPYC 7501 processor.
In Figure 3, we present results from Algorithm 2 for the same cases examined in [16] and provide comparisons. In parts (a) and (b), all possible matches have been identified. In part (b), our new results from Algorithm 2 outperform those obtained by Algorithm 1, a difference we attribute to the improved corner-finding algorithm introduced in this study. In this case, Algorithm 2 demonstrates superior performance.
In Figure 4, we report results on completely new cases. Although the objects in parts (a) and (b) are not rigid, the results remain accurate. When shapes are sufficiently preserved and contain visible triangles, our algorithm performs well, even when the objects are not rigid. The objects in parts (c) and (d) are rigid, yielding excellent results. In part (c), the queried object has one perfect match and one near-perfect match; both were accurately detected by the algorithm, reported as 100% and 79% matches, respectively. In part (d), all matches have been accurately found. Most matches are perfect 100% matches. Partial matches near the edges of the rectangular frame are correctly identified, with match ratios starting at 87%. In our tests, Algorithm 1 failed to solve this case due to a space bottleneck encountered during runtime. In contrast, Algorithm 2 handled it efficiently, as shown in part (d).

4. Discussion

Algorithm 1 introduced a novel idea for detecting rigid objects. However, there are cases where the algorithm's high complexity makes it impractical. The method's precision relies on finding matching triangles and using them as references to predict and verify all corner points in a potential match. The new algorithm uses the same geometric feature; however, it reduces the search complexity for a matching pair of triangles significantly. This is achieved through the use of Delaunay triangulation. When there is an instance of the searched object Q in an image I, the probability that no common pair of triangles from $Del(Q)$ and $Del(I)$ exists is virtually zero. This guarantees finding the match while simultaneously saving time and space. If $Del(Q)$ and $Del(I)$ do not share any similar triangles, then, for every triangle in $Del(Q)$, a vertex belonging to another object (or another instance of Q) appears in the circumcircle of each similar triangle in $Del(I)$. Consequently, we can conclude that no detectable instance of Q exists in I in this scenario.
We summarize the similarities and differences between Algorithms 1 and 2 as follows: Both algorithms use the same geometric feature and take as input the sets Q, I, and a threshold value H. They follow the same fundamental approach: first locating a reference triangle and then constructing a match by iteratively predicting and verifying the next vertex in a dynamic programming manner. With standard line lengths and spacing in C++ code, Algorithms 1 and 2 were implemented in 595 and 466 lines, respectively. Except for a few simple common functions, they implement significantly different algorithms. Algorithm 2 differs from Algorithm 1 in several key aspects. It also takes $Del(Q)$ and $Del(I)$ as input, utilizing the Delaunay triangulations of Q and I to restrict the search space. Instead of considering all possible triangles, it only examines those within these triangulations, significantly reducing complexity. Additionally, Algorithm 2 solves the resulting system of quadratic equations differently when computing predicted vertex positions, as shown in Appendix A. Another major distinction is that Algorithm 2 accounts for rigid transformations by considering all permutations of triangle vertices. This approach, in conjunction with the geometric feature in Definition 1, inherently handles transformations of the entire object, including translation and reflection, without requiring additional space.
Algorithm 2 can be further improved to reduce both time and space complexities through better implementation. For instance, similar triangles from $Del(Q)$ and $Del(I)$ could be grouped into the same bins. Then, only the pairs (and their permutations) within the same bins would need to be considered in Line 4 of Algorithm 2, rather than all possible pairs.
In Algorithm 2, we generated the positions of bounding boxes (for inclusion relations, the largest bounding boxes). This approach was sufficient for our purpose of obtaining results for comparison. We manually added the directed lines to indicate the positions of the detected matches and included the match percentages in Figure 3 and Figure 4.
We have limited our comparisons to Algorithm 1, for which the effectiveness of the geometric feature was demonstrated in comparison with the fully affine-invariant ASIFT algorithm [22], as shown in [23]. AI-based algorithms belong to a different class, as they require training on the query and rely on substantial computational resources, including extensive processing power. Naturally, these methods produce more robust, accurate, and precise results than ours. However, our algorithm is specifically designed for rigid objects without any prior training, utilizing lightweight computation. This makes it particularly well-suited for real-time search applications on untrained queries. Moreover, our study of the proposed geometric feature and its integration with Delaunay triangulation provides new mathematical insights into this method.

5. Conclusions

We propose an improved object detection algorithm for rigid objects using a previously introduced geometric feature. The algorithm leverages triangles from Delaunay triangulation as reference points. Based on these triangles, potential matches are identified through a dynamic programming approach to handle various rigid object transformations. Experiments showed results that were equal to or better than those reported in [16]. Additionally, we presented a test case that could not be handled efficiently using the same geometric feature in the past, and this was accurately and efficiently solved by the new algorithm. Our exploration of this geometric feature and its integration with Delaunay triangulation provides fresh mathematical insights into the method.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy and legal reasons.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Our function computePoint, shown below, computes a point at given distances from a known reference triangle. In our implementation, it appears slightly different.
#include <iostream>
#include <cmath>
#include <vector>
struct Point {
    double x;
    double y;
};
Point computePoint(const std::vector<Point>& triangle,
                   const std::vector<double>& distances) {
    // triangle holds the vertices of the reference triangle, for example:
    // std::vector<Point> triangle = {{0, 0}, {1, 0}, {0.5, std::sqrt(3) / 2}};
    // distances[0..2] are the distances from the sought point to these vertices
 // Get the coordinates of each corner
    double x1 = triangle[0].x;
    double y1 = triangle[0].y;
    double x2 = triangle[1].x;
    double y2 = triangle[1].y;
    double x3 = triangle[2].x;
    double y3 = triangle[2].y;
    // Subtract the circle equation at vertex 1 from those at vertices 2 and 3
    // to linearize the system into A*x + B*y = C and D*x + E*y = F
    double A = 2 * (x2 - x1);
    double B = 2 * (y2 - y1);
    double C = distances[0]*distances[0] - distances[1]*distances[1]
            - x1*x1 + x2*x2 - y1*y1 + y2*y2;
    double D = 2 * (x3 - x1);
    double E = 2 * (y3 - y1);
    double F = distances[0]*distances[0] - distances[2]*distances[2]
            - x1*x1 + x3*x3 - y1*y1 + y3*y3;
    // Solve the 2x2 linear system by Cramer's rule; the denominator is
    // nonzero for a non-degenerate reference triangle
    double x = (B * F - E * C) / (B * D - E * A);
    double y = (D * C - A * F) / (B * D - E * A);
    return {x, y};
}
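For illustration, the following short usage example (ours, not part of the published code) recovers the point (1, 1) from its distances to the vertices of a reference triangle:

int main() {
    // Reference triangle and the distances from the point (1, 1) to its vertices
    std::vector<Point> triangle = {{0, 0}, {4, 0}, {0, 3}};
    std::vector<double> distances = {std::sqrt(2.0), std::sqrt(10.0), std::sqrt(5.0)};
    Point p = computePoint(triangle, distances);
    std::cout << p.x << " " << p.y << std::endl;  // prints: 1 1
    return 0;
}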

References

  1. Harris, C.; Stephens, M. A Combined Corner and Edge Detector. In Proceedings of the 4th Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; pp. 147–151. [Google Scholar]
  2. Viola, P.; Jones, M. Rapid Object Detection using a Boosted Cascade of Simple Features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; pp. I-511–I-518. Available online: https://ieeexplore.ieee.org/document/990517/ (accessed on 7 January 2025).
  3. Lowe, D.G. Object Recognition from Local Scale-Invariant Features. In Proceedings of the 7th IEEE International Conference on Computer Vision, Corfu, Greece, 20–25 September 1999; pp. 1150–1157. [Google Scholar]
  4. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded Up Robust Features. In Proceedings of the 9th European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; pp. 404–417. [Google Scholar]
  5. Belongie, S.; Malik, J.; Puzicha, J. Shape Context: A New Descriptor for Shape Matching and Object Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 509–522. [Google Scholar] [CrossRef]
  6. Huttenlocher, D.P.; Klanderman, G.A.; Rucklidge, W.J. Comparing Images Using the Hausdorff Distance. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15, 850–863. [Google Scholar] [CrossRef]
  7. Acevedo, D.; Negri, P.; Buemi, M.E.; Fernández, F.G.; Mejail, M. A simple geometric-based descriptor for facial expression recognition. In Proceedings of the IEEE 12th International Conference on Automatic Face & Gesture Recognition, Washington, DC, USA, 30 May–3 June 2017; pp. 802–808. [Google Scholar]
  8. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  9. Szeliski, R. Computer Vision: Algorithms and Applications; Springer: London, UK, 2011. [Google Scholar]
  10. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  11. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  12. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 213–229. [Google Scholar]
  13. Shrivastava, A.; Malisiewicz, T.; Gupta, A.; Efros, A.A. Data-Driven Visual Similarity for Cross-Domain Image Matching. ACM Trans. Graph. (TOG) 2011, 30, 1–10. [Google Scholar] [CrossRef]
  14. Li, Y.; Wang, G.; Ji, X. Rigidity-Aware Detection for 6D Object Pose Estimation. arXiv 2023, arXiv:2303.12396. [Google Scholar]
  15. Klein, G.; Murray, D. Parallel Tracking and Mapping for Small AR Workspaces. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 13–16 November 2007; pp. 225–234. Available online: https://dr.lib.iastate.edu/server/api/core/bitstreams/dc6e4360-4340-4c9c-bbd1-08ad2ab0bcb9/content (accessed on 7 January 2025).
  16. Arslan, A.N. Object Detection in Images by Verifying Corners at Predicted Positions. In The 6th Computational Methods in Systems and Software Conference 2022, Springer Series, Data Science and Intelligent Systems as the Proceedings of CoMeSySo; Springer: Cham, Switzerland, 2022; LNNS 597; Available online: https://link.springer.com/chapter/10.1007/978-3-031-21438-7_72 (accessed on 1 January 2024).
  17. Dinas, S.; Bañón, J.M. A review on Delaunay triangulation with application on computer vision. Int. J. Comput. Sci. Eng. (IJCSE) 2014, 3, 9–18. [Google Scholar]
  18. Delaunay, B. Sur la sphère vide. Bulletin de l'Académie des Sciences de l'URSS, Classe des Sciences Mathématiques et Naturelles 1934, 6, 793–800. [Google Scholar]
  19. Anjyo, K.; Ochiai, H. Rigid Transformation. In Mathematical Basics of Motion and Deformation in Computer Graphics, 2nd ed.; Synthesis Lectures on Visual Computing: Computer Graphics, Animation, Computational Photography and Imaging; Springer: Cham, Switzerland, 2017. [Google Scholar] [CrossRef]
  20. Shi, J.; Tomasi, C. Good features to track. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 593–600. [Google Scholar] [CrossRef]
  21. Delaunay Implementation in Python. The SciPy Community. Available online: https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.Delaunay.html (accessed on 2 January 2024).
  22. Yu, G.; Morel, J.-M. ASIFT: An Algorithm for Fully Affine Invariant Comparison. Image Process. Line 2011, 1, 11–38. [Google Scholar] [CrossRef]
  23. Image Processing Online. IPOL-Journal. Available online: http://demo.ipol.im/demo/my_affine_sift/ (accessed on 5 July 2022).
Figure 1. Illustration of finding corresponding points in instances 1 and 2 of a query object.
Figure 2. Simple example illustrating the newly proposed approach.
Figure 3. Results comparing Algorithm 2 and Algorithm 1 on two test cases (a,b). Detected corner points are marked on the query and search images. The middle column shows the Delaunay triangulations of the images on the left, while matches found by Algorithm 2 are shown on the right with directed lines and percentages where applicable.
Figure 4. Results of Algorithm 2 on new test cases (a–d). The middle column shows the Delaunay triangulations of the images on the left, while matches found by Algorithm 2 appear on the right with directed lines and percentages where applicable.