Article

Local-Peak Scale-Invariant Feature Transform for Fast and Random Image Stitching

by Hao Li 1, Lipo Wang 2, Tianyun Zhao 3 and Wei Zhao 1,*

1 State Key Laboratory of Photon-Technology in Western China Energy, International Collaborative Center on Photoelectric Technology and Nano Functional Materials, Institute of Photonics & Photon Technology, Northwest University, Xi’an 710127, China
2 UM-SJTU Joint Institute, Shanghai Jiao Tong University, Shanghai 200030, China
3 School of Automation, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(17), 5759; https://doi.org/10.3390/s24175759
Submission received: 31 July 2024 / Revised: 23 August 2024 / Accepted: 3 September 2024 / Published: 4 September 2024
(This article belongs to the Special Issue Multi-Modal Data Sensing and Processing)

Abstract: Image stitching aims to construct a wide field of view with high spatial resolution, which cannot be achieved in a single exposure. Conventional image stitching techniques, other than deep learning, typically require complex and thus computationally expensive operations, especially for stitching large raw images. In this study, inspired by the multiscale feature of fluid turbulence, we developed a fast feature point detection algorithm named local-peak scale-invariant feature transform (LP-SIFT), based on multiscale local peaks and the scale-invariant feature transform method. By combining LP-SIFT and RANSAC in image stitching, the stitching speed can be improved by orders of magnitude compared with the original SIFT method. Benefiting from the adjustable size of the interrogation window, the LP-SIFT algorithm requires comparable or even less stitching time than other commonly used algorithms, while achieving comparable or even better stitching results. Nine large images (over 2600 × 1600 pixels), arranged randomly without prior knowledge, can be stitched within 158.94 s. The algorithm is highly practical for applications requiring a wide field of view in diverse scenes, e.g., terrain mapping, biological analysis, and even criminal investigation.

1. Introduction

Image stitching is employed to reconstruct complete image information from image fragments [1]. Because a single exposure has limited capability to capture a large area with high spatial resolution, image stitching finds widespread use in engineering applications such as machine vision [2,3,4,5], augmented reality [6,7,8,9], navigation [10,11], and panoramic shooting [12,13,14]. Furthermore, it plays an indispensable role in supporting scientific research, such as the construction of large-scale bioimages [15,16,17,18,19], micro-/nanostructures [20,21,22], aerial photography [23,24,25], and space remote sensing [26].
Image stitching techniques can be traced back to the late 1970s and early 1980s [27,28,29], aiming to merge multiple images into a wider field of view. In these early stages, multiple images of the same scene were simply superimposed [1]. At present, advanced image stitching techniques fall primarily into two categories: region-based and feature-based [28]. Compared with the computationally intensive region-based techniques [13], feature-based techniques impose fewer requirements on region overlapping [30], rendering them increasingly appealing in recent years. Harris et al. [31] initially proposed a feature-based technique for detecting features in images based on corner points. Subsequently, Lowe et al. [32] introduced the scale-invariant feature transform (SIFT) approach for feature detection, utilizing the Gaussian pyramid and the Gaussian difference pyramid. Although SIFT offers robustness, scale invariance, and rotation invariance, it necessitates substantial computational resources, leading to slower feature point detection and, accordingly, lower stitching speed.
To address the limitations of SIFT, Rosten et al. [33] proposed Features from Accelerated Segment Test (FAST), a machine learning algorithm for high-speed corner detection. However, FAST sacrifices scale invariance and rotation invariance. To mitigate these drawbacks, Bay et al. [34] proposed the Speeded Up Robust Features (SURF) algorithm in 2008, which introduced a new scale- and rotation-invariant detector and descriptor. While SURF offers scale invariance and rotation invariance, it is less robust to viewpoint changes. In 2011, Rublee et al. [35] presented an efficient alternative to SIFT and SURF, named Oriented FAST and Rotated BRIEF (ORB). However, it is susceptible to variations in illumination. In the same year, Leutenegger et al. [36] proposed the Binary Robust Invariant Scalable Keypoints (BRISK) method. BRISK predominantly utilizes FAST9-16 to identify feature points and generates feature descriptor vectors in binary form, which exhibit sensitivity to variations in illumination and rotation. Later, to overcome boundary blurring and detail loss in SIFT and SURF, Alcantarilla et al. [37] proposed the KAZE algorithm in 2012, which preserves more details by constructing a nonlinear scale space to detect feature points. However, similar to SIFT, KAZE also requires substantial calculation time. In 2013, Alcantarilla et al. [38] proposed the Accelerated KAZE (AKAZE) algorithm, which improves calculation efficiency by changing the description of the feature vectors.
Although in recent years the research focus has shifted toward deep learning neural networks [39,40,41], it remains effective to optimize and update conventional algorithms for image stitching applications. For instance, the parity algorithm by Wu et al. [42], an improvement of SIFT, reduces the matching error with better accuracy. The segmentation algorithm by Gan et al. [43] helps to improve the stitching speed and the SIFT accuracy. An image registration method [44] combines Shi-Tomasi with SIFT to improve the matching accuracy and reduce the computational cost.
In feature-based image stitching techniques, the precise definition and detection of feature points are pivotal. The algorithm's efficiency can be significantly enhanced if there is a reasonable replacement for the Gaussian pyramid and Gaussian difference pyramid operations in SIFT. In the dissipation element (DE) analyses of turbulent flows [45,46], the dynamic nature of the field can be characterized by the statistical properties of the extremal points, and the concept of extremal points can be extended to the multiscale level [47]. From a geometric standpoint, any image can be viewed as a structure with multiscale intensity, akin to the identified DEs. Given this shared multiscale nature of DE analysis in turbulent flow fields and in image analysis, the Gaussian pyramid in the SIFT algorithm can plausibly be replaced by the extremal points of DE analysis, improving the efficiency of feature point detection.
In this study, we aim to construct a fast feature point detection algorithm, termed the local-peak scale-invariant feature transform (LP-SIFT), which integrates SIFT with the concept of local extremal points, or image peaks, at the multiscale level. Specifically, the feature points can be substituted with multiscale local peak points without reliance on the Gaussian pyramid and difference pyramid of SIFT. Consequently, the efficiency of feature point detection can be significantly enhanced. Additionally, by combining LP-SIFT and RANSAC in image stitching, the stitching speed can be notably improved compared with the original SIFT method, particularly when processing large images. The framework of the algorithm is briefly introduced in Figure 1. Finally, we demonstrate that random images or segments of a large image can be successfully stitched within an acceptable time without the requirement of prior knowledge.

2. Materials and Methods

The principle of the LP-SIFT method is schematically diagrammed in Figure 2. Generally, there are five sub-steps: image preprocessing, feature point detection, feature point description, feature point matching, and image stitching. We first introduce each step in detail; subsequently, a strategy for applying LP-SIFT to restore a large image from random image fragments is elucidated.

2.1. Image Preprocessing

In the following exemplified case, the two images (Figure 2a) designated for stitching are referred to as the reference image (stored as the reference matrix $M_1$) and the registered image (stored as the registration matrix $M_2$). Since the feature points adopted in LP-SIFT are the local peak points (both maxima and minima), to avoid the difficulty of locating them in regions of constant image intensity, e.g., due to saturation, it is numerically meaningful to first impose a small linear background on both images. The new image matrices $M_{n,k}$ thus become

$$M_{n,k}(i,j) = M_k(i,j) + \left[ (i-1)\, n_{c,k} + j \right] \alpha \tag{1}$$

where $\alpha \ll 1$ is the linear noise coefficient. The size of $M_{n,k}$ is $n_{r,k} \times n_{c,k}$, where $n_{r,k}$ and $n_{c,k}$ are the numbers of rows and columns of $M_{n,k}$, respectively; $k = 1$ denotes parameters of the reference image and $k = 2$ those of the registered image.
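To make the step concrete, below is a minimal NumPy sketch of this preprocessing, assuming a grayscale image stored as a 2D array; the function name and the default value of α are illustrative and not taken from the original MATLAB implementation.

```python
import numpy as np

def add_linear_background(image, alpha=1e-3):
    """Impose a small linear ramp on the image intensity (Equation (1)) so
    that constant-intensity regions (e.g., saturated areas) no longer hold
    strictly equal values and local peaks remain well defined."""
    M = image.astype(np.float64)
    n_r, n_c = M.shape                        # number of rows and columns
    i = np.arange(1, n_r + 1)[:, None]        # 1-based row index, as in Eq. (1)
    j = np.arange(1, n_c + 1)[None, :]        # 1-based column index
    return M + ((i - 1) * n_c + j) * alpha    # alpha << 1 keeps the ramp small
```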

2.2. Feature Point Detection

In step 2, we utilize the local peak points of multiscale images as the feature points (Figure 2b). Both the reference image and the registered image are partitioned into square interrogation windows of size $L$ at varying scales, e.g., $L = 32$ to $128$. The maximum and minimum points in each interrogation window are collected as feature points, which can be formulated as

$$M_{n,k,\max}(p,L) = \max_{i,j \in [0,L]} M_{n,p,k}(i,j), \qquad M_{n,k,\min}(p,L) = \min_{i,j \in [0,L]} M_{n,p,k}(i,j) \tag{2}$$

where $M_{n,p,k}(i,j)$ is the $p$th interrogation window of $L \times L$ pixels.
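A sketch of this window-based peak detection follows, assuming non-overlapping windows that tile the image; partial windows at the borders are skipped for brevity.

```python
import numpy as np

def detect_local_peaks(M, window_sizes=(32, 64, 128)):
    """Collect the maximum and minimum pixel of every L x L interrogation
    window as feature points (Equation (2)), for each scale L."""
    n_r, n_c = M.shape
    points = []
    for L in window_sizes:
        for r0 in range(0, n_r - L + 1, L):
            for c0 in range(0, n_c - L + 1, L):
                win = M[r0:r0 + L, c0:c0 + L]
                for flat in (win.argmax(), win.argmin()):
                    dr, dc = divmod(int(flat), L)
                    points.append((r0 + dr, c0 + dc, L))  # (row, col, scale)
    return points
```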

2.3. Feature Point Description

Once all the feature points are gathered, the SIFT feature description vector is employed to characterize them. First, as shown in Figure 2d, around each feature point (represented by a red spot), a square region of size $w$ is extracted, with

$$w = \beta L d, \tag{3}$$

where $\beta = \beta_0 (L/L_{\max})^{1/2}$ is an adjustable factor controlling the size of the square region, and $\beta_0$ is the value of $\beta$ when the interrogation window size $L$ equals its maximum value $L_{\max}$ (e.g., 128). To ensure that $w$ does not exceed $L$, it requires

$$\beta_0 \le \frac{1}{d} \left( \frac{L}{L_{\max}} \right)^{-1/2}. \tag{4}$$
We further divide the square region into $d \times d$ subregions, e.g., $d = 4$, as commonly used [48]. Each subregion $M_{n,k,s}$ contains $w_s \times w_s$ pixels, with $w_s = \beta L$ as the side length of the subregion. At each pixel $(i,j)$ in this subregion, the magnitude $f_k(i,j)$ and orientation $\theta_k(i,j)$ of the image gradient can be calculated as

$$f_k(i,j) = \sqrt{a^2 + b^2}, \qquad \theta_k(i,j) = \arctan(b/a), \tag{5}$$
$$a = M_{n,k,s}(i+1,j) - M_{n,k,s}(i-1,j), \qquad b = M_{n,k,s}(i,j+1) - M_{n,k,s}(i,j-1).$$

Then, the distribution of the image gradient in eight directions [48] can be determined from $\theta_k(i,j)$ and $f_k(i,j)$. After the calculation in eight directions for each subregion, a 128-element array is obtained, i.e., $d \times d \times 8$ when $d = 4$. This 128-element array is then used as the feature description vector ($d_f$) of the feature point [32]. Because the array is determined statistically over a wide area, matching robustness is maintained.
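A compact sketch of the descriptor construction is shown below, with an illustrative default β₀ = 0.2; unlike the full SIFT descriptor, orientation normalization and Gaussian weighting are omitted for brevity.

```python
import numpy as np

def describe_point(M, row, col, L, d=4, beta0=0.2, L_max=128):
    """Build a SIFT-style 128-element descriptor around one feature point:
    a w x w patch (w = beta*L*d, Equation (3)) is split into d x d
    subregions, and an 8-bin gradient-orientation histogram, weighted by
    gradient magnitude (Equation (5)), is accumulated in each."""
    beta = beta0 * np.sqrt(L / L_max)
    w = max(d, int(round(beta * L * d)))
    w -= w % d                               # make w divisible by d
    half = w // 2
    if row < half or col < half:             # point too close to the border
        return None
    patch = M[row - half:row + half, col - half:col + half].astype(np.float64)
    if patch.shape != (w, w):
        return None
    a = np.zeros_like(patch)                 # central differences, Eq. (5)
    b = np.zeros_like(patch)
    a[1:-1, :] = patch[2:, :] - patch[:-2, :]
    b[:, 1:-1] = patch[:, 2:] - patch[:, :-2]
    mag = np.hypot(a, b)                     # gradient magnitude f
    ang = np.arctan2(b, a) % (2 * np.pi)     # gradient orientation theta
    bins = np.minimum((ang / (2 * np.pi) * 8).astype(int), 7)
    ws = w // d                              # subregion side length
    desc = np.zeros((d, d, 8))
    for u in range(d):
        for v in range(d):
            s = (slice(u * ws, (u + 1) * ws), slice(v * ws, (v + 1) * ws))
            np.add.at(desc[u, v], bins[s].ravel(), mag[s].ravel())
    desc = desc.ravel()                      # d*d*8 = 128 elements when d = 4
    return desc / (np.linalg.norm(desc) + 1e-12)
```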

2.4. Feature Point Matching

The position information (i.e., the pixel coordinates) and the feature description vector of each feature point in the reference image are denoted as $p_{v1}$ and $d_{f1}$, respectively. The corresponding quantities in the registered image are denoted as $p_{v2}$ and $d_{f2}$. The difference ($\Delta$) between the feature description vectors is then

$$\Delta = \left\| d_{f1} - d_{f2} \right\|_2. \tag{6}$$

The smaller the difference $\Delta$ between two feature description vectors, the more similar they are. The preset threshold of $\Delta$ is denoted as $s$; it signifies the required similarity between the feature description vectors, thereby determining the number of matching points to be retained. Only when $\Delta < s$ are the $p_{v1}$, $p_{v2}$, $d_{f1}$, and $d_{f2}$ of the matched pair stored for image stitching.
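A matching sketch follows, assuming descriptors are stacked row-wise in arrays; the threshold value is illustrative, and the brute-force distance matrix is adequate for the moderate point counts that LP-SIFT produces.

```python
import numpy as np

def match_descriptors(desc1, pts1, desc2, pts2, s=0.3):
    """For each reference descriptor, find the nearest registered descriptor
    by Euclidean distance (Equation (6)) and keep the pair only if the
    distance Delta is below the preset threshold s."""
    D1 = np.asarray(desc1)                   # shape (N1, 128)
    D2 = np.asarray(desc2)                   # shape (N2, 128)
    # all pairwise Euclidean distances, shape (N1, N2)
    dists = np.linalg.norm(D1[:, None, :] - D2[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        j = int(row.argmin())
        if row[j] < s:                       # Delta < s: keep the matched pair
            matches.append((pts1[i], pts2[j]))
    return matches
```

In practice, Lowe's ratio test (comparing the nearest and second-nearest distances) is a common alternative to an absolute threshold.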

2.5. Image Stitching

Based on the matched pairs obtained previously, the image stitching process utilizes the Random Sample Consensus (RANSAC) algorithm [49]. RANSAC operates under the fundamental assumption that the sample comprises both accurate data (inliers, data that conform to the model) and abnormal data (outliers, data that deviate from the expected range and do not fit the model), which may result, for instance, from noise [50] or from improper measurements, assumptions, or calculations. Furthermore, RANSAC assumes that, given an accurate dataset, the model parameters can be consistently computed.
As shown in Figure 2f, a homography matrix ($H$) is employed to describe the perspective transformation between a plane in the real world and its corresponding image, and it is used to transform the image from one viewpoint to another. The relationship between the matched pairs is

$$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = H \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \tag{7}$$

where $(x', y')$ are the pixel coordinates of the matched points in the reference image and $(x, y)$ are the pixel coordinates of the matching points in the registered image. $H$ can be expressed as

$$H = \begin{pmatrix} \cos\theta & -\sin\theta & t_x \\ \sin\theta & \cos\theta & t_y \\ 0 & 0 & 1 \end{pmatrix} \tag{8}$$

where $\theta$ is the angular difference between the reference image and the registered image, and $t_x$ and $t_y$ are the translational differences between them. All quantities in Equation (8) are determined from the matched image pairs.
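The stitching step can be sketched with OpenCV as below. Note that the paper restricts H to the rigid form of Equation (8) (rotation plus translation), whereas cv2.findHomography estimates a general homography, so this is a stand-in rather than the authors' exact implementation; the canvas size is also a simplifying assumption.

```python
import cv2
import numpy as np

def stitch_pair(ref_img, reg_img, pts_ref, pts_reg, reproj_thresh=3.0):
    """Estimate the transform between matched (x, y) pixel coordinates with
    RANSAC and warp the registered image onto a canvas holding the
    reference image."""
    src = np.float32(pts_reg).reshape(-1, 1, 2)   # points in registered image
    dst = np.float32(pts_ref).reshape(-1, 1, 2)   # corresponding reference points
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    if H is None:
        raise RuntimeError("RANSAC failed to find a transform")
    h, w = ref_img.shape[:2]
    canvas = cv2.warpPerspective(reg_img, H, (2 * w, h))  # assumed canvas size
    canvas[:h, :w] = ref_img   # overwrite the overlap with the reference image
    return canvas, inlier_mask
```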

3. Results of Stitching on Two Images

In this section, we present a performance evaluation of the LP-SIFT algorithm, along with comparative results from other algorithms: SIFT, ORB, BRISK, and SURF. Table 1 lists the hardware and software resources used in this study. The LP-SIFT algorithm was written in MATLAB 2021a with parallel computing. The SIFT code [51] was downloaded from a public repository. The code packages of ORB, SURF, and BRISK are from the Computer Vision Toolbox of MATLAB.
To ensure consistent comparison across different scenarios, the SIFT, ORB, BRISK, SURF, and LP-SIFT algorithms were utilized to compute feature points and feature description vectors in the two images. Subsequently, the RANSAC algorithm was employed to stitch the images together.

3.1. Datasets

To evaluate the performance of the LP-SIFT algorithm, a range of datasets encompassing various scenarios, pixel sizes, and levels of distortion was studied. These datasets include a rich assortment of illumination conditions and structural features in both natural and artificial environments. Two datasets were used: Dataset-A and Dataset-B. Dataset-A contains three image pairs commonly adopted in relevant studies, namely, (1) mountain [52], (2) street view [53], and (3) terrain [54]. Dataset-B was captured by the camera of a mobile phone (PGKM10, OnePlus) with a resolution of 6 megapixels. It contains three image pairs: (1) building, (2) campus view (translation), and (3) campus view (rotation). The image pairs of the datasets are shown in Figure 3 as examples.

3.2. Evaluation Metrics

Since the datasets selected in this study were all captured from actual scenes without a reference image, it is difficult to use the structural similarity (SSIM) [55] or the peak signal-to-noise ratio (PSNR) [56] to evaluate the image stitching results. To compare the stitching results of different methods, indicators such as the average gradient (AG) [57] and the spatial frequency (SF) [58] are employed instead.
AG reflects the detail contrast and texture transformation in the image and can be used to evaluate the quality of the fused image. In image stitching, the larger the AG, the better the stitching quality. AG is defined as follows [57]:

$$G = \frac{1}{c\,r} \sum_{i=1}^{c} \sum_{j=1}^{r} \sqrt{ \frac{ \left[ M(i+1,j) - M(i-1,j) \right]^2 + \left[ M(i,j+1) - M(i,j-1) \right]^2 }{2} } \tag{9}$$

where $c$ is the number of rows of the image and $r$ is the number of columns.
SF is another evaluation metric, reflecting the rate of change of the image gray level. The larger the SF, the clearer the image, particularly for image fusion after stitching. SF is calculated as follows [58]:

$$SF = \sqrt{ \frac{1}{c\,r} \sum_{i=1}^{c} \sum_{j=1}^{r} \left[ M(i,j) - M(i,j-1) \right]^2 + \frac{1}{c\,r} \sum_{i=1}^{c} \sum_{j=1}^{r} \left[ M(i,j) - M(i-1,j) \right]^2 } \tag{10}$$
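Both metrics are straightforward to compute; a NumPy sketch is given below. As a small simplification, the means here run over the interior pixels for which the differences are defined, rather than strictly over c × r terms.

```python
import numpy as np

def average_gradient(M):
    """Average gradient (Equation (9)): mean RMS of the central-difference
    gradients; larger values indicate richer detail contrast."""
    M = M.astype(np.float64)
    gx = M[2:, 1:-1] - M[:-2, 1:-1]              # difference along rows
    gy = M[1:-1, 2:] - M[1:-1, :-2]              # difference along columns
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def spatial_frequency(M):
    """Spatial frequency (Equation (10)): combined row/column gray-level
    change rate; larger values indicate a clearer image."""
    M = M.astype(np.float64)
    rf2 = np.mean((M[:, 1:] - M[:, :-1]) ** 2)   # row-direction term
    cf2 = np.mean((M[1:, :] - M[:-1, :]) ** 2)   # column-direction term
    return float(np.sqrt(rf2 + cf2))
```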

3.3. Images of Small Size

Small-sized images are commonly employed in fields such as medical imaging [18], industrial inspection [59,60,61], and bridge inspection [62]. Hence, the performance of image stitching combining LP-SIFT and RANSAC is first assessed for these small-sized images. In our investigation, the images in the mountain and street view datasets have small sizes: 602 × 400 pixels and 653 × 490 pixels, respectively. The stitching results are collectively shown in Figure 4, incorporating those obtained using SIFT, ORB, BRISK, SURF, and LP-SIFT. The corresponding parameters of the stitching process are summarized in Table 2. All five feature point detection algorithms, when combined with RANSAC, can successfully stitch the two images. We computed the AG and SF of the stitching results. While the SIFT algorithm yielded the highest values for the mountain dataset and the ORB algorithm yielded the highest values for the street view dataset, the differences among the five algorithms are very small, and the stitching effect across them is comparable. However, there are significant differences in computation time. Note that the computation time encompasses feature point detection, feature description vector calculation, pair matching, and image stitching.
For the mountain dataset, the computation times are 101.21 s for SIFT, 0.71 s for ORB, 1.30 s for BRISK, 0.85 s for SURF, and 1.16 s for LP-SIFT; ORB takes the least time, while SIFT takes the most. The comparison is clearer in Figure 5. For the street view dataset, the computation times are 226.62 s for SIFT, 2.27 s for ORB, 2.22 s for BRISK, 2.54 s for SURF, and 2.05 s for LP-SIFT; here LP-SIFT takes the least time, while SIFT again takes the most. Furthermore, the stitching outcomes produced by the SIFT algorithm exhibit notable misalignment at the seams, whereas the other algorithms maintain alignment without noticeable discrepancies. In summary, SIFT is the most computationally intensive, with occasional misalignment in its stitching results, while the stitching times of LP-SIFT, ORB, BRISK, and SURF are on the same level.

3.4. Images of Medium Size

For the high-quality visual presentation of photos [63], videos [64], and other scenes, 1080P images (1080 × 1920 pixels) are commonly utilized. In our data, the image pairs in the terrain and building datasets have a medium size: the former is 1024 × 768 pixels and the latter 1080 × 1920 pixels.
The stitching results are collectively shown in Figure 6, incorporating those obtained using SIFT, ORB, BRISK, SURF, and LP-SIFT. The corresponding parameters of the stitching process are summarized in Table 2. For the terrain dataset, all five feature point detection algorithms, when combined with RANSAC, can successfully stitch the two images. The SIFT algorithm takes up to 1674.87 s to accomplish the stitching; in contrast, the ORB, BRISK, SURF, and LP-SIFT algorithms take 15.77 s, 3.20 s, 5.16 s, and 4.47 s, respectively. Furthermore, the LP-SIFT algorithm yields the highest evaluation metrics among the stitching results.
For the building dataset, only four feature point detection algorithms, when combined with RANSAC, can successfully stitch the two images. Although the SURF algorithm yielded the highest evaluation metrics, the differences among the four algorithms are within 2%, which is negligible. The ORB algorithm takes 327.25 s, the BRISK algorithm 4.08 s, the SURF algorithm 1.28 s, and the LP-SIFT algorithm 2.03 s. Since the SIFT algorithm ran for more than 10⁴ s without returning any matching result, the computation was terminated without a stitching time. In this set, the BRISK, SURF, and LP-SIFT algorithms require significantly less time than SIFT and ORB; the latter experiences a notable increase in the number of feature points as the image size is enlarged.

3.5. Images of Large Size with Translational Displacement

Large-sized images are frequently employed in mobile photography [65,66], satellite remote sensing [67,68], UAV aerial photography [23,69], and other fields. In our data, the image pairs in the campus view (translation) dataset have a large size of 3072 × 4096 pixels.
The stitching results are collectively shown in Figure 7a, incorporating those obtained using BRISK, SURF, and LP-SIFT. The corresponding parameters of the stitching process are summarized in Table 2. For the campus view (translation) dataset, only three feature point detection algorithms, when combined with RANSAC, can successfully stitch the two images. Although the evaluation metrics computed for the SURF algorithm are the highest, the differences from those of BRISK and LP-SIFT are less than 1%. Therefore, the three algorithms show a similar stitching effect: the BRISK algorithm takes 195.44 s, the SURF algorithm 6.52 s, and the LP-SIFT algorithm 4.49 s. Since the execution time of SIFT exceeded 10⁴ s without yielding any results, the computation was halted at that point. The ORB algorithm, on the other hand, detected an excessive number of feature points, surpassing the computer's memory capacity and leading to a stitching failure. Among the three successful algorithms, LP-SIFT exhibits an order-of-magnitude improvement in stitching efficiency. One key advantage of LP-SIFT is its adjustable interrogation window size, which allows the number of feature points to be controlled even in extremely large images with a high signal-to-noise ratio (SNR). By applying a larger interrogation window, LP-SIFT effectively limits the number of feature points, resulting in a faster stitching process than the other algorithms. This efficiency makes LP-SIFT a compelling choice for image stitching tasks, especially where computational resources are limited.

3.6. Images of Large Size with Rotational Displacement

The SIFT algorithm has rotation invariance. In this section, we demonstrate that the LP-SIFT algorithm also preserves rotation invariance. The image pairs in the campus view (rotation) dataset, with a large size of 3072 × 4096 pixels, are used. The image pairs were captured separately by the same camera and are not artificially rotated from each other. The SIFT, ORB, BRISK, SURF, and LP-SIFT algorithms were employed to detect feature points and compute feature description vectors, and the RANSAC algorithm was used to stitch the images. Figure 7b depicts the stitching results achieved by combining the SURF and LP-SIFT feature point detection algorithms with RANSAC, along with the stitching parameters summarized in Table 2. Both algorithms can successfully stitch the two images. Relative to SURF, the LP-SIFT algorithm provides a better stitching effect according to the higher evaluation metrics, in addition to a shorter stitching time: the SURF algorithm takes 11.42 s, while the LP-SIFT algorithm takes 4.58 s, only 40% of the computation time of SURF. Similar to the results in Section 3.4, the SIFT algorithm requires an unacceptably long time for stitching, while the ORB and BRISK algorithms fail due to memory overflow. Therefore, LP-SIFT is capable of stitching images with rotational displacements, and is particularly fast for large images.

3.7. Discussion

Overall, the LP-SIFT algorithm shows evaluation metrics comparable to the other stitching algorithms across all five datasets, with the same or even better robustness than algorithms such as ORB and BRISK. Compared to the original SIFT method, the speed of feature point detection of LP-SIFT is improved by 109 times for small-size images and by orders of magnitude for larger images. The time consumption of SIFT is primarily attributed to the calculation of feature description vectors, particularly when many feature points are detected, as in the case of large images. ORB and BRISK face challenges due to the rapid increase in feature points with image size. This may not matter when computational resources are ample, e.g., on a commercial workstation, but it can be an obstacle for applications on portable computation systems. The SURF algorithm is applicable to stitching images of different sizes, but its overall efficiency is lower than that of LP-SIFT.
Relative to SIFT, ORB, BRISK, and SURF, LP-SIFT can flexibly adjust the interrogation window size used to determine the multiscale local peaks. The number of feature points can therefore be well controlled without increasing significantly with image size, which is the advantage of replacing the Gaussian pyramid and difference pyramid feature point detection. This is why LP-SIFT can perform fast image stitching for large images. Note that LP-SIFT is programmed in MATLAB without GPU acceleration; if it were implemented in C/C++ with GPU acceleration, the computational efficiency could be significantly improved.
On the other hand, similar to conventional image stitching techniques, the LP-SIFT algorithm faces challenges when dealing with periodic structures, or in environments with weak texture features and high noise (low signal-to-noise ratio). Furthermore, if the images contain more small-scale content (high-frequency components), more feature points on small scales may be inevitable; the interrogation window size may then need to be reduced, and accordingly the number of feature points and the computational time will increase.

4. Mosaic of Multiple Images without Prior Knowledge

In various application scenes, e.g., criminal investigation [70], remote sensing monitoring [68], and UAV aerial photography [69], images may be fragmented, with unknown positions, angles, and sequences that need to be restored. Thus, a mosaic strategy for stitching multiple images, which is more complex than the two-image case, is also crucial. Here we propose such a strategy for combining multiple images without prior knowledge.
As illustrated in Figure 8, the first step uses LP-SIFT to compute the homography matrix between each pair of images within a given dataset; the results are stored in the matrix $H_M$. This step establishes the transformation relationship between individual image pairs and can be parallelized on the CPU to save computation time.
Subsequently, the number of nonzero elements in each row or column of $H_M$ is counted, and the reference image is selected as the one corresponding to the row or column with the highest count. This guarantees that the reference image with the most neighbors (images that can be stitched with it) is stitched first, reducing the subsequent iterations and saving the time required to produce the mosaic. The images matched with the reference image are stitched in this round, and the unmatched images are stored in the unmatched set. In the next round, another reference image is selected from the unmatched set according to $H_M$. By repeating this process, the mosaic terminates when the unmatched set is empty, as sketched below.
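The round-by-round reference selection can be written as follows, assuming a boolean adjacency matrix derived from $H_M$ (True where a valid homography was found between two images); this greedy ordering illustrates the strategy rather than reproducing the authors' exact code.

```python
def mosaic_order(can_match):
    """Plan the stitching rounds: each round picks as reference the image
    matched with the most remaining images, stitches its matched neighbors,
    and repeats until the unmatched set is empty."""
    remaining = set(range(len(can_match)))
    rounds = []
    while remaining:
        ref = max(remaining,
                  key=lambda i: sum(can_match[i][j] for j in remaining if j != i))
        neighbors = [j for j in remaining if j != ref and can_match[ref][j]]
        if not neighbors and len(remaining) > 1:
            break                 # no remaining image can be connected; stop
        rounds.append((ref, neighbors))
        remaining -= {ref, *neighbors}
    return rounds                 # list of (reference index, neighbor indices)
```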
Figure 9a depicts the original image captured in this experiment; its size is 6400 × 4270 pixels. Figure 9b shows the fragmented images randomly divided from the original image. The approach outlined in Figure 8 is employed to seamlessly merge the image fragments in Figure 9b. As shown in Figure 9c, the result coincides closely with the original image in Figure 9a. The stitching time is 158.94 s, which is acceptable for a wide range of applications.

5. Conclusions

In this study, we propose a fast feature point detection algorithm, namely local-peak scale-invariant feature transform (LP-SIFT), which integrates the concept of local extremal points or image peaks at the multiscale level with SIFT. By integrating LP-SIFT and RANSAC in image stitching, significant improvements in stitching speed are achieved compared to the original SIFT method. Furthermore, LP-SIFT was evaluated against ORB, BRISK, and SURF for stitching images of varying sizes. Due to its adaptability in adjusting the interrogation window size, LP-SIFT demonstrates a minimal increase in the number of detected feature points with increasing image size, resulting in a noticeable reduction in stitching time, particularly for large-scale cases. Additionally, we also provide a strategy for seamlessly stitching multiple images using LP-SIFT without prior knowledge. It is anticipated that LP-SIFT will contribute to diverse application scenarios such as terrain mapping, biological analysis, and even criminal investigations.

Author Contributions

H.L.: Investigation, Visualization, Writing—Original draft preparation, Software. L.W.: Methodology, Writing—Reviewing and Editing. T.Z.: Writing—Reviewing and Editing. W.Z.: Supervision, Conceptualization, Methodology, Data curation, Validation, Writing—Reviewing and Editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant number 51927804).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions. During the preparation of this work, the authors used ChatGPT 4.0 to check presentation, grammar, and spelling. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Deshmukh, P.; Paikrao, P. A Review of Various Image Mosaicing Techniques. In Proceedings of the 2019 Innovations in Power and Advanced Computing Technologies (i-PACT), Vellore, India, 22–23 March 2019. [Google Scholar] [CrossRef]
  2. Lee, W.T.; Chen, H.I.; Chen, M.S.; Shen, I.C.; Chen, B.Y. High-resolution 360 Video Foveated Stitching for Real-Time VR. Comput. Graph. Forum 2017, 36, 115–123. [Google Scholar] [CrossRef]
  3. Saeed, S.; Kakli, M.U.; Cho, Y.; Seo, J.; Park, U. A High-Quality Vr Calibration and Real-Time Stitching Framework Using Preprocessed Features. IEEE Access 2020, 8, 190300–190311. [Google Scholar] [CrossRef]
  4. Dang, P.; Zhu, J.; Zhou, Y.; Rao, Y.; You, J.; Wu, J.; Zhang, M.; Li, W. A 3D-Panoramic Fusion Flood Enhanced Visualization Method for VR. Environ. Model. Softw. 2023, 169, 105810. [Google Scholar] [CrossRef]
  5. Liu, Z.; Chang, S. A Study of Digital Exhibition Visual Design Led by Digital Twin and VR Technology. Meas. Sens. 2024, 31, 100970. [Google Scholar] [CrossRef]
  6. Greibe, T.; Anhøj, T.A.; Johansen, L.S.; Han, A. Quality Control of Jeol Jbx-9500fsz E-Beam Lithography System in a Multi-User Laboratory. Microelectron. Eng. 2016, 155, 25–28. [Google Scholar] [CrossRef]
  7. Pan, J.; Liu, W.; Ge, P.; Li, F.; Shi, W.; Jia, L.; Qin, H. Real-Time Segmentation and Tracking of Excised Corneal Contour by Deep Neural Networks for Dalk Surgical Navigation. Comput. Methods Programs Biomed. 2020, 197, 105679. [Google Scholar] [CrossRef]
  8. Wang, Y.; Yang, J. Origin of Organic Matter Pore Heterogeneity in Oil Mature Triassic Chang-7 Mudstones, Ordos Basin, China. Int. J. Coal Geol. 2024, 283, 104458. [Google Scholar] [CrossRef]
  9. Zhou, Q.; Zhou, Z. Web-Based Mixed Reality Video Fusion with Remote Rendering. Virtual Real. Intell. Hardw. 2023, 5, 188–199. [Google Scholar] [CrossRef]
  10. He, Z.; He, Z.; Li, S.; Yu, Y.; Liu, K. A Ship Navigation Risk Online Prediction Model Based on Informer Network Using Multi-Source Data. Ocean Eng. 2024, 298, 117007. [Google Scholar] [CrossRef]
  11. Wang, B.; Gou, S.; Di, K.; Wan, W.; Peng, M.; Zhao, C.; Zhang, Y.; Xie, B. Rock Size-Frequency Distribution Analysis at the Zhurong Landing Site Based on Navigation and Terrain Camera Images along the Entire Traverse. Icarus 2024, 413, 116001. [Google Scholar] [CrossRef]
  12. Cao, M.; Zheng, L.; Jia, W.; Liu, X. Constructing Big Panorama from Video Sequence Based on Deep Local Feature. Image Vis. Comput. 2020, 101, 103972. [Google Scholar] [CrossRef]
  13. Lyu, W.; Zhou, Z.; Chen, L.; Zhou, Y. A Survey on Image and Video Stitching. Virtual Real. Intell. Hardw. 2019, 1, 55–83. [Google Scholar] [CrossRef]
  14. Wang, Q.; Reimeier, F.; Wolter, K. Efficient Image Stitching through Mobile Offloading. Electron. Notes Theor. Comput. Sci. 2016, 327, 125–146. [Google Scholar] [CrossRef]
  15. Torres, R.; Mahalingam, G.; Kapner, D.; Trautman, E.T.; Fliss, T.; Seshamani, S.; Perlman, E.; Young, R.; Kinn, S.; Buchanan, J.; et al. A Scalable and Modular Automated Pipeline for Stitching of Large Electron Microscopy Datasets. eLife 2022, 11, e76534. [Google Scholar] [CrossRef]
  16. Ma, B.; Zimmermann, T.; Rohde, M.; Winkelbach, S.; He, F.; Lindenmaier, W.; Dittmar, K.E.J. Use of Autostitch for Automatic Stitching of Microscope Images. Micron 2007, 38, 492–499. [Google Scholar] [CrossRef]
  17. Yang, F.; Deng, Z.-S.; Fan, Q.-H. A Method for Fast Automated Microscope Image Stitching. Micron 2013, 48, 17–25. [Google Scholar] [CrossRef] [PubMed]
  18. Yang, F.; He, Y.; Deng, Z.S.; Yan, A. Improvement of Automated Image Stitching System for DR X-ray Images. Comput. Biol. Med. 2016, 71, 108–114. [Google Scholar] [CrossRef] [PubMed]
  19. Seo, J.-H.; Yang, S.; Kang, M.-S.; Her, N.-G.; Nam, D.-H.; Choi, J.-H.; Kim, M.H. Automated Stitching of Microscope Images of Fluorescence in Cells with Minimal Overlap. Micron 2019, 126, 102718. [Google Scholar] [CrossRef] [PubMed]
  20. Lei, Z.; Liu, X.; Zhao, L.; Chen, L.; Li, Q.; Yuan, T.; Lu, W. A Novel 3D Stitching Method for WLI Based Large Range Surface Topography Measurement. Opt. Commun. 2016, 359, 435–447. [Google Scholar] [CrossRef]
  21. Yang, P.; Ye, S.-w.; Peng, Y.-f. Three-Dimensional Profile Stitching Measurement for Large Aspheric Surface during Grinding Process with Sub-Micron Accuracy. Precis. Eng. 2017, 47, 62–71. [Google Scholar] [CrossRef]
  22. Kim, W.Y.; Seo, B.W.; Lee, S.H.; Lee, T.G.; Kwon, S.; Chang, W.S.; Nam, S.-H.; Fang, N.X.; Kim, S.; Cho, Y.T. Quasi-Seamless Stitching for Large-Area Micropatterned Surfaces Enabled by Fourier Spectral Analysis of Moiré Patterns. Nat. Commun. 2023, 14, 2202. [Google Scholar] [CrossRef]
  23. Feng, A.; Vong, C.N.; Zhou, J.; Conway, L.S.; Zhou, J.; Vories, E.D.; Sudduth, K.A.; Kitchen, N.R. Developing an Image Processing Pipeline to Improve the Position Accuracy of Single UAV Images. Comput. Electron. Agric. 2023, 206, 107650. [Google Scholar] [CrossRef]
  24. Feng, S.; Gao, M.; Jin, X.; Zhao, T.; Yang, F. Fine-Grained Damage Detection of Cement Concrete Pavement Based on UAV Remote Sensing Image Segmentation and Stitching. Measurement 2024, 226, 113844. [Google Scholar] [CrossRef]
  25. Wang, X.; He, N.; Hong, C.; Wang, Q.; Chen, M. Improved Yolox-X Based Uav Aerial Photography Object Detection Algorithm. Image Vis. Comput. 2023, 135, 104697. [Google Scholar] [CrossRef]
  26. Zeng, W.; Deng, Q.; Zhao, X.; Li, D.; Min, X. A Method for Stitching Remote Sensing Images with Delaunay Triangle Feature Constraints. Geocarto Int. 2023, 38, 2285356. [Google Scholar] [CrossRef]
  27. Rui, T.; Hu, Y.; Yang, C.; Wang, D.; Liu, X. Research on Fast Natural Aerial Image Mosaic. Comput. Electr. Eng. 2021, 90, 107007. [Google Scholar] [CrossRef]
  28. Ghosh, D.; Kaabouch, N. A Survey on Image Mosaicing Techniques. J. Vis. Commun. Image Represent. 2016, 34, 1–11. [Google Scholar] [CrossRef]
  29. Ma, Z.; Liu, S. A Review of 3D Reconstruction Techniques in Civil Engineering and Their Applications. Adv. Eng. Inform. 2018, 37, 163–174. [Google Scholar] [CrossRef]
  30. Bonny, M.Z.; Uddin, M.S. Feature-Based Image Stitching Algorithms. In Proceedings of the 2016 International Workshop on Computational Intelligence (IWCI), Dhaka, Bangladesh, 12–13 December 2016; pp. 198–203. [Google Scholar]
  31. Harris, C.; Stephens, M. A Combined Corner and Edge Detector. In Proceedings of the Alvey Vision Conference 1988, Manchester, UK, 31 August–2 September 1988; pp. 23.21–23.26. [Google Scholar]
  32. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  33. Rosten, E.; Drummond, T. Machine Learning for High-Speed Corner Detection. In Proceedings of the Computer Vision–ECCV 2006: 9th European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; Part I 9. pp. 430–443. [Google Scholar]
  34. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  35. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An Efficient Alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
  36. Leutenegger, S.; Chli, M.; Siegwart, R.Y. Brisk: Binary Robust Invariant Scalable Keypoints. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2548–2555. [Google Scholar]
  37. Alcantarilla, P.F.; Bartoli, A.; Davison, A.J. Kaze Features. In Proceedings of the Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; Part VI 12. pp. 214–227. [Google Scholar]
  38. Alcantarilla, P.F.; Nuevo, J.; Bartoli, A. Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces. In Proceedings of the British Machine Vision Conference, Bristol, UK, 9–13 September 2013. [Google Scholar]
  39. Li, G.; Li, T.; Li, F.; Zhang, C. NerveStitcher: Corneal Confocal Microscope Images Stitching with Neural Networks. Comput. Biol. Med. 2022, 151, 106303. [Google Scholar] [CrossRef] [PubMed]
  40. Zhu, F.; Li, J.; Zhu, B.; Li, H.; Liu, G. UAV Remote Sensing Image Stitching via Improved VGG16 Siamese Feature Extraction Network. Expert Syst. Appl. 2023, 229, 120525. [Google Scholar] [CrossRef]
  41. ul-Huda, N.; Ahmad, H.; Banjar, A.; Alzahrani, A.O.; Ahmad, I.; Naeem, M.S. Image Synthesis of Apparel Stitching Defects Using Deep Convolutional Generative Adversarial Networks. Heliyon 2024, 10, e26466. [Google Scholar] [CrossRef]
  42. Wu, Z.; Wu, H. Improved Sift Image Feature Matching Algorithm. In Proceedings of the 2022 2nd International Conference on Computer Graphics, Image and Virtualization (ICCGIV), Chongqing, China, 23–25 September 2022; pp. 223–226. [Google Scholar]
  43. Gan, W.; Wu, Z.; Wang, M.; Cui, X. Image Stitching Based on Optimized SIFT Algorithm. In Proceedings of the 2023 5th International Conference on Intelligent Control, Measurement and Signal Processing (ICMSP), Chengdu, China, 19–21 May 2023; pp. 1099–1102. [Google Scholar]
  44. Li, X.; Li, S. Image Registration Algorithm Based on Improved SIFT. In Proceedings of the 2023 4th International Conference on Electronic Communication and Artificial Intelligence (ICECAI), Guangzhou, China, 12–14 May 2023; pp. 264–267. [Google Scholar]
  45. Wang, L.; Peters, N. The Length-Scale Distribution Function of the Distance between Extremal Points in Passive Scalar Turbulence. J. Fluid Mech. 2006, 554, 457–475. [Google Scholar] [CrossRef]
  46. Peters, N.; Wang, L. Dissipation Element Analysis of Scalar Fields in Turbulence. Comptes Rendus Mécanique 2006, 334, 493–506. [Google Scholar] [CrossRef]
  47. Wang, L.P.; Huang, Y.X. Multi-Level Segment Analysis: Definition and Application in Turbulent Systems. J. Stat. Mech. Theory Exp. 2015, 2015, P06018. [Google Scholar] [CrossRef]
  48. Lowe, D.G. Object Recognition from Local Scale-Invariant Features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 1150–1157. [Google Scholar]
  49. Zhang, Y.; Xie, Y. Adaptive Clustering Feature Matching Algorithm Based on Sift and Ransac. In Proceedings of the 2021 2nd International Conference on Electronics, Communications and Information Technology (CECIT), Sanya, China, 27–29 December 2021; pp. 174–179. [Google Scholar]
  50. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  51. Wu, H. Image Stitching. Available online: https://github.com/haoningwu3639/ImageStitching (accessed on 22 June 2021).
  52. Zaragoza, J.; Chin, T.-J.; Brown, M.S.; Suter, D. As-Projective-as-Possible Image Stitching with Moving DLT. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 2339–2346. [Google Scholar]
  53. Hernandez Zaragoza, J.C. As-Projective-as-Possible Image Stitching with Moving DLT. Ph.D. Thesis, The University of Adelaide, Adelaide, Australia, 2014. [Google Scholar]
  54. Vedaldi, A.; Fulkerson, B. Vlfeat: An Open and Portable Library of Computer Vision Algorithms. In Proceedings of the 18th ACM International Conference on Multimedia (MM ‘10), Firenze, Italy, 25–29 October 2010. [Google Scholar]
  55. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  56. Hore, A.; Ziou, D. Image Quality Metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
  57. Xu, S.; Xu, Y. Objective Evaluation Method of Fusion Performance for Remote Sensing Image Based on Matlab. Sci. Surv. Mapp. 2008, 33, 143–145. [Google Scholar] [CrossRef]
  58. Heilbronner, R.; Barrett, S. Image Analysis in Earth Sciences: Microstructures and Textures of Earth Materials; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 129. [Google Scholar]
  59. Zhang, D.; Jackson, W.; Dobie, G.; West, G.; MacLeod, C. Structure-from-Motion Based Image Unwrapping and Stitching for Small Bore Pipe Inspections. Comput. Ind. 2022, 139, 103664. [Google Scholar] [CrossRef]
  60. Chatterjee, S.; Issac, K.K. Viewpoint Planning and 3D Image Stitching Algorithms for Inspection of Panels. NDT E Int. 2023, 137, 102837. [Google Scholar] [CrossRef]
  61. Popovych, S.; Macrina, T.; Kemnitz, N.; Castro, M.; Nehoran, B.; Jia, Z.; Bae, J.A.; Mitchell, E.; Mu, S.; Trautman, E.T.; et al. Petascale pipeline for precise alignment of images from serial section electron microscopy. Nat. Commun. 2024, 15, 289. [Google Scholar] [CrossRef]
  62. Xie, R.; Yao, J.; Liu, K.; Lu, X.; Liu, Y.; Xia, M.; Zeng, Q. Automatic Multi-Image Stitching for Concrete Bridge Inspection by Combining Point and Line Features. Autom. Constr. 2018, 90, 265–280. [Google Scholar] [CrossRef]
  63. Zhu, W.; Liu, L.; Jiang, G.; Yin, S.; Wei, S. A 135-Frames/s 1080p 87.5-mw Binary-Descriptor-Based Image Feature Extraction Accelerator. IEEE Trans. Circuits Syst. Video Technol. 2015, 26, 1532–1543. [Google Scholar] [CrossRef]
  64. Zhang, X.; Sun, H.; Chen, S.; Zheng, N. VLSI Architecture Exploration of Guided Image Filtering for 1080P@ 60Hz Video Processing. IEEE Trans. Circuits Syst. Video Technol. 2016, 28, 230–241. [Google Scholar] [CrossRef]
  65. Bordallo-Lopez, M.; Silvén, O.; Tico, M.; Vehviläinen, M. Creating Panoramas on Mobile Phones. In Proceedings of the Computational Imaging V, San Jose, CA, USA, 29–31 January 2007; pp. 54–63. [Google Scholar]
  66. Xiong, Y.; Pulli, K. Fast Panorama Stitching for High-Quality Panoramic Images on Mobile Phones. IEEE Trans. Consum. Electron. 2010, 56, 298–306. [Google Scholar] [CrossRef]
  67. Wang, L.; Zhang, Y.; Wang, T.; Zhang, Y.; Zhang, Z.; Yu, Y.; Li, L.J.R.S. Stitching and Geometric Modeling Approach Based on Multi-Slice Satellite Images. Remote Sens. 2021, 13, 4663. [Google Scholar] [CrossRef]
  68. Huang, B.; Collins, L.M.; Bradbury, K.; Malof, J.M. Deep Convolutional Segmentation of Remote Sensing Imagery: A Simple and Efficient Alternative to Stitching Output Labels. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 6899–6902. [Google Scholar]
  69. Ren, M.; Li, J.; Song, L.; Li, H.; Xu, T. MLP-Based Efficient Stitching Method for UAV Images. IEEE Geosci. Remote Sens. Lett. 2022, 19, 2503305. [Google Scholar] [CrossRef]
  70. Sansoni, G.; Trebeschi, M.; Docchio, F.J.S. State-of-the-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation. Sensors 2009, 9, 568–601. [Google Scholar] [CrossRef]
Figure 1. Framework of the LP-SIFT method.
Figure 2. Diagram of the LP-SIFT method.
Figure 3. Datasets. Dataset-A: (a) mountain [52] dataset image pair, (b) street view [53] dataset image pair, (c) terrain [54] dataset image pair. Dataset-B: (d) building dataset image pair, (e) campus view dataset (translation) image pair, (f) campus view dataset (rotation) image pair.
Figure 4. Stitching results of the mountain dataset and street view dataset. (a) Mountain dataset stitched by SIFT, ORB, BRISK, SURF, and LP-SIFT, respectively. In LP-SIFT, L = [32,40]. (b) Street view dataset stitched by SIFT, ORB, BRISK, SURF, and LP-SIFT, respectively. In LP-SIFT, L = [32,40].
Figure 5. Comparison of the stitching times of the five algorithms for different datasets.
Figure 6. Stitching results of the terrain dataset and building dataset. (a) Terrain dataset stitched by SIFT, ORB, BRISK, SURF, and LP-SIFT, respectively. In LP-SIFT, L = [32,64]. (b) Building dataset stitched by ORB, BRISK, SURF, and LP-SIFT, respectively. In LP-SIFT, L = [100,128].
Figure 7. Stitching results of the campus view dataset. (a) Campus view (translation) dataset stitched by BRISK, SURF, and LP-SIFT, respectively. In LP-SIFT, L = [256,512]. (b) Campus view (rotation) dataset stitched by SURF and LP-SIFT, respectively. In LP-SIFT, L = [256,512].
Figure 8. Schematic diagram of LP-SIFT image mosaic of multiple images without prior knowledge.
Figure 9. Mosaic of multiple images by LP-SIFT without prior knowledge, where L = [512,1024]. (a) Original image; the image size is 6400 × 4270 pixels. (b) Fragments randomly divided from the original image with shuffled positions; the size of each fragment is marked below it. (c) Stitching result; the stitching time is 158.94 s.
Table 1. Hardware and software specifications.

| Category | Item | Specification |
|---|---|---|
| Hardware | Operating system | Windows 11, 64-bit |
| Hardware | Processor | Intel Core i9-12900 |
| Hardware | Memory | 64 GB |
| Hardware | Graphics card | NVIDIA GeForce RTX 3090 |
| Software | Platform | MATLAB 2021a |
| Software | Library | Computer Vision Toolbox 10.0 |
| Software | Development environment (SIFT, ORB, BRISK, SURF, LP-SIFT) | MATLAB 2021a |
Table 2. Parameter setup of the different feature point detection algorithms. To maintain consistency for comparison, all were combined with the RANSAC algorithm for stitching. For each dataset, the best values are the minimal stitching time and the largest AG and SF. × denotes a failed computation; "Over size" indicates that the number of feature points exceeded the memory capacity.

| Name | Image Size | Algorithm | Feature Points (ref./reg.) | Matched Pairs | Stitching Time (s) | AG | SF |
|---|---|---|---|---|---|---|---|
| Mountain | Small, 602 × 400 | SIFT | 1496 / 939 | 15 | 101.21 | 6.68 | 27.73 |
| | | ORB | 10,505 / 6926 | 113 | 0.71 | 6.60 | 27.34 |
| | | BRISK | 1416 / 1122 | 75 | 1.30 | 6.51 | 27.20 |
| | | SURF | 638 / 490 | 216 | 0.85 | 6.55 | 27.28 |
| | | LP-SIFT | 487 / 493 | 31 | 1.16 | 6.54 | 27.28 |
| Street view | Small, 653 × 490 | SIFT | 1948 / 2726 | 23 | 226.62 | 6.58 | 23.96 |
| | | ORB | 11,363 / 15,597 | 541 | 2.27 | 7.17 | 25.85 |
| | | BRISK | 2523 / 4104 | 59 | 2.22 | 6.70 | 24.39 |
| | | SURF | 933 / 1149 | 137 | 2.54 | 6.63 | 24.10 |
| | | LP-SIFT | 811 / 812 | 59 | 2.05 | 7.00 | 25.23 |
| Terrain | Medium, 1024 × 768 | SIFT | 9495 / 10,368 | 134 | 1674.87 | 5.41 | 17.33 |
| | | ORB | 3182 / 3182 | 2224 | 15.77 | 5.48 | 17.36 |
| | | BRISK | 8149 / 8306 | 95 | 3.20 | 5.39 | 17.11 |
| | | SURF | 2883 / 3037 | 204 | 5.16 | 5.49 | 17.75 |
| | | LP-SIFT | 1847 / 1837 | 29 | 4.47 | 5.50 | 17.78 |
| Building | Medium, 1080 × 1920 | SIFT | × / × | × | >10⁴ | × | × |
| | | ORB | 107,612 / 108,452 | 9720 | 327.25 | 6.57 | 24.29 |
| | | BRISK | 14,660 / 15,428 | 605 | 4.08 | 6.56 | 24.33 |
| | | SURF | 6123 / 5985 | 1780 | 1.28 | 6.58 | 24.35 |
| | | LP-SIFT | 532 / 484 | 17 | 2.03 | 6.47 | 24.11 |
| Campus view (translation) | Large, 3072 × 4096 | SIFT | × / × | × | >10⁴ | × | × |
| | | ORB | 1,025,750 / 927,050 | Over size | × | × | × |
| | | BRISK | 104,981 / 94,657 | 3299 | 195.44 | 9.37 | 24.87 |
| | | SURF | 27,790 / 25,465 | 6056 | 6.52 | 9.44 | 24.96 |
| | | LP-SIFT | 418 / 403 | 29 | 4.49 | 9.37 | 24.88 |
| Campus view (rotation) | Large, 3072 × 4096 | SIFT | × / × | × | >10⁴ | × | × |
| | | ORB | 1,326,389 / 1,332,929 | Over size | × | × | × |
| | | BRISK | 158,247 / 164,035 | Over size | × | × | × |
| | | SURF | 47,568 / 47,678 | 11,293 | 11.42 | 10.42 | 24.26 |
| | | LP-SIFT | 422 / 429 | 22 | 4.58 | 11.74 | 27.37 |


