Proceeding Paper

Experimental Analysis of Feature-Based Image Registration Methods in Combination with Different Outlier Rejection Algorithms for Histopathological Images †

1 Department of Computer Science and Engineering, Sikkim Manipal Institute of Technology, Sikkim Manipal University, Majhitar 737136, Sikkim, India
2 Department of Computer Applications, Sikkim Manipal Institute of Technology, Sikkim Manipal University, Majhitar 737136, Sikkim, India
* Author to whom correspondence should be addressed.
Presented at the International Conference on Recent Advances on Science and Engineering, Dubai, United Arab Emirates, 4–5 October 2023.
Eng. Proc. 2023, 59(1), 121; https://doi.org/10.3390/engproc2023059121
Published: 26 December 2023
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)

Abstract

Registration involves aligning two or more images by transforming one image into the coordinate system of another. Registration of histopathological slide images is a critical step in many image analysis applications, including disease detection, classification, and prognosis. It is very useful in Computer-Aided Diagnosis (CAD) and allows automatic analysis of tissue images, enabling more accurate detection and prognosis than manual analysis. Due to the complexity and heterogeneity of histopathological images, registration is challenging and requires the careful consideration of various factors, such as tissue deformation, staining variation, and image noise. There are different types of registration, and this work focuses on feature-based image registration specifically. A qualitative analysis of different feature detection and description methods combined with different outlier rejection methods is conducted. The four feature detection and description methods experimentally analyzed are Oriented FAST and Rotated BRIEF (ORB), Binary Robust Invariant Scalable Keypoints (BRISK), KAZE, and Accelerated KAZE (AKAZE), and the three outlier rejection methods examined are Random Sample Consensus (RANSAC), Graph-Cut RANSAC (GC-RANSAC), and Marginalizing Sample Consensus (MAGSAC++). The results are visually and quantitatively analyzed to select the method that gives the most accurate and robust registration of the histopathological dataset at hand. Several evaluation metrics, the number of key points detected, and the number of inliers are used as parameters for evaluating the performance of the different feature detection–description and outlier rejection algorithm pairs. Among all the combinations of methods analyzed, BRISK paired with MAGSAC++ produces the best registration results.

1. Introduction

Histopathology is the examination of cells and tissues under a microscope to detect and characterize diseases. Digital pathology generates high-definition histopathology sections available for automatic analysis, enabling computer-aided diagnosis (CAD), disease grading, prognosis, and other image processing applications. Image registration is the process of putting two images in the same frame of reference by spatially transforming one image to align with the other [1,2,3,4,5]. It is a vital pre-processing step wherever information from multiple images must be combined [6,7,8]. Feature-based registration (FBR) is a type of registration that detects key points and aligns images by matching the corresponding key points between them. FBR detects regions of interest based on distinctiveness, repeatability, and high information content, giving better correspondence matches and fast and accurate registration. Once the key points are extracted, the next step is to match the corresponding points of the image pair. Not all the matched points are good matches (inliers). Removing the bad matches (outliers) from the set of all matched key points can significantly improve the quality of registration. This is performed using an outlier rejection or removal algorithm, which refines the matches to produce a set of inliers. The main objective of this work is to evaluate combinations of feature detection and description (FDD) algorithms with outlier rejection algorithms. Here, four FDD methods, namely Oriented FAST and Rotated BRIEF (ORB), Binary Robust Invariant Scalable Keypoints (BRISK), KAZE, and Accelerated KAZE (AKAZE), are combined with three outlier removal algorithms, Random Sample Consensus (RANSAC), Graph-Cut RANSAC (GC-RANSAC), and Marginalizing Sample Consensus (MAGSAC++), and their results are evaluated. The combination of each feature detection algorithm with the outlier rejection algorithms is shown in Figure 1. Several evaluation metrics, also called similarity metrics, are used for quantitative analysis of the registration results produced by the different methods.
Each of the four FDD methods is combined with the three outlier removal methods, resulting in a total of 12 combinations: ORB with RANSAC (OR), ORB with GC-RANSAC (OGR), ORB with MAGSAC++ (OUM), BRISK with RANSAC (BR), BRISK with GC-RANSAC (BGR), BRISK with MAGSAC++ (BUM), KAZE with RANSAC (KR), KAZE with GC-RANSAC (KGR), KAZE with MAGSAC++ (KUM), AKAZE with RANSAC (AR), AKAZE with GC-RANSAC (AGR), and AKAZE with MAGSAC++ (AUM). A visual and quantitative analysis of each FDD method with the different outlier rejection methods is carried out to find the combination that performs best for the histopathological dataset at hand.
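For illustration, the pairing can be enumerated with a few lines of OpenCV-based Python. This is only a sketch, not the authors' published code: the detector factory functions are the standard OpenCV ones, and the cv2.USAC_ACCURATE (GC-RANSAC) and cv2.USAC_MAGSAC (MAGSAC++) estimation flags are assumed to be available, which requires OpenCV 4.5 or newer.

```python
# Sketch: enumerating the 12 FDD / outlier-rejection pairs with OpenCV.
import itertools
import cv2

detectors = {
    "ORB":   cv2.ORB_create,
    "BRISK": cv2.BRISK_create,
    "KAZE":  cv2.KAZE_create,
    "AKAZE": cv2.AKAZE_create,
}
outlier_methods = {
    "RANSAC":    cv2.RANSAC,         # classic RANSAC
    "GC-RANSAC": cv2.USAC_ACCURATE,  # assumed mapping: GC-RANSAC-based USAC preset
    "MAGSAC++":  cv2.USAC_MAGSAC,    # assumed mapping: MAGSAC++-based estimator
}

# 4 detectors x 3 estimators = 12 combinations (OR, OGR, OUM, ..., BUM)
combinations = list(itertools.product(detectors, outlier_methods))
print(len(combinations))  # 12
```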

2. Related Work

Scale-Invariant Feature Transform (SIFT) is among the earliest developed FDD algorithms and has since become the most popular method for feature detection across all kinds of applications. SIFT revolutionized FDD as it converts image data into scale-invariant coordinates. This makes the features invariant to scale and rotation [9,10] and partially invariant to illumination [11,12]. Most other handcrafted FDD methods are based on SIFT and were developed as improved versions of it. Speeded-Up Robust Features (SURF) is a modified, accelerated version of SIFT [13]. It retains the properties of SIFT and is robust to changes in illumination, rotation, and scale but is much faster [13,14]. ORB is available as an open-source library and is free to use [15,16], making it a popular alternative to the patented SIFT and SURF. It has a matching performance comparable to SIFT and SURF, and its speed makes it applicable for real-time use [17]. BRISK combines the FAST detector with a binary string descriptor built from the results of simple brightness tests [15,16]. Its performance quality is similar to SIFT and SURF, and it is several orders of magnitude faster than SURF. KAZE is a scale-invariant detector built on a non-linear scale space [18]. As a non-linear variant of SIFT, it has similarly good performance and a high computational cost [10]. Thus, an accelerated version of KAZE, called AKAZE, was designed; the method it uses to build the non-linear multiscale representation makes it several orders of magnitude faster than KAZE.
Before computing the transformation, incorrect corresponding points that negatively impact the accuracy of the estimated transformation must be identified and removed [19]. This bad-match removal is performed using an outlier rejection algorithm. RANSAC is simple yet powerful and hence the most widely used outlier rejection algorithm [20,21,22]. It is an iterative process that produces highly accurate estimations of the transformation parameters [22]. RANSAC, though robust, has limitations that affect its precision and efficiency [22], and several variants aiming to improve its efficiency have been introduced [20]. Before choosing any method, the properties of all methods must be carefully considered, as designing a single universal approach that is applicable under all scenarios is unrealistic given the heterogeneity of the image registration process [6,8].
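To make the consensus idea concrete, the following minimal, generic RANSAC loop shows the hypothesize-and-verify cycle: fit a model to a random minimal sample, count the points that agree with it, and keep the model with the largest consensus set. It is an illustrative sketch only; the function names and the toy line-fitting usage are invented for demonstration and are not the estimator used in the experiments.

```python
# A minimal, generic RANSAC loop (illustrative sketch, not OpenCV's implementation).
import numpy as np

def ransac(points, fit, residual, sample_size, threshold, iterations=1000, seed=None):
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(iterations):
        sample = points[rng.choice(len(points), sample_size, replace=False)]
        model = fit(sample)                      # hypothesis from a minimal sample
        inliers = residual(model, points) < threshold
        if inliers.sum() > best_inliers.sum():   # larger consensus set wins
            best_model, best_inliers = fit(points[inliers]), inliers
    return best_model, best_inliers

# Toy usage: fit a line y = a*x + b to points contaminated with gross outliers.
xs = np.linspace(0, 10, 100)
pts = np.column_stack([xs, 2 * xs + 1 + np.random.normal(0, 0.1, 100)])
pts[::10, 1] += 20                               # inject outliers
fit = lambda p: np.polyfit(p[:, 0], p[:, 1], 1)
residual = lambda m, p: np.abs(np.polyval(m, p[:, 0]) - p[:, 1])
model, inliers = ransac(pts, fit, residual, sample_size=2, threshold=0.5)
```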

3. Materials and Methodology

The first step in registering images is detecting and describing the key points that are later matched to align the images. A detector is a feature-detection algorithm used to extract distinctive regions from images [13]. A descriptor is an algorithm that encodes the information of the extracted region, such as its pixel coordinates, orientation, and luminance, so that the region can be uniquely identified. Once the key points are extracted, the corresponding points between the image pair are matched, and the matches are filtered to remove the outliers that negatively affect the accuracy of the registration. The remaining correspondences are used to estimate the transformation matrix. Based on the transformation matrix, the moving image is warped to align with the base image, and registration is complete.
The analysis of the various methods mentioned has been carried out using histopathology images taken from GlaS@MICCAI'2015: Gland Segmentation [23], a publicly available dataset for the accurate segmentation of glands in colon histology images. The images have been augmented using the 'torchvision.transforms' module of the PyTorch library. The 'RandomPerspective' transform applies a random perspective transformation to the images, which is useful for simulating the effect of viewing an object from different angles. The 'distortion_scale' parameter controls the amount of distortion applied to the image; a larger value results in a more distorted image, and here the scale is set to 0.6. The 'p' parameter controls the probability that the transformation is applied to an image; it is set to 1.0, so the transform is always applied. Figure 2 shows a sample image and a set of augmented images.
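A minimal sketch of this augmentation step, assuming the standard torchvision.transforms API (the file name below is a placeholder, and the exact augmentation script used by the authors may differ):

```python
# Sketch of the augmentation described above using torchvision.
from PIL import Image
from torchvision import transforms

# Random perspective warp with the parameters stated in the text.
augment = transforms.RandomPerspective(distortion_scale=0.6, p=1.0)

image = Image.open("glas_sample.bmp")            # hypothetical path to a GlaS image
augmented = [augment(image) for _ in range(4)]   # e.g., four augmented views
```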
A flowchart of the methodology is shown in Figure 3. Image 1 is the first image in the dataset and is used as the fixed or reference image. Image 2 is the moving image and it changes with each iteration. ‘n’ is the total number of images being registered and hence the number of times the loop is executed.
The general steps involved in the image registration are as follows:
  • Step 1: Input image pair for registration (image1 and image2).
  • Step 2: Initialize the feature detector.
  • Step 3: Generate key points for image1 and image2 as kps1 and kps2, respectively.
  • Step 4: Generate descriptors for kps1 and kps2 as des1 and des2, respectively.
  • Step 5: Match key points using a brute-force matcher with the Hamming distance.
  • Step 6: Extract the matched key points.
  • Step 7: Filter the matched key points using the outlier rejection method.
  • Step 8: Find the transformation matrix that maps image2 to image1.
  • Step 9: Find the height, width, and channel of image1.
  • Step 10: Warp image2 using the shape of image1 and the transformation matrix.
All steps in the process are the same for every combination except Step 2 and Step 7. The instance of the particular FDD method to be examined is created in Step 2, and the outlier rejection method to be analyzed is applied in Step 7. The matcher used for all the methods is a brute-force matcher with the Hamming distance. It explores all possible matches between the two sets of features and compares them using the Hamming distance metric. The Hamming distance is a measure of the difference between two binary strings, determined as the number of bit positions in which the two strings differ. Using the Hamming distance, every feature descriptor in one image is compared to every feature descriptor in the other image, and the best match for a feature is the one with the shortest Hamming distance.
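A minimal OpenCV sketch of Steps 1–10 is given below for the BRISK + MAGSAC++ combination; swapping the detector factory or the estimation flag yields the other pairs. The file names are placeholders, and this is an illustration of the described pipeline rather than the authors' exact implementation.

```python
# Illustrative pipeline for one image pair (BRISK + MAGSAC++ shown).
import cv2
import numpy as np

img1 = cv2.imread("fixed.png")             # Step 1: reference (fixed) image
img2 = cv2.imread("moving.png")            #         moving image
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

detector = cv2.BRISK_create()              # Step 2: initialize the FDD method
kps1, des1 = detector.detectAndCompute(gray1, None)   # Steps 3-4: key points + descriptors
kps2, des2 = detector.detectAndCompute(gray2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)   # Step 5: brute force + Hamming
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kps1[m.queryIdx].pt for m in matches])    # Step 6: matched coordinates
pts2 = np.float32([kps2[m.trainIdx].pt for m in matches])

# Steps 7-8: outlier rejection and transformation estimation in one call;
# cv2.USAC_MAGSAC selects the MAGSAC++ estimator (cv2.RANSAC or cv2.USAC_ACCURATE
# would give the other two variants). `mask` marks the surviving inliers.
H, mask = cv2.findHomography(pts2, pts1, cv2.USAC_MAGSAC, 5.0)

h, w = img1.shape[:2]                                        # Step 9: shape of the fixed image
registered = cv2.warpPerspective(img2, H, (w, h))            # Step 10: warp the moving image
```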

Similarity Metrics

The results obtained from the different image registration methods are evaluated using seven similarity measures or evaluation metrics. These metrics provide a quantitative analysis of the visual and semantic similarity between the input and the registered image. The seven evaluation metrics are as follows (a short computation sketch is given after this list):
  • Structural Similarity Index (SSIM): The structural information in two images is measured using the SSIM, which is a comprehensive image quality assessment metric to determine how similar the two images are. It accounts for the disparities in luminance, contrast, and structural composition between the two images.
  • Mean Squared Error (MSE): This calculates the average of the squared differences between the pixel values of the two images.
  • Root Mean Squared Error (RMSE): This is the square root of the MSE. It measures the standard deviation of the differences between the corresponding pixels of the two images.
  • Peak Signal-to-Noise Ratio (PSNR): PSNR computes the ratio between the square of the maximum possible pixel value (255 for an 8-bit grayscale image) and the MSE between the two images, expressed in decibels.
  • Spatial Correlation Coefficient (SCC): This calculates the correlation between the registered and reference images’ pixel intensities. It is computed as the ratio of the covariance of the two images to the product of their standard deviations.
  • Universal Quality Index (UQI): Like SSIM, this also considers luminance, contrast, and structural similarity between any image pair. But UQI is based on statistical similarity whereas SSIM is based on structural similarity between images. UQI takes both local and global features into account and ranges from 0 to 1. A UQI of 1 indicates identical images and 0 indicates no similarity.
  • Visual Information Fidelity (VIF): This is a similarity metric based on the assumption that the output image after registration should preserve the visual and structural information of the original image. It compares the contrast and luminance of each corresponding pixel of the images being compared. VIF values range from 0 to 1; the higher the value, the more similar the input and output images are.
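As referenced above, a short computation sketch for some of these metrics follows, using scikit-image and NumPy as assumed tooling (the paper does not state which libraries were used; UQI, SCC, and VIF are available in third-party packages such as sewar, which is likewise an assumption rather than the authors' stated choice).

```python
# Illustrative computation of SSIM, MSE, RMSE, and PSNR for a registered pair.
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def basic_metrics(reference, registered):
    """Return SSIM, MSE, RMSE, and PSNR for a pair of grayscale uint8 images."""
    mse = mean_squared_error(reference, registered)
    return {
        "SSIM": structural_similarity(reference, registered, data_range=255),
        "MSE": mse,
        "RMSE": float(np.sqrt(mse)),
        "PSNR": peak_signal_noise_ratio(reference, registered, data_range=255),
    }
```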

4. Results

Figure 4 shows the final output of registration after bad matches are removed and the transformation matrix is applied. Visually, KAZE and BRISK produced good and comparable results, while AKAZE produced some slightly deformed images.
Precision is calculated as the ratio of inliers to the total number of matched key points. As shown in Table 1, RANSAC retains the lowest number of inliers and yields the least precise registration when paired with any of the FDDs. The outlier rejection ratios of GC-RANSAC and MAGSAC++ are very similar, and so are their precision rates. ORB and AKAZE produce the lowest numbers of feature points, which means they are fast but not very robust to illumination, color, contrast, or intensity variation between images. KAZE generates more feature points and produces better alignment than those two methods but is much slower. BRISK produces a high number of key points, and its speed is comparable to ORB and AKAZE. Judging precision purely by the number of key points generated and the number of good matches among them, BRISK with MAGSAC++ produces the most accurate registration.
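As a small illustration, precision values of the kind reported in Table 1 can be derived directly from the inlier mask returned by cv2.findHomography in the pipeline sketch of Section 3; the helper below is hypothetical and shown only to make the computation explicit.

```python
# Inlier ratio as used for the precision column: inliers / total matched key points.
import numpy as np

def precision_from_inlier_mask(mask) -> float:
    mask = np.asarray(mask).ravel().astype(bool)   # mask from cv2.findHomography
    return float(mask.sum()) / float(mask.size)
```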
Table 2, given below, shows the averages of the different evaluation metrics calculated after registering 100 images. The values were consistent when registering both smaller and larger numbers of images.

5. Discussion

After experimentation and quantitative analysis of the results, it is clear that ORB has the least optimal performance of the approaches examined in this paper. It has the lowest similarity indices, both local and global, low spatial correlation, and low visual information fidelity. The similarity measures and numbers of features detected by ORB and AKAZE are comparable, but the latter has a notably lower mean squared error and much higher visual accuracy. KAZE and BRISK perform significantly better than the other two methods in almost all areas, and they have uniform structural and statistical indices. BRISK in combination with any one of the three bad-match removal techniques gives the optimal result: its spatial correlation, similarity indices, PSNR, and visual information fidelity are much greater, and the mean squared error between the original and registered images is minimal.
Amongst the three algorithms for outlier rejection, RANSAC gives the least satisfactory results. The differences in the error rates, signal-to-noise ratios, and SSIM and UQI values produced by registration using RANSAC and the other two methods are significant. GC-RANSAC and MAGSAC++ give much more accurate results, and their similarity measures are comparable. MAGSAC++, however, outperforms the other two techniques: the similarity between the input and output images is the highest, and the error rates are minimal. The spatial correlation, PSNR, and VIF of registered images obtained using this algorithm are the highest. Figure 5, Figure 6 and Figure 7 below depict the evaluation metric values from Table 2 graphically for a better understanding of the tabular data. The X-axis represents the different combinations of feature detection and outlier removal methods, while the Y-axis shows the values of the similarity measures. From the experimental data of the various similarity measures, precision and runtime values, and their quantitative analysis, it becomes clear that for the most accurate and robust feature-based registration of histopathological images with minimal computational cost, BRISK with MAGSAC++ is the best choice. Figure 5 shows a graph representing the SSIM, SCC, UQI, and VIF values for the different methods, while Figure 6 and Figure 7 show the PSNR and RMSE values calculated for the different combinations of methods.

6. Conclusions

After a thorough analysis of the performance of all the methods using various similarity measures, we can say that traditional RANSAC gives the least optimal result when used with any of the four FDDs, whereas MAGSAC++ significantly outperforms the other outlier removal techniques when combined with any of them. The mean squared error for ORB was the highest, and it is the only method that gave visibly faulty and incomprehensible output images. ORB and AKAZE give comparable outputs that show low similarity between input and output images and a low number of key points compared to the other two methods. KAZE also yields precise results but is much slower than the rest. BRISK produces the best registration between images: it produces the maximum number of corresponding points and is still fast. Thus, BRISK combined with MAGSAC++ gives the best registration result. High accuracy and reliability are a must when processing medical images, and different applications have different requirements, so the method of choice for registration, or for any other image processing technique, will also vary greatly. These results are specific to histopathology images and may differ for other imaging modalities or even between histology images with different properties.

Author Contributions

Conceptualization: P.A., O.S. and B.R.; Methodology and Experimentation: P.A., O.S. and B.R.; Original Draft Writing: P.A. and B.R.; Reviewing and Editing: C.N. and M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The codes combining different methods of registration are given at https://github.com/pritikaadhikari/Image-Registration, accessed on 15 January 2023. The input image is taken from the public dataset GlaS@MICCAI available at https://www.kaggle.com/datasets/sani84/glasmiccai2015-gland-segmentation, accessed on 15 January 2023.

Acknowledgments

We would like to express our heartfelt gratitude to all the staff members of the Computer Science and Engineering Department at Sikkim Manipal Institute of Technology for their invaluable support and assistance throughout this research work. Their dedication and expertise have been crucial to the effective completion of the research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sandgren, K.; Nilsson, E.; Lindberg, A.K.; Strandberg, S.; Blomqvist, L.; Bergh, A.; Nyholm, T. Registration of histopathology to magnetic resonance imaging of prostate cancer. Phys. Imaging Radiat. Oncol. 2021, 18, 19–25.
  2. Fu, Y.; Lei, Y.; Wang, T.; Curran, W.J.; Liu, T.; Yang, X. Deep learning in medical image registration: A review. Phys. Med. Biol. 2020, 65, 20TR01.
  3. Abdel-Basset, M.; Fakhry, A.E.; El-Henawy, I.; Qiu, T.; Sangaiah, A.K. Feature and intensity based medical image registration using particle swarm optimization. J. Med. Syst. 2017, 41, 197.
  4. Ofverstedt, J.; Lindblad, J.; Sladoje, N. Fast and robust symmetric image registration based on distances combining intensity and spatial information. IEEE Trans. Image Process. 2019, 28, 3584–3597.
  5. Bermejo, E.; Chica, M.; Damas, S.; Salcedo-Sanz, S.; Cordón, O. Coral reef optimization with substrate layers for medical image registration. Swarm Evol. Comput. 2018, 42, 138–159.
  6. Komura, D.; Ishikawa, S. Machine learning methods for histopathological image analysis. Comput. Struct. Biotechnol. J. 2018, 16, 34–42.
  7. Islam, K.T.; Wijewickrema, S.; O’Leary, S. A deep learning-based framework for the registration of three dimensional multi-modal medical images of the head. Sci. Rep. 2021, 11, 1860.
  8. Guan, S.Y.; Wang, T.M.; Meng, C.; Wang, J.C. A review of point feature based medical image registration. Chin. J. Mech. Eng. 2018, 31, 76.
  9. Andersson, O.; Reyna Marquez, S. A comparison of object detection algorithms using unmanipulated testing images: Comparing SIFT, KAZE, AKAZE and ORB. 2016, Volume 20, pp. 1–15.
  10. Zhu, M.; Chen, M.; Peng, J. A Review of Medical Image Registration Methods: State-of-the-Art and Future Directions. Annu. Rev. Biomed. Eng. 2021, 23, 1–27.
  11. Pradhan, S.; Patra, D. Enhanced mutual information based medical image registration. IET Image Process. 2016, 10, 418–427.
  12. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  13. Liu, C.; Xu, J.; Wang, F. A review of keypoints’ detection and feature description in image registration. Sci. Program. 2021, 2021, 8509164.
  14. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. Lect. Notes Comput. Sci. 2006, 3951, 404–417.
  15. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
  16. Chelluri, H.B.; Manjunathachari, K. SIFT and it’s Variants: An Overview. In Proceedings of the International Conference on Sustainable Computing in Science, Technology and Management (SUSCOM), Amity University Rajasthan, Jaipur, India, 26–28 February 2019.
  17. Muckenhuber, S.; Korosov, A.A.; Sandven, S. Open-source feature-tracking algorithm for sea ice drift retrieval from Sentinel-1 SAR imagery. Cryosphere 2016, 10, 913–925.
  18. Leutenegger, S.; Chli, M.; Siegwart, R.Y. BRISK: Binary robust invariant scalable keypoints. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2548–2555.
  19. Liu, Y.; Zhang, H.; Guo, H.; Xiong, N.N. A fast-BRISK feature detector with depth information. Sensors 2018, 18, 3908.
  20. Alcantarilla, P.F.; Bartoli, A.; Davison, A.J. KAZE features. In Proceedings of the Computer Vision—ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; Proceedings, Part VI; Springer: Berlin/Heidelberg, Germany, 2012; pp. 214–227.
  21. Savva, A.D.; Economopoulos, T.L.; Matsopoulos, G.K. Geometry-based vs. intensity-based medical image registration: A comparative study on 3D CT data. Comput. Biol. Med. 2016, 69, 120–133.
  22. Alcantarilla, P.F.; Solutions, T. Fast explicit diffusion for accelerated features in nonlinear scale spaces. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 1281–1298.
  23. Sirinukunwattana, K.; Pluim, J.P.; Chen, H.; Qi, X.; Heng, P.A.; Guo, Y.B.; Rajpoot, N.M. Gland segmentation in colon histology images: The GlaS challenge contest. Med. Image Anal. 2017, 35, 489–502.
Figure 1. Twelve combinations of FDD–outlier rejection algorithm pairs. ORB, BRISK, KAZE, and AKAZE are combined with RANSAC, GC-RANSAC, and MAGSAC++ to generate 12 combinations.
Figure 2. Sample image (left) and augmented images (right).
Figure 3. Flowchart of the image registration process.
Figure 4. Output image after registration of image pairs for (a) OR, (b) OGR, (c) OUM, (d) AR, (e) AGR, (f) AUM, (g) KR, (h) KGR, (i) KUM, (j) BR, (k) BGR, and (l) BUM.
Figure 5. Graph depicting values of SSIM, SCC, UQI, and VIF.
Figure 6. Graph depicting PSNR values of different methods.
Figure 7. Graph depicting RMSE values of different methods.
Table 1. Precision based on total number of matches and inliers.

Methods | Total Matches | Inliers | Precision
OR      | 158           | 84      | 0.5325
OGR     | 158           | 92      | 0.5840
OUM     | 158           | 92      | 0.5864
AR      | 130           | 73      | 0.5638
AGR     | 130           | 78      | 0.6003
AUM     | 130           | 78      | 0.5987
KR      | 242           | 145     | 0.6000
KGR     | 242           | 159     | 0.6580
KUM     | 242           | 159     | 0.6585
BR      | 403           | 269     | 0.6663
BGR     | 403           | 281     | 0.6961
BUM     | 403           | 281     | 0.6986
Table 2. Evaluation metrics of different feature-based image registration methods.

Methods | SSIM   | MSE       | RMSE    | PSNR    | SCC    | VIF    | UQI
OR      | 0.8529 | 1305.1034 | 25.6543 | 25.6542 | 0.2156 | 0.3389 | 0.8457
OGR     | 0.8762 | 1278.2864 | 23.0218 | 22.7963 | 0.2554 | 0.3704 | 0.8514
OUM     | 0.8904 | 822.9807  | 20.9947 | 23.1479 | 0.2659 | 0.3776 | 0.8557
AR      | 0.8839 | 830.5151  | 21.5185 | 23.2056 | 0.2708 | 0.3835 | 0.8557
AGR     | 0.8863 | 605.3339  | 20.3041 | 23.2952 | 0.2694 | 0.3823 | 0.8623
AUM     | 0.8993 | 754.2623  | 19.3764 | 24.0158 | 0.2911 | 0.4039 | 0.8599
KR      | 0.9012 | 356.8790  | 17.9529 | 23.4708 | 0.2634 | 0.3821 | 0.8667
KGR     | 0.9206 | 265.5308  | 15.6120 | 24.6149 | 0.3011 | 0.4145 | 0.8714
KUM     | 0.9255 | 239.9796  | 14.9799 | 24.8995 | 0.3090 | 0.4216 | 0.8730
BR      | 0.9375 | 190.4855  | 13.0487 | 26.1484 | 0.3432 | 0.4597 | 0.8771
BGR     | 0.9431 | 163.8165  | 12.0929 | 26.7919 | 0.3580 | 0.4764 | 0.8786
BUM     | 0.9485 | 134.1033  | 11.3972 | 27.1108 | 0.3656 | 0.4844 | 0.8799
