Search Results (5)

Search Parameters:
Keywords = maximum likelihood estimation sample consensus (MLESAC)

23 pages, 37642 KB  
Article
Automated Georectification, Mosaicking and 3D Point Cloud Generation Using UAV-Based Hyperspectral Imagery Observed by Line Scanner Imaging Sensors
by Anthony Finn, Stefan Peters, Pankaj Kumar and Jim O’Hehir
Remote Sens. 2023, 15(18), 4624; https://doi.org/10.3390/rs15184624 - 20 Sep 2023
Cited by 7 | Viewed by 2084
Abstract
Hyperspectral sensors mounted on unmanned aerial vehicles (UAVs) offer the prospect of high-resolution multi-temporal spectral analysis for a range of remote-sensing applications. However, although accurate onboard navigation sensors track the moment-to-moment pose of the UAV in flight, geometric distortions are introduced into the scanned data sets. Consequently, considerable time-consuming (user/manual) post-processing rectification effort is generally required to retrieve geometrically accurate mosaics of the hyperspectral data cubes. Moreover, due to the line-scan nature of many hyperspectral sensors and their intrinsic inability to exploit structure from motion (SfM), only 2D mosaics are generally created. To address this, we propose a fast, automated and computationally robust georectification and mosaicking technique that generates 3D hyperspectral point clouds. The technique first morphologically and geometrically examines (and, if possible, repairs) poorly constructed individual hyperspectral cubes before aligning these cubes into swaths. The luminance of each individual cube is estimated and normalised prior to being integrated into a swath of images. The hyperspectral swaths are co-registered to a targeted element of a luminance-normalised orthomosaic obtained using a standard red–green–blue (RGB) camera and SfM. To avoid computationally intensive image processing operations such as 2D convolutions, key elements of the orthomosaic are identified using pixel masks, pixel index manipulation and nearest neighbour searches. Maximally stable extremal regions (MSER) and speeded-up robust feature (SURF) extraction are then combined with maximum likelihood estimation sample consensus (MLESAC) feature matching to generate the best geometric transformation model for each swath. This geometrically transforms and merges individual pushbroom scanlines into a single spatially continuous hyperspectral mosaic, and this georectified 2D hyperspectral mosaic is then converted into a 3D hyperspectral point cloud by aligning it with the RGB point cloud used to create the SfM orthomosaic. High spatial accuracy is demonstrated: hyperspectral mosaics with a 5 cm spatial resolution were produced with root mean square positional accuracies of 0.42 m. The technique was tested on five scenes comprising two types of landscape. The entire process, which is coded in MATLAB, takes around twenty minutes to process data sets covering around 30 ha at a 5 cm resolution on a laptop with 32 GB RAM and an Intel® Core i7-8850H CPU running at 2.60 GHz.
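For readers who want to experiment with the swath-to-orthomosaic coregistration step described in this abstract, the sketch below shows the general pattern in Python with OpenCV: detect and match features on two luminance images, then fit a robust geometric transform and warp the swath onto the orthomosaic grid. It is only an illustration of the idea, not the authors' MATLAB pipeline: ORB stands in for MSER/SURF (SURF needs the opencv-contrib build), OpenCV's RANSAC stands in for MLESAC, and the file names are placeholders.

```python
import cv2
import numpy as np

# Luminance images of one hyperspectral swath and of the RGB orthomosaic
# (placeholder file names).
swath = cv2.imread("swath_luminance.png", cv2.IMREAD_GRAYSCALE)
ortho = cv2.imread("ortho_luminance.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe keypoints (ORB here; the paper uses MSER + SURF).
detector = cv2.ORB_create(nfeatures=5000)
kp_s, des_s = detector.detectAndCompute(swath, None)
kp_o, des_o = detector.detectAndCompute(ortho, None)

# Cross-checked brute-force matching of the binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_s, des_o), key=lambda m: m.distance)

src = np.float32([kp_s[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_o[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robust transform estimation (RANSAC as a stand-in for MLESAC), then warp
# the swath onto the orthomosaic grid.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
registered = cv2.warpPerspective(swath, H, (ortho.shape[1], ortho.shape[0]))
```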

22 pages, 30086 KB  
Article
A Smart and Robust Automatic Inspection of Printed Labels Using an Image Hashing Technique
by Mehshan Ahmed Khan, Fawad Ahmed, Muhammad Danial Khan, Jawad Ahmad, Harish Kumar and Nikolaos Pitropakis
Electronics 2022, 11(6), 955; https://doi.org/10.3390/electronics11060955 - 19 Mar 2022
Cited by 6 | Viewed by 3137
Abstract
This work is focused on the development of a smart and automatic inspection system for printed labels. This is a challenging problem to solve, since the collected labels are typically subjected to a variety of geometric and non-geometric distortions. Even though these distortions do not affect the content of a label, they have a substantial impact on the pixel values of the label image. In addition, the faulty area may be extremely small compared with the overall size of the labelling system. A further requirement is the ability to locate and isolate faults. To overcome these issues, a robust image hashing approach for the detection of erroneous labels has been developed. Image hashing techniques are generally used in image authentication, social event detection and image copy detection. Most image hashing methods are computationally intensive and also misjudge images that have undergone geometric transformations. In this paper, we present a novel idea to detect faults in labels by combining image hashing with traditional computer vision algorithms to reduce the processing time. Speeded-Up Robust Features (SURF) are applied to acquire alignment parameters so that the scheme is resistant to geometric and other distortions. The statistical mean is employed to generate the hash value. Although this feature is quite simple, the experimental findings show it to be highly effective in terms of computational complexity and the precision with which faults are detected. Experimental results show that the proposed technique achieved an accuracy of 90.12%.
(This article belongs to the Special Issue Emerging Applications of Computer Vision Technology)
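As a rough illustration of the mean-based hashing idea the abstract describes (not the authors' exact scheme), the sketch below reduces a grayscale label image to a small block grid, thresholds each block against the global mean to form a binary hash, and flags a fault when the Hamming distance to the reference hash exceeds a threshold. The file names and the threshold are assumptions.

```python
import cv2
import numpy as np

def mean_hash(img_gray, grid=(8, 8)):
    """Resize to a small grid and set each cell to 1 if it exceeds the global mean."""
    small = cv2.resize(img_gray, grid, interpolation=cv2.INTER_AREA)
    return (small > small.mean()).astype(np.uint8).ravel()

def hamming(h1, h2):
    return int(np.count_nonzero(h1 != h2))

# Placeholder file names; in the paper the inspected label is first aligned
# to the reference via SURF before hashing.
reference = cv2.imread("reference_label.png", cv2.IMREAD_GRAYSCALE)
inspected = cv2.imread("inspected_label.png", cv2.IMREAD_GRAYSCALE)

distance = hamming(mean_hash(reference), mean_hash(inspected))
if distance > 5:  # illustrative threshold
    print(f"possible print fault (Hamming distance {distance})")
```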

20 pages, 10652 KB  
Article
An Efficient Point-Matching Method Based on Multiple Geometrical Hypotheses
by Miguel Carrasco, Domingo Mery, Andrés Concha, Ramiro Velázquez, Roberto De Fazio and Paolo Visconti
Electronics 2021, 10(3), 246; https://doi.org/10.3390/electronics10030246 - 22 Jan 2021
Cited by 4 | Viewed by 3883
Abstract
Point matching in multiple images is an open problem in computer vision because of the numerous geometric transformations and photometric conditions that a pixel or point might exhibit across the set of images. Over the last two decades, different techniques have been proposed to address this problem. The most relevant are those that explore the analysis of invariant features. Nonetheless, their main limitation is that invariant analysis alone cannot reduce false alarms. This paper introduces an efficient point-matching method for two and three views, based on the combined use of two techniques: (1) correspondence analysis extracted from the similarity of invariant features and (2) the integration of multiple partial solutions obtained from 2D and 3D geometry. The main strength and novelty of this method is the determination of point-to-point geometric correspondence through the intersection of multiple geometrical hypotheses weighted by the maximum likelihood estimation sample consensus (MLESAC) algorithm. The proposal not only extends methods based on invariant descriptors but also generalizes the correspondence problem to a perspective projection model in multiple views. The method has been evaluated on three types of image sequences: outdoor, indoor, and industrial. The strategy discards most of the wrong matches and achieves F-scores of 97%, 87%, and 97% for the outdoor, indoor, and industrial sequences, respectively.
(This article belongs to the Special Issue Applications of Computer Vision)
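The distinguishing feature of MLESAC, used above to weight the geometrical hypotheses, is that candidate models are ranked by the likelihood of their residuals rather than by a raw inlier count. A compact sketch of that scoring rule follows; the mixing weight gamma is fixed here for brevity, whereas the original formulation by Torr and Zisserman estimates it with expectation-maximisation.

```python
import numpy as np

def mlesac_cost(residuals, sigma=1.0, gamma=0.5, nu=100.0):
    """Negative log-likelihood of residuals under a Gaussian-inlier /
    uniform-outlier mixture (gamma = inlier mixing weight, nu = width of
    the outlier error range)."""
    inlier = gamma * np.exp(-residuals**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    outlier = (1.0 - gamma) / nu
    return -np.sum(np.log(inlier + outlier))

# The candidate hypothesis with the lowest cost is kept, e.g.
# best = min(hypotheses, key=lambda h: mlesac_cost(residuals_for(h)))
errors = np.array([0.2, -0.1, 0.05, 3.7, -0.3, 12.0])  # illustrative residuals
print(mlesac_cost(errors))
```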

25 pages, 10018 KB  
Article
Automated Georectification and Mosaicking of UAV-Based Hyperspectral Imagery from Push-Broom Sensors
by Yoseline Angel, Darren Turner, Stephen Parkes, Yoann Malbeteau, Arko Lucieer and Matthew F. McCabe
Remote Sens. 2020, 12(1), 34; https://doi.org/10.3390/rs12010034 - 20 Dec 2019
Cited by 41 | Viewed by 8092
Abstract
Hyperspectral systems integrated on unmanned aerial vehicles (UAVs) provide unique opportunities to conduct high-resolution multitemporal spectral analysis for diverse applications. However, additional time-consuming rectification efforts are routinely required in postprocessing, since geometric distortions can be introduced by UAV movements during flight, even if navigation/motion sensors are used to track the position of each scan. Part of the challenge in obtaining high-quality imagery relates to the lack of a fast processing workflow that can retrieve geometrically accurate mosaics while optimizing the ground data collection efforts. To address this problem, we explored a computationally robust, automated georectification and mosaicking methodology. It operates effectively in a parallel computing environment, and its results are evaluated against a number of high-spatial-resolution datasets (mm to cm resolution) collected using a push-broom sensor and an associated RGB frame-based camera. The methodology estimates the luminance of the hyperspectral swaths and coregisters these against a luminance RGB-based orthophoto. The procedure includes an improved coregistration strategy that integrates the Speeded-Up Robust Features (SURF) algorithm with the Maximum Likelihood Estimation Sample Consensus (MLESAC) approach. SURF identifies common features between each swath and the RGB orthomosaic, while MLESAC fits the best geometric transformation model to the retrieved matches. Individual scanlines are then geometrically transformed and merged into a single spatially continuous mosaic, reaching high positional accuracies with only a small number of ground control points (GCPs). The capacity of the workflow to achieve high spatial accuracy was demonstrated by examining statistical metrics such as the RMSE, MAE, and relative positional accuracy at the 95% confidence level. Comparison against a user-generated georectification shows that the automated approach speeds up the coregistration process by 85%.
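The positional-accuracy metrics cited in this abstract (RMSE, MAE, and relative accuracy at the 95% confidence level) can be reproduced from check-point residuals roughly as in the sketch below. The residual values are illustrative, and the 95% figure uses the common NSSDA convention (1.7308 × RMSE_r); the paper's exact formulation may differ.

```python
import numpy as np

# Illustrative check-point residuals (metres) between the mosaic and GCPs.
dx = np.array([0.31, -0.18, 0.45, -0.27, 0.12])
dy = np.array([-0.22, 0.35, -0.10, 0.29, -0.33])

r = np.hypot(dx, dy)                  # radial error at each check point
rmse_r = np.sqrt(np.mean(r**2))
mae = np.mean(r)
acc95 = 1.7308 * rmse_r               # NSSDA-style horizontal accuracy at 95% confidence

print(f"RMSE = {rmse_r:.3f} m, MAE = {mae:.3f} m, 95% accuracy = {acc95:.3f} m")
```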

28 pages, 5505 KB  
Article
Drift-Aware Monocular Localization Based on a Pre-Constructed Dense 3D Map in Indoor Environments
by Guanyuan Feng, Lin Ma, Xuezhi Tan and Danyang Qin
ISPRS Int. J. Geo-Inf. 2018, 7(8), 299; https://doi.org/10.3390/ijgi7080299 - 25 Jul 2018
Cited by 8 | Viewed by 3117
Abstract
Recently, monocular localization has attracted increased attention due to its applications in indoor navigation and augmented reality. In this paper, a drift-aware monocular localization system that performs global and local localization is presented, based on a pre-constructed dense three-dimensional (3D) map. In global localization, a pixel-distance weighted least squares algorithm is investigated for calculating the absolute scale for the epipolar constraint. To reduce the accumulative errors caused by the relative position estimation, a map interaction-based drift detection method is introduced in local localization, and the drift distance is computed by the proposed line model-based maximum likelihood estimation sample consensus (MLESAC) algorithm. The line model contains a fitted line segment and some visual feature points, which are used to seek inliers among the estimated feature points for drift detection. Taking advantage of the drift detection method, the monocular localization system switches between the global and local localization modes, which effectively keeps the position errors within an expected range. The performance of the proposed monocular localization system is evaluated on typical indoor scenes, and experimental results show that, compared with existing localization methods, the accuracy of the absolute position estimation and the relative position estimation improves by at least 30.09% and 65.59%, respectively.
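To give a sense of the line-model consensus step used here for drift detection, the sketch below robustly fits a line to a set of 2D feature points and reports the inliers. It uses a plain RANSAC inlier count rather than the paper's MLESAC variant, and the points are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic feature points: 30 roughly on a line, 20 scattered outliers.
points = rng.uniform(0.0, 10.0, size=(50, 2))
points[:30, 1] = 0.8 * points[:30, 0] + 1.0 + rng.normal(0.0, 0.05, 30)

def fit_line_ransac(pts, n_iter=500, tol=0.1):
    """Return the inlier mask of the best line found by random 2-point sampling."""
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        dx, dy = q - p
        norm = np.hypot(dx, dy)
        if norm < 1e-9:
            continue
        # Perpendicular distance of every point to the line through p and q.
        dist = np.abs(dx * (pts[:, 1] - p[1]) - dy * (pts[:, 0] - p[0])) / norm
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

inliers = fit_line_ransac(points)
print(f"{int(inliers.sum())} of {len(points)} points support the fitted line model")
```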
