1. Introduction
UAVs (unmanned aerial vehicles) are aircraft that operate without an onboard human pilot. They can carry out tasks through pre-programmed routes, remote control, or autonomous navigation, reducing the need for direct human intervention. UAVs are now beginning to be used to monitor and analyze deformations in civil engineering projects and infrastructure [1,2,3,4]. Their high maneuverability, easy availability, and unique aerial perspective make them highly valuable. However, the accuracy of current UAV displacement monitoring methods is insufficient for millimeter-scale deformation monitoring. Some experiments showed that using measurement markers can significantly enhance image matching and camera attitude estimation accuracy [5,6]. However, a constantly changing imaging attitude and complex imaging environments (light, wind speed, etc.) result in blurred and noisy UAV images, which reduce the accuracy of measurement results [7]. In addition, the camera sensor's resolution and the shadows cast by tall buildings or tree cover can also affect image interpretation and analysis. In image processing, the target coordinates then cannot be accurately recognized in the image, reducing the monitoring accuracy and limiting the application of UAVs in high-precision surveys. To extend the application of UAV monitoring, it is crucial to realize high-precision, high-accuracy marker detection and localization.
UAV displacement measurement markers can be classified into three categories: inflection-point-type ("L"-shaped), intersection-point-type (cross-shaped), and circular markers, which are placed within the survey area by spray painting or by fixing marker boards. These markers use an inflection point, an intersection point, and a center point as the measurement point, respectively [8]. These measurement points facilitate manual point selection in the image. Circular markers are easy to locate due to their simple geometry, but they are affected by lighting changes and may not be accurately positioned in environments with uneven or strongly changing lighting [9]. Compared with L-shaped markers, cross-shaped markers provide rich texture information and are centrally symmetric, making them well suited for automatic point selection and thus improving detection efficiency. In addition, cross-shaped markers have a clear center contrast and no eccentricity. Therefore, we chose cross-shaped markers as the measurement markers. Researchers have also designed special markers for specific applications [10,11]. However, these markers cannot be widely applied due to their complicated usage conditions.
Traditionally, the coordinates of a cross-shaped marker's measuring point are extracted manually, which is labor-intensive and inefficient. Furthermore, the accuracy of the extracted coordinates depends on the viewing conditions and the operator's experience. Currently, various target detection methods are used successfully. Commonly used cross-shaped marker detection methods include the Harris algorithm [12], template matching algorithms [13,14], and deep learning methods [15,16,17]. Cheng et al. [18] used the Harris algorithm for UAV image corner detection, enhancing the feature point extraction accuracy with the speeded-up robust features (SURF) algorithm; this improved the quality and efficiency of UAV image matching. Azimbeik et al. [19] designed virtual markers with special shapes and used template matching and camera calibration to improve image-based full-field measurements; their method has been successfully employed to measure the displacements of a railroad bridge. With the development of deep learning algorithms in recent years, Girshick et al. [20] proposed the R-CNN algorithm, which performs a similarity analysis on the whole detected region based on the known features of the target. This allows a region with high similarity to the input sample to be selected by the convolutional neural network and the target to be detected. However, deep learning methods require a large number of labeled samples for training, which is labor-intensive and leads to low efficiency. For non-deep-learning methods, the efficiency and accuracy are affected by factors such as occluded marker imaging, overexposure, sensor displacement, and noise. Xing et al. [21] introduced a method based on the Radon transform to accurately detect and locate cross-shaped markers using saliency maps, improving the robustness and accuracy of marker detection. However, when this method is used for UAV marker detection, its detection parameters depend on the UAV flight altitude and the marker parameters, as well as the focal length and pixel size of the vision sensor. This dependency prevents automated detection and may degrade the detection accuracy under complex UAV route planning.
In this study, we improve the cross-shaped marker detection method based on the Radon transform by proposing an adaptive method for selecting detection parameters that considers factors such as the UAV vision sensor parameters, flight altitude, and marker parameters during data processing. The marker detection and localization method based on the Radon transform (referred to as the original Radon transform method) is introduced in Section 2.1. Section 2.2 presents the proposed adaptive parameter-selecting Radon transform marker detection and localization method (referred to as the adaptive Radon transform method). In Section 3, the proposed adaptive Radon transform method is compared with the traditional Harris algorithm, the template matching method, and the original Radon transform method, and the appropriate detection parameters for different flying heights are investigated. Conclusions are drawn in Section 4.
3. Experiments and Analysis
This section verifies the adaptive methods for parameter selection introduced in Section 2.2.1 and Section 2.2.2. It also investigates the appropriate parameter combinations for different measurement conditions and compares the adaptive method with other marker detection methods. The flowchart of the proposed adaptive method is shown in Figure 6.
3.1. Critical Detection Radius
The critical detection radius R_c refers to the minimum marker pixel radius at which the marker center point can still be detected. Only when r ≥ R_c is satisfied (r being the pixel length of the marker radius) can optimal marker detection be achieved. Obtaining the value of R_c excludes a large number of parameter combinations, reducing the workload involved. R_c is calculated as follows: any two parameters in a combination are fixed, and the remaining parameter is adjusted downwards from its optimal value. If the marker can no longer be detected, the r value of the previous, still detectable, set is taken as a candidate for R_c. After all the data have been processed, the largest candidate is taken as the final R_c.
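The search procedure described above can be sketched in code. This is a minimal illustration, not the paper's implementation: `detect` and `radius_of` stand in for the actual marker detector and the radius calculation, and the toy parameter series are hypothetical.

```python
def find_critical_radius(series_list, detect, radius_of):
    """Estimate the critical detection radius R_c.

    series_list: each series varies one parameter downwards from its optimum
                 while the other parameters stay fixed.
    detect:      function(params) -> True if the marker center is detected.
    radius_of:   function(params) -> marker pixel radius r for that combination.
    """
    candidates = []
    for series in series_list:
        previous = None
        for params in series:
            if not detect(params):
                if previous is not None:
                    # r of the last combination that still detected the marker
                    candidates.append(radius_of(previous))
                break
            previous = params
    # the largest candidate is taken as the final R_c
    return max(candidates) if candidates else None

# Toy data: detection succeeds only while the pixel radius stays >= 6.
series_a = [10, 8, 6, 5]
series_b = [9, 7, 6, 4]
rc = find_critical_radius([series_a, series_b],
                          detect=lambda p: p >= 6,
                          radius_of=lambda p: p)   # -> 6
```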
The UAV utilized in the experiment is the DJI Phantom 4 RTK. The weather during the experiment was sunny (23 °C) with a breeze. The UAV camera parameters are shown in Table 1.
As shown in Figure 7, data collection and processing were designed as follows:
- (1) flight heights: images were acquired at different heights from 15 m to 50 m with a step size of 1 m;
- (2) marker sizes: five targets of {20, 25, 30, 35, 40} cm were laid out at each height;
- (3) acquisition radius ratios: each target was processed with six preset ratios {1/4, 1/3, 1/2, 2/3, 3/4, 1}.
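The paper's Equation (4), which maps these parameters to a marker pixel radius, is not reproduced in this excerpt. Under a standard nadir pinhole-camera model (an assumption, not necessarily the paper's exact formula), the pixel radius for each grid combination could be estimated as follows; the focal length and pixel pitch below are placeholders, not the values of Table 1.

```python
def marker_pixel_radius(marker_size_m, height_m, focal_mm, pixel_pitch_um):
    """Approximate pixel radius r of a nadir-imaged marker.

    Assumes the pinhole relation GSD = H * pixel_pitch / focal_length,
    r = (marker_size / 2) / GSD. A stand-in for the paper's Eq. (4).
    """
    gsd_m = height_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)
    return (marker_size_m / 2.0) / gsd_m

# The experiment's acquisition grid.
heights = range(15, 51)                    # 15-50 m, 1 m step
sizes_cm = [20, 25, 30, 35, 40]
ratios = [1/4, 1/3, 1/2, 2/3, 3/4, 1]

# Placeholder camera values: 8.8 mm focal length, 2.4 um pixel pitch.
r_smallest = marker_pixel_radius(0.20, 50, 8.8, 2.4)   # smallest marker, highest altitude
```

As expected, the pixel radius shrinks with altitude, which is why the combinations at 50 m are the most demanding.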
The r value of each parameter combination was calculated via Equation (4), and the results are shown in Table 2. The maximum candidate value was then determined via Equation (8) and taken as the critical radius R_c:

R_c = max{r_i},    (8)

where r_i is the candidate r value of each parameter combination. The final critical detection radius is 6 pixels.
3.2. Appropriate Combinations of Detection Parameters
The r value calculated by Equation (4) is affected by the flight height, the marker size, and the vision sensor parameters (focal length and pixel size). When determining an r value, we should consider factors such as the marker installation difficulty, the image quality, and the marker detection efficiency. Therefore, the marker size and the acquisition radius should be minimized while maintaining the detection accuracy. Since UAV measurements have specific flight altitude requirements, the flight height fluctuates around a constant value. In this study, the detection accuracy is measured by the distance between the detected target point and the manually selected target point. The manually selected target point locations were obtained by averaging at least three selected points, and any values smaller than the rounding threshold were rounded off during processing. The same experimental data as in Section 3.1 were used for this experiment, starting from a height of 15 m with a step size of 5 m. The results are shown in Figure 8.
As shown in Figure 8, when the flight height is low, the marker center can be detected for all five target sizes at all six acquisition radius ratios. As the height increases, combinations of a small target size and a small radius ratio become unable to detect the marker. Additionally, as the value of r approaches R_c, the accuracy of marker detection decreases. At a flight altitude of 50 m, all targets are detected only at the largest acquisition radius ratios. We analyzed the results as follows:
- (1) The minimum detection error was determined for each height;
- (2) An accuracy threshold was set, and all detection errors within that threshold of the minimum were found at each height;
- (3) As the marker size increases, the added difficulty of marker installation outweighs the improvement in computational efficiency. In this paper, we therefore give priority to a small marker size over computational efficiency. Among all the detection errors satisfying the condition in (2), we selected the parameter combination with the smallest marker size as the appropriate detection parameters.

The analysis results are shown in Table 3.
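The three analysis steps can be sketched as a small selection routine. The data layout and the threshold value below are illustrative assumptions, since the paper's exact threshold is not given in this excerpt.

```python
def select_parameters(results, threshold_px):
    """Per height: find the minimum error (step 1), keep the combinations
    within the threshold of it (step 2), then prefer the smallest marker
    size, breaking ties by the smallest radius ratio (step 3)."""
    chosen = {}
    for height, entries in results.items():
        # each entry is (marker_size_cm, radius_ratio, error_px); None = failed
        detected = [e for e in entries if e[2] is not None]
        if not detected:
            continue
        e_min = min(err for _, _, err in detected)                 # step (1)
        within = [e for e in detected if e[2] <= e_min + threshold_px]  # step (2)
        chosen[height] = min(within, key=lambda e: (e[0], e[1]))   # step (3)
    return chosen

# Hypothetical detection errors at H = 20 m.
results = {20: [(20, 0.5, 1.2), (25, 0.5, 0.9), (30, 1.0, 0.85), (20, 0.25, None)]}
best = select_parameters(results, threshold_px=0.4)   # -> {20: (20, 0.5, 1.2)}
```

Even though the 30 cm marker has the smallest error, the 20 cm combination stays within the threshold and wins on installation difficulty.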
The errors were calculated by the following equations:

RMSE = sqrt((1/N) * Σ_{i=1..N} e_i²),    e_min = min_{i} e_i,

where RMSE denotes the root-mean-square error of all parameter combinations, N denotes the number of parameter combinations that can detect the marker, e_i denotes the detection error of the ith combination, e_min denotes the minimum detection error at this height, and e_best denotes the detection error of the best parameter combination in this paper. We identified appropriate parameter combinations at various heights and use them in the following sections.
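The RMSE over the N detecting combinations follows the standard definition; a minimal sketch:

```python
import math

def rmse(errors):
    """Root-mean-square error over the N parameter combinations
    that successfully detected the marker."""
    n = len(errors)
    return math.sqrt(sum(e * e for e in errors) / n)

# Example: errors of 3 px and 4 px give an RMSE of sqrt(12.5) ~= 3.54 px.
```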
3.3. Performance of the Adaptive Radon Transform Method
In this section, the performance of the proposed adaptive Radon transform method is assessed by comparing it with the original Radon transform method, the template matching method, and the Harris corner detection method. For the experiment, we used the data from Section 3.1 after applying Gaussian blurring and adding Gaussian noise to simulate the low image quality that occurs during real UAV operations (Figure 9).
For the proposed method, the parameter combinations for different flying heights were selected from the results of Section 3.2 using the nearest-neighbor rule. For example, when H = {18, 19, 20, 21, 22} m, all parameter combinations were selected based on H = 20 m. The original Radon transform method selects parameters based on visual experience (Figure 10): the marker information acquisition radius was determined by human judgment, the pixel length of the marker image at that radius was obtained through statistical analysis, and the edge width of the cross-shaped scoring template was determined by the distance between the edge and the centerline of the black and white areas. In this experiment, the selected heights, R, and L are 15–50 m, 11 pixels, and 1.6 pixels, respectively. The Harris algorithm used the same parameters as in [12]. The template matching method uses a square template that is uniformly divided into four rectangles: two rectangles are black and the other two are white, with the two colors interlaced diagonally. The marker center is located by matching this square template against the image.
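The template matching baseline described above can be sketched with NumPy. The matching criterion is assumed to be normalized cross-correlation (the paper does not specify it here), and the brute-force search is for illustration only; a practical implementation would use an optimized routine.

```python
import numpy as np

def cross_template(size):
    """Square template divided into four rectangles; the two white and the
    two black rectangles are interlaced diagonally."""
    half = size // 2
    t = np.zeros((2 * half, 2 * half))
    t[:half, half:] = 1.0   # top-right quadrant: white
    t[half:, :half] = 1.0   # bottom-left quadrant: white
    return t

def match_center(image, template):
    """Locate the template center by brute-force normalized cross-correlation."""
    th, tw = template.shape
    tz = (template - template.mean()) / template.std()
    best_score, best_pos = -np.inf, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            win = image[y:y + th, x:x + tw]
            s = win.std()
            if s == 0:
                continue                      # flat window: correlation undefined
            score = float((((win - win.mean()) / s) * tz).mean())
            if score > best_score:
                best_score, best_pos = score, (y + th // 2, x + tw // 2)
    return best_pos

# Embed an 8x8 template in a synthetic 20x20 image and recover its center.
img = np.zeros((20, 20))
img[5:13, 5:13] = cross_template(8)
center = match_center(img, cross_template(8))   # -> (9, 9)
```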
We used the four methods to detect the markers in the low-quality images at all heights. If the distance between the detected marker and the manually selected marker is greater than 3 pixels, detection is considered to have failed. The results are shown in Table 4.
As shown in Table 4, the accuracy of the proposed method is much higher than that of the template matching method and the Harris corner detection method, and it is also higher than that of the original Radon-transform-based detection method. The proposed method failed to detect markers with a size of 20 cm at flight altitudes of 47, 48, 49, and 50 m (Figure 11). These images share some characteristics: the marker radius is under 6 pixels, the black sector area is severely incomplete, and the center area is blurred. These features limit marker detection. At flight altitudes of 47 m or above, the marker size should therefore be larger than 20 cm for reliable detection.
After detecting the markers, we performed marker localization using a surface-fitting method [22]. The Radon transform methods locate markers in the saliency map, whereas the other methods locate markers in the original image. To evaluate the accuracy, we calculated the RMSE of the five markers at each height for all four methods. The results are shown in Figure 12.
The results in Figure 12 show that the proposed method outperforms the other three methods in terms of marker localization accuracy, with errors within 1 pixel. The overall accuracy of the original Radon transform method is high, but its positioning can be inaccurate at certain heights. This is because its parameter selection is influenced by operator experience and by the complex, changing imaging conditions; the selected parameters may not suit the image data, resulting in low accuracy. The proposed method achieves a higher accuracy than the template matching method and the Harris detection method primarily because the latter two rely heavily on the original image quality, which is often compromised in UAV photography. The proposed adaptive Radon transform method locates markers in the saliency map: the original image quality affects only the circular spot size in the saliency map and has no effect on the location of the peak point. Therefore, using the generated marker saliency map for marker localization leads to a high accuracy.
As shown in Table 5, the proposed method outperforms the template matching method and the Harris algorithm with regard to both the detection success rate and the localization accuracy. Additionally, compared with the original Radon transform method, the proposed method avoids many manual operations, greatly improving the efficiency while also improving the detection success rate and localization accuracy. In complex imaging environments, the proposed method achieves a balance between efficiency and accuracy, ultimately contributing to high-precision, high-efficiency, automated UAV displacement measurements.
3.4. Displacement Measurement Experiment
In order to evaluate the effectiveness of the proposed method for displacement measurements, we carried out three-dimensional displacement measurement experiments in an area of about 10,000 square meters located directly south of Central South University's stadium. The specific experimental procedure is as follows:
(1) Lay out measurement markers and acquire UAV images in four missions. The measurement markers were laid out as shown in Figure 13: four marker control points, eleven marker displacement measurement points, and two 3D slide table displacement simulation points. The markers were set at a size of 20 × 20 cm based on the results in Section 3.2. The main body of a 3D simulation point is a three-axis slide unit (with an accuracy of 1 mm) with a measurement marker fixed on top; the slide scale can be adjusted to set the true displacement value (Figure 14). Control points were measured using a Leica TS09 total station (Leica, Wetzlar, Germany) with an accuracy of 2.2 mm.
(2) Select the data of the first mission and locate all measurement markers using the proposed method, the original Radon transform method, the Harris algorithm, and the template matching method separately. Reconstruct the 3D model.
(3) Export the 3D coordinates of the displacement simulation points. Select the data of the second mission and repeat steps (1) and (2) to obtain the 3D coordinates of the displacement simulation points.
(4) Analyze the difference in the 3D coordinates of the displacement simulation points obtained from the two missions. The displacement measurements of the two UAV missions are then obtained via Equation (11):

D_n = sqrt((X_n^(i+1) − X_n^(i))² + (Y_n^(i+1) − Y_n^(i))² + (Z_n^(i+1) − Z_n^(i))²),    (11)

where (X_n^(i), Y_n^(i), Z_n^(i)) and (X_n^(i+1), Y_n^(i+1), Z_n^(i+1)) denote the three-dimensional coordinates of the nth measurement point computed during the ith and (i+1)th UAV missions, respectively, and D_n denotes the displacement measurement result of the nth measurement point in two adjacent UAV missions.
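Assuming Equation (11) is the Euclidean distance between the two missions' coordinates (consistent with the surrounding description), a minimal sketch:

```python
import math

def displacement(p_i, p_next):
    """3D displacement of one measurement point between two adjacent UAV
    missions, assumed to be the Euclidean distance of Eq. (11)."""
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(p_i, p_next)))

# A 3-4-5 check: moving 3 mm in X and 4 mm in Y gives 5 mm of displacement.
d = displacement((0.0, 0.0, 0.0), (3.0, 4.0, 0.0))   # -> 5.0
```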
The weather during the experiment was sunny with a breeze. The other experimental conditions are shown in Table 6.

The final displacement results of the displacement simulation points are shown in Table 7. Based on the results in Table 7, the RMSE of the displacement measurements of the four methods was calculated; it is shown in Table 8.

As the displacement measurement accuracies in Table 8 show, the proposed method outperforms the original Radon transform method, the template matching method, and the Harris algorithm. This validates the effectiveness of the proposed method for 3D displacement deformation measurements.
4. Conclusions
This paper introduces an adaptive marker detection and localization method based on the Radon transform to address problems in UAV displacement measurements, namely the low efficiency, precision, and automation of marker detection in low-quality images. We first study the principle of the original Radon-transform-based marker detection method and analyze its limitations in UAV displacement measurements. By focusing on two key detection parameters, namely the marker information acquisition radius and the cross-shaped scoring template edge width, we propose an adaptive Radon-transform-based marker detection method specifically applicable to marker detection in UAV measurement images.
The experimental results demonstrate that the proposed method can automatically derive the necessary detection parameters at different flight altitudes. This greatly reduces the manual marker selection time and enhances the accuracy and practicality compared with the original Radon transform method. Under complex imaging conditions, the proposed method exhibits a higher detection success rate and accuracy and a stronger resistance to noise and blur than the other methods. In the displacement measurement experiments, the proposed method achieves a higher displacement measurement accuracy than the original Radon transform method, the template matching method, and the Harris algorithm, demonstrating its practicality for displacement measurements. The experiments were conducted in realistic settings using standard equipment and materials, supporting the general applicability of the results.
This work also has some limitations. The parameter combinations obtained in Section 3.2 may not be optimal for all engineering applications, which can involve a wider range of flight altitudes and different marker installation conditions. Therefore, the parameter combinations should be refined in subsequent engineering applications.