Article

Research on Automatic Recognition Method of Artificial Ground Target Based on Improved HED

Wei Zhong, Yueqiu Jiang and Xin Zhang
1 Graduate School, Shenyang Ligong University, Shenyang 110159, China
2 Liaoning Urban Construction Technical College, Shenyang 110122, China
3 School of Information Science and Engineering, Shenyang Ligong University, Shenyang 110159, China
4 School of Automobile and Traffic, Shenyang Ligong University, Shenyang 110159, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(5), 3163; https://doi.org/10.3390/app13053163
Submission received: 19 December 2022 / Revised: 24 January 2023 / Accepted: 20 February 2023 / Published: 1 March 2023
(This article belongs to the Special Issue Advances in Nonlinear Dynamics and Mechanical Vibrations)

Abstract: Automatic target recognition is an important research direction in the field of machine vision. Artificial ground targets, such as bridges, airports and houses, are composed largely of straight lines, so the ratios of the areas of the triangles formed by combinations of geometric-primitive lines are used as the feature quantity describing a group of lines, thereby characterizing the artificial ground target. To address the shortcomings of traditional edge detection methods, such as poor background suppression, non-prominent targets and missed locations, this paper proposes an image edge detection method based on deep learning: a traditional edge detection algorithm is combined with an edge detection algorithm based on an improved HED network to perform edge detection on the real-time target image. On this basis, an automatic target recognition method based on template matching is proposed. This method handles both homologous and heterogeneous template matching, which has important theoretical value. First, lines are combined into line-group geometric primitives, and the relationships of the lines within a group are described by the line-group feature quantity. The line group best matching the target template is found among the image edges, and the homonymous points in the real-time image and the target template are computed. The affine transformation matrix between the two images is obtained from the homonymous points, and the accurate position of the target in the real-time image is then located.

1. Introduction

Automatic target recognition in a complex background refers to detecting and recognizing targets with specific algorithms based on the information obtained by imaging sensors. Automatic target recognition technology has a wide range of applications in both the military and civilian fields. In the military field, automatic recognition technology [1], as a key technology of precision guidance, plays an important role in the modern battlefield environment. In the civilian field, it plays an important role in manufacturing, logistics, anti-counterfeiting and security. The automatic target recognition method based on template matching [2] is an important approach: it recognizes the target by matching a template map against the real-time map. The benchmark map usually uses the structure map of the target, which can be computed quickly, so the method has been widely used.
Due to changeable climate, complex environments, affine transformation [3] and other factors affecting the target image, template matching based on gray-level features cannot extract stable feature points for matching between images. Therefore, exploiting the stability of lines, this paper proposes a target representation method based on linear geometric primitives.
Research on edge detection algorithms is still being updated. Reference [4] proposed an edge detection algorithm based on a lightweight module reconstruction model to obtain data features; Reference [5] proposed an optimized Canny edge detection algorithm and an edge detection algorithm based on deep learning; Reference [6] proposed an edge detection algorithm for color image graying; Reference [7] proposed an infrared image edge detection method based on an adaptive threshold; Reference [8] proposed an improved Canny edge detection method; and Reference [9] proposed a quantum image edge detection algorithm based on the classical Gauss–Laplace operator. Deep-learning-based edge detection algorithms, such as Holistically Nested Edge Detection [10], DeepContour [11] and DeepEdge [12], have also been proposed in recent years. The HED (Holistically Nested Edge Detection) algorithm is an end-to-end edge detection network [13]. It performs multi-scale, multi-level learning on images, operates directly on the entire image, and extracts the complex texture information of objects and backgrounds well. Image feature extraction in the HED network relies on high-level features that carry more semantic information, but the resolution of those high-level features is very low, which loses information and degrades the extraction of image objects. To solve this problem, the HED network is improved here so that the high-level feature outputs are more precise, and the improved network is integrated with a traditional edge detection algorithm to improve localization sensitivity and accuracy. Therefore, this paper designs an image edge detection method based on improved HED.
Traditional automatic target recognition based on template matching usually requires navigation information to transform the scale and perspective of the template map, and images obtained in different periods and environments differ. To solve these problems, this paper designs an automatic target recognition method based on template matching that needs no navigation information, exploiting the properties of affine-transformation invariants. In summary, this paper designs an image edge detection method that combines traditional edge detection with deep learning, and an automatic target recognition method based on template matching that extracts target geometric primitives and computes their similarity.

2. Research on Edge Detection Method of Deep Learning Based on Improved HED

2.1. Edge Extraction Method Based on HED Network

Deep learning has been a popular machine learning method in recent years, with achievements in image processing, search technology, data mining, machine translation, natural language processing, multimedia learning, speech recognition and other related fields; image processing has become an important research area within deep learning. Deep learning methods are constantly being updated, and deep-learning-based edge extraction can capture the appearance details of an object, making it better suited to describing the object.
For edge extraction, the HED algorithm is an end-to-end edge detection network. Through multi-scale, multi-level feature extraction, it predicts the probability of an edge at each position and outputs a probability map of the edge response. It runs fast, is robust to noise and handles the complex texture information on the target and background well. Its edge extraction ability on natural images far exceeds that of traditional filtering methods such as Canny.
Edge detection results of the HED network on natural images are shown in Figure 1. The HED structure is obtained by modifying the 16-layer VGG network: a side output is added after the final convolution layer of each stage. HED has five side outputs, taken from conv1-2, conv2-2, conv3-3, conv4-3 and conv5-3 of the VGG16 network, respectively, and all fully connected layers of the VGG network are removed. The receptive fields of these side outputs are 5 × 5, 14 × 14, 40 × 40, 92 × 92 and 196 × 196, respectively; as the receptive field grows, location information is gradually lost while semantic information is gradually enriched. The network deconvolves these feature maps to the size of the original image and finally fuses the side outputs through learned weights, so that the result combines the accurate location information of the low-level layers with the rich semantic information of the high-level layers, making detection more accurate.
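The side-output structure can be illustrated as follows. This is a minimal PyTorch sketch assuming a torchvision VGG16 backbone; the stage slicing indices and the use of bilinear upsampling in place of learned deconvolution are our simplifications, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class HEDSketch(nn.Module):
    """Minimal sketch of HED: five side outputs tapped from VGG16 stages,
    upsampled to the input size and fused with learned weights."""
    def __init__(self):
        super().__init__()
        features = vgg16(weights=None).features
        # Slice VGG16 so each stage ends at conv1-2, conv2-2, conv3-3,
        # conv4-3 and conv5-3; the fully connected layers are dropped.
        self.stages = nn.ModuleList([
            features[:4],     # conv1-1..conv1-2
            features[4:9],    # pool1, conv2-1..conv2-2
            features[9:16],   # pool2, conv3-1..conv3-3
            features[16:23],  # pool3, conv4-1..conv4-3
            features[23:30],  # pool4, conv5-1..conv5-3
        ])
        channels = [64, 128, 256, 512, 512]
        # 1x1 convolutions reduce each stage to a one-channel edge response.
        self.scores = nn.ModuleList(nn.Conv2d(c, 1, 1) for c in channels)
        # Fusion layer: learned weighting of the five side outputs.
        self.fuse = nn.Conv2d(5, 1, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        sides, feat = [], x
        for stage, score in zip(self.stages, self.scores):
            feat = stage(feat)
            s = score(feat)
            # Upsample back to the input resolution (stands in for the
            # deconvolution layers of the original network).
            s = nn.functional.interpolate(s, size=(h, w), mode='bilinear',
                                          align_corners=False)
            sides.append(s)
        fused = self.fuse(torch.cat(sides, dim=1))
        return sides, fused  # per-pixel edge logits
```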
In the prediction phase, the output of a single side output can be used directly as the final result. Alternatively, the outputs of all side outputs can be averaged to form the final result, which further improves accuracy.
As shown in Table 1, side outputs 1–5 are the side output layers of the 1st–5th convolution stages, respectively. Fusion output is the output of the last (fusion) layer. Average 1–4 is the average of the outputs of side outputs 1–4, and average 1–5, average 2–4 and average 2–5 are defined analogously. The merged result combines the averages of the results of all layers.
During training, edge detection is in effect a two-class classification task for each pixel. Most pixels are non-edge and only a few are edge, so the distribution of positive and negative samples is highly unbalanced and ordinary cross-entropy loss cannot be used; a pos_weight (class-balanced) cross-entropy is used instead. The model output is shown in Figure 2.
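A minimal sketch of this loss, following the class-balancing scheme of the HED paper where positives are weighted by the fraction of negatives; PyTorch's built-in pos_weight argument could be used instead, and the variable names here are ours.

```python
import torch
import torch.nn.functional as F

def balanced_bce(logits, targets):
    """Class-balanced binary cross-entropy for edge maps.

    Edge pixels are rare, so plain BCE collapses toward 'no edge'.
    Positives are weighted by beta = |Y-| / |Y| (the fraction of
    negative pixels) and negatives by (1 - beta).
    """
    targets = targets.float()
    num_pos = targets.sum()
    num_neg = targets.numel() - num_pos
    beta = num_neg / (num_pos + num_neg)
    # Per-pixel weights: beta on edge pixels, (1 - beta) on background.
    weights = torch.where(targets > 0.5, beta, 1.0 - beta)
    return F.binary_cross_entropy_with_logits(logits, targets, weight=weights)
```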

2.2. Improved HED Network

The traditional HED algorithm is built on the VGG network structure. Because each stage is followed by pooling, the resolution of the feature maps output from the upper layers of the network is low. After deconvolution, the edge detection results from the last two side outputs lose too much position information, and the result fused with the lower side outputs retains too much of the target's internal texture. However, the high-level feature maps have rich semantic features that filter out texture edges well, so their output better reflects the appearance details of the target; the edges extracted from the high-level side outputs are thus the most meaningful for image edge extraction. Improving the resolution of the edge information from the last two side outputs is therefore conducive to improving image edge extraction.
Accordingly, the HED network structure is improved: the third and fourth pooling layers are removed, and the deconvolution layers in the last two side output branches are modified accordingly, so that more refined edges are extracted while high-level semantic information is retained. The improved HED network structure is shown in Figure 3.
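In terms of the HEDSketch above, the modification amounts to disabling pool3 and pool4 so the last two side outputs keep a higher resolution. This is an illustrative sketch of the idea, not the authors' exact layer surgery; in the sketch the upsampling already interpolates to full size, so only the pooling layers need to change.

```python
import torch.nn as nn

def improve_hed(model):
    """Remove pool3 and pool4 so conv4-3 / conv5-3 stay at 1/4 resolution.

    In HEDSketch, stage 4 begins with pool3 and stage 5 with pool4;
    replacing them with Identity keeps the high-level semantics while
    reducing the upsampling factor (and hence the location loss) of
    the last two side outputs.
    """
    model.stages[3][0] = nn.Identity()  # was pool3
    model.stages[4][0] = nn.Identity()  # was pool4
    return model
```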
The performance of the improved HED algorithm was evaluated on the BSDS500 data set, a visible-light image data set provided by the University of California, Berkeley. It contains 500 images in total: 200 training images, 200 test images and 100 validation images, with manually annotated ground truth. The Caffe network framework was adopted, and the F-measure was used to evaluate contour detection:
$$F_{\text{measure}} = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$$
where precision is the accuracy rate and recall is the recall rate. The F-measure of the improved HED algorithm is 0.5% higher than that of the original HED algorithm.
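For reference, the measure can be computed directly from a thresholded prediction; a minimal sketch (note that the official BSDS500 benchmark matches predicted edge pixels to ground truth within a small distance, which this exact-pixel version does not do).

```python
import numpy as np

def f_measure(pred, gt):
    """F-measure of a binary edge map against ground truth.

    pred, gt: boolean arrays of the same shape. Only illustrative:
    the BSDS benchmark uses distance-tolerant pixel matching.
    """
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```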
To verify the effectiveness of the improved HED algorithm, multiple groups of images were compared experimentally. From Figure 4 it is not difficult to see that although the abstract edge probability map output by the HED network displays the target clearly, its contour is fuzzy. By contrast, thanks to the removal of pooling layers, the improved HED network produces higher-resolution output with clearer contours, so its edge detection performance is significantly better than that of the original HED network.

2.3. Edge Detection Method Based on Improved HED Network

Because the output of the improved HED network is an abstract edge gray-scale map rather than a binary edge image, relying on the improved HED network alone cannot produce a complete and accurate target edge. Therefore, a method combining the traditional Canny edge detection algorithm with the improved HED edge detection is proposed. First, the Canny method extracts the image edge Is; an AND operation between Is and the edge map IHed extracted by the improved HED network yields the edge image Iz. Then the edges in Is that are connected with Iz are found and linked, finally producing the fused edge image.
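A minimal OpenCV sketch of this fusion step, assuming the improved network's probability map has already been computed; the probability threshold, Canny thresholds and the connected-components reading of the "connect the edges" step are our assumptions.

```python
import cv2
import numpy as np

def fuse_edges(gray, hed_prob, thresh=0.5):
    """Fuse Canny edges Is with improved-HED edges IHed.

    gray: uint8 grayscale image; hed_prob: float edge probability
    map in [0, 1] from the improved HED network, same size as gray.
    """
    i_s = cv2.Canny(gray, 50, 150)                      # Is: precise but noisy
    i_hed = (hed_prob > thresh).astype(np.uint8) * 255  # IHed: salient but coarse
    i_z = cv2.bitwise_and(i_s, i_hed)                   # Iz = Is AND IHed
    # Keep every Canny edge component that touches Iz, which links the
    # precise Canny edges to the salient HED response.
    _, labels = cv2.connectedComponents(i_s)
    keep = np.unique(labels[i_z > 0])
    fused = np.isin(labels, keep[keep > 0]).astype(np.uint8) * 255
    return fused
```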
An image edge detection comparison experiment was carried out to verify the effectiveness of the algorithm. Figure 5 and Figure 6 show the edge detection results of different algorithms on a visible-light scene and an infrared scene, respectively, where (a) is the original image, (b) is the improved HED network, (c) is the DeepContour result, and (d) is the algorithm in this paper. The figures show that DeepContour performs poorly on infrared image edge detection. The edge detection method based on the improved HED network highlights the salient contour of the target and reduces background interference, but the contour is coarse and the positioning accuracy insufficient. Compared with DeepContour and the improved HED network alone, the proposed method makes the target contour prominent and the positioning accurate while also improving noise robustness and sensitivity, thus obtaining better edge detection results.

3. Research on Automatic Target Recognition Algorithm Based on Geometric Primitives of Line Groups

Geometric primitives usually refer to the basic elements that constitute geometric figures; common ones include corners, line segments, circles and ellipses. Geometric primitive extraction plays an important role in model-based computer vision. Li Yan et al. used arc geometric primitives to match workpieces [14]; Zhen Zongkun et al. used points to extract architectural plane features [15]; Zhang Xu et al. used geometric primitives to characterize scattering characteristics [16]. Common artificial ground targets are mostly composed of straight lines, so the ratios of the areas of the triangles formed by combinations of geometric-primitive lines are used as the feature quantity describing a group of lines, thereby characterizing the artificial ground target. In this paper, we search for geometric primitives that reflect the structural characteristics of the target and use these characteristics to characterize it.
First, line segments are extracted from the image by the Hough transform; the Hough transform is a common method for recognizing line features in image processing, mapping the line coordinate space into a parameter space. According to spatial proximity, multiple line segments sharing common intersection points are then combined into common-point line geometric primitives. When combining line segments, the segments are first paired, the neighborhood threshold of each segment is determined from its length, spatially adjacent segments are combined and their intersection points are computed; finally, segments with common intersection points are merged into common-point line primitives, as sketched in the code below. The method is illustrated in Figure 7.
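A simplified sketch of the pairwise combination step. The paper does not give the exact length-dependent threshold rule, so the neighborhood threshold below is assumed to be a fixed fraction of the shorter segment's length, and the helper names are ours.

```python
import numpy as np

def seg_intersection(s1, s2):
    """Intersection of the infinite lines through segments s1, s2.

    Each segment is (x1, y1, x2, y2). Returns None for (near-)parallel lines.
    """
    x1, y1, x2, y2 = s1
    x3, y3, x4, y4 = s2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def group_segments(segments, frac=0.2):
    """Pair segments whose intersection lies near both of them.

    The neighborhood threshold is frac times the segment length, an
    assumed stand-in for the paper's length-dependent threshold.
    Returns (i, j, intersection) pairs; pairs sharing an intersection
    point form a common-point line primitive.
    """
    pairs = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            p = seg_intersection(segments[i], segments[j])
            if p is None:
                continue
            ok = True
            for s in (segments[i], segments[j]):
                ends = np.array(s, dtype=float).reshape(2, 2)
                length = np.linalg.norm(ends[0] - ends[1])
                # The intersection must fall inside the segment's neighborhood.
                dist = min(np.linalg.norm(ends[0] - p), np.linalg.norm(ends[1] - p))
                if dist > frac * length:
                    ok = False
            if ok:
                pairs.append((i, j, p))
    return pairs
```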
In this paper, four lines on the target image are selected as an affine-invariant geometric-primitive feature, and this feature represents the measured target. The similarity between two targets is measured by the distance between the feature quantities of their line groups. After the best-matching line group is found, the homonymous points between the target template and the real-time image are determined, the affine transformation matrix is computed, the target is located in the real-time image and automatic recognition is completed. The main process is shown in Figure 8.
For target detection in the real-time image, lines must first be detected in the real-time image, and combinations of four different lines from all detected lines are taken as the geometric primitives of candidate targets. Because the real-time image is easily corrupted by noise, it is preprocessed first; preprocessing includes filtering, edge extraction, line detection and other steps.

3.1. Noise Filtering

As can be seen from Figure 9, the mean, median and Gaussian filters are all commonly used, but they smooth the whole image and blur its boundaries. Bilateral filtering is a filtering operation that smooths noise while preserving edge information. Its calculation formula is as follows:
$$g(i,j) = \frac{1}{k(i,j)} \sum_{(i',j')} e^{-\frac{(i-i')^2 + (j-j')^2}{2\sigma_d^2}} \, e^{-\frac{[f(i,j)-f(i',j')]^2}{2\sigma_r^2}} f(i',j')$$
where $f(i,j)$ is the input image, $g(i,j)$ is the image after bilateral filtering, $(i',j')$ ranges over the neighborhood pixels of $(i,j)$, $\sigma_d$ and $\sigma_r$ are the standard deviations of the spatial domain and gray domain, respectively, and $k(i,j)$ is the normalization factor.
The formula shows that the smoothing strength of bilateral filtering varies by region: in areas where the gray level changes little, the smoothing effect is strong, while in areas of frequent gray-level change, such as edges, the smoothing effect is weak.
In effect, the bilateral filter judges whether the filter kernel lies in a flat area or an edge area: in a flat area it behaves like Gaussian filtering, while in an edge area the weight of the edge pixels is increased so that those pixels are kept as unchanged as possible.
Bilateral filtering was used to smooth the image. As shown in Figure 10, (a) is the original image, (b) is the image after bilateral filtering, and (c) and (d) are the Canny edge detection results on (a) and (b), respectively. The results show that bilateral filtering effectively reduces image noise and improves edge detection.
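A minimal OpenCV sketch of this preprocessing step; the file path, filter diameter and the two sigma values are illustrative choices, not the paper's parameters.

```python
import cv2

# Load a grayscale image; the path is illustrative.
img = cv2.imread('overpass.png', cv2.IMREAD_GRAYSCALE)

# Bilateral filter: d is the pixel neighborhood diameter, sigmaColor
# corresponds to the gray-domain sigma_r, sigmaSpace to the spatial sigma_d.
smoothed = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

# Edges are cleaner after edge-preserving smoothing.
edges_raw = cv2.Canny(img, 50, 150)
edges_smoothed = cv2.Canny(smoothed, 50, 150)
```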

3.2. Edge Extraction

After bilateral filtering, the edge detection algorithm based on the improved HED network is applied to extract the salient appearance details of the target, and the Canny edge detection algorithm is then combined with it to complete the extraction of the target edge.
As Figure 11 shows, merging the edges extracted by the improved HED network and by Canny effectively reduces the background edges in the image and yields salient target edges.

3.3. Line Detection

The Hough transform is used to extract lines from the binary images produced by the edge extraction algorithm; it can extract all the lines in the image. Figure 12 shows the line extraction results on a binary image. Many false lines appear, including lines that are too short, caused by noise, and long lines split into multiple segments. The lines extracted by the Hough transform therefore need further processing to retain the main lines and reduce the interference caused by false ones. The processing first eliminates the short false lines caused by noise: a fixed threshold is set, and any line shorter than the threshold is identified as a false line and removed.
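A minimal sketch of this step using OpenCV's probabilistic Hough transform; the accumulator threshold, length threshold and gap parameter are illustrative, not the paper's values.

```python
import cv2
import numpy as np

def detect_main_lines(edge_img, min_len=40):
    """Extract line segments from a binary edge image and drop short
    false lines.

    edge_img: uint8 binary edge image. min_len is the fixed length
    threshold below which a segment is treated as noise.
    """
    segs = cv2.HoughLinesP(edge_img, rho=1, theta=np.pi / 180,
                           threshold=50, minLineLength=min_len, maxLineGap=5)
    # minLineLength implements the fixed length threshold that removes
    # short false lines caused by noise; maxLineGap re-joins a long line
    # that noise has broken into multiple collinear segments.
    return [] if segs is None else segs.reshape(-1, 4)
```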

3.4. Determination of Homonymous Points by Geometric Primitives of Line Groups

Based on the property that the ratio of the areas of any two triangles is invariant under affine transformation, four lines on the target image are selected to form a geometric primitive. Using their line equations, the lines are sorted by their angle with the positive x-axis from small to large, and the triangle-area ratios between the lines describe the line group. Four straight lines form four triangles. The four areas are computed, the maximum of the four values is recorded as $S_{max}$, and the four values are normalized by $S_{max}$ to obtain the feature vector $[L_1, L_2, L_3, L_4]$ of the line group, where $L_i = S_i / S_{max}$. This feature vector then measures the correspondence between line groups. In this paper, the Euclidean distance between the feature vectors of two target line groups measures the similarity between their geometric primitives. If the feature vector of the template is $[L_1^0, L_2^0, L_3^0, L_4^0]$ and that of the real-time image is $[L_1^1, L_2^1, L_3^1, L_4^1]$, the Euclidean distance between them is as follows:
$$d = \sqrt{(L_1^0 - L_1^1)^2 + (L_2^0 - L_2^1)^2 + (L_3^0 - L_3^1)^2 + (L_4^0 - L_4^1)^2}$$
The line group with the smallest feature-vector distance is taken as the best match. After the best-matching line group is found, the intersections of the different lines in the group serve as homonymous points for computing the affine transformation matrix between the two target images. The lines are matched one by one to establish their intersections; once the one-to-one correspondence of the lines is computed, the homonymous points between the target template and the real-time image are obtained. Let $M_1$ be the intersection coordinates of the lines in the template line group and $M_2$ the intersection coordinates of the corresponding lines in the real-time image; their relationship is given by the following formula, where $H$ is the affine transformation matrix between the template image and the real-time image:
$$M_2 = H M_1$$
Expanding the affine transformation between the two images gives:
$$\begin{cases} x' = a_{11}x + a_{12}y + t_x \\ y' = a_{21}x + a_{22}y + t_y \end{cases}$$
Each pair of matching points between the two images yields two linear equations, so at least three pairs of non-collinear matching points are required to compute the affine transformation matrix $H$. The target can then be located in the real-time image, achieving automatic target recognition.
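The matching and positioning pipeline can be sketched as follows. This is a simplified illustration assuming each line is given in homogeneous form (a, b, c) with ax + by + c = 0 and that no two lines in a group are parallel; the helper names are ours, not the paper's.

```python
import cv2
import numpy as np
from itertools import combinations

def line_intersection(l1, l2):
    """Intersection of two lines in homogeneous form (a, b, c);
    assumes the lines are not parallel."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

def group_feature(lines):
    """Affine-invariant feature [L1, L2, L3, L4] of a four-line group.

    lines: four (a, b, c) tuples sorted by angle with the x-axis.
    Each triple of lines forms one triangle; the four triangle areas
    are normalized by the largest, S_max.
    """
    areas = []
    for i, j, k in combinations(range(4), 3):
        p1 = line_intersection(lines[i], lines[j])
        p2 = line_intersection(lines[j], lines[k])
        p3 = line_intersection(lines[i], lines[k])
        # Shoelace formula for the triangle area.
        areas.append(0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                               - (p3[0] - p1[0]) * (p2[1] - p1[1])))
    areas = np.array(areas)
    return areas / areas.max()

def best_group(template_feat, candidate_groups):
    """Index of the candidate line group whose feature vector has the
    smallest Euclidean distance d to the template feature vector."""
    dists = [np.linalg.norm(template_feat - group_feature(g))
             for g in candidate_groups]
    return int(np.argmin(dists))

def estimate_affine(m1, m2):
    """Affine matrix H from three non-collinear homonymous point pairs
    M1 (template) and M2 (real-time image)."""
    return cv2.getAffineTransform(np.float32(m1[:3]), np.float32(m2[:3]))
```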
As shown in Figure 13 and Figure 14, the target recognition algorithm was verified on one group of visible-light images and two groups of infrared and visible-light template images. A match with an error of less than 10 pixels was counted as a correctly recognized target, and the image recognition rate is the ratio of the number of correctly recognized images to the total number of images. For comparison with the SURF algorithm, 50 groups of homologous and heterogeneous images were used for validation; the results are shown in Table 2. The experiments show that both the proposed algorithm and the SURF algorithm accurately identify the target in visible-light target recognition, but the proposed algorithm is clearly superior to SURF for infrared/visible-light heterogeneous image recognition. SURF matches well between images with good contrast and small initial registration error, but its recognition ability is insufficient in scenes with long-distance imaging, low contrast and large initial error. The proposed algorithm accurately identifies the target with both homologous and heterogeneous templates; the experimental results verify its accuracy and universality.
Target recognition experiments on real-time images show that the proposed line-group geometric primitives effectively characterize the structural features of the target and accurately recognize it, verifying the effectiveness of the algorithm described in this paper.

4. Conclusions

In this paper, an automatic recognition method for artificial ground targets based on improved HED was designed. It extracts edge features through an edge detection algorithm based on improved HED, extracts lines from the edge images and measures the similarity between the real-time image and the template using the extracted line groups; target recognition and localization are then realized from the similarity measurements. The method removes the dependence on navigation information in practical applications and has good universality and high accuracy, as verified by target recognition experiments with both homologous and heterogeneous template matching. In recent years, the theory and methods of deep learning have progressed rapidly; limited at present by real-time performance that cannot yet meet application requirements and by the shortage of training samples, deep learning for edge extraction in heterogeneous matching images still has considerable room for improvement.

Author Contributions

Conceptualization, Y.J. and W.Z.; methodology, W.Z.; software, W.Z. and X.Z.; validation, X.Z.; formal analysis, W.Z. and X.Z.; investigation, W.Z.; resources, Y.J. and W.Z.; data curation, W.Z.; writing—original draft preparation, W.Z.; writing—review and editing, W.Z.; visualization, W.Z. and X.Z.; supervision, Y.J. and W.Z.; project administration, Y.J.; funding acquisition, Y.J. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to acknowledge the support from the following projects: Liaoning Province Higher Education Innovative Talents Program Support Project (Grant No. XLYC1902095) and Liaoning Province Basic Research Projects of Higher Education Institutions (Grant No. LG202107, LJKZ0239).

Institutional Review Board Statement

We have excluded these statements because the study did not require ethical approval.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yu, W. Automatic target recognition from an engineering perspective. J. Radar Sci. 2022, 11, 737–752.
  2. Zhang, B.; Pei, Y.; Huang, H. Dual model adaptive correlation filter tracking algorithm based on multi template matching. J. Terahertz Sci. Electron. Inf. Technol. 2022, 20, 618–625.
  3. Wang, L.; Jia, H.; Zhang, Y.; Zhang, G. Study on Implementation and Optimization of ARM-based Image Geometric Transformation Library. Comput. Sci. 2022, 49, 10–17.
  4. Huang, Y.; Chen, Z.; Chen, Q. Real-Time Detection Method for Transmission Line Faults Applying Edge Computing and Improved YOLOv5s Algorithm. Electr. Power Constr. 2023, 44, 91–99.
  5. Liu, H.; Ning, J.; Zou, Q. Research on Feature Extraction Technology of Mineral Zoning Image of Spiral Concentrator Based on Deep Learning. Nonferrous Met. Eng. 2022, 12, 91–99.
  6. Liu, Y.; Zhang, T.; Wang, E. Color Image Decolorization Algorithm Based on Efficient Edge Detection. Light Ind. Mach. 2022, 40, 52–58.
  7. Wang, H.; Yang, X. Energy storage control model based on compensating deviation of new energy output forecasting curve. J. Laser 2022, 43, 124–128.
  8. Li, X.; Yan, J. Gear surface edge detection based on improved Canny algorithm. Intell. Comput. Appl. 2022, 12, 180–183.
  9. Wu, Q.; Ma, L. A quantum image edge detection algorithm based on LoG operator. J. Quantum Electron. 2022, 39, 720–727.
  10. Xie, S.; Tu, Z. Holistically-Nested Edge Detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1395–1403.
  11. Shen, W.; Wang, X.; Wang, Y.; Bai, X.; Zhang, Z. DeepContour: A deep convolutional feature learned by positive-sharing loss for contour detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3982–3991.
  12. Bertasius, G.; Shi, J.; Torresani, L. DeepEdge: A multi-scale bifurcated deep network for top-down contour detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 4380–4389.
  13. Zhang, X.; Ren, Y. Improved multi-scale edge detection method based on HED. Microelectron. Comput. 2021, 38, 1–6+12.
  14. Li, Y.; Lu, Z.; Liu, F.; Li, X.; Liu, K. Arc workpiece matching method based on high-precision geometric primitives. Electron. Technol. Softw. Eng. 2019, 22, 73–76.
  15. Zhen, Z.; Cai, D. Automatic feature extraction technology based on planar geometric primitives. J. Water Resour. Archit. Eng. 2018, 16, 147–151.
  16. Zhang, X.; Xu, F.; Jin, Y. Review of high-frequency scattering model of canonical geometric primitives. J. Radar 2022, 11, 126–143.
Figure 1. Edge detection results of the HED network on natural images. (a) Original bridge image; (b) HED edge detection on the bridge image.
Figure 2. HED model output diagram.
Figure 3. Improved HED network structure.
Figure 4. Comparison between HED and the improved HED network. (a) Original image of dam; (b) HED result; (c) improved HED network result.
Figure 5. Visible-light edge detection results. (a) Original dam image 1; (b) improved HED network; (c) DeepContour; (d) the algorithm in this paper.
Figure 6. Infrared edge detection results. (a) Original earth image 2; (b) improved HED network; (c) DeepContour; (d) the algorithm in this paper.
Figure 7. Triangles formed by a line group.
Figure 8. Flow chart of automatic target recognition.
Figure 9. Common noise filters. (a) Original airport image; (b) mean filtering; (c) median filtering; (d) Gaussian filtering; (e) bilateral filtering.
Figure 10. Effect of bilateral filtering. (a) Original overpass image; (b) image after bilateral filtering; (c) Canny edge detection on the original image; (d) Canny edge detection after bilateral filtering.
Figure 11. Edge detection results. (a) Original airport image 1; (b) the algorithm in this paper on airport image 1; (c) original road image 2; (d) the algorithm in this paper on road image 2.
Figure 12. Main straight lines extracted by the Hough transform. (a) Lines extracted from airport image 1; (b) lines extracted from road image 2.
Figure 13. Scene 1 visible-light target matching. (a) Template; (b) real-time image; (c) green lines extracted from the template in Scene 1; (d) red lines extracted from the real-time image in Scene 1; (e) red box position in the template in Scene 1; (f) red box position in the real-time image in Scene 1; (g) red box position of SURF in Scene 1.
Figure 14. Scene 2 visible-light target matching. (a) Template; (b) real-time image; (c) green lines extracted from the template in Scene 2; (d) red lines extracted from the real-time image in Scene 2; (e) red box position in the template in Scene 2; (f) red box position in the real-time image in Scene 2; (g) red box position of SURF in Scene 2.
Table 1. HED detection accuracy.

Serial Number | Output        | ODS   | OIS   | AP
1             | Side-output 1 | 0.595 | 0.620 | 0.582
2             | Side-output 2 | 0.697 | 0.715 | 0.673
3             | Side-output 3 | 0.783 | 0.756 | 0.717
4             | Side-output 4 | 0.740 | 0.759 | 0.672
5             | Side-output 5 | 0.606 | 0.611 | 0.429
6             | Fusion-output | 0.782 | 0.802 | 0.787
7             | Average 1–4   | 0.760 | 0.784 | 0.800
8             | Average 1–5   | 0.774 | 0.797 | 0.822
9             | Average 2–4   | 0.766 | 0.788 | 0.798
10            | Average 2–5   | 0.777 | 0.800 | 0.814
11            | Merged result | 0.782 | 0.804 | 0.833
Table 2. Target recognition accuracy.

Scene          | Proposed | SURF
Visible light  | 92%      | 90%
Infrared light | 76%      | 8%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
