Article

Prediction and Optimization Algorithm for Intersection Point of Spatial Multi-Lines Based on Photogrammetry

Chengli Zhao, Hao Xiao, Zhangyan Zhao and Guoxian Wang
1 School of Transportation and Logistics Engineering, Wuhan University of Technology, Wuhan 430063, China
2 CCCC Second Harbor Engineering Company Ltd., Wuhan 430040, China
3 Key Laboratory of Large-Span Bridge Construction Technology, Wuhan 430040, China
4 Research and Development Center of Transport Industry of Intelligent Manufacturing Technologies of Transport Infrastructure, Wuhan 430040, China
* Authors to whom correspondence should be addressed.
Sensors 2022, 22(24), 9821; https://doi.org/10.3390/s22249821
Submission received: 21 October 2022 / Revised: 8 December 2022 / Accepted: 12 December 2022 / Published: 14 December 2022
(This article belongs to the Section Remote Sensors)

Abstract

The basic theory of photogrammetry is mature and widely used in engineering. However, engineering environments are complex, and corners or multi-line intersections are often occluded and cannot be measured directly. To solve this problem, a prediction and optimization algorithm for the intersection point of spatial multi-lines based on photogrammetry is proposed. The coordinates of points on the space lines are calculated by a photogrammetry algorithm. Owing to image-point distortion and point-selection errors, the lines do not strictly intersect at one point. The equations of the space lines are first used to fit an initial value of the intersection point. This initial point is projected onto each image, and the distances between the projected point and each line in the image plane, combined with information entropy, are used to weight the spatial lines involved in the calculation. The intersection point is then re-fitted, and the projection and recalculation are repeated until the error falls below a threshold or the set number of iterations is reached. Experiments in three different scenarios show that the proposed algorithm significantly improves the prediction accuracy of the intersection point.

1. Introduction

Cranes are important equipment for the development of modern industry and are widely used in ports, metallurgy, urban construction, aerospace, petrochemicals, and other fields. The role of the crane in cargo transportation is irreplaceable, as it can move high-load cargo, such as in the hoisting of offshore drilling platforms, the assembly of heavy ships, and the hoisting of nuclear power plant containment domes [1,2,3,4,5].
With the continuous development of hydraulic technology [6], computer technology [7], communication technology [8], advanced control technology [9], and new energy technology [10,11,12], the intelligence level of cranes is constantly improving, but structural safety [13] always remains an important part of their overall safety. Once a structural safety problem occurs in large construction machinery, it can easily cause significant loss of life and property [14]. Therefore, large-scale construction machinery such as hoisting machinery is also highly dangerous mechanical equipment [15].
In the past 20 years, safety assessment has gradually been applied in structural engineering, chemical disaster prevention, information security, and other fields, while safety assessment theories for cranes remain relatively few [16,17,18]. The load of a crane is usually large, ranging from several tons to thousands of tons, and the lifting height ranges from tens of meters to hundreds of meters [19]. With the increasing application of cranes, the number of accidents is therefore also rising, and it is very important to carry out safety assessment and risk monitoring of cranes.
The structural safety assessment of cranes refers to the assessment of the potential hazards and their severity for mechanical structures. Commonly used crane risk assessment methods include the fuzzy comprehensive assessment method, the risk assessment method based on combination weighting, and the comprehensive risk assessment method [20]. No matter which risk assessment method is used, key dimensions or the coordinates of key points need to be obtained. For construction machinery in complex working environments, the truss is a common local structure. The key points on a crane usually lie on a face of the structure in the image, so the selection of measuring tools is particularly critical. Especially for the complex local structures of a crane, the design of the engineering test is also critical, as it determines whether the key point coordinates can be measured accurately. Once measured, the key points serve as the basic data for crane safety evaluation and play an important role in it.
In the field of engineering, the commonly used measurement methods are the coordinate measuring machine, the articulated arm measuring machine, the laser tracker, the structured light measurement system, the total station, the laser scanner, and photogrammetry [21]. The three-coordinate measuring machine (CMM) has high measurement accuracy, good flexibility, and strong reverse engineering capabilities [22]. It is widely used in the mold industry and is a modern intelligent tool that integrates design, testing, and statistical analysis. As a portable measuring device, the articulated arm measuring machine must touch the point to be measured in space to complete the measurement [23]. The laser tracker uses a spherical coordinate system and relies on single-frequency laser interferometric ranging, which gives it high measurement accuracy and fast measurement speed; it also has certain advantages in large scenes [23]. Structured light measurement systems are divided into line structured light and surface structured light systems, which use the principle of triangulation to obtain the three-dimensional coordinates of points [24]. The total station is a high-precision measurement device that integrates optics, mechanics, and electronics. Its basic measurement principle is the same as that of the laser tracker, but the distance measurement method differs; its measurement distance is long and its application range is very wide [25]. Laser scanners use a large number of laser points to form point clouds, from which three-dimensional information about the outer surface of the object to be measured can be obtained [26]. Photogrammetry is based on the relative positional relationship between the photographic center, the image point, and the spatial point, using two or more images to complete the coordinate calculation of the spatial points; the measurement distance can be near or far [27]. Among the above methods, the CMM, the articulated arm measuring machine, and the structured light measurement system have short measurement distances. Both the articulated arm measuring machine and the laser tracker are contact measurement methods. The operation of the total station is complicated, and the coordinates of a large number of points must be measured one by one. Laser scanners are expensive to purchase and to maintain. None of these methods can meet the measurement needs of large construction machinery such as port machinery. The structures of such machinery are usually tall, the range to be measured is wide, and the working environment is complex. These factors require a measurement method that is efficient, non-contact, portable, and capable of long measuring distances. Among the existing measurement methods, photogrammetry is undoubtedly the most advantageous. With the development of computer technology and artificial intelligence, the accuracy and stability of image-based visual measurement are constantly improving. The intersection of close-range photogrammetry with advanced technologies such as computer vision is deepening, so the application range of close-range photogrammetry is also expanding [28].
The demand for close-range photogrammetry in aerospace [29], automobile manufacturing [30], mold manufacturing [31], materials science [32], biomedicine [33], cultural heritage protection [34], and other fields is increasing. After close-range photogrammetry entered the digital stage in 1980, many digital close-range photogrammetry systems appeared, including the TRITOP system of the German company GOM and the DPA-Pro system of the German company AICON 3D, whose technologies both come from the IAPG Institute. In addition, there are the V-STARS system of GSI (United States), the Australis system of Photometrix (Australia), and the Metronor system of Metronor (Norway). The V-STARS system, developed by GSI, is currently the most mature commercial digital close-range industrial photogrammetry system in the world [35]. The Lensphoto multi-baseline digital close-range photogrammetry system, developed with the direct participation of Academician Zhang Zuxun of Wuhan University, has been applied in water conservancy and electric power measurement, building measurement, cultural relics protection, and other fields [36].

2. Related Work

The application of close-range photogrammetry in port machinery and other large construction machinery is also increasing. Lu Enshun of Wuhan University of Technology [37] developed a photogrammetry-based radius measurement algorithm for the structural characteristics of port machinery; he used the approximate distortion model of the camera to determine the weight of each perpendicular bisector and compared the equal-weight fitting algorithm with the weighted fitting algorithm. Wang Qi [38] developed a focal-length calibration algorithm based on photogrammetry for port cranes; its central idea is to obtain image information through Internet of Things technology and then obtain the attitude parameters by minimizing an error function. Lin Xuanxiang [39] disclosed a method for detecting the deformation of the main beam of hoisting machinery based on photogrammetry, which uses photogrammetry to obtain the coordinates of the points to be measured and then fits a curve to the obtained coordinates to determine the deformation of the main beam. At present, the application of photogrammetry in large-scale construction machinery remains limited, leaving relatively broad room for research.
In the theory of close-range photogrammetry [40], observation errors are usually present, so redundant observations are introduced to increase the accuracy of the calculation. Using multiple images to complete the space intersection is a commonly used adjustment method. Li Zhongmei [50] added total least squares estimation to the process of multi-image space intersection and removed gross errors. Li Jiatian [41] combined space intersection with a neural network to reduce the influence of nonlinear errors on solving three-dimensional coordinates. Faugeras and Mourrain [42] gave a novel derivation of the trifocal tensor equations, using three images to complete the solution; this method has been widely used in various fields. For optimal three-view triangulation, Stewénius et al. [43] used methods of computational algebra to solve a polynomial system and obtain the triangulation result. Agarwal et al. [44] addressed the problem of global triangulation using fractional programming methods. Dai et al. [45] proposed a norm-based optimization method to improve the efficiency of triangulation. For multi-view triangulation, Zhang et al. [46] selected subsets of the observation vectors and solved on the subset data, which also improved the efficiency of triangulation.
Entropy is a thermodynamic concept proposed by the physicist R. Clausius. C. E. Shannon first introduced the entropy of thermodynamics into information theory, and the appearance of information entropy is a sign of the generalization of entropy. It is widely used in physics, chemistry, medicine, water conservancy, communication, and mechanical safety assessment [47]. Since information entropy can measure the uncertainty of the appearance of things, it is also widely used in image processing to increase the accuracy of feature extraction, but few studies combine the basic principles of information entropy with photogrammetry.
For the safety assessment of large machinery such as port machinery, critical dimensions or the coordinates of key points are very important. Due to limitations of the shooting angle or occlusion by other mechanical structures, the key points may be blocked or impossible to measure directly. As previous studies suggest, the key points for dimension measurement or safety assessment are generally the intersection points of three straight lines or corner points with obvious features [48,49]. Combining the structural features of large construction machinery, a prediction and optimization algorithm for the intersection point of multiple space straight lines is proposed [41,50,51]. In order to comprehensively account for the distortion of image points [52,53], the point-selection error, and the different shooting conditions of each image, the algorithm introduces information entropy [54,55]. The spatial lines involved in the fitting are then weighted [37] to further improve the fitting accuracy.
Therefore, compared with previous work, the innovations of this paper are as follows: (1) multiple images are used to predict and optimize the intersection of straight lines; (2) an iterative optimization method based on reprojection is proposed; and (3) information entropy is introduced to weight the space lines.
The rest of this paper is arranged as follows. Section 3.1 introduces the intersection point prediction algorithm of equal-weighted multiple lines. Section 3.2 introduces the intersection point prediction algorithm of weighted multiple lines. Section 4 introduces three experiments, all of which are used to verify the accuracy and stability of the algorithm in this paper. Section 5 provides a detailed and comprehensive analysis of the experimental results. Section 6 summarizes the entire paper.

3. Methodology

3.1. The Traditional Intersection Point Prediction Algorithm of Multiple Straight Lines

The manufacturing and installation process of a camera is very precise, but certain errors still remain [56]. At the same time, when the coordinates of the spatial points corresponding to the image points are calculated through the basic principles of photogrammetry [40], the selection of the image points also introduces certain errors. These factors lead to errors in the calculated spatial coordinates of the points to be measured. Hence, there will also be an error between the predicted intersection point and the ideal point.
Taking the intersection point of three straight lines as an example, as shown in Figure 1, there are seven image points in the image plane, namely a, b, c, d, e, f, and g, all of which are ideal image points, and the corresponding ideal space points are A, B, C, D, E, F, and G. The actually selected image points are a′, b′, c′, d′, e′, f′, and g′, and the actual spatial points calculated by photogrammetry are A′, B′, C′, D′, E′, F′, and G′. The straight line through points a and b is l_ab, the line through points c and d is l_cd, the line through points e and f is l_ef, and the lines l_ab, l_cd, and l_ef intersect at point g. Due to optical distortion and point-selection error, the line through points a′ and b′ is l_a′b′, the line through points c′ and d′ is l_c′d′, and the line through points e′ and f′ is l_e′f′; the lines l_a′b′, l_c′d′, and l_e′f′ will not strictly intersect at the point g′. Whether for the image plane containing the lines l_a′b′, l_c′d′, and l_e′f′ or for the object space containing the space lines L_A′B′, L_C′D′, and L_E′F′, the schematic diagram of the intersection can be represented by Figure 1. According to the basic principles of photogrammetry [40], the coordinates of the six spatial points A′, B′, C′, D′, E′, and F′ are obtained, and the equations of the three lines L_A′B′, L_C′D′, and L_E′F′ follow from these six coordinates. The three lines will not strictly intersect at the point G′. To obtain the spatial coordinates of the fitted point G′, a mathematical model is established that minimizes the sum of the distances from G′ to the three spatial lines.
The distances from image point g′ to the image-plane lines l_a′b′, l_c′d′, and l_e′f′ are $d_1$, $d_2$, and $d_3$, respectively. The distances from point G′ to the lines L_A′B′, L_C′D′, and L_E′F′ are $D_1$, $D_2$, and $D_3$, respectively. The solution formulas are as follows.
$$D_1 = \left\{ \left[ X_{G'} - \left( (X_{B'} - X_{A'}) m_1 + X_{A'} \right) \right]^2 + \left[ Y_{G'} - \left( (Y_{B'} - Y_{A'}) m_1 + Y_{A'} \right) \right]^2 + \left[ Z_{G'} - \left( (Z_{B'} - Z_{A'}) m_1 + Z_{A'} \right) \right]^2 \right\}^{1/2}$$
$$D_2 = \left\{ \left[ X_{G'} - \left( (X_{D'} - X_{C'}) m_2 + X_{C'} \right) \right]^2 + \left[ Y_{G'} - \left( (Y_{D'} - Y_{C'}) m_2 + Y_{C'} \right) \right]^2 + \left[ Z_{G'} - \left( (Z_{D'} - Z_{C'}) m_2 + Z_{C'} \right) \right]^2 \right\}^{1/2}$$
$$D_3 = \left\{ \left[ X_{G'} - \left( (X_{F'} - X_{E'}) m_3 + X_{E'} \right) \right]^2 + \left[ Y_{G'} - \left( (Y_{F'} - Y_{E'}) m_3 + Y_{E'} \right) \right]^2 + \left[ Z_{G'} - \left( (Z_{F'} - Z_{E'}) m_3 + Z_{E'} \right) \right]^2 \right\}^{1/2}$$
where
$$m_1 = \frac{(X_{B'} - X_{A'})(X_{G'} - X_{A'}) + (Y_{B'} - Y_{A'})(Y_{G'} - Y_{A'}) + (Z_{B'} - Z_{A'})(Z_{G'} - Z_{A'})}{(X_{B'} - X_{A'})^2 + (Y_{B'} - Y_{A'})^2 + (Z_{B'} - Z_{A'})^2}$$
$$m_2 = \frac{(X_{D'} - X_{C'})(X_{G'} - X_{C'}) + (Y_{D'} - Y_{C'})(Y_{G'} - Y_{C'}) + (Z_{D'} - Z_{C'})(Z_{G'} - Z_{C'})}{(X_{D'} - X_{C'})^2 + (Y_{D'} - Y_{C'})^2 + (Z_{D'} - Z_{C'})^2}$$
$$m_3 = \frac{(X_{F'} - X_{E'})(X_{G'} - X_{E'}) + (Y_{F'} - Y_{E'})(Y_{G'} - Y_{E'}) + (Z_{F'} - Z_{E'})(Z_{G'} - Z_{E'})}{(X_{F'} - X_{E'})^2 + (Y_{F'} - Y_{E'})^2 + (Z_{F'} - Z_{E'})^2}$$
$D_1$, $D_2$, and $D_3$ can be regarded as errors caused by camera distortion and point selection. Therefore, when the sum $(D_1 + D_2 + D_3)$ reaches its minimum value, the coordinates of point G′ are the desired coordinates. To facilitate the calculation, minimizing the sum $(D_1 + D_2 + D_3)$ is converted into minimizing the sum $(D_1^2 + D_2^2 + D_3^2)$. The calculation formula is as follows.
$$W(X_{G'}, Y_{G'}, Z_{G'}) = D_1^2 + D_2^2 + D_3^2$$
W is the objective function value, and the MATLAB optimization toolbox is used to minimize the nonlinear function W, yielding the coordinates of the point G′. This method is the traditional algorithm (TA).
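To make this fitting step concrete, a minimal Python sketch of the equal-weight fit is given below (the paper itself uses the MATLAB optimization toolbox); the helper names and the choice of scipy.optimize.minimize with the Nelder-Mead method are our own assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def point_line_distance(p, a, b):
    """Distance from 3D point p to the line through points a and b.
    m is the foot-of-perpendicular parameter (m_1, m_2, m_3 above)."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    m = np.dot(ab, p - a) / np.dot(ab, ab)
    return np.linalg.norm(p - (a + m * ab))

def fit_intersection_equal_weight(lines, x0):
    """TA step: minimize W = D_1^2 + ... + D_m^2 over the fitted point G'.
    lines is a list of (a, b) point pairs defining the space lines."""
    W = lambda p: sum(point_line_distance(p, a, b) ** 2 for a, b in lines)
    return minimize(W, x0, method="Nelder-Mead").x

# Hypothetical usage with the three lines L_A'B', L_C'D', L_E'F':
# G0 = fit_intersection_equal_weight([(A, B), (C, D), (E, F)], x0=(0, 0, 100))
```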

3.2. Intersection Point Prediction Algorithm of Weighted Multiple Lines

The initial coordinates of the fitted point G′ are obtained from the formulas in Section 3.1, and the corresponding image point g′ is obtained by projecting the spatial point G′ onto each image. The calculation formulas are as follows [40].
$$x_{g'} = -f\,\frac{(\cos\varphi\cos\kappa - \sin\varphi\sin\omega\sin\kappa)(X_{G'} - X_S) + \cos\omega\sin\kappa\,(Y_{G'} - Y_S) + (\sin\varphi\cos\kappa + \cos\varphi\sin\omega\sin\kappa)(Z_{G'} - Z_S)}{-\sin\varphi\cos\omega\,(X_{G'} - X_S) - \sin\omega\,(Y_{G'} - Y_S) + \cos\varphi\cos\omega\,(Z_{G'} - Z_S)} - \Delta x$$
$$y_{g'} = -f\,\frac{(-\cos\varphi\sin\kappa - \sin\varphi\sin\omega\cos\kappa)(X_{G'} - X_S) + \cos\omega\cos\kappa\,(Y_{G'} - Y_S) + (-\sin\varphi\sin\kappa + \cos\varphi\sin\omega\cos\kappa)(Z_{G'} - Z_S)}{-\sin\varphi\cos\omega\,(X_{G'} - X_S) - \sin\omega\,(Y_{G'} - Y_S) + \cos\varphi\cos\omega\,(Z_{G'} - Z_S)} - \Delta y$$
where $X_S$, $Y_S$, $Z_S$, $\varphi$, $\omega$, and $\kappa$ are the exterior orientation elements of the camera, and $\Delta x$, $\Delta y$, and $f$ are the intrinsic parameters of the camera.
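A minimal Python sketch of this projection follows, assuming the standard photogrammetric rotation matrix built from φ, ω, κ; the sign conventions follow common textbook forms and the helper name project is ours, so this may differ from the authors' implementation.

```python
import numpy as np

def project(P, ext, intr):
    """Project object point P = (X, Y, Z) onto an image via the collinearity
    equations. ext = (XS, YS, ZS, phi, omega, kappa); intr = (f, dx, dy)."""
    XS, YS, ZS, phi, om, ka = ext
    f, dx, dy = intr
    sp, cp = np.sin(phi), np.cos(phi)
    so, co = np.sin(om), np.cos(om)
    sk, ck = np.sin(ka), np.cos(ka)
    # Rows of R are the direction cosines (a1 b1 c1; a2 b2 c2; a3 b3 c3)
    R = np.array([
        [cp * ck - sp * so * sk,  co * sk,  sp * ck + cp * so * sk],
        [-cp * sk - sp * so * ck, co * ck, -sp * sk + cp * so * ck],
        [-sp * co,               -so,       cp * co],
    ])
    u = R @ (np.asarray(P, dtype=float) - np.array([XS, YS, ZS]))
    return np.array([-f * u[0] / u[2] - dx, -f * u[1] / u[2] - dy])
```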
When the error of an image point is larger, the distance from g′ to the straight line on which the image point lies is also larger. When multiple images are used in the solution, the shooting conditions of each image differ, so the confidence of each image differs, and so does the confidence of the straight lines formed by the calculated spatial points. Information entropy is therefore used to represent this uncertainty: the larger the entropy value, the smaller the weight of the indicator, and the smaller the entropy value, the greater the weight. For n images and m space lines, the distance between the jth line on the ith image and the projected point g′ on that image is denoted $d_{g'}(i,j)$, and the weight $P_e$ of each spatial line is negatively correlated with the corresponding $d_{g'}$, that is, $P_e \propto d_{g'}^{-1}$. Therefore, the information entropy formula of the jth space line is as follows.
$$H_j = -\frac{1}{\ln n} \sum_{i=1}^{n} \frac{1}{d_{g'}(i,j)} \ln\frac{1}{d_{g'}(i,j)}$$
Then, the weight formula of the jth spatial straight line is as follows.
$$P_e(j) = \frac{v_j}{\sum_{k=1}^{m} v_k}$$
where $v_j = 1 - H_j$.
Therefore, the formula for calculating the sum of the errors is as follows.
$$W(X_{G'}, Y_{G'}, Z_{G'}) = \sum_{j=1}^{m} P_e(j)\, D_j^2$$
where $D_j$ is the distance from point G′ to the jth straight line.
This method is called the weighted intersection point prediction and optimization algorithm (WIPPOA). The iterative calculation flow of this method is shown in Figure 2.
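The loop of Figure 2 can be sketched as follows, reusing the point_line_distance and project helpers from the sketches above; the data layout, the epsilon guard against log(0), and the stopping test are our own assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def wippoa(lines, images, g0, tol=1e-6, max_iter=10):
    """Entropy-weighted intersection refinement (WIPPOA), as a sketch.
    lines:  m pairs (a, b) of 3D points defining the space lines;
    images: n tuples (ext, intr, img_lines), where img_lines holds the m
            fitted 2D lines on that image as (point_on_line, direction) pairs;
    g0:     initial intersection from the equal-weight fit of Section 3.1."""
    G = np.asarray(g0, dtype=float)
    n, m = len(images), len(lines)
    for _ in range(max_iter):
        # d[i, j]: distance from the projection g' on image i to image line j
        d = np.empty((n, m))
        for i, (ext, intr, img_lines) in enumerate(images):
            g = project(G, ext, intr)
            for j, (p0, v) in enumerate(img_lines):
                v = np.asarray(v) / np.linalg.norm(v)
                w = g - np.asarray(p0)
                d[i, j] = np.linalg.norm(w - np.dot(w, v) * v)
        d = np.maximum(d, 1e-12)              # guard against log(0)
        # Entropy H_j from the 1/d terms, then line weights P_e(j)
        H = -(1.0 / np.log(n)) * np.sum((1.0 / d) * np.log(1.0 / d), axis=0)
        v_j = 1.0 - H
        Pe = v_j / v_j.sum()
        # Re-fit the intersection with the weighted objective W
        W = lambda p: sum(Pe[j] * point_line_distance(p, a, b) ** 2
                          for j, (a, b) in enumerate(lines))
        G_new = minimize(W, G, method="Nelder-Mead").x
        if np.linalg.norm(G_new - G) < tol:   # converged below threshold
            return G_new
        G = G_new
    return G
```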

4. Case Study

4.1. Experimental Flow

Since the effect of the algorithm cannot be presented by computer simulation, physical models are used, and three images and three straight lines are taken as an example for the verification experiments. To verify the stability and accuracy of the algorithm, three scenes are selected. The first scene is a black-and-white square model whose feature points are very clear. The second is an engineering scene in which the object to be measured carries no obvious marker points. The third is also an engineering scene, but here the object to be measured carries clearly marked points. The experimental flow chart is shown in Figure 3. The coordinates of the points forming the lines are calculated based on the basic principles of photogrammetry [40], and then the equations of the three lines are computed. Finally, the intersection point of the lines is fitted by the methods of Section 3.1 and Section 3.2, and the experimental effects of the two algorithms are compared.
The three sets of photos taken in the experiment are shown in Figure 4, Figure 5 and Figure 6.

4.2. Experimental Platform

4.2.1. First Experiment

As shown in Figure 7, the experimental model is composed of black and white squares with a side length of 100 mm. Points 1, 2, 3, and 4 are control points, and points A, B, C, D, E, F, and G are points to be measured. Points A and B determine the straight line L_AB, points C and D determine the line L_CD, points E and F determine the line L_EF, and point G is the intersection point of the lines L_AB, L_CD, and L_EF. There will be a certain error between the point G′ fitted from the three straight lines and the ideal point G, and this error is used to measure the accuracy of the two methods.
A Sony A7R4A camera is used to complete the experiment, and the camera is calibrated with the MATLAB calibration tool. The camera parameters are shown in Table 1, where f is the focal length, $\Delta x$ and $\Delta y$ are the principal-point offsets of the image, $k_1$ and $k_2$ are the radial distortion coefficients, $q_1$ and $q_2$ are the tangential distortion coefficients, and $s_1$ and $s_2$ are the thin prism distortion coefficients.
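For reference, a common form of such a distortion model (radial k, tangential q, and thin prism s terms) is sketched below; the exact formulation used by the MATLAB calibration tool may differ, so this should be read as an illustrative assumption rather than the paper's model.

```python
def distort(x, y, k1, k2, q1, q2, s1, s2):
    """Apply radial, tangential, and thin prism distortion to ideal image
    coordinates (x, y); a common textbook form, assumed for illustration."""
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2 * r2
    dx = x * radial + q1 * (r2 + 2 * x * x) + 2 * q2 * x * y + s1 * r2
    dy = y * radial + q2 * (r2 + 2 * y * y) + 2 * q1 * x * y + s2 * r2
    return x + dx, y + dy
```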
The object space coordinates of the control points and points to be measured are shown in Table 2.
Three images are taken at three camera stations, as shown in Figure 4. According to the basic principle of photogrammetry [40], the three images are used to complete the calculation of the points to be measured.
The image plane coordinates of the three images are shown in Table 3.
The six exterior orientation elements of the three images are shown in Table 4.
The calculated coordinates of the space points are shown in Table 5.

4.2.2. Second Experiment

As shown in Figure 8, points 1, 2, 3, and 4 are control points, and points A, B, C, D, E, F, and G are to be measured.
A Canon 5DS camera is used to complete the experiment. The camera parameters are shown in Table 6. As shown in Figure 5, three images are taken at three camera stations.
The object space coordinates of the control points and points to be measured are shown in Table 7.
The image plane coordinates of the three images are shown in Table 8.
The six exterior orientation elements of the three images are shown in Table 9.
The calculated coordinates of the space points are shown in Table 10.

4.2.3. Third Experiment

As shown in Figure 9, points 1, 2, 3, and 4 are control points, and points A, B, C, D, E, F, and G are to be measured.
A Canon 5DS camera is used to complete the experiment, and the camera parameters are shown in Table 11.
The object space coordinates of the control points and points to be measured are shown in Table 12.
The image plane coordinates of the three images are shown in Table 13.
The six exterior orientation elements of the three images are shown in Table 14.
The calculated coordinates of the space points are shown in Table 15.

5. Results and Discussion

The results of the three experiments with the weighted algorithm based on information entropy proposed in Section 3.2 are shown in Table 16, Table 17 and Table 18. Across 10 iterations, the object space coordinates of the fitted point G′ and the weights of the space lines are updated.
Table 19 shows the coordinates of point G′ obtained by the TA and WIPPOA algorithms.
The object of the first experiment is a model with a simple background and obvious feature points, so the error is small. The second and third experiments are engineering experiments with complex backgrounds, so the errors are larger. As can be seen from Table 16, Table 17 and Table 18, the algorithm converges after a few iterations, and the convergence speed is relatively fast. As can be seen from Figure 10, the results of the information-entropy-based weighted iteration algorithm proposed in Section 3.2 are superior to those of the equal-weight iteration algorithm in all three scenarios. Therefore, the above experiments show that the proposed algorithm effectively improves the fitting accuracy of the intersection point and has good stability and wide applicability.

6. Conclusions

Based on the basic theory of photogrammetry and the structural characteristics of large engineering machinery such as port machinery, a prediction and optimization algorithm for the intersection point of spatial multi-lines based on photogrammetry is proposed, taking into account the influence of factors such as image-point distortion and point-selection error. The algorithm takes the spatial fitting point calculated by the traditional method as the initial point of the weighted iteration. The projection points on each image are obtained from the spatial fitting point, the distances between the projection points and each line in the image plane are combined with information entropy to determine the weights of the space lines, and more accurate spatial coordinates of the intersection point are then obtained. Experimental results show that the proposed iterative optimization algorithm based on information entropy can significantly improve the accuracy of the fitted point, and the algorithm has strong practicability.
Due to the limitations of the experimental conditions, and in order to better present comprehensive experimental data, the experiments in this paper use only three images to calculate the intersection point. In subsequent studies, more images can be used to obtain more accurate fitted point coordinates. The method is applicable not only to port machinery but also to other large engineering machinery. This paper also introduces information entropy into the proposed algorithm and uses it to determine the relative weights. In future studies, we will focus on other structural characteristics of port machinery and other large machinery, and more targeted algorithms will be put forward. We aim to apply photogrammetry more widely in engineering and, at the same time, to apply information entropy to more aspects of photogrammetry.

Author Contributions

Z.Z. and G.W. guided the direction of the paper. H.X. provided the experimental platform and completed the experiment. C.Z. analyzed the experimental data and wrote the paper. All authors have read and approved the final manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

In the process of completing this paper, we would like to thank Zhangyan Zhao for his guidance on the direction of the paper, and thank other partners for providing key data. We also express our sincere thanks to the journal editors and anonymous reviewers for their help with the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fan, X.; Xu, G.; Wang, A. Evaluation Method of Remaining Fatigue Life for Crane Based on the Acquisition of the Equivalent Load Spectrum by the Artificial Neural Network. J. Mech. Eng. 2011, 47, 69–74. [Google Scholar] [CrossRef]
  2. Wang, S. New technology of semi-submersible drilling platform construction based on 20,000 t gantry crane. Ocean Eng. 2011, 29, 128–131. [Google Scholar]
  3. Zhai, G.; Ma, Z.; Zhu, H. The wind tunnel tests of wind pressure acting on the derrick of deepwater semi-submersible drilling platform. Energy Procedia 2012, 14, 1267–1272. [Google Scholar] [CrossRef] [Green Version]
  4. Choi, K.; Kim, D.J. Lifting analysis for heavy ship hull blocks using 4 cranes. In Proceedings of the Fourteenth International Offshore and Polar Engineering Conference, Toulon, France, 23–28 May 2004. [Google Scholar]
  5. Deng, G.; Ye, Y. The Integral Lifting of Sky-Dome of 2RX Security Cover of Nuclear Power Station. Constr. Technol. 2000, 29, 21–23. [Google Scholar]
  6. Lin, T.; Chen, Q.; Ren, H.; Huang, W.; Chen, Q.; Fu, S. Review of boom potential energy regeneration technology for hydraulic construction machinery. Renew. Sustain. Energy Rev. 2017, 79, 358–371. [Google Scholar] [CrossRef]
  7. Alberts, G.J.; Rol, M.A.; Last, T.; Broer, B.W.; Bultink, C.C.; Rijlaarsdam, M.S.; Van Hauwermeiren, A.E. Accelerating quantum computer developments. EPJ Quantum Technol. 2021, 8, 18. [Google Scholar] [CrossRef]
  8. Ramahatla, K.; Mosalaosi, M.; Yahya, A.; Basutli, B. Multi-Band Reconfigurable Antennas For 5G Wireless and CubeSat Applications: A Review. IEEE Access 2022, 10, 40910–40931. [Google Scholar] [CrossRef]
  9. Xu, Q.; Vafamand, N.; Chen, L.; Dragičević, T.; Xie, L.; Blaabjerg, F. Review on advanced control technologies for bidirectional DC/DC converters in DC microgrids. IEEE J. Emerg. Sel. Top. Power Electron. 2020, 9, 1205–1221. [Google Scholar] [CrossRef]
  10. Chen, Z.; Shi, J.; Li, Y.; Ma, B.; Yan, X.; Liu, M.; Jin, H.; Li, D.; Jing, D.; Guo, L. Recent progress of energy harvesting and conversion coupled with atmospheric water gathering. Energy Convers. Manag. 2021, 246, 114668. [Google Scholar] [CrossRef]
  11. Ahmad, T.; Zhang, D.; Huang, C.; Zhang, H.; Dai, N.; Song, Y.; Chen, H. Artificial intelligence in sustainable energy industry: Status Quo, challenges and opportunities. J. Clean. Prod. 2021, 289, 125834. [Google Scholar] [CrossRef]
  12. Martinez-Burgos, W.J.; de Souza Candeo, E.; Medeiros, A.B.P.; de Carvalho, J.C.; de Andrade Tanobe, V.O.; Soccol, C.R.; Sydney, E.B. Hydrogen: Current advances and patented technologies of its renewable production. J. Clean. Prod. 2021, 286, 124970. [Google Scholar] [CrossRef]
  13. Sadeghi, S.; Soltanmohammadlou, N.; Rahnamayiezekavat, P. A systematic review of scholarly works addressing crane safety requirements. Saf. Sci. 2021, 133, 105002. [Google Scholar] [CrossRef]
  14. Lingard, H.; Cooke, T.; Zelic, G.; Harley, J. A qualitative analysis of crane safety incident causation in the Australian construction industry. Saf. Sci. 2021, 133, 105028. [Google Scholar] [CrossRef]
  15. Zhang, W.; Xue, N.; Zhang, J.; Zhang, X. Identification of critical causal factors and paths of tower-crane accidents in China through system thinking and complex networks. J. Constr. Eng. Manag. 2021, 147, 04021174. [Google Scholar] [CrossRef]
  16. Li, B. Risk Assessment of Metallic Structure in Bridge Crane and Its Reliability. Ph.D. Thesis, Shanghai Jiaotong University, Shanghai, China, 2010. [Google Scholar]
  17. Zhang, P. The Major Hazards Identification and Risk Assessment Method Research on Gantry Crane. Ph.D. Thesis, Wuhan Engineering University, Wuhan, China, 2014. [Google Scholar]
  18. Dong, Q. Risk, Life Assessment and Repairable Decision-making of Mobile Crane Jib Structure in Service. Ph.D. Thesis, Taiyuan University of Science and Technology, Taiyuan, China, 2017. [Google Scholar]
  19. Mao, Y. Researches on Boom Telescopic Path Planning and Counterweight Mechanism Control Technologies of All-terrain Crane. Ph.D. Thesis, Jilin University, Jilin, China, 2021. [Google Scholar]
  20. Li, A. Research on Safety Assessment Methods of Quayside Container Crane. Ph.D. Thesis, Wuhan University of Technology, Wuhan, China, 2017. [Google Scholar]
  21. Zhang, Z.; Zheng, S.; Wang, X. Development and application of industrial photogrammetry technology. Acta Geod. Et Cartogr. Sin. 2022, 51, 843. [Google Scholar]
  22. Huang, G. Digital Close-Up Industrial Photogrammetry Theory, Method and Application; Science Press: Beijing, China, 2016. [Google Scholar]
  23. Yu, L.; Zhao, H. Key technologies and advances of articulated coordinate measuring machines. Chin. J. Sci. Instrum. 2017, 38, 1179–1188. [Google Scholar]
  24. Zhao, G.; Zhang, C.; Jing, X.; Sun, Z.; Zhang, Y.; Luo, M. Station-transfer measurement accuracy improvement of laser tracker based on photogrammetry. Measurement 2016, 94, 717–725. [Google Scholar] [CrossRef]
  25. Van der Jeught, S.; Dirckx, J.J. Real-time structured light profilometry: A review. Opt. Lasers Eng. 2016, 87, 18–31. [Google Scholar] [CrossRef]
  26. Kačur, I.P.I.P.J. In Proceedings of the 2016 17th International Carpathian Control Conference (ICCC), High Tatras, Slovakia, 29 May–1 June 2016. Available online: https://www.aconf.org/conf_74423.2016_17th_International_Carpathian_Control_Conference_(ICCC).html (accessed on 20 October 2022).
  27. Zhang, Z. Digital Photogrammetry; Wuhan University Press: Wuhan, China, 2012. [Google Scholar]
  28. Zhang, Z. Digital Photogrammetry and Computer Vision. Ph.D. Thesis, Wuhan University, Wuhan, China, 2004. [Google Scholar]
  29. Liu, T.; Burner, A.W.; Jones, T.W.; Barrows, D.A. Photogrammetric techniques for aerospace applications. Prog. Aerosp. Sci. 2012, 54, 1–58. [Google Scholar] [CrossRef]
  30. Blagojević, M.; Rakić, D.; Topalović, M.; Živković, M. Optical coordinate measurements in automotive industry. Teh. Vjesn. 2016, 23, 1541–1546. [Google Scholar]
  31. Xiao, Z.; Liang, J.; Tang, Z. 3D photogrammetry measurement and inspect technology for foam model of auto die. J. Plast. Eng. 2009, 16, 150–155. [Google Scholar]
  32. Steeves, J.; Pellegrino, S. Post-cure shape errors of ultra-thin symmetric CFRP laminates: Effect of ply-level imperfections. Compos. Struct. 2017, 164, 237–247. [Google Scholar] [CrossRef]
  33. Struck, R.; Cordoni, S.; Aliotta, S.; Pérez-Pachón, L.; Gröning, F. Application of photogrammetry in biomedical science. Biomed. Vis. 2019, 1120, 121–130. [Google Scholar]
  34. Arias, P.; Herraez, J.; Lorenzo, H.; Ordonez, C. Control of structural problems in cultural heritage monuments using close-range photogrammetry and computer methods. Comput. Struct. 2005, 83, 1754–1766. [Google Scholar] [CrossRef]
  35. Li, Y. Research on Key Technologies of Optical Three-Dimensional Shape Measurement Based on Close-range Industry Photogrammetry. Ph.D. Thesis, Shanghai University, Shanghai, China, 2020. [Google Scholar]
  36. Zhang, Z. 30 Years of Digital Photogrammetry Research; Wuhan University Press: Wuhan, China, 2007. [Google Scholar]
  37. Lu, E.; Liu, Y.; Zhao, Z.; Liu, Y.; Zhang, C. A Weighting Radius Prediction Iteration Optimization Algorithm Used in Photogrammetry for Rotary Body Structure of Port Hoisting Machinery. IEEE Access 2021, 9, 140397–140412. [Google Scholar] [CrossRef]
  38. Wang, Q.; Zhao, Z.; Lu, E.; Liu, Y.; Liu, L. A robust and accurate camera pose determination method based on geometric optimization search using Internet of Things. Int. J. Distrib. Sens. Netw. 2019, 15, 1550147719857581. [Google Scholar] [CrossRef]
  39. Lin, X.; Chen, Z.; Sun, Z.; Ling, X.; Li, Z.; He, P.; Wang, Z.; Wu, Y. A Photogrammetry-Based Deformation Detection Method for Main Beams of Hoisting Machinery. CN Patent 110068282 A, 30 July 2019. [Google Scholar]
  40. Wang, Z. Principle of Photogrammetry; Surveying and Mapping Publishing House: Beijing, China, 1979; Volume 7. [Google Scholar]
  41. Li, J.; Wang, C.; A, X.; Yan, L.; Zhu, Z.; Gao, P. Method of close-range space intersection combining multi-image forward intersection with single hidden layer neural network. Acta Geod. Cartogr. Sin. 2020, 49, 736. [Google Scholar]
  42. Faugeras, O.; Mourrain, B. On the geometry and algebra of the point and line correspondences between n images. In Proceedings of the IEEE International Conference on Computer Vision, Cambridge, MA, USA, 20–23 June 1995; pp. 951–956. [Google Scholar]
  43. Stewénius, H.; Schaffalitzky, F.; Nistér, D. How hard is 3-view triangulation really? In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05), Beijing, China, 17–21 October 2005; Volume 1, pp. 686–693. [Google Scholar]
  44. Kahl, F.; Agarwal, S.; Chandraker, M.K.; Kriegman, D.; Belongie, S. Practical global optimization for multiview geometry. Int. J. Comput. Vis. 2008, 79, 271–284. [Google Scholar] [CrossRef]
  45. Dai, Z.; Wu, Y.; Zhang, F.; Wang, H. A novel fast method for L-infty problems in multiview geometry. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; pp. 116–129. [Google Scholar]
  46. Zhang, Q.; Chin, T.J. Coresets for triangulation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 2095–2108. [Google Scholar] [CrossRef]
  47. Zhang, J. Information Entropy Theory and Application; China Water Resources and Hydropower Press: Beijing, China, 2012. [Google Scholar]
  48. Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; Volume 1. [Google Scholar]
  49. Guru, D.; Dinesh, R. Non-parametric adaptive region of support useful for corner detection: A novel approach. Pattern Recognit. 2004, 37, 165–168. [Google Scholar] [CrossRef]
  50. Li, Z.; Bian, S.; Qu, Y. Robust total least squares estimation of space intersection appropriate for multi-images. Acta Geod. Et Cartogr. Sin. 2017, 46, 593. [Google Scholar]
  51. Zhang, J.; Hu, A. Method and precision analysis of multi-baseline photogrammetry. J. Wuhan Univ. (Inform. Sci. Ed.) 2007, 32, 847–851. [Google Scholar]
  52. Marrugo, A.G.; Gao, F.; Zhang, S. State-of-the-art active optical techniques for three-dimensional surface metrology: A review. JOSA A 2020, 37, B60–B77. [Google Scholar] [CrossRef] [PubMed]
  53. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  54. Amigó, J.M.; Balogh, S.G.; Hernández, S. A brief review of generalized entropies. Entropy 2018, 20, 813. [Google Scholar] [CrossRef] [Green Version]
  55. Li, A.; Zhao, Z. Crane safety assessment method based on entropy and cumulative prospect theory. Entropy 2017, 19, 44. [Google Scholar] [CrossRef] [Green Version]
  56. Salvi, J.; Armangue, X.; Batlle, J. A comparative review of camera calibrating methods with accuracy evaluation. Pattern Recognit. 2002, 35, 1617–1635. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the intersection.
Figure 2. Iterative optimization calculation flow chart.
Figure 3. Experimental flow chart.
Figure 4. Images of the first experiment.
Figure 5. Images of the second experiment.
Figure 6. Images of the third experiment.
Figure 7. Points selection of the first experiment.
Figure 8. Points selection of the second experiment.
Figure 9. Points selection of the third experiment.
Figure 10. Error comparison.
Table 1. Camera parameters.

f (mm) | Resolution | Pixel Size (μm) | Δx (mm) | Δy (mm) | k1 | k2
54.71 | 9504*6336 | 3.7563 | −0.0226 | 0.0984 | −0.0671 | 2.4593
Table 2. The object space coordinates.

Points | X (mm) | Y (mm) | Z (mm)
1 | 0 | 0 | 400
2 | 300 | 0 | 0
3 | 0 | 0 | 0
4 | 0 | 300 | 0
A | 0 | 0 | 300
B | 0 | 0 | 200
C | 200 | 0 | 100
D | 100 | 0 | 100
E | 0 | 200 | 100
F | 0 | 100 | 100
G | 0 | 0 | 100
Table 3. Image plane coordinates of the three images.

Points | First Image | Second Image | Third Image
1 | (1.0354, −6.0228) | (−0.5533, −6.2137) | (0.8770, −14.1529)
2 | (−5.9271, 11.1107) | (−9.8585, 9.8317) | (−8.4733, 2.3829)
3 | (1.0595, 6.7681) | (−0.6178, 7.0176) | (0.5227, −0.5235)
4 | (9.7984, 10.0437) | (6.1281, 11.2745) | (8.0912, 3.5829)
A | (1.0369, −2.6083) | (−0.5691, −2.7196) | (0.7723, −10.5186)
B | (1.0329, 0.6688) | (−0.6058, 0.6625) | (0.6766, −7.0399)
C | (−3.5038, 6.3139) | (−6.6579, 5.5119) | (−5.2699, −2.0842)
D | (−1.1359, 4.9890) | (−3.5317, 4.6911) | (−2.2423, −2.9200)
E | (6.7762, 5.7132) | (3.7242, 6.3093) | (5.4785, −1.3899)
F | (3.8126, 4.7242) | (1.4773, 5.0608) | (2.9457, −2.5926)
Table 4. Exterior orientation elements of the three images.

Results | First Image | Second Image | Third Image
X_S (mm) | 1158.1825 | 848.6122 | 939.7586
Y_S (mm) | 901.3331 | 1207.0596 | 1110.4447
Z_S (mm) | 801.9554 | 713.2596 | 674.1236
φ (rad) | 46.0246 | 2.0902 | −0.9331
ω (rad) | 0.6261 | 0.8783 | 2.3723
κ (rad) | −39.5504 | 4.2890 | 1.0692
Table 5. Result of space point coordinates.

Results | X (mm) | Y (mm) | Z (mm)
A | 0.0923 | −0.1268 | 300.1892
B | 1.3620 | 0.9436 | 201.4292
C | 200.9326 | 0.6234 | 101.7127
D | 100.7434 | 0.0414 | 101.2665
E | 1.7547 | 200.8025 | 101.8961
F | −0.2896 | 100.1147 | 100.7337
Table 6. Camera parameters.

f (mm) | Resolution | Pixel Size (μm) | Δx (mm) | Δy (mm) | k1 | k2
54.58 | 8688*5792 | 4.1437 | −0.0128 | −0.0482 | −0.1051 | 0.2600
Table 7. The object space coordinates.

Points | X (mm) | Y (mm) | Z (mm)
1 | 0 | 0 | 20
2 | 0 | 0 | 286
3 | 286 | 0 | 0
4 | 0 | 286 | 0
A | 0 | 0 | 189
B | 0 | 0 | 216
C | 189 | 0 | 0
D | 216 | 0 | 0
E | 0 | 189 | 0
F | 0 | 216 | 0
G | 0 | 0 | 0
Table 8. Image plane coordinates of the three images.

Points | First Image | Second Image | Third Image
1 | (0.2154, 1.3764) | (−0.9096, 0.9425) | (−1.9299, −5.5487)
2 | (0.3845, −8.3994) | (−0.6524, −8.4335) | (−1.6810, −15.1306)
3 | (−7.2398, 6.8973) | (−6.2938, 6.4930) | (−5.8707, 0.5994)
4 | (10.1582, 5.6338) | (9.2210, 4.0027) | (8.7269, −3.1898)
A | (0.3609, −4.7909) | (−0.6998, −4.9935) | (−1.7236, −11.6007)
B | (0.3562, −5.7249) | (−0.7139, −5.9114) | (−1.7236, −12.5518)
C | (−4.5445, 5.1626) | (−4.3307, 4.7193) | (−4.4254, −1.4182)
D | (−5.2525, 5.5931) | (−4.8331, 5.1328) | (−4.8069, −0.9235)
E | (6.6608, 4.3270) | (5.7754, 3.1182) | (5.1525, −3.8039)
F | (7.6039, 4.6696) | (6.7049, 3.3669) | (6.1154, −3.6430)
Table 9. Exterior orientation elements of the three images.

Results | First Image | Second Image | Third Image
X_S (mm) | 1028.3502 | 1214.2786 | 1252.0560
Y_S (mm) | 727.1096 | 574.6095 | 427.4130
Z_S (mm) | 680.8420 | 672.3328 | 696.8904
φ (rad) | 11.5397 | 14.6075 | −13.5447
ω (rad) | 2.5925 | 0.3833 | 2.8993
κ (rad) | 1.2498 | −8.0723 | 32.8013
Table 10. Result of space point coordinates.

Results | X (mm) | Y (mm) | Z (mm)
A | 1.7017 | 1.7258 | 193.5912
B | 3.0446 | 1.8387 | 218.8006
C | 189.3388 | −2.8676 | −2.0063
D | 215.2834 | −2.4519 | −0.7051
E | 2.2671 | 194.3242 | 3.7206
F | 1.6158 | 219.9360 | 2.9717
Table 11. Camera parameters.

f (mm) | Resolution | Pixel Size (μm) | Δx (mm) | Δy (mm) | k1 | k2
59.64 | 8688*5792 | 4.1437 | −0.0067 | −0.0363 | −0.0910 | −0.2109
Table 12. The object space coordinates.

Points | X (mm) | Y (mm) | Z (mm)
1 | 0 | 0 | 20
2 | 0 | 0 | 300
3 | 300 | 0 | 0
4 | 0 | 300 | 0
A | 0 | 0 | 100
B | 0 | 0 | 200
C | 100 | 0 | 0
D | 200 | 0 | 0
E | 0 | 100 | 0
F | 0 | 200 | 0
G | 0 | 0 | 0
Table 13. Image plane coordinates of the three images.

Points | First Image | Second Image | Third Image
1 | (−2.7266, −1.0344) | (1.1911, −0.2700) | (1.0960, −4.8962)
2 | (−2.9537, −9.2997) | (1.3194, −8.4048) | (1.1291, −13.0625)
3 | (−11.7818, 2.6239) | (−8.4538, 1.6320) | (−8.3276, −2.2037)
4 | (2.7505, 4.7265) | (4.0894, 6.3217) | (5.3035, 1.4577)
A | (−2.7947, −3.1986) | (1.2056, −2.4122) | (1.0894, −7.0475)
B | (−2.8676, −6.1531) | (1.2587, −5.3099) | (1.0960, −9.9291)
C | (−5.6159, 0.4847) | (−1.9918, 0.7257) | (−1.9702, −3.6781)
D | (−8.5921, 1.5352) | (−5.1826, 1.1816) | (−5.0989, −2.9534)
E | (−1.0727, 1.1045) | (2.0142, 2.0897) | (2.3123, −2.6232)
F | (0.7439, 2.8708) | (2.9922, 4.1148) | (3.7022, −0.6773)
Table 14. Exterior orientation elements of the three images.

Results | First Image | Second Image | Third Image
X_S (mm) | 745.0881 | 425.2814 | 606.0294
Y_S (mm) | 1333.7530 | 1475.0074 | 1396.6841
Z_S (mm) | 1040.1013 | 1057.3479 | 1079.3126
φ (rad) | −44.6395 | 65.6167 | −28.7047
ω (rad) | 8.6576 | 0.9252 | 0.7950
κ (rad) | 38.5326 | −59.2604 | 28.8441
Table 15. Result of space point coordinates.

Results | X (mm) | Y (mm) | Z (mm)
A | −0.1471 | 0.2365 | 98.7990
B | 0.8819 | 1.7787 | 199.8051
C | 100.3537 | −0.4227 | 0.1450
D | 201.2386 | 2.6954 | 2.3575
E | 0.3740 | 98.9673 | 0.4328
F | −0.0762 | 197.7977 | −1.2015
Table 16. Results of the first experiment.

Iteration | 1 | ... | 9 | 10
P_e(1) | 0.27997144 | ... | 0.27997711 | 0.27997711
P_e(2) | 0.36192177 | ... | 0.36191784 | 0.36191784
P_e(3) | 0.35810679 | ... | 0.35810505 | 0.35810505
X_G′ (mm) | 0.18107110 | ... | −0.12525462 | −0.12525462
Y_G′ (mm) | 0.77749965 | ... | 0.61155227 | 0.61155227
Z_G′ (mm) | 100.22591947 | ... | 100.22466629 | 100.22466629
Table 17. Results of the second experiment.

Iteration | 1 | ... | 9 | 10
P_e(1) | 0.31479429 | ... | 0.31524867 | 0.31524867
P_e(2) | 0.31036264 | ... | 0.30944042 | 0.30944042
P_e(3) | 0.37484307 | ... | 0.37531091 | 0.37531091
X_G′ (mm) | −0.39151546 | ... | −0.88701586 | −0.88701526
Y_G′ (mm) | −2.27309467 | ... | −0.88701580 | −0.88701520
Z_G′ (mm) | −0.80592502 | ... | 0.14265211 | 0.14265036
Table 18. Results of the third experiment.

Iteration | 1 | ... | 9 | 10
P_e(1) | 0.32706779 | ... | 0.32701057 | 0.32701057
P_e(2) | 0.32135332 | ... | 0.32143443 | 0.32143443
P_e(3) | 0.35157890 | ... | 0.35155500 | 0.35155500
X_G′ (mm) | −0.11845095 | ... | −1.19857656 | −1.19857641
Y_G′ (mm) | −2.37925704 | ... | −1.19857649 | −1.19857635
Z_G′ (mm) | 0.02247442 | ... | 0.09803921 | 0.09803913
Table 19. Coordinates update.

Point G′ | TA | WIPPOA
First experiment | (0.1811, 0.7775, 0.2259) | (−0.1253, 0.6116, 0.2247)
Second experiment | (−0.3915, −2.2731, −0.8059) | (−0.8870, −0.8870, 0.1427)
Third experiment | (−0.1185, −2.3793, 0.0225) | (−1.1986, −1.1985, 0.0980)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
