Article

Real-Time Detection Method for Center and Attitude Precise Positioning of Cross Laser-Pattern

State Key Laboratory of Precision Measuring Technology and Instruments, College of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin 300072, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(20), 9362; https://doi.org/10.3390/app11209362
Submission received: 16 August 2021 / Revised: 27 September 2021 / Accepted: 28 September 2021 / Published: 9 October 2021

Abstract

Optical metrology has developed rapidly in recent years, and the cross laser-pattern has become a common cooperative measuring marker in optical metrology equipment, such as infrared imaging equipment or visual 3D measurement systems. Rapid and accurate extraction of the center point and attitude of the cross-marker image is the first prerequisite for ensuring measurement speed and accuracy. In this paper, a cross laser-pattern is used as a cooperative marker. In the project, the cross laser-pattern image has a high resolution and is vulnerable to adverse environmental effects, such as stray light, smoke, water mist and other interference, which result in poor contrast, a low signal-to-noise ratio (SNR) and an uneven energy distribution. Accordingly, a method is proposed to detect the center point and attitude of the cross laser-pattern image based on Gaussian fitting and least square fitting. Firstly, the distortion of the original image is corrected in real time, and the corrected image is smoothed by a median filter, which suppresses noise while preserving the edge sharpness and detail of the image. To adapt to different environments, the maximum inter-class variance method with automatic threshold selection is used to determine the image segmentation threshold and eliminate the background interference caused by different illumination intensities. To improve the real-time performance of the algorithm, the four cross-laser edge pixel sets are obtained by line search and then fitted by the least square method. With the edge lines, the transverse and portrait (vertical) lines of the cross-laser image are separated, and the Gaussian center points of all Gaussian sections of the transverse and portrait lines are then calculated based on the Gaussian fitting method.
Based on the traditional line fitting method, the sub-pixel centers of the transverse and portrait laser strip images are fitted after removing outlying points, and the center coordinates and attitude information of the cross laser-pattern are calculated using the center-line equations of the laser strips, realizing accurate positioning of the cross laser-pattern center and attitude. The results show that the method is robust: the center positioning accuracy is better than 0.6 pixels, the attitude positioning accuracy is better than ±15″ under a smoke and water mist environment, and the processing time is less than 0.1 s, which meets the real-time requirements of the project.

1. Introduction

During the past 30 years, optical metrology has found numerous applications in scientific and commercial fields because of its non-contact and inherently nonintrusive advantages [1], and its speed and accuracy have increased substantially in recent years [2]. The cross laser-pattern is often used as a cooperative measuring marker in optical metrology, with a laser plane projected on the measured surface and digital processing of the acquired images [3], for example in infrared viewing equipment [4] or three-dimensional visual measurement systems [5].
There are two forms of cross laser. The first, shown in Figure 1a, is formed by a single point light source. The laser beam emitted by the laser source propagates along the Z direction, and the light is divided into two perpendicular planes by refraction through the shaping lens. The intersection lines between the two planes and the imaging plane form a cross, distributed along the X and Y directions, respectively [6]. Figure 1b shows the other form. The laser beams emitted by two laser sources form two planes through refraction by the shaping lens, and a cross laser is formed when the two planes are perpendicular. The propagation direction of the two planes is Z, and the two beams are distributed along the X and Y directions, respectively.
In general applications in scientific or commercial fields [7,8], the image acquisition module is near the laser and the laser projection distance is within decimeters, so the image quality is little affected by the environment, and existing methods can realize the digital processing of the acquired image. However, the proposed method is applied in a shipboard environment, where the cross laser-pattern is projected a dozen meters to the measured surface. Adverse environmental effects, such as stray light, smoke, water mist [9] and other interference [10], may therefore result in poor contrast, a low signal-to-noise ratio (SNR) and an uneven energy distribution in the acquired image, which brings challenges to digital processing.
The cross laser-pattern in the second form is used as a cooperative marker to establish the optical reference [11] in this paper. The cross-laser projection direction is not perpendicular to the imaging plane, and the relative position between the cross laser and the imaging plane changes in real time; when the position of the measured point changes relative to the laser during measurement, the angles of the projected images [12] of the X axis and Y axis lines of the cross laser-pattern on the imaging plane change accordingly. The center and attitude (namely the angles of the X and Y axis laser lines of the cross laser-pattern in camera image coordinates) are the key parameters to be detected [13] in this paper, while existing cross-marker image center positioning algorithms cannot meet the attitude positioning requirements.
At present, existing methods for extracting the central point coordinates of the cross laser-pattern include the Hough transform, projection statistical features with k-means clustering, morphology and so on. Zhang Jingdan et al. [14] proposed a sub-pixel center detection algorithm for cross laser images: after binarizing the laser image, the pixel groups of the two laser center lines are detected by line search and averaging; the pixel groups are then taken as fitting sample points, with which the two laser lines are fitted by the least square method; finally, the coordinates of the center of the cross laser-pattern are obtained. However, this method assumes that the transverse line of the cross laser is strictly horizontal. Because of the uncertainty of the cross-laser attitude in image coordinates in the project and the need for real-time detection, this method cannot meet the project requirements. Liu Bingqi et al. [15] proposed a center positioning method for an infrared cross division line. This method processes the image with the top-hat operator of gray morphology and segments the result with the maximum inter-class variance method; according to the characteristics of the cross laser-pattern, a cross-shaped brush is designed for image erosion, achieving cross laser-pattern center positioning. The algorithm can effectively extract the cross-laser center from an image with poor contrast and blurred edges, and it is not affected by random noise. However, it is only effective for low-resolution images, such as 263 × 290 pixels; for cross laser-pattern images with more pixels there are too many points, so the algorithm is time-consuming and has poor real-time performance, and it cannot meet the cross-attitude positioning requirements either. Dong Dai et al. [16] proposed a sub-pixel cross microscopic image positioning algorithm based on the Hough transform.
The main idea of this method is to extract the four edge lines of the cross laser-pattern with a coarse Hough transform, then apply a precise Hough transform around the four coarse edge lines to obtain four precise edge lines; the average of the intersection points of the four precise edge lines is taken as the cross center. This method works well for detecting the center of an ideal cross laser-pattern. However, given the large amount of acquired cross laser-pattern image data, poor contrast, low signal-to-noise ratio and uneven energy distribution, it cannot meet the real-time and precision requirements of the project. Miao Jiazhuang et al. [17] proposed a cross center positioning algorithm based on projection statistical features and k-means clustering. This algorithm is essentially a statistical method: first, the projection values of all pixels of the cross laser-pattern are acquired in 180 directions from 0 to 180 degrees; then the light-bar projection boundary is determined by constructing a frequency histogram with the k-means clustering algorithm; finally, the central coordinates are obtained by coordinate transformation. However, this method is not suitable for a wide cross laser-pattern and cannot achieve attitude positioning, so it does not meet the project requirements either.
In this paper, the cross laser-pattern is used as a cooperative measuring marker in a shipboard environment with stray light, smoke, water mist and other interference. A method based on the combination of Gaussian fitting and least square fitting is proposed to process the acquired digital images, which suffer from poor contrast, a low signal-to-noise ratio (SNR) and an uneven energy distribution, and to detect the center point and attitude of the cross laser-pattern image. The results show that the method is robust and meets the real-time and precision requirements of the project.

2. System Composition

In this paper, the center and attitude positioning method for the cross laser-pattern has been successfully applied to the position and attitude measurement system in a real-time shipboard lever-arm vector measuring device. Given the good directivity and energy concentration of a laser, the attitude measurement system takes the cross-laser beam fixed near the main inertial navigation system (INS) as the measurement datum, and measures the displacement and attitude of the measured position relative to the main INS in real time. The system consists of two parts, the laser emission system and the image acquisition system, as shown in Figure 2.

2.1. Laser Emission System

The laser emission system is composed of a cross laser and an adapter. The cross laser is composed of two line lasers with a wavelength of 635 nm and a maximum output power of 50 mW each. The two line lasers are fixed into a cross laser by the adapter and installed near the main INS as the measuring datum.

2.2. Image Acquisition System

The image acquisition system consists of three parts: the laser receiving screen, an industrial camera fixed behind the screen and a dust shield. The laser receiving screen is made of transparent acrylic board with a size of 5 mm × 200 mm × 200 mm. A band-pass filter film matched to the laser frequency is coated on the exterior surface of the receiving screen; this allows the cross laser-pattern to pass through the screen and, together with the dust shield, effectively reduces the interference of stray light with the system. A thin layer of diffuse reflection coating is present on the interior surface of the screen; on this layer, the cross-laser beam forms a cross laser-pattern with Lambertian reflection. To acquire the cross laser-pattern image, an AVT Mako-U503B industrial camera is used, with a frame rate of 14 FPS, a resolution of 2592 × 1944 pixels and a pixel size of 2.2 μm × 2.2 μm. The camera lens is a Computar M0814-MP, a megapixel fixed-focus lens with a focal length of 8 mm, a working distance of 0.1 m~∞ and an aperture range of F1.4~F16C. The image acquisition system is fixed at the measured point and, together with the laser emission system, constitutes the position and attitude measurement system. The cross laser is used as the measurement datum, and the cross laser-pattern image on the receiving screen is acquired by the industrial camera in real time.
In order to verify the robustness of the algorithm to the uneven distribution of energy under the influence of smoke and water mist, the environmental impact of smoke and water mist was simulated as shown in Figure 3. A smoke generator and an ultrasonic atomizer were used to generate smoke and water mist, and an acrylic tube was used to confine the smoke and water mist within the light path of the cross laser-pattern.
The acquired digital image of cross laser-pattern under simulated smoke and water mist environment is shown in Figure 4:

3. Methodology

The flow chart of the proposed algorithm is shown in Figure 5.

3.1. Distortion Correction

Due to the limitations of the equipment and various problems in the assembly of the system, the cross laser-pattern images collected by the industrial camera in the position and attitude measurement system contain various distortions. According to the characteristics of the project, perspective distortion [18] and radial distortion [19] are the two most important factors affecting the result, so this paper mainly considers these two distortions and corrects them in real time.
Before distortion correction, a thin checkerboard grid for the Zhang Zhengyou calibration method [20] was placed on the interior surface of the screen, and an image of the checkerboard grid was acquired by the industrial camera. This image was used to obtain the distortion calibration parameters, with which distortion correction can be performed in real time.

3.1.1. Perspective Distortion Correction

Due to the assembly error of the image acquisition system, the camera optical axis was not absolutely perpendicular to the receiving screen, thus introducing perspective distortion, which needs to be corrected by perspective transformation. The schematic diagram is shown in Figure 6.
Suppose that $(x, y, 1)$ is a coordinate point of the original image plane in homogeneous form and $(X, Y, Z)$ is the corresponding coordinate point after transformation; then:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = A \begin{bmatrix} x \\ y \\ 1 \end{bmatrix},\qquad A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$
where $[X, Y, Z]^T$ and $[x, y, 1]^T$ are the target point vector and the source point vector, respectively, and $A$ is the perspective transformation matrix. This is a transformation from two-dimensional space into three-dimensional homogeneous space; since the target image point is still two-dimensional, suppose $(X', Y', Z')$ is the point on the target image plane, then:
$$X' = \frac{X}{Z},\qquad Y' = \frac{Y}{Z},\qquad Z' = \frac{Z}{Z} = 1$$
$$X' = \frac{a_{11}x + a_{12}y + a_{13}}{a_{31}x + a_{32}y + a_{33}},\qquad Y' = \frac{a_{21}x + a_{22}y + a_{23}}{a_{31}x + a_{32}y + a_{33}}$$
where $a_{33} = 1$. From the above formulas, the following equations can be obtained:
$$\begin{cases} a_{11}x + a_{12}y + a_{13} - a_{31}xX' - a_{32}yX' = X' \\ a_{21}x + a_{22}y + a_{23} - a_{31}xY' - a_{32}yY' = Y' \end{cases}$$
There are eight unknowns in these equations. Correcting a distorted image by perspective transformation requires the coordinates of four points in the distorted image and the corresponding four points in the target image. The perspective transformation matrix A can be calculated from these two sets of coordinate points, and the transformation can then be applied to the whole original image, realizing perspective distortion correction.
Since the position relationship between the laser receiving screen and the camera was fixed in this paper, once calibrated, the perspective transformation matrix was also constant. Therefore, the perspective transformation matrix can be obtained by prior calibration, and the perspective distortion can be corrected in real time by using the same matrix.
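As a concrete illustration of solving the eight unknowns from four point pairs, the following sketch (a minimal NumPy implementation written for this explanation, not the authors' code; the function names are our own) builds and solves the 8 × 8 linear system derived above and applies the resulting matrix to a point:

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve the eight unknowns a11..a32 (with a33 = 1) from four
    corresponding point pairs (src -> dst), using the linear equations
    a11*x + a12*y + a13 - a31*x*X' - a32*y*X' = X' (and the Y' analogue)."""
    M, b = [], []
    for (x, y), (X, Y) in zip(src, dst):
        M.append([x, y, 1, 0, 0, 0, -x * X, -y * X]); b.append(X)
        M.append([0, 0, 0, x, y, 1, -x * Y, -y * Y]); b.append(Y)
    a = np.linalg.solve(np.array(M, float), np.array(b, float))
    return np.append(a, 1.0).reshape(3, 3)

def warp_point(A, x, y):
    """Apply the perspective transformation to one point (divide by Z)."""
    X, Y, Z = A @ np.array([x, y, 1.0])
    return X / Z, Y / Z
```

For whole-image correction, the same matrix would be applied to every pixel via inverse mapping, or an equivalent library routine (e.g., OpenCV's getPerspectiveTransform/warpPerspective) could be used.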

3.1.2. Radial Distortion Correction

Radial distortion is a distortion distributed along the radius caused by the non-ideal properties of the lens itself; it can be corrected by the radial correction–distortion model.
It is assumed that the distortion center is the origin of the coordinates, the coordinates in the corrected image are $(x_u, y_u)$, the corresponding coordinates in the distorted image are $(x_d, y_d)$, and the coordinates of the distortion center are $(x_0, y_0)$. The radial correction–distortion model is as follows:
$$\begin{cases} x_u = x_d \left(1 + k_1 r_d^2 + k_2 r_d^4\right) \\ y_u = y_d \left(1 + k_1 r_d^2 + k_2 r_d^4\right) \end{cases}$$
where $r_d = \sqrt{x_d^2 + y_d^2}$. In this paper, the radial distortion coefficients $k_1$ and $k_2$ were calibrated by the Zhang Zhengyou checkerboard grid method; once calibrated, they are constant, and real-time correction of the radial distortion of the cross laser-pattern image can be realized with this formula.
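A minimal sketch of the radial correction–distortion model (our own illustrative code; in practice k1 and k2 would come from the checkerboard calibration):

```python
def undistort_radial(xd, yd, k1, k2, x0=0.0, y0=0.0):
    """Map a distorted point (xd, yd) to corrected coordinates (xu, yu)
    using xu = xd*(1 + k1*rd^2 + k2*rd^4), about the center (x0, y0)."""
    xr, yr = xd - x0, yd - y0
    r2 = xr * xr + yr * yr          # rd^2 = xr^2 + yr^2
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x0 + xr * scale, y0 + yr * scale
```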
Reprojection error [21] is often used to evaluate distortion correction accuracy. In this paper, after the image distortion corrections, the reprojection error was 0.35 pixels, indicating a good distortion correction effect.

3.2. Image Preprocessing

Image preprocessing refers to processing applied to the original image before the main algorithm, in order to make the features of interest more prominent or to adapt the image to the subsequent algorithm. Common preprocessing operations include smoothing, threshold segmentation and so on.
Image smoothing reduces the gray-level difference between pixels and their neighbors, suppressing image noise at the cost of some blur. In order to preserve the edge sharpness and detail of the image while suppressing noise, a median filter was used to smooth the original image, yielding the filtered image.
Image threshold segmentation divides the scene into two categories, target object and background, and is the basis of image recognition. The gray threshold method is a common segmentation method; it includes the direct threshold method and the optimal threshold method. The optimal threshold method determines the optimal segmentation threshold using statistical knowledge, and its representative is the maximum inter-class variance method (Otsu) [22]. For laser images acquired under environmental interference, the maximum inter-class variance method, which determines the segmentation threshold automatically, was used to binarize the filtered image and eliminate the background interference caused by the adverse environment. The gray histogram is divided into two groups at a candidate threshold, and the threshold is chosen where the variance between the two groups is maximum; this can effectively distinguish the cross laser-pattern from the background [23]. However, when stray light is too bright, it introduces noise into the acquired image, lowering the discrimination between the background and the cross laser-pattern and affecting the accuracy of the segmentation threshold. To avoid this problem, the image acquisition system uses the band-pass filter film matched to the laser frequency and the dust shield to form a darkroom, preventing the effects of stray light.
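The maximum inter-class variance threshold can be sketched as follows (a plain-NumPy illustration of Otsu's method, not the authors' implementation):

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level that maximizes the inter-class variance
    of an 8-bit image, following Otsu's method."""
    hist = np.bincount(np.asarray(img).ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability at each level
    mu = np.cumsum(p * np.arange(256))     # class-0 cumulative mean
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b2)))
```

Pixels above the returned threshold are labeled as the laser stripe; the hysteresis band used in the edge search of Section 3.3 is built around this value.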

3.3. Cross Laser-Pattern Separation

The separation of the X and Y axis lines of the cross laser is the premise for realizing the center extraction and attitude positioning of the cross laser-pattern, and its foundation is the extraction of the edge lines of the cross laser-pattern. However, traditional straight-line detection algorithms, such as the Hough transform, take a long time and have poor real-time performance, which does not meet the requirements of this project. Owing to the characteristics of the project, the angle of the cross spot in the image coordinate system is less than ±5°, and the cross laser intersects the boundary of the field of view, as shown in Figure 4a. The rising and falling edges of the image are defined first.
Figure 7 presents the gray value curve of the image in the horizontal pixel direction; here the segmentation threshold is the one obtained by the maximum inter-class variance method described above. In order to eliminate noise interference, a hysteresis is applied around the segmentation threshold. When the gray value of the searched pixel is greater than or equal to the threshold plus the hysteresis and the gray value of the previous pixel is less than that of the current pixel, the point is designated as a rising edge in that searching direction. When the gray value of the searched pixel is less than or equal to the threshold minus the hysteresis and the gray value of the previous pixel is greater than that of the current pixel, the point is designated as a falling edge in that searching direction.
Taking the left-edge straight-line detection of the cross laser-pattern as an example, each pixel is searched line by line from left to right along the horizontal direction; the algorithm is shown in Figure 8. L is 1.2 times the cross-line width in pixels.
  • If the first searched edge is a rising edge, then keep searching:
    If a falling edge is found, then stop searching:
    • If ΔX ≤ L (where ΔX is the distance between the rising and falling edges), then store the rising edge coordinates into the feature data set;
    • If ΔX > L, then return null.
    If there is no falling edge in the searching direction, then return null;
  • If the first searched edge is a falling edge, then keep searching:
    If a rising edge is found, then stop searching and store the rising edge coordinates into the feature data set;
    If there is no rising edge in the searching direction, then return null;
  • If there is neither a rising edge nor a falling edge in the searching direction, then return null.
So far, the feature data set containing the pixel coordinates of the left edge of the cross spot has been obtained.
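The left-edge search above can be sketched for a single image row as follows (an illustrative simplification covering the rising-then-falling case; the function name and row-wise interface are our own):

```python
def find_left_edge(row, thresh, hyst, L):
    """Scan one row of gray values left to right and return the x coordinate
    of a rising edge that is followed by a falling edge within L pixels,
    or None if the pattern is not found."""
    rising = None
    hi, lo = thresh + hyst, thresh - hyst
    for x in range(1, len(row)):
        if rising is None:
            # rising edge: value crosses (threshold + hysteresis) upward
            if row[x] >= hi and row[x - 1] < row[x]:
                rising = x
        else:
            # falling edge: value crosses (threshold - hysteresis) downward
            if row[x] <= lo and row[x - 1] > row[x]:
                return rising if x - rising <= L else None
    return None
```

Running this on every row along the horizontal direction yields the left-edge feature data set.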
The least square method is a mathematical method for seeking the best function fit to data, with the criterion of minimizing the sum of squared errors. In this paper, the left-edge feature data set was used as the sample to fit the best edge line, realizing left edge line extraction. For the other cross laser-pattern edges in the right, upper and lower directions, the edge line extraction algorithm is similar to that for the left edge, differing only in the searching directions. Thus the extraction of the cross-laser edge lines is completed, and the result is shown in Figure 9.
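Since the left and right edges are near-vertical (the cross angle is within ±5°), the least square fit is better conditioned when x is expressed as a function of y. A minimal sketch (our own illustrative code, not the authors'):

```python
import numpy as np

def fit_edge_line(points):
    """Least-squares fit of x = m*y + c through edge pixels (x_i, y_i);
    returns the slope m and intercept c."""
    pts = np.asarray(points, float)
    m, c = np.polyfit(pts[:, 1], pts[:, 0], 1)
    return m, c
```

For the upper and lower edges, which are near-horizontal, y would be fitted as a function of x instead.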
On this basis, the separation of the X and Y axis lines of the cross laser-pattern was conducted; the algorithm is introduced as follows.
Taking the Y axis separation as an example: the gray values of the pixels to the left of the left reference line, formed by shifting the left edge line a certain distance to the left, are set to zero, and the gray values of the pixels to the right of the right reference line, formed by shifting the right edge line a certain distance to the right, are set to zero. The X axis separation is similar. In order to eliminate the influence of the overlapping part of the cross laser on the Gaussian fitting, the corresponding overlapping area on the X and Y axes was removed. The final separation result is shown in Figure 10.

3.4. Gaussian Fitting

As shown in Figure 11, there are two candidate fitting sections. Fitting Section 1 is perpendicular to the light strip; fitting Section 2 samples along the y axis of the image coordinate system. The gray value distribution in each fitting section accords with a Gaussian distribution [24]. However, the section points in fitting Section 1 are not located at integer pixels, so their gray values must be determined by interpolation, which introduces interpolation error. The section points in fitting Section 2 are all located at integer pixels, so there is no interpolation error and the processing speed improves significantly; therefore, fitting Section 2 was used in this paper.
Taking the cross section of Y axis laser line of cross laser-pattern as an example as shown in the Figure 12a, the gray distribution accords with Gaussian model as shown in Figure 12b.
Therefore, the gray distribution can be described with a Gaussian function model:
$$f(y) = A \exp\left[-\frac{(y - \mu)^2}{2\delta^2}\right]$$
where $f(y)$ is the gray value of a pixel in the cross section of the Y axis laser line of the cross laser-pattern, $\mu$ is the y pixel value of the Gaussian center of the cross section, and $\delta$ characterizes the half-width of the section. The Gaussian function model is fitted to the gray distribution of the cross section, and the control parameters $(A, \mu, \delta)$ are solved to obtain the center coordinate of the section.
The center points of all sections are used as the feature data set to fit the central line, and thus the central line of the laser line can be obtained.
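One common way to solve for (A, μ, δ) without iterative optimization is to fit a parabola to the logarithm of the gray values; the following is an illustrative sketch of that approach (our own code, not necessarily the authors' solver):

```python
import numpy as np

def gaussian_center(ys, grays):
    """Estimate the Gaussian center mu and width delta of one section.
    Taking logs of f(y) = A*exp(-(y-mu)^2/(2*delta^2)) gives a quadratic
    in y, which is fitted by least squares."""
    ys = np.asarray(ys, float)
    grays = np.asarray(grays, float)
    mask = grays > 0                       # log is undefined at zero gray
    a, b, _ = np.polyfit(ys[mask], np.log(grays[mask]), 2)
    mu = -b / (2.0 * a)                    # vertex of the parabola
    delta = np.sqrt(-1.0 / (2.0 * a))      # recovered width parameter
    return mu, delta
```

In practice only pixels well above the noise floor should enter the fit, since the log transform amplifies noise in the tails of the section.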

3.5. Central Line Detection for Laser Line

To obtain the central line of the laser line, the proposed algorithm removes outlying points [25] based on the traditional line fitting method, as shown in Figure 13. The algorithm is as follows:
  • Depending on the pixel radius, a subset of data is selected from the feature data set. The pixel radius determines the effective pixels: it is the maximum allowed distance between an effective pixel and the straight line. At the beginning of line fitting, the algorithm draws a straight line through two randomly chosen points, and the pixels within the radius of that line form a data subset;
  • Based on the data subset, a straight line is fitted and its Mean Square Distance (MSD) is calculated. The smaller the MSD value, the closer the pixels are to the fitted line and the higher the fit quality;
  • The last fitted data subset is removed, the line is re-fitted based on the remaining data, and the MSD is calculated again;
  • The above procedure is repeated until all data in the feature data set have been fitted; the line with the smallest MSD is then selected as the candidate line;
  • The candidate line is optimized by continuously removing the farthest outlying point and re-fitting the line based on the remaining points, and the Line Fit Score (LFS) is calculated:
$$LFS = \left(1 - \frac{MSD}{PR^2}\right) \times 1000$$
where PR is the pixel radius. When the LFS reaches the required minimum score, the current fitted line is returned.
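The iterative removal of outlying points can be sketched as follows (a simplified illustration with our own names; the full algorithm above also searches over multiple initial subsets):

```python
import numpy as np

def fit_line_removing_outliers(pts, pixel_radius, min_score=990.0):
    """Fit y = m*x + c, repeatedly dropping the farthest point until the
    Line Fit Score LFS = (1 - MSD/PR^2) * 1000 reaches min_score."""
    pts = np.asarray(pts, float)
    while True:
        m, c = np.polyfit(pts[:, 0], pts[:, 1], 1)
        # squared perpendicular distances of all points to the line
        d2 = (pts[:, 1] - (m * pts[:, 0] + c)) ** 2 / (1.0 + m * m)
        msd = d2.mean()
        lfs = (1.0 - msd / pixel_radius ** 2) * 1000.0
        if lfs >= min_score or len(pts) <= 2:
            return m, c, lfs
        pts = np.delete(pts, np.argmax(d2), axis=0)
```

A single gross outlier is rejected in one pass, after which the fit converges to the remaining collinear points.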
In this way, the equations of the central lines of the X and Y axis lines of the cross laser can be obtained separately. The intersection point of the two center lines is the center point of the cross laser-pattern. Moreover, from the slopes of the two center lines, the angles of the X and Y axis laser lines of the cross laser-pattern in camera image coordinates can be obtained; that is, the attitude information of the cross laser can be determined.
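Given the two fitted center lines, the center and attitude follow directly. A sketch (our own illustration; both lines are written as y = m*x + c for simplicity, whereas the near-vertical line would in practice be fitted as x = m*y + c):

```python
import math

def cross_center_and_attitude(m1, c1, m2, c2):
    """Center = intersection of the two center lines y = m*x + c;
    attitude = the angle of each line in the image plane, in degrees."""
    x = (c2 - c1) / (m1 - m2)      # intersection of the two lines
    y = m1 * x + c1
    a1 = math.degrees(math.atan(m1))
    a2 = math.degrees(math.atan(m2))
    return (x, y), (a1, a2)
```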

4. Experiments and Results

4.1. Principle Verification Experiment on Cross Center Positioning Accuracy of Cross Laser-Pattern

Since the theoretical center coordinates of the cross laser-pattern cannot be obtained directly, a horizontal displacement measurement experiment was conducted to verify the positioning accuracy of the cross center indirectly. The image acquisition system was placed on a PI micro translation platform with an accuracy of ±1 μm. The experimental schematic diagram and the experimental process diagram are shown in Figure 14.
The experiment data are shown in Table 1.
From the table, the conversion coefficient is 0.0359; that is, one pixel is equivalent to 0.0359 mm in the world coordinate system. The horizontal displacement measurement experiment was then conducted. Before the measurement, the X axis of the image coordinate system was adjusted to be parallel to the shifting axis of the PI micro translation platform by rotating the image coordinate system; that is, when the image acquisition system moved horizontally with the PI micro translation platform, the vertical coordinates of the cross laser-pattern in the image coordinate system remained constant.
After the calibration of the conversion coefficient, the image acquisition system was ready for the displacement measurement experiment to verify the cross center positioning accuracy. During the experiment, the cross laser was fixed as the measurement datum, the PI micro translation stage provided the displacement reference value for the image acquisition system, and the related displacement was measured with the measurement algorithm and the conversion coefficient. The relationship between measurement error and displacement value is shown in Figure 15:
After compensating the measured data according to the error relation curve shown in the figure, the results are shown in Figure 16:
According to the experimental results, the accuracy of the horizontal displacement measurement is better than ±3 μm; relative to the conversion coefficient of 0.0359, the center positioning accuracy is better than 0.2 pixels.
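The pixel-level claim follows from the calibrated conversion coefficient by simple arithmetic:

```python
# Consistency check of the reported numbers (illustrative only).
coeff_mm_per_px = 0.0359      # one pixel corresponds to 0.0359 mm
err_mm = 0.003                # displacement error bound: +/- 3 um
err_px = err_mm / coeff_mm_per_px
print(round(err_px, 3))       # about 0.084 pixel, i.e. better than 0.2 pixels
```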

4.2. Principle Verification Experiment on the Attitude Positioning Accuracy of Cross Laser-Pattern

The attitude of the cross laser refers to the angle of the X or Y axis cross laser line in image coordinates. Since the theoretical angle value of the X or Y axis laser line in image coordinates cannot be obtained directly, the attitude positioning accuracy was verified indirectly by a rolling angle measurement experiment.
The experimental schematic diagram and the experimental process is shown in Figure 17:
Before the experiment, the cross laser was fixed to a high-precision turntable (accuracy ±3″) with an adapting part, and the laser position was adjusted to ensure that the laser projection axis was coaxial with the turntable axis; that is, the center of the cross laser image did not change in image coordinates when the turntable rotated. The measurement principle is shown in Figure 18.
For convenience of presentation, only the X axis laser line of the cross laser is drawn. Here, $\alpha_0$ is the angle of the laser line at the initial moment in the image coordinate system and $\alpha_1$ is the angle of the laser line in the image coordinate system after rotation. $\theta_m$ represents the rotation angle measured by the image acquisition system, and $\theta_r$ represents the reference value of the rotation angle obtained from the turntable system. Since the projection axis of the cross laser cannot be guaranteed to be perpendicular to the receiving screen of the image acquisition system, a projection error is introduced into the measurement [26]; $e$ represents the measurement error. The correlation is shown in Table 2:
The repeatability experiment error curves are shown in Figure 19:
Based on the repeatability error curve, the average error value is obtained, and the error model is established by linear fitting. The related models are shown in Figure 20 and the results are shown in Figure 21:
According to the results, after compensation, the system angle measurement accuracy is better than ±10” and the repeatability accuracy is better than ±10”.

4.3. Verification Experiment under Simulated Smoke and Water Mist Environment

In order to verify the robustness of the proposed method under smoke and water mist environment, the following experiments were conducted.

4.3.1. Verification Experiment on Cross Center Positioning Accuracy of Cross Laser-Pattern

The schematic diagram of this experiment is shown in Figure 22. The image acquisition system was placed on a PI micro translation platform, which was fixed on a tripod. A smoke generator and an ultrasonic atomizer were used to generate smoke and water mist, and an acrylic tube was used to confine the smoke and water mist within the light path of the cross laser-pattern.
During the experiment, there was smoke and water mist in the light path of the cross laser-pattern. The cross laser was fixed as the measurement datum, and the PI micro translation platform moved at an interval of 5 mm as the reference displacement value. With the proposed measurement algorithm and the calibrated conversion coefficient, the related displacement can be measured. The relationship between measurement error and displacement value is shown as follows:
After compensating the measured data according to the error relation curve shown in Figure 23, the compensation results are shown in Figure 24:
According to the results, the proposed method still works under the smoke and water mist environment. The accuracy of horizontal displacement measurement is better than ±10 μm; given the conversion coefficient of 0.0359 mm/pixel, the center positioning accuracy is better than 0.6 pixels under the simulated smoke and water mist environment.
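For reference, the pixel-to-millimeter conversion used here is a single multiplication by the calibrated coefficient; a sketch with the Table 1 value:

```python
K_MM_PER_PIXEL = 0.035901863  # calibrated conversion coefficient from Table 1

def displacement_mm(delta_x_pixels):
    # Convert a measured image-plane shift of the cross center (pixels)
    # into a physical displacement (millimeters)
    return delta_x_pixels * K_MM_PER_PIXEL

# One 5 mm step of the PI platform corresponds to ~139.27 pixels
step_mm = displacement_mm(139.2685385)
```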

4.3.2. Verification Experiment on Attitude Positioning Accuracy of Cross Laser-Pattern

The experimental schematic diagram is shown in Figure 25a, and the experiment procedure is shown in Figure 25b:
In this experiment, the image acquisition system was placed on an electrical turntable with an accuracy of ±4″. The cross laser-pattern was fixed as the measurement datum. The image acquisition system was rotated by the electrical turntable along the projection direction of the cross laser. With the proposed method, the measured rolling angles were obtained under the smoke and water mist environment, and the reference rolling angle values were obtained from the electrical turntable controlling software.
The repeatability experiment error curves under smoke and water mist environment are shown in Figure 26:
Based on the repeatability error curves under the smoke and water mist environment, the average error values were obtained, and the error model was established by the linear fitting method. The related model and results are shown in Figure 27:
The related linear error model was used to determine error compensation; the compensated result is shown in Figure 28:
According to the result, the system angle measurement accuracy, under smoke and water mist environment, is better than ±15” and the repeatability accuracy is better than ±15”.

5. Discussions and Conclusions

In this paper, a cross laser-pattern is used as a cooperative marker, and a method is proposed to detect the center point and attitude of the cross-laser image based on the combination of Gaussian fitting and least square fitting. In the principal verification experiment, without smoke and water mist in the light path of the cross laser-pattern, the center positioning accuracy is better than 0.2 pixels, the angle detection accuracy is better than ±10″, and the processing time is less than 0.1 s.
In the verification experiment under the smoke and water mist environment, the proposed method still works: the center positioning accuracy is better than 0.6 pixels, the angle detection accuracy is better than ±15″, and the processing time is still less than 0.1 s, which meets the real-time and precision requirements of the project. Moreover, the proposed method continues to work on digital images with poor contrast, low signal-to-noise ratio and uneven energy distribution, on which some other methods fail, which further indicates its robustness. However, the smoke and water mist environment does degrade the accuracy of the method; therefore, an environment compensation study needs to be conducted in future research to further improve the accuracy under such conditions.

Author Contributions

Investigation, Methodology, Designed the Experiments, Processed Experiment Data and Writing—original draft, H.L.; Supervision, Improved the Experiment and Writing—review and editing, Z.Q.; Developed System Software and Conducted Experiment, H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by National Natural Science Foundation of China (NSFC) (No: 51775378).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, F.; Brown, G.M.; Song, M. Overview of three-dimensional shape measurement using optical methods. Opt. Eng. 2000, 39, 10–22.
  2. Jezeršek, M.; Možina, J. High-speed measurement of foot shape based on multiple-laser-plane triangulation. Opt. Eng. 2009, 48, 933–956.
  3. Jezeršek, M.; Možina, J. A laser anamorph profilometer. J. Mech. Eng. 2003, 49, 76–89.
  4. Bingqi, L.; Yunsheng, S.; Jiaju, Y.; Xueyi, X. A central location method of infrared cross-division. Opt. Instrum. 2012, 34, 44–48.
  5. Jin, S.; Fei, Y. 3D visual measurement of sub-pixel based on cross-line structured light. Tools Technol. 2007, 41, 71–74.
  6. He, Y. A Precision Cross Laser Module. China Patent CN201720181496.6, 27 February 2017.
  7. Qin, X.-P.; Ding, J.-X.; Dong, H.-Y.; Dong, S.-J. Calibration of cross structured light based on linear space rotation. Opt. Precis. Eng. 2021, 29, 1430–1439.
  8. Zhu, Z.M.; Guo, J.C.; Sun, B.W.; Yu, Y.F.; Fu, P.P.; Ma, G.R.; Tang, Y.Y.; Liu, B. Multifunctional Vision Sensor Device Based on Compound Laser Structured Light. China Patent CN201711060054.7, 1 November 2017.
  9. Hou, W. Research on shading attenuation characteristics of IR spectrum by water fog. Electro-Opt. Technol. Appl. 2008, 23, 25–28.
  10. Qiu, Z.; Li, H.; Hu, W.; Wang, C.; Liu, J.; Sun, Q. Real-Time Tunnel Deformation Monitoring Technology Based on Laser and Machine Vision. Appl. Sci. 2018, 8, 2579.
  11. Yao, J.; Zhang, G.; Qiu, Z.; Hu, W. Key techniques for large-scale spatial angle measurement based on public optical reference. China Mech. Eng. 2010, 91–94.
  12. Wang, D.; Xing, H. Discussion on the relationship between the plane angle of two straight lines in space and its projection angle. J. North China Inst. Sci. Technol. 2014, 11, 45–48.
  13. Hu, W.; Qiu, Z.; Zhang, G. Measurement of large-scale space angle formed by non-uniplanar lines. Opt. Precis. Eng. 2012, 20, 1427–1433.
  14. Zhang, J.; Jiang, W.; Wang, R. Study on sub-pixel center detection algorithm for cross laser image. J. Shenzhen Inst. Inf. Technol. 2013, 11, 56–60.
  15. Liu, B.; Shi, Y.; Ying, J.; Xie, X. A method for center location of infrared crosshair. Opt. Instrum. 2012, 4, 44–48.
  16. Dong, D.; Sun, M.; Shi, J.; Liu, R.; Zong, G. Sub-pixel localization algorithm of micro-vision based on Hough transform. Optoelectron. Eng. 2006, 33, 28–37.
  17. Miao, J.; Hu, X.; Wu, F.; Lan, G.; Wu, Y. Cross laser stripes center location algorithm based on statistical characteristics of projection and K-means clustering algorithm. J. Zhejiang Sci-Tech Univ. (Nat. Sci. Ed.) 2017, 37, 705–711.
  18. Shestak, S.A.; Son, J.Y.; Kim, J.S. Compensation of 3D image perspective distortion using a sliding-aperture multistereoscopic technique. In Proceedings of the Photonics West ’98 Electronic Imaging, San Jose, CA, USA, 24–30 January 1998; International Society for Optics and Photonics: Bellingham, WA, USA, 1998.
  19. Hartley, R.; Kang, S.B. Parameter-free radial distortion correction with center of distortion estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1309–1321.
  20. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
  21. Weng, J.; Cohen, P. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 965–980.
  22. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, SMC-9, 62–66.
  23. Xu, F.; Lin, Y.; Zhao, M.; Huang, Y.; Jin, Z. Cross-line extraction technology based on visual tracking autocollimator. Laser Infrared 2011, 41, 1211–1214.
  24. Li, L.; Yu, Q.; Lei, Z.; Li, J. High-accuracy measurement of rotation angle based on image. Acta Opt. Sin. 2005, 25, 60–65.
  25. Yang, G. Image Processing, Analysis and Machine Vision (Based on LabVIEW); Tsinghua University Press: Beijing, China, 2018.
  26. Zhou, W. Analysis of the magnitude of angular projection. J. Yanshan Univ. 2002, 26, 124–127.
Figure 1. Two forms of cross laser-pattern. (a) Cross laser formed of a single light source. (b) Cross laser formed of two laser sources.
Figure 2. Schematic diagram of system composition.
Figure 3. The simulated smoke and water mist environment.
Figure 4. (a) Digital image of cross laser-pattern under simulated smoke and water mist environment. (b) Energy distribution of cross laser-pattern under the influence of smoke and water mist.
Figure 5. Flowchart of the proposed algorithm.
Figure 6. Schematic diagram of the perspective transformation.
Figure 7. Definition of rising and falling edges.
Figure 8. Flow chart of edge detection line search.
Figure 9. Edge line extraction result.
Figure 10. (a) Separation result of X axis line. (b) Separation result of Y axis line.
Figure 11. Selection of fitting sections.
Figure 12. (a) Cross section of the cross laser pattern. (b) The gray distribution in the cross section accords with the Gaussian model.
Figure 13. Schematic diagram of central line detection.
Figure 14. (a) Schematic diagram of the experiment. (b) Procedure picture of the experiment.
Figure 15. Displacement measurement error curve.
Figure 16. Displacement measurement error after compensation.
Figure 17. (a) Schematic diagram of the experiment. (b) Procedure picture of the experiment.
Figure 18. Schematic diagram of the experiment principle.
Figure 19. Repetitive error curves.
Figure 20. Linear error model.
Figure 21. Error after compensation.
Figure 22. (a) Experimental schematic diagram. (b) Experiment procedure picture under smoke and water mist environment.
Figure 23. Displacement error curve under smoke and water mist environment.
Figure 24. Displacement error distribution after compensation under smoke and water mist environment.
Figure 25. (a) Experimental schematic diagram. (b) Experiment procedure picture under smoke and water mist environment.
Figure 26. Error curves of repeatability experiment under smoke and water mist environment.
Figure 27. The related linear error model.
Figure 28. Error distribution after compensation.
Table 1. Experiment data.

| Reference Displacement/mm | X Value/Pixel | Δx Value/Pixel |
|---|---|---|
| 0 | 357.469 | --- |
| 5 | 497.567 | 140.098 |
| 5 | 637.480 | 139.913 |
| 5 | 777.376 | 139.896 |
| 5 | 917.171 | 139.795 |
| 5 | 1056.86 | 139.689 |
| 5 | 1196.48 | 139.620 |
| 5 | 1335.99 | 139.510 |
| 5 | 1475.27 | 139.280 |
| 5 | 1614.38 | 139.110 |
| 5 | 1753.30 | 138.920 |
| 5 | 1891.80 | 138.500 |
| 5 | 2030.06 | 138.260 |
| 5 | 2167.96 | 137.900 |
| Average Δx | | 139.2685385 |
| Coefficient = 5/Average | | 0.035901863 |
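The conversion coefficient in the table can be reproduced directly from the Δx column:

```python
# Delta-x values (pixels) for the thirteen 5 mm steps in Table 1
delta_x = [140.098, 139.913, 139.896, 139.795, 139.689, 139.620, 139.510,
           139.280, 139.110, 138.920, 138.500, 138.260, 137.900]

average = sum(delta_x) / len(delta_x)  # ~139.2685 pixels per 5 mm step
coefficient = 5.0 / average            # ~0.0359 mm per pixel
```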
Table 2. Correlation of the parameters.

| Measurement Value | Reference Value | Measurement Error |
|---|---|---|
| θm1 = α1 − α0 | θr1 | e1 = θm1 − θr1 |
| θm2 = α2 − α0 | θr2 | e2 = θm2 − θr2 |
| ... | ... | ... |
Li, H.; Qiu, Z.; Jiang, H. Real-Time Detection Method for Center and Attitude Precise Positioning of Cross Laser-Pattern. Appl. Sci. 2021, 11, 9362. https://doi.org/10.3390/app11209362