Article

Enhanced Positioning Algorithm Using a Single Image in an LCD-Camera System by Mesh Elements’ Recalculation and Angle Error Orientation

by Óscar de Francisco Ortiz 1,*, Manuel Estrems Amestoy 2, Horacio T. Sánchez Reinoso 2 and Julio Carrero-Blanco Martínez-Hombre 2
1 Department of Engineering and Applied Technologies, University Center of Defense, San Javier Air Force Base, MDE-UPCT, 30720 Santiago de la Ribera, Spain
2 Mechanics, Materials and Manufacturing Engineering Department, Technical University of Cartagena, 30202 Cartagena, Spain
* Author to whom correspondence should be addressed.
Materials 2019, 12(24), 4216; https://doi.org/10.3390/ma12244216
Submission received: 31 October 2019 / Revised: 10 December 2019 / Accepted: 11 December 2019 / Published: 16 December 2019

Abstract

In this article, we present a method to position the tool in a micromachine system based on a camera-LCD screen positioning system that also provides information about angular deviations of the tool axis during operation. Both position and angular deviations are obtained by reducing a matrix of LEDs in the image to a single rectangle in conical perspective, which is then treated by a photogrammetry method. This method computes the coordinates and orientation of the camera with respect to the fixed screen coordinate system. The image used consists of 5 × 5 lit LEDs, which are analyzed by the algorithm to determine a rectangle with known dimensions. The coordinates of the vertices of the rectangle in space are obtained by an inverse perspective computation from the image. The method provides a good approximation of the central point of the rectangle and gives the inclination of the workpiece with respect to the LCD screen reference coordinate system. A test of the method was designed with the assistance of a Coordinate Measurement Machine (CMM) to check the accuracy of the positioning method. The test shows good accuracy in the position measurement of the proposed method. A high dispersion in the angular deviation is detected, although the orientation of the inclination is appropriate in almost every case; this is due to the small values of the angles, which make the trigonometric function approximations very erratic. The method is a good starting point for the compensation of angular deviation in vision-based micromachine tools, which is the principal source of error in these operations and accounts for a large share of the cost of machine element parts.

1. Introduction

Positioning systems are increasingly present in all industrial processes, and technology demands progressively more precise systems capable of positioning rapidly and robustly. Cost is one of the key factors in integrating high-precision systems.
Thanks to advances in screen and camera technology, positioning algorithms that analyze a pattern shown in a photographic image have been developed [1,2]. More recently, camera-screen positioning systems with dedicated artificial vision algorithms [3,4,5] have provided high precision at a very competitive cost compared with other positioning technologies such as encoders or resolvers.
Vision positioning systems are increasingly common in process automation [6,7,8,9,10], autonomous driving [11,12,13,14,15], and augmented reality assistants [16,17,18,19,20]. Indeed, this is one of the most promising elements of the Industry 4.0 revolution. However, the positioning systems currently used in the machine tool industry, based on high-precision encoders and sensors, are limited by their cost. As a result, machine tools used for micro-manufacturing are very expensive and require large floor space. For this reason, vision-based methods can be competitive in micro-manufacturing by incorporating high-performance commercial components, such as cameras and mobile-phone LCD screens, while reducing the required space. In addition, such devices keep increasing in definition and resolution, giving vision systems much better accuracy.
The methodology used in this article to calculate the position and orientation of the camera in relation to the screen is based on pose determination [21,22,23,24], which is used to estimate the position and orientation of one calibrated camera. Several similar methods for calculating the position and orientation of a camera in space using a single image have been described and presented [22,25,26]. Nevertheless, pose estimation and marker detection are widely used tasks for many other technological applications such as autonomous robots [27,28,29], unmanned vehicles [30,31,32,33,34,35,36,37], and virtual assistants [38,39,40,41], among others.
Consequently, this article presents an enhanced method of recalculating the center of the image used by the positioning algorithm in an LCD-camera system. The system is similar to that developed by de Francisco [4] and improved in subsequent studies [42], but it differs completely from those previous studies in the procedure used to calculate the position of the part with respect to the screen reference system. In previous works, the position was obtained through the global center of gravity of the 25 selected LEDs in the image. In this work, the position of the piece is calculated by first determining an equivalent square obtained by means of regressions of the different lines that form the grid of the 25 LEDs.
In addition, this manuscript also presents the calculation and correction of the orientation angle, which, although very small, always influences precision positioning due to the large distance between the location of the cutter and the screen. The new method is based on the calculation of an equivalent quadrangle that provides not only the position of the center of the image, but also its inclination. The method processes an image to obtain the pixel coordinates of a 5 × 5 dot matrix, which serves to locate the focus and orientation of the camera; the resulting error is due to the distance between the focus and the screen and can be treated as a sine error.

2. Materials and Methods

2.1. Experimental Setup and Measurements

The experimental study was applied to a two-dimensional control system (X and Y). Figure 1 shows the model of the Micromachine Tool (MMT) demonstrator developed for this research. Two stepper motors (ST28, 12, 280 mA) controlled and moved two precision guides (IKO BSR2080, 50 mm stroke), which were connected to an M3 ball screw/nut. The LCD screen used provided a resolution of 1136 × 640 pixels, 326 ppi, and a 0.078 mm dot pitch; the screen size was 88.5 × 49.9 mm. Both stepper motors were controlled by the digital output signals provided by an NI 6001-USB data acquisition card connected to the USB port of a laptop computer. The output signal of the acquisition card was conditioned by a pre-amplification power stage composed of two L293 H-bridges. The control was programmed in LabVIEW. It received the image captured by the camera and processed it with an image enhancement procedure consisting of an image mask application with color plane extraction, fuzzy pixel removal, small object removal, and particle analysis of the mass center of each evaluated particle. Once the image had been processed by the developed artificial vision algorithm, the control provided the positioning feedback signals needed to move the X and Y axes.
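The control itself was implemented in LabVIEW; purely as an illustration of the feedback structure described above, a minimal Python sketch is given below, where capture_image, estimate_position_mm, and step_axis are hypothetical placeholders for the camera driver, the vision algorithm of Section 3, and the DAQ digital outputs driving the L293 H-bridges. The tolerance value is an assumption, not a parameter of the real system.
```python
# Minimal sketch of the positioning feedback loop (illustration only; the
# actual control was implemented in LabVIEW). capture_image(),
# estimate_position_mm() and step_axis() are hypothetical placeholders.

TOLERANCE_MM = 0.002  # assumed stop criterion (~2 um residual error)

def move_to(target_xy, capture_image, estimate_position_mm, step_axis):
    """Drive the X/Y stages until the vision-estimated position reaches target_xy."""
    while True:
        image = capture_image()                      # frame from the MMT camera
        x, y = estimate_position_mm(image)           # position from the LED grid
        ex, ey = target_xy[0] - x, target_xy[1] - y  # remaining positioning error
        if abs(ex) < TOLERANCE_MM and abs(ey) < TOLERANCE_MM:
            return x, y
        # issue one step per axis in the direction that reduces the error
        if abs(ex) >= TOLERANCE_MM:
            step_axis("X", +1 if ex > 0 else -1)
        if abs(ey) >= TOLERANCE_MM:
            step_axis("Y", +1 if ey > 0 else -1)
```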
The images were taken by the camera included in the MMT, a Model MITSAI 1.3M digital camera with a resolution of 1280 × 1024 pixels (1.3 MPixels). To analyze the position, a Coordinate Measuring Machine (CMM) Pioneer DEA 03.10.06 with measuring strokes of 600 × 1000 × 600 mm was used (Figure 2). The maximum permissible error of the DEA in the measurements was (2.8 + 4.0·L/1000) μm. The software used for the measurements was PC-DMIS.
Several tests were performed over a 2 × 2 gap pattern using the camera-LCD algorithm. The test consisted of a 5 mm X axis movement performed in 10 steps of 0.5 mm. Each travel was repeated 3 times in both the forward and backward directions, according to the VDI/DGQ 3441 standard: Statistical Testing of the Operational and Positional Accuracy of Machine Tools - Basis.

2.2. Image Acquisition

Image acquisition was performed using a procedure developed by the authors in VBA, similar to the "Analyze Particles ..." tool provided by software such as ImageJ©.
The image may not be focused, although many webcams have autofocus mechanisms that make the focal length variable. In our case, this was unimportant because what mattered was each blob and its center of gravity. It should also be noted that if the extraction had been performed on the complete image, the image would usually contain the spherical aberration of the lenses that focus the image onto the sensor.
In our case, to speed up the process and the calculations, only the central area of the BMP image file that included all 25 LEDs was extracted. Only the red layer was analyzed, because it proved to be the most efficient and was the only one used to generate the image. Given the size of the LEDs, an image size of 600 × 600 pixels was sufficient to ensure the presence of at least 25 LEDs in the image.
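The extraction routine itself was written in VBA; as a sketch only, the same steps (central crop, red color plane, binarization, removal of small objects, centroid and "mass" computation) can be reproduced with OpenCV as below. The threshold strategy and the minimum-area value are assumptions, not the authors' settings.
```python
import cv2

def led_centroids(bmp_path, crop=600, min_area=300):
    """Return (x, y, area) for each detected LED blob in the central crop.

    Sketch of the processing described in Section 2: crop the central region,
    keep only the red layer, binarize, discard small objects and compute the
    center of gravity of each remaining particle. Thresholds are assumptions.
    """
    img = cv2.imread(bmp_path)                      # BGR image from the BMP file
    h, w = img.shape[:2]
    y0, x0 = (h - crop) // 2, (w - crop) // 2
    red = img[y0:y0 + crop, x0:x0 + crop, 2]        # red color plane only
    _, binary = cv2.threshold(red, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    leds = []
    for i in range(1, n):                           # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area >= min_area:                        # remove small objects/noise
            cx, cy = centroids[i]
            leds.append((cx, cy, area))             # "mass" = pixel count
    return sorted(leds, key=lambda t: -t[2])        # sort by mass, as in Table 1
```
A call such as led_centroids("frame.bmp") would then return a list of the same form as Table 1.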

3. Obtaining the Equivalent Quadrilateral

Once the 25 coordinates of the centers of the LEDs had been obtained, as seen in Figure 3, these data had to be statistically treated to obtain the four vertices of a quadrilateral that summarized the coordinates of the 25 points. With this quadrilateral, and knowing the real side dimensions given by the pixel size, the position and orientation of the camera with respect to this square were obtained.

3.1. Regression of Lines

From the analysis of the 5 × 5 grid, the different horizontal lines could be segregated by rearranging the table of coordinates by their y values, obtaining 5 groups of 5 values corresponding to the horizontal lines. Reordering by the x values, the vertical lines were obtained in the same way. The 5 horizontal lines then had to be reduced to 2 lines, and the same was done with the vertical lines, so that the intersections of the four resulting lines gave the 4 vertices of the quadrilateral that represented a square in conical perspective. The two vanishing points were obtained as the intersections of opposite sides.
Figure 4 shows the regression lines, vertical and horizontal, that represented the different groups of points. The slope and intercept terms of these lines followed a tendency that could itself be fitted by regression, as shown in Figure 5. These tendencies allowed the calculation of the slopes of the extreme lines of the rectangle that adequately represented the 25 points, in the same way that the border of a chessboard encodes the dimensions and positions of the interior squares. The correlated lines and the rectangle used to determine the position and inclination of the camera axis are represented in Figure 6.
Since the slopes varied very little, the horizontal lines had the form y = m_i x + n_i and the vertical lines the form x = m_j y + n_j. The intersection of a horizontal line with a vertical line is given by Equation (1):
x = \frac{n_j + m_j n_i}{1 - m_i m_j}    (1)
where subindex i corresponds to horizontal lines, while subindex j corresponds to vertical lines.
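As a minimal illustration (assuming the line coefficients have already been fitted), Equation (1) and the corresponding y value can be computed with a small helper:
```python
def intersect(m_i, n_i, m_j, n_j):
    """Intersection of a horizontal line y = m_i*x + n_i with a vertical
    line x = m_j*y + n_j, following Equation (1)."""
    x = (n_j + m_j * n_i) / (1.0 - m_i * m_j)
    y = m_i * x + n_i
    return x, y
```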
The steps to obtain the two horizontal lines were the following:
  • Sort the data of the grid table by the y coordinate.
  • Separate the points into groups of 5 belonging to the same line by similarity in the y coordinate.
  • Perform the regression of the five groups, obtaining the equations of the five horizontal lines y = m_i x + n_i, i = 1…5.
  • In a similar way, sort the data by the x coordinate, separate them into 5 groups of 5 points, and perform the regression, obtaining the vertical line equations x = m_j y + n_j, j = 6…10.
  • Obtain the intersection points (x_i, y_i) of the central vertical line x = m_8 y + n_8 with each of the horizontal lines.
  • Perform the regression of the slopes of the horizontal lines m_i against the vertical intersection coordinates y_i, so that the slope is expressed as a function of the vertical position of the line.
  • Finally, select the slopes and the points through which the two representative horizontal lines pass. The points of the extreme horizontal lines 1 and 5 were chosen, the slopes of these two lines were calculated from the regression of the 5 slopes, and each line was forced to pass through the intersection point of its extreme line, giving the following line equations (Equations (2) and (3)):
    y = m(y_1) x + (y_1 - m(y_1) x_1)    (2)
    y = m(y_5) x + (y_5 - m(y_5) x_5)    (3)
A similar method was used for the calculation of the two vertical lines.
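The following sketch illustrates the procedure above (it is not the authors' VBA implementation), assuming the list of 25 centroids from Section 2.2 and using np.polyfit for the regressions.
```python
import numpy as np

def representative_horizontal_lines(points):
    """Sketch of the steps above: points is the list of 25 (x, y) LED centers.

    Returns the (slope, intercept) pairs of the two lines that replace the
    five fitted horizontal rows (the rows through the extreme intersection
    points, i.e. Points 1 and 5 in the text, up to ordering).
    """
    pts = np.asarray(points, dtype=float)
    rows = pts[np.argsort(pts[:, 1])].reshape(5, 5, 2)   # sort by y, groups of 5

    m, n = [], []
    for row in rows:                                      # regression of each row
        mi, ni = np.polyfit(row[:, 0], row[:, 1], 1)      # y = m_i x + n_i
        m.append(mi)
        n.append(ni)

    cols = pts[np.argsort(pts[:, 0])].reshape(5, 5, 2)    # same grouping in x
    m8, n8 = np.polyfit(cols[2][:, 1], cols[2][:, 0], 1)  # central line x = m_8 y + n_8

    # intersections (x_i, y_i) of the central vertical line with each row
    yi = np.array([(mi * n8 + ni) / (1.0 - mi * m8) for mi, ni in zip(m, n)])
    xi = m8 * yi + n8

    # regression of the slopes against the intersection heights y_i
    a, b = np.polyfit(yi, m, 1)                           # m(y) = a*y + b

    # lines with slopes m(y_1), m(y_5) forced through the extreme points
    m_first, m_last = a * yi[0] + b, a * yi[4] + b
    first = (m_first, yi[0] - m_first * xi[0])            # Equation (2)
    last = (m_last, yi[4] - m_last * xi[4])               # Equation (3)
    return first, last
```
The vertical case follows by exchanging the roles of x and y, and the four vertices are then obtained with the intersect helper of Equation (1).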
The pairwise intersections of these two pairs of almost parallel lines provided the 4 extreme vertices A, B, C, and D (Figure 6), which were fed to the inverse conical perspective program to obtain the position and orientation of the camera in relation to the fixed coordinate system located and oriented with the square that represented the original grid. The position and orientation of the contact point with respect to the screen reference system were obtained using an improvement of the method developed by Haralick for rectangle reconstruction [22].
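This step relies on the improved Haralick/Estrems rectangle reconstruction [22,24]. Purely as an illustration of the same idea, a comparable camera pose can be sketched from the four vertices with OpenCV's solvePnP, given the real rectangle size and the camera intrinsics; the size and intrinsic values below are placeholders, not the ones used in the experiments, and the image points are taken from Table 6.
```python
import cv2
import numpy as np

# Illustration only: the paper uses the improved Haralick rectangle
# reconstruction [22,24]; here the pose is sketched with OpenCV's solvePnP.
# W, fx, fy, cx, cy are placeholder values, not the paper's parameters.
W = 20.0                           # assumed real side length of the square, in mm
object_pts = np.array([[0, 0, 0], [W, 0, 0], [W, W, 0], [0, W, 0]], dtype=np.float32)
image_pts = np.array([[111.857, 540.124], [567.302, 537.343],
                      [570.227, 83.437], [112.439, 83.874]], dtype=np.float32)  # Table 6
fx = fy = 1400.0                   # assumed focal length in pixels
cx = cy = 300.0                    # assumed principal point of the 600 x 600 crop
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec                             # camera center in the square frame
focal_direction = R.T @ np.array([[0.0], [0.0], [1.0]])   # optical axis in that frame
```
The camera center and the direction cosines of the optical axis are the quantities used later for the Abbe error compensation of Section 3.2.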

3.2. Example of the Calculation of Vertices

To obtain the straight lines, the slopes of the linear regression lines through the data points in the x and y arrays were calculated. In addition, the intercept, that is, the point at which each line crosses the y axis, was calculated from the existing x and y values. The intercept was based on a best-fit regression line drawn through the known x and y values, using the least squares procedure.
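For reference, the slope and intercept referred to here are the standard least-squares estimates, with \bar{x} and \bar{y} the means of the known x and y values:
m = \frac{\sum_k (x_k - \bar{x})(y_k - \bar{y})}{\sum_k (x_k - \bar{x})^2}, \qquad n = \bar{y} - m \bar{x}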
The starting point was the table of the centers of each of the detected zones, sorted by the number of pixels comprising each zone, referred to as its mass (Table 1).
Next, the points were sorted by their y coordinates and classified into groups corresponding to the horizontal lines (Table 2).
As a result, the 5 horizontal lines y = m_i x + n_i, i = 1…5, were obtained (Table 3).
In the same way, we proceeded to obtain the vertical lines x = m_j y + n_j, j = 6…10 (Table 4).
It is noted in Table 3 and Table 4 that all coefficients m and n followed a tendency that could itself be the object of a regression. This indicated that the lines in the image were not parallel but converged to a vanishing point, and therefore that the plane containing all the lines was not perpendicular to the focal line of the camera.
To find the two horizontal lines that represented the 5 fitted lines, the intersection points (x_i, y_i) of the central vertical line with the 5 horizontal lines were determined. The slope m_i was then correlated with the vertical coordinates y_i, giving the slopes m(y_1) and m(y_5) of the two representative lines, which were forced to pass through Points 1 and 5, respectively (Table 5).
Once the regression of m with respect to y_i had been performed, a function of the slope that varied regularly with height was obtained (m = −1.136 × 10^−5 y_i + 9.331 × 10^−3). As a result, the equations of the horizontal lines that passed through Points 1 and 5 could be calculated:
y = −6.108 × 10^−3 x + 540.807    (4)
y = −9.533 × 10^−4 x + 83.981    (5)
In the same way, we proceeded to obtain the 2 representative vertical lines:
x = −6.444 × 10^−3 y + 570.765    (6)
x = −1.277 × 10^−3 y + 112.547    (7)
The pairwise intersections of the horizontal and vertical lines of this square provided the coordinates of the four vertices that represented the 25 LEDs (Table 6).
The vertices reported in Table 6 were treated using the photogrammetry method for the reconstruction of a rectangle described in Estrems [24], and the coordinates of the camera with respect to the square coordinate system were obtained, as well as the direction cosines of the focal line in this system.
a = \sqrt{d_f^2 - (d_f \cos b)^2} = d_f \sqrt{1 - \cos^2 b}    (8)
a_2 - a_1 = d_f \left( \sqrt{1 - \cos^2 b_2} - \sqrt{1 - \cos^2 b_1} \right)    (9)
In Figure 7, the Abbe error a is represented; it is calculated from the focal distance d_f and the cosine of the angle b between the focal line and the z direction. The Abbe error is calculated with Equation (8), and the step error due to the variation of angle b during the movement is compensated at each point according to Equation (9).
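As a numerical illustration of Equations (8) and (9), the sketch below uses placeholder values for the focal distance and the inclinations (not measured values from the test):
```python
import math

# Numerical illustration of Equations (8) and (9); the focal distance d_f and
# the inclination angles b1, b2 are placeholder values, not test measurements.
d_f = 150.0                    # assumed focal distance, in mm
b1, b2 = 1.00e-3, 1.01e-3      # assumed inclinations of the focal line, in rad

a1 = d_f * math.sqrt(1.0 - math.cos(b1) ** 2)   # Abbe error at pose 1, Eq. (8)
a2 = d_f * math.sqrt(1.0 - math.cos(b2) ** 2)   # Abbe error at pose 2, Eq. (8)
compensation = a2 - a1                          # step compensation, Eq. (9)

print(f"a1 = {a1*1000:.1f} um, a2 = {a2*1000:.1f} um, "
      f"compensation = {compensation*1000:.2f} um")   # ~150.0, 151.5, 1.50 um
```
Note that for angles below 10^−3 rad the term 1 − cos²b is of the order of 10^−6, so evaluating it directly discards several significant digits; working with sin b or equivalent small-angle forms is numerically safer, which is consistent with the erratic behavior of the trigonometric approximations reported in Section 4.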

4. Experimental Results and Discussion

The data obtained in the experimental tests are reported in Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12, where CMM is the distance measured by the Coordinate Measuring Machine for the movements made by the MMT at every step during the test; Image is the distance moved at every step as analyzed by the vision system; Error Image is the error of the vision system at every step after comparison with the distance measured by the CMM; Compensation is the distance compensated due to the angle error calculated at every step; Image Compensated is the distance given by the vision system after applying the calculated compensation; Error Compensation is the error after the compensation is applied; and Coincidence provides information about the coincidence of the orientation calculated for the angle compensation (YES means the orientation coincides, and NO means the calculated angle orientation is opposite to the compensation required to minimize the error).
Table 13 summarizes the mean errors (ē) and the standard deviations of the errors (σ) calculated for each run of the experimental tests. The global mean (4.786 μm) and standard deviation (5.698 μm) were also calculated.
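As an indication of how Table 13 can be read, the short sketch below reproduces the #1 Forward column from the Error Image values of Table 7, under the assumption that the mean is taken over the absolute errors and σ is the sample standard deviation of the signed errors:
```python
import statistics

# Error Image column of Table 7 (Run #1 forward), in micrometres
errors = [-0.897, 4.969, 3.611, 13.454, -0.081, 1.862, 2.677]

mean_abs = sum(abs(e) for e in errors) / len(errors)  # 3.936 um (Table 13, #1 Forward)
sigma = statistics.stdev(errors)                      # 4.771 um (sample std of signed errors)

print(f"mean |error| = {mean_abs:.3f} um, sigma = {sigma:.3f} um")
```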
As seen in the graphs of Figure 8, the precision of positioning depended strongly on the initial error function, and the variation of the error was less than ±2 μm, except for several discrete points that were measured in a transition between columns of LEDs that were not completely homogeneous in the LCD.
One remarkable result was that the orientation of the compensation error coincided with the sign of the measured error at almost all points. This indicated that there was a real variation of the orientation of the focal line with respect to the screen coordinate system, although this information could not be applied efficiently to improve the precision of the measurement. This was probably due to the difficulty of evaluating the trigonometric functions at angles smaller than 10^−3 radians.
Therefore, due to the large distance to the vanishing points of the lines, the estimation of the angular error was not very precise in its quantification, although it provided qualitative information on the direction and magnitude of the variation of the inclination of the camera.

5. Conclusions

A new method was developed to position the tool in a micromachine system based on a camera-LCD screen positioning system that also provided information on the angular deviations of the tool axis during operation.
The method gave a good approximation of the center point of the rectangle with a mean error of 0.96%, considering not only the vision algorithm, but also the mechanical test device, and provided the inclination of the workpiece with respect to the LCD-screen reference coordinate system.
The equivalent square was calculated by regression of the lines that could be drawn through the centers of gravity of each of the LEDs. The lack of parallelism between the sides of the square indicated an inclination of the camera axis with respect to the perpendicular to the screen. The variation of this inclination introduced errors in the displacements that were added to the simple displacement of the center of gravity, and whose compensation was also calculated in this article.
A test of the method was designed with the assistance of a Coordinate Measurement Machine (CMM) to verify the accuracy of the positioning method. The test showed good accuracy in the position measurements of the designed method, but a high dispersion in the angular deviation was detected, although the orientation of the inclination was appropriate in almost all cases (85.7%). This was due to the small values of the angles, which made the approximations of the trigonometric functions very erratic. With accurate formulas to approximate the trigonometric functions at small angles, the method could help in obtaining more accurate measurements.

Author Contributions

Conceptualization, Ó.d.F.O. and M.E.A.; methodology, Ó.d.F.O.; software, Ó.d.F.O. and M.E.A.; validation, Ó.d.F.O., H.T.S.R., and J.C.-B.M.-H.; formal analysis, Ó.d.F.O., H.T.S.R., and J.C.-B.M.-H.; writing, original draft preparation, Ó.d.F.O.; writing, review and editing, M.E.A., H.T.S.R., and J.C.-B.M.-H.; visualization, Ó.d.F.O., M.E.A., H.T.S.R., and J.C.-B.M.-H.

Funding

This research was funded by Ingeniería Murciana SL through a private contract.

Acknowledgments

The authors want to thank the University Center of Defense at the Spanish Air Force Academy, MDE-UPCT, for financial support.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LCD    Liquid Crystal Display
MMT    Micromachine Tool
ppi    points per inch
CMM    Coordinate Measuring Machine

References

  1. Leviton, D.B.; Kirk, J.; Lobsinger, L. Ultra-high resolution Cartesian absolute optical encoder. In Recent Developments in Traceable Dimensional Measurements II, Proceedings of the SPIE’s 48th Annual Meeting, San Diego, CA, USA, 2–4 August 2003; SPIE: Bellingham, WA, USA, 2003; pp. 111–121. [Google Scholar] [CrossRef]
  2. Leviton, D.B. Method and Apparatus for Two-Dimensional Absolute Optical Encoding. U.S. Patent 6765195 B1, 20 July 2004. [Google Scholar]
  3. Montes, C.A.; Ziegert, J.C.; Wong, C.; Mears, L.; Tucker, T. 2-D absolute positioning system for real time control applications. In Proceedings of the Twenty-Fourth Annual Meeting of the American Society for Precision Engineering, Rosemont, IL, USA, 13 September 2010. [Google Scholar]
  4. De Francisco-Ortiz, O.; Sánchez-Reinoso, H.; Estrems-Amestoy, M. Development of a Robust and Accurate Positioning System in Micromachining Based on CAMERA and LCD Screen. Procedia Eng. 2015, 132, 8. [Google Scholar] [CrossRef] [Green Version]
  5. De Francisco Ortiz, O.; Sánchez Reinoso, H.; Estrems Amestoy, M.; Carrero-Blanco Martinez-Hombre, J. Position precision improvement throughout controlled led paths by artificial vision in micromachining processes. Procedia Manuf. 2017, 13, 197–204. [Google Scholar] [CrossRef]
  6. Byun, S.; Kim, M. Real-Time Positioning and Orienting of Pallets Based on Monocular Vision. In Proceedings of the 2008 20th IEEE International Conference on Tools with Artificial Intelligence, Dayton, OH, USA, 3–5 November 2008; Volume 2, pp. 505–508. [Google Scholar] [CrossRef]
  7. Zhang, B.; Wang, J.; Rossano, G.; Martinez, C.; Kock, S. Vision-guided robot alignment for scalable, flexible assembly automation. In Proceedings of the 2011 IEEE International Conference on Robotics and Biomimetics, Karon Beach, Phuket, Thailand, 7–11 December 2011; pp. 944–951. [Google Scholar] [CrossRef]
  8. Zhou, K.; Wang, X.J.; Wang, Z.; Wei, H.; Yin, L. Complete Initial Solutions for Iterative Pose Estimation from Planar Objects. IEEE Access 2018, 6, 22257–22266. [Google Scholar] [CrossRef]
  9. Lyu, D.; Xia, H.; Wang, C. Research on the effect of image size on real-time performance of robot vision positioning. EURASIP J. Image Video Process. 2018, 2018, 112. [Google Scholar] [CrossRef]
  10. Montijano, E.; Cristofalo, E.; Zhou, D.; Schwager, M.; Sagüés, C. Vision-Based Distributed Formation Control Without an External Positioning System. IEEE Trans. Robot. 2016, 32, 339–351. [Google Scholar] [CrossRef]
  11. Yang, S.; Jiang, R.; Wang, H.; Ge, S.S. Road Constrained Monocular Visual Localization Using Gaussian-Gaussian Cloud Model. IEEE Trans. Intell. Transp. Syst. 2017, 18, 3449–3456. [Google Scholar] [CrossRef]
  12. Guo, D.; Wang, H.; Leang, K.K. Nonlinear vision-based observer for visual servo control of an aerial robot in global positioning system denied environments. J. Mech. Robot. 2018, 10, 061018. [Google Scholar] [CrossRef]
  13. Vivacqua, R.P.D.; Bertozzi, M.; Cerri, P.; Martins, F.N.; Vassallo, R.F. Self-Localization Based on Visual Lane Marking Maps: An Accurate Low-Cost Approach for Autonomous Driving. IEEE Trans. Intell. Transp. Syst. 2018, 19, 582–597. [Google Scholar] [CrossRef]
  14. Fang, J.; Wang, Z.; Zhang, H.; Zong, W. Self-localization of Intelligent Vehicles Based on Environmental Contours. In Proceedings of the 2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM), Singapore, 18–20 July 2018; pp. 624–629. [Google Scholar] [CrossRef]
  15. Islam, K.T.; Wijewickrema, S.; Pervez, M.; O’Leary, S. Road Trail Classification using Color Images for Autonomous Vehicle Navigation. In Proceedings of the 2018 Digital Image Computing: Techniques and Applications (DICTA), Canberra, Australia, 10–13 December 2018; pp. 1–5. [Google Scholar] [CrossRef]
  16. You, S.; Neumann, U.; Azuma, R. Hybrid inertial and vision tracking for augmented reality registration. In Proceedings of the IEEE Virtual Reality (Cat. No. 99CB36316), Houston, TX, USA, 13–17 March 1999; pp. 260–267. [Google Scholar] [CrossRef]
  17. Kim, J.; Jun, H. Vision-based location positioning using augmented reality for indoor navigation. IEEE Trans. Consum. Electron. 2008, 54, 954–962. [Google Scholar] [CrossRef]
  18. Samarasekera, S.; Oskiper, T.; Kumar, R.; Sizintsev, M.; Branzoi, V. Augmented Reality Vision System for Tracking and Geolocating Objects of Interest. U.S. Patent 9,495,783, 15 November 2016. [Google Scholar]
  19. Suenaga, H.; Tran, H.H.; Liao, H.; Masamune, K.; Dohi, T.; Hoshi, K.; Takato, T. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: A pilot study. BMC Med. Imaging 2015, 15, 51. [Google Scholar] [CrossRef] [Green Version]
  20. Rajeev, S.; Wan, Q.; Yau, K.; Panetta, K.; Agaian, S.S. Augmented reality-based vision-aid indoor navigation system in GPS denied environment. In Proceedings of the SPIE 10993, Mobile Multimedia/Image Processing, Security, and Applications; SPIE: Baltimore, MD, USA, 2019. [Google Scholar] [CrossRef]
  21. Wefelscheid, C.; Wekel, T.; Hellwich, O. Monocular Rectangle Reconstruction Based on Direct Linear Transformation. In Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP 2011), Vilamoura, Portugal, 5–7 March 2011; pp. 271–276. [Google Scholar]
  22. Haralick, R.M. Determining camera parameters from the perspective projection of a rectangle. Pattern Recognit. 1989, 22, 225–230. [Google Scholar] [CrossRef]
  23. Quan, L.; Lan, Z. Linear N-point camera pose determination. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 774–780. [Google Scholar] [CrossRef]
  24. Estrems Amestoy, M.; de Francisco Ortiz, O. Global Positioning from a Single Image of a Rectangle in Conical Perspective. Sensors 2019, 19, 5432. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Abidi, M.A.; Chandra, T. A new efficient and direct solution for pose estimation using quadrangular targets—Algorithm and evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 534–538. [Google Scholar] [CrossRef] [Green Version]
  26. Haralick, R.M. Using perspective transformations in scene analysis. Comput. Graph. Image Process. 1980, 13, 191–221. [Google Scholar] [CrossRef]
  27. Sim, R.; Little, J. Autonomous vision-based robotic exploration and mapping using hybrid maps and particle filters. Image Vis. Comput. 2009, 27, 167–177. [Google Scholar] [CrossRef]
  28. Valencia-Garcia, R.; Martinez-Béjar, R.; Gasparetto, A. An intelligent framework for simulating robot-assisted surgical operations. Expert Syst. Appl. 2005, 28, 425–433. [Google Scholar] [CrossRef]
  29. Pichler, A.; Akkaladevi, S.; Ikeda, M.; Hofmann, M.; Plasch, M.; Wögerer, C.; Fritz, G. Towards Shared Autonomy for Robotic Tasks in Manufacturing. Procedia Manuf. 2017, 11, 72–82. [Google Scholar] [CrossRef]
  30. Broggi, A.; Dickmanns, E. Applications of computer vision to intelligent vehicles. Image Vis. Comput. 2000, 18, 365–366. [Google Scholar] [CrossRef]
  31. Patterson, T.; McClean, S.; Morrow, P.; Parr, G.; Luo, C. Timely autonomous identification of UAV safe landing zones. Image Vis. Comput. 2014, 32, 568–578. [Google Scholar] [CrossRef]
  32. González, D.; Pérez, J.; Milanés, V. Parametric-based path generation for automated vehicles at roundabouts. Expert Syst. Appl. 2017, 71, 332–341. [Google Scholar] [CrossRef] [Green Version]
  33. Sanchez-Lopez, J.; Pestana, J.; De La Puente, P.; Campoy, P. A reliable open-source system architecture for the fast designing and prototyping of autonomous multi-UAV systems: Simulation and experimentation. J. Intell. Robot. Syst. 2015, 84, 779–797. [Google Scholar] [CrossRef] [Green Version]
  34. Olivares-Mendez, M.; Kannan, S.; Voos, H. Vision based fuzzy control autonomous landing with UAVs: From V-REP to real experiments. In Proceedings of the 2015 23rd Mediterranean Conference on Control and Automation (MED), Torremolinos, Spain, 16–19 June 2015; pp. 14–21. [Google Scholar] [CrossRef]
  35. Romero-Ramirez, F.J.; Muñoz-Salinas, R.; Medina-Carnicer, R. Speeded up detection of squared fiducial markers. Image Vis. Comput. 2018, 76, 38–47. [Google Scholar] [CrossRef]
  36. Germanese, D.; Leone, G.R.; Moroni, D.; Pascali, M.A.; Tampucci, M. Long-Term Monitoring of Crack Patterns in Historic Structures Using UAVs and Planar Markers: A Preliminary Study. J. Imaging 2018, 4, 99. [Google Scholar] [CrossRef] [Green Version]
  37. An, G.H.; Lee, S.; Seo, M.W.; Yun, K.; Cheong, W.S.; Kang, S.J. Charuco Board-Based Omnidirectional Camera Calibration Method. Electronics 2018, 7, 421. [Google Scholar] [CrossRef] [Green Version]
  38. Pflugi, S.; Vasireddy, R.; Lerch, T.; Ecker, T.; Tannast, M.; Boemke, N.; Siebenrock, K.; Zheng, G. Augmented marker tracking for peri-acetabular osteotomy surgery. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2017, 2017, 937–941. [Google Scholar] [CrossRef]
  39. Lima, J.P.; Roberto, R.; Simões, F.; Almeida, M.; Figueiredo, L.; Teixeira, J.M.; Teichrieb, V. Markerless tracking system for augmented reality in the automotive industry. Expert Syst. Appl. 2017, 82, 100–114. [Google Scholar] [CrossRef]
  40. Chen, P.; Peng, Z.; Li, D.; Yang, L. An improved augmented reality system based on AndAR. J. Vis. Commun. Image Represent. 2016, 37, 63–69. [Google Scholar] [CrossRef]
  41. Khattak, S.; Cowan, B.; Chepurna, I.; Hogue, A. A real-time reconstructed 3D environment augmented with virtual objects rendered with correct occlusion. In Proceedings of the 2014 IEEE Games Media Entertainment, Toronto, ON, Canada, 22–24 October 2014; pp. 1–8. [Google Scholar]
  42. De Francisco Ortiz, O.; Sánchez Reinoso, H.; Estrems Amestoy, M.; Carrero-Blanco, J. Improved Artificial Vision Algorithm in a 2-DOF Positioning System operated under feedback control in micromachining. In Euspen’s 17th International Conference & Exhibition Proceedings; Billington, D., Phillips, D., Eds.; Euspen: Northampton, UK, 2017; pp. 460–461. [Google Scholar]
Figure 1. Model of the micromachine tool demonstrator used during the experimental test.
Figure 2. Setup used during the experimental test for the measurement with the Micromachine Tool (MMT) and the Coordinate Measuring Machine (CMM).
Figure 3. The 5 × 5 mesh captured by the camera with numbered elements.
Figure 4. Regression lines in the 5 × 5 elements used in the image analysis.
Figure 5. Regression lines (compensation) to optimize the position of the lines.
Figure 6. Final distribution of the rectangle used for the position and angle correction.
Figure 7. Position error in the vision system due to camera inclination for axis direction movement.
Figure 8. Error due to the vision system before and after angle compensation was applied.
Table 1. Table of the center of gravity example for Image 1.
Element # | x | y | Mass
1 | 112.591 | 195.242 | 1024
2 | 113.366 | 82.265 | 1015
3 | 226.495 | 196.041 | 990
4 | 112.105 | 309.020 | 982
5 | 227.189 | 83.003 | 968
6 | 226.096 | 309.625 | 961
7 | 340.272 | 197.033 | 947
8 | 340.892 | 83.472 | 943
9 | 112.080 | 423.237 | 938
10 | 567.863 | 198.706 | 925
11 | 453.996 | 197.528 | 917
12 | 454.825 | 84.746 | 897
13 | 339.734 | 310.306 | 890
14 | 568.349 | 86.025 | 873
15 | 225.773 | 423.599 | 872
16 | 453.313 | 311.012 | 857
17 | 567.139 | 311.749 | 854
18 | 111.869 | 536.665 | 833
19 | 339.310 | 423.985 | 813
20 | 566.355 | 425.027 | 786
21 | 225.561 | 537.419 | 776
22 | 452.760 | 424.705 | 757
23 | 339.075 | 537.371 | 717
24 | 452.296 | 537.806 | 682
25 | 565.838 | 538.361 | 673
Table 2. Groups of points corresponding to horizontal lines.
x | y
565.838 | 538.361
452.296 | 537.806
225.561 | 537.419
339.075 | 537.371
111.869 | 536.665
566.355 | 425.027
452.760 | 424.705
339.310 | 423.985
225.773 | 423.599
112.080 | 423.237
567.139 | 311.749
453.313 | 311.012
339.734 | 310.306
226.096 | 309.625
112.105 | 309.020
567.863 | 198.706
453.996 | 197.528
340.272 | 197.033
226.495 | 196.041
112.591 | 195.242
568.349 | 86.025
454.825 | 84.746
340.892 | 83.472
227.189 | 83.003
113.366 | 82.265
Table 3. Coefficients of the horizontal lines.
m | n
3.331 × 10^−3 | 536.395
4.127 × 10^−3 | 422.710
6.018 × 10^−3 | 308.298
7.393 × 10^−3 | 194.394
8.142 × 10^−3 | 81.126
Table 4. Coefficients of the vertical lines.
m | n
−5.774 × 10^−3 | 568.910
−5.553 × 10^−3 | 455.166
−4.050 × 10^−3 | 341.114
−3.500 × 10^−3 | 227.307
−3.080 × 10^−3 | 113.355
Table 5. Intersection points of the central vertical line with the horizontal lines.
Point | x_i | y_i
1 | 338.937 | 537.525
2 | 339.396 | 424.111
3 | 339.857 | 310.344
4 | 340.316 | 196.911
5 | 340.774 | 83.901
Table 6. Quadrilateral points passed to the inverse perspective program.
Point | x | y
1 | 111.857 | 540.124
2 | 567.302 | 537.343
3 | 570.227 | 83.437
4 | 112.439 | 83.874
Table 7. Data for Run #1 forward with a mesh of 5 × 5 LEDs (values in μm).
CMM | Image | Error Image | Compensation | Image Compensated | Error Compensation | Coincidence
501 | −501.897 | −0.897 | 0.603 | −501.294 | −0.294 | YES
1000 | −995.031 | 4.969 | −4.432 | −999.463 | 0.537 | YES
1487 | −1483.389 | 3.611 | −0.714 | −1484.102 | 2.898 | YES
1998 | −1984.546 | 13.454 | −2.164 | −1986.710 | 11.290 | YES
2489 | −2489.081 | −0.081 | 0.471 | −2488.610 | 0.390 | YES
2988 | −2986.138 | 1.862 | 1.062 | −2985.076 | 2.924 | NO
3490 | −3487.323 | 2.677 | 0.802 | −3486.521 | 3.479 | NO
Table 8. Data for Run #1 backward with a mesh of 5 × 5 LEDs (values in μm).
CMM | Image | Error Image | Compensation | Image Compensated | Error Compensation | Coincidence
496 | −496.099 | −0.099 | 2.394 | −493.705 | 2.295 | YES
992 | −992.400 | −0.400 | −1.744 | −994.144 | −2.144 | NO
1482 | −1493.937 | −11.937 | 0.505 | −1493.432 | −11.432 | YES
1991 | −1993.130 | −2.130 | 0.632 | −1992.498 | −1.498 | YES
2494 | −2495.707 | −1.707 | 0.912 | −2494.795 | −0.795 | YES
2993 | −2994.327 | −1.327 | 2.214 | −2992.113 | 0.887 | YES
3494 | −3493.969 | 0.031 | −1.950 | −3495.918 | −1.918 | YES
Table 9. Data for Run #2 forward with a mesh of 5 × 5 LEDs (values in μm).
CMM | Image | Error Image | Compensation | Image Compensated | Error Compensation | Coincidence
490 | −498.217 | −8.217 | 1.671 | −496.545 | −6.545 | YES
985 | −989.868 | −4.868 | 2.260 | −987.608 | −2.608 | YES
1481 | −1489.347 | −8.347 | −0.803 | −1490.150 | −9.150 | NO
1976 | −1993.442 | −17.442 | 1.153 | −1992.290 | −16.290 | YES
2493 | −2499.212 | −6.212 | 0.604 | −2498.608 | −5.608 | YES
2991 | −2996.982 | −5.982 | 2.728 | −2994.254 | −3.254 | YES
3485 | −3494.890 | −9.890 | 1.978 | −3492.912 | −7.912 | YES
Table 10. Data for Run #2 backward with a mesh of 5 × 5 LEDs (values in μm).
CMM | Image | Error Image | Compensation | Image Compensated | Error Compensation | Coincidence
495 | −498.695 | −3.695 | 1.631 | −497.065 | −2.065 | YES
991 | −993.169 | −2.169 | 2.490 | −990.679 | 0.321 | YES
1486 | −1497.023 | −11.023 | 4.948 | −1492.075 | −6.075 | YES
2003 | −1994.767 | 8.233 | −0.105 | −1994.872 | 8.128 | YES
2501 | −2493.756 | 7.244 | −0.142 | −2493.898 | 7.102 | YES
2995 | −2995.898 | −0.898 | 1.446 | −2994.452 | 0.548 | YES
3494 | −3494.915 | −0.915 | 2.048 | −3492.867 | 1.133 | YES
Table 11. Data for Run #3 forward with a mesh of 5 × 5 LEDs (values in μm).
CMM | Image | Error Image | Compensation | Image Compensated | Error Compensation | Coincidence
496 | −500.617 | −4.617 | 1.720 | −498.897 | −2.897 | YES
994 | −998.341 | −4.341 | 1.814 | −996.527 | −2.527 | YES
1497 | −1496.865 | 0.135 | −0.990 | −1497.855 | −0.855 | YES
1997 | −1997.961 | −0.961 | −0.007 | −1997.968 | −0.968 | NO
2500 | −2502.260 | −2.260 | 0.168 | −2502.092 | −2.092 | YES
2996 | −3007.135 | −11.135 | 2.567 | −3004.567 | −8.567 | YES
3498 | −3503.156 | −5.156 | 0.660 | −3502.496 | −4.496 | YES
Table 12. Data for Run #3 backward with a mesh of 5 × 5 LEDs (values in μm).
CMM | Image | Error Image | Compensation | Image Compensated | Error Compensation | Coincidence
498 | −497.074 | 0.926 | −0.532 | −497.606 | 0.394 | YES
999 | −991.402 | 7.598 | −4.628 | −996.030 | 2.970 | YES
1492 | −1493.747 | −1.747 | 2.333 | −1491.414 | 0.586 | YES
1992 | −1992.997 | −0.997 | 1.684 | −1991.313 | 0.687 | YES
2487 | −2491.310 | −4.310 | 1.007 | −2490.303 | −3.303 | YES
2985 | −2993.930 | −8.930 | 0.791 | −2993.139 | −8.139 | YES
3486 | −3493.567 | −7.567 | −3.367 | −3496.934 | −10.934 | NO
Table 13. Summary of the errors, in absolute value, provided by the proposed vision positioning algorithm.
 | #1 Forward | #1 Backward | #2 Forward | #2 Backward | #3 Forward | #3 Backward | Global
Mean (μm) | 3.936 | 2.519 | 8.708 | 4.883 | 4.086 | 4.582 | 4.786
Mean (%) | 0.79 | 0.50 | 1.74 | 0.98 | 0.8 | 0.92 | 0.96
σ (μm) | 4.771 | 4.238 | 4.212 | 6.586 | 3.699 | 5.567 | 5.698
σ (%) | 0.95 | 0.85 | 0.84 | 1.32 | 0.74 | 1.11 | 1.19
