Article

Lane Detection via Object Positioning Systems Based on CCD Array Geometry

Institute of Vehicle Engineering, National Changhua University of Education, Changhua 500, Taiwan
Inventions 2016, 1(3), 16; https://doi.org/10.3390/inventions1030016
Submission received: 28 July 2016 / Accepted: 18 August 2016 / Published: 26 August 2016
(This article belongs to the Special Issue New Inventions in Vehicular Guidance and Control)

Abstract

This paper presents an approach to lane detection for a vehicle. The positions of the lane marks are evaluated from the visual information of an image captured by a single charge-coupled device (CCD) camera. The proposed approach novelly utilizes the properties of the CCD array in the camera to achieve object positioning, since each pixel produces two equations while introducing only one unknown variable; after camera calibration, the approach can therefore evaluate the intrinsic parameters of the camera from sufficient pixel information. The configuration of the CCD chip cells is the key factor in this approach, because the pixels of a resulting image directly reflect the geometry of the CCD cells, or the CCD array, in the camera. From the attitude of the camera, this paper constructs the coordinate transformation that resolves the geometric relations between the film coordinate (the CCD array) and a fixed coordinate. This paper also provides associated techniques that facilitate the proposed approach, including image geometry analysis, distribution analysis of the CCD array, and a least-squares algorithm. A downscaled lane detection experiment verifies the feasibility of the proposed approach, and the results show that the approach achieves object positioning.


1. Introduction

Over the past decades, considerable effort has been devoted to evaluating object positions from computer vision information. Various approaches and techniques have been developed across manifold applications and theories. Some approaches use more than one camera to establish the position information of an object [1,2,3,4]. Rankin et al. addressed a stereo vision-based terrain mapping method for the off-road autonomous navigation of an unmanned ground vehicle (UGV) [1]. Chiang et al. developed a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm [2]. Richa et al. presented an efficient approach for heart surgery in a medical application, using stereo images from a calibrated stereo endoscope [3]. Luna et al. studied a sensor system that measures the 2-D position of an object intercepting a plane in space [5]. However, it may be burdensome to equip two or more cameras in some applications. For instance, Zhou presented an approach to geo-locate ground-based objects from the video stream of a single camera mounted on an unmanned aerial vehicle (UAV) [6]. A vision-based measuring system may not be one of the main functions in some applications, but it can provide additional advantageous functions for those applications without any extra hardware or device.
Camera calibration is one of the crucial issues in computer vision. The camera parameters that require calibration include intrinsic and extrinsic ones [7]. The internal geometric and optical characteristics of the camera are intrinsic, while the 3D position and orientation of the camera frame relative to a certain world coordinate system are extrinsic. The intrinsic parameters of a camera are often fixed, such as the lens distortion, the CCD chip position, and the chip cell spacing. Although not every technique needs a calibration object, camera calibration is indispensable for many techniques in computer vision applications [8,9,10,11].
This paper presents a sensor system that evaluates the positions of objects from the visual information of a single CCD camera. Some techniques of the proposed approach were adopted from an invention patented in Taiwan [12]. The approach utilizes the properties of the CCD array in a camera to measure the positions of objects that lie on regular geometric lines, curves, or surfaces. This paper not only adopts the approach to solve lane detection problems, but also provides the evaluation procedure and discusses its errors. The technique in this study does not require a calibration object and can be regarded as self-calibration, or the so-called 0D approach [13]. The intrinsic parameters of the camera still need to be obtained, which poses a mathematical problem if all intrinsic parameters are to be evaluated from an image taken by the camera, even though no calibration object is necessary. An overview of this area can be found in [14] and the references therein. For example, distortion calibration must be considered for most cameras. The distortion parameters—which are intrinsic parameters of a camera—are constant for all pixels in an image taken by that camera [9]. This paper takes only the radial distortion into account, since the radial distortion contributes the major part of the distortion errors of the camera in this study. This paper details the mathematical model of the camera system in regard to the coordinate transformation between a fixed coordinate and a CCD chip coordinate, and constructs the computational procedure for the object positioning system.
The notations in this paper are as follows: $[\cdot]$ denotes a matrix; $\mathbf{P}$ denotes the position vector of point $P$, while $\mathbf{O}_A$ denotes the origin position vector of the $A$ coordinate; $(x_A\ y_A\ z_A)_A$ denotes the $x$, $y$, and $z$ components of a position vector in the $A$ coordinate; that is,
$$ (x_A\ y_A\ z_A)_A \triangleq \mathbf{O}_A + [\mathbf{i}_A\ \mathbf{j}_A\ \mathbf{k}_A] \begin{bmatrix} x_A \\ y_A \\ z_A \end{bmatrix} = \mathbf{O}_A + x_A \mathbf{i}_A + y_A \mathbf{j}_A + z_A \mathbf{k}_A \qquad (1) $$
where the unit vectors $\mathbf{i}_A$, $\mathbf{j}_A$, and $\mathbf{k}_A$ are the orthogonal bases of the $A$ coordinate in the $x$, $y$, and $z$ axes, respectively. Hereafter in this paper, a position vector whose subscript is omitted is represented in the fixed coordinate. Intuitively, a vector $\mathbf{P}$ can be represented in both the $A$ coordinate and the $B$ coordinate. That is,
$$ \mathbf{P} = (x_A\ y_A\ z_A)_A = (x_B\ y_B\ z_B)_B, \qquad (2) $$
or
$$ \mathbf{P} = \mathbf{O}_A + [\mathbf{i}_A\ \mathbf{j}_A\ \mathbf{k}_A] \begin{bmatrix} x_A \\ y_A \\ z_A \end{bmatrix} = \mathbf{O}_B + [\mathbf{i}_B\ \mathbf{j}_B\ \mathbf{k}_B] \begin{bmatrix} x_B \\ y_B \\ z_B \end{bmatrix}. \qquad (3) $$
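To make the notation concrete, the following minimal numpy sketch expresses the same point in two frames as in (2) and (3); each frame is given by an origin and a matrix whose columns are its basis vectors. The frames and numbers here are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Frame A: origin at the fixed origin, basis aligned with the fixed axes.
O_A = np.zeros(3)
R_A = np.eye(3)                              # columns are i_A, j_A, k_A

# Frame B: translated origin and a 30-degree yaw of the basis (assumed values).
O_B = np.array([1.0, 2.0, 0.5])
c, s = np.cos(np.radians(30.0)), np.sin(np.radians(30.0))
R_B = np.array([[c, -s, 0.0],
                [s,  c, 0.0],
                [0.0, 0.0, 1.0]])            # columns are i_B, j_B, k_B

p_A = np.array([3.0, 1.0, 0.0])              # components of P in the A frame
P = O_A + R_A @ p_A                          # absolute position, as in (3)
p_B = R_B.T @ (P - O_B)                      # components of P in the B frame
assert np.allclose(O_B + R_B @ p_B, P)       # both sides of (3) agree
```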

2. Coordinate Transformation

A spatial object maps onto the corresponding chip cell of the CCD array, i.e., onto a pixel in an image taken by the camera. From the pinhole phenomenon, the object, the mapped pixel on the CCD array, and the lens center should be collinear after the camera distortion is calibrated. The mapped pixel on the CCD array and the lens center intuitively formulate a line equation in algebra, and the object must satisfy this line equation as long as all position vectors are represented in the same coordinate system. In general, we can use the fixed coordinate to represent these position vectors. Therefore, the coordinate transformation from the CCD array coordinate (or the film chip coordinate) to the fixed coordinate is significant in this approach. Figure 1 shows the relations between the fixed coordinate and the film chip coordinate, where $\mathbf{O}$ and $\mathbf{O}_F$ are the origins of the fixed coordinate and the film chip (CCD array) coordinate, respectively. For convenience, as shown in Figure 1, we can assign the orientations of the $\bar{x}$-axis and the $\bar{y}$-axis in the film chip coordinate to conform to the pixel orientations of the $x$-axis and $y$-axis in an image. Let $(x\ y\ z)^T$ and $(\bar{x}\ \bar{y}\ \bar{z})_F^T$ denote any position represented in the fixed coordinate and in the film chip coordinate, respectively; i.e.,
$$ (x\ y\ z) = (\bar{x}\ \bar{y}\ \bar{z})_F \qquad (4) $$
According to the definitions,
$$ (x\ y\ z) \triangleq \mathbf{O} + x\mathbf{i} + y\mathbf{j} + z\mathbf{k} = \mathbf{O} + [\mathbf{i}\ \mathbf{j}\ \mathbf{k}] \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \qquad (5) $$
and
$$ (\bar{x}\ \bar{y}\ \bar{z})_F \triangleq \mathbf{O}_F + \bar{x}\mathbf{i}_F + \bar{y}\mathbf{j}_F + \bar{z}\mathbf{k}_F = \mathbf{O}_F + [\mathbf{i}_F\ \mathbf{j}_F\ \mathbf{k}_F] \begin{bmatrix} \bar{x} \\ \bar{y} \\ \bar{z} \end{bmatrix}, \qquad (6) $$
then,
$$ \mathbf{O} + [\mathbf{i}\ \mathbf{j}\ \mathbf{k}] \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \mathbf{O}_F + [\mathbf{i}_F\ \mathbf{j}_F\ \mathbf{k}_F] \begin{bmatrix} \bar{x} \\ \bar{y} \\ \bar{z} \end{bmatrix} \qquad (7) $$
In general, $\mathbf{O} = (0\ 0\ 0)^T$ and $\mathbf{O}_F = (0\ 0\ 0)_F^T$; that is, the positions of objects can refer to the position of the camera. $\mathbf{O}_F$ can also be represented in the fixed coordinate, i.e., $\mathbf{O}_F = (x_{F0}\ y_{F0}\ z_{F0})^T$. In Figure 1, the mounted position of the camera is $\mathbf{O}_M = (0\ \bar{y}_M\ \bar{z}_M)_F^T$, where $\bar{y}_M$ and $\bar{z}_M$ can be obtained from the specifications of the camera. $\mathbf{O}_M$ can be assigned just above the origin of the fixed coordinate with zero $x$-axis and $y$-axis components; thus $\mathbf{O}_M$ has only a $z$ component in the fixed coordinate, $\mathbf{O}_M = (0\ 0\ H)^T$, where $H$ is the height of $\mathbf{O}_M$ and can be measured directly. According to (7),
$$ [\mathbf{i}\ \mathbf{j}\ \mathbf{k}] \begin{bmatrix} 0 \\ 0 \\ H \end{bmatrix} = [\mathbf{i}\ \mathbf{j}\ \mathbf{k}] \begin{bmatrix} x_{F0} \\ y_{F0} \\ z_{F0} \end{bmatrix} + [\mathbf{i}_F\ \mathbf{j}_F\ \mathbf{k}_F] \begin{bmatrix} 0 \\ \bar{y}_M \\ \bar{z}_M \end{bmatrix} \qquad (8) $$
The origin of the film chip coordinate represented in the fixed coordinate can be
$$ \mathbf{O}_F = \begin{bmatrix} x_{F0} \\ y_{F0} \\ z_{F0} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ H \end{bmatrix} - \mathbf{T} \begin{bmatrix} 0 \\ \bar{y}_M \\ \bar{z}_M \end{bmatrix}, \qquad (9) $$
where
$$ [\mathbf{i}_F\ \mathbf{j}_F\ \mathbf{k}_F] = [\mathbf{i}\ \mathbf{j}\ \mathbf{k}]\,\mathbf{T}, \qquad (10) $$

$$ \mathbf{T} = \begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi & \cos\phi \end{bmatrix} \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \qquad (11) $$
and $\psi$, $\theta$, and $\phi$ denote the Euler angles of the camera for the yaw, pitch, and roll angles, respectively. Consequently, the coordinate transformation from the film chip coordinate to the fixed coordinate defined in (4) becomes
$$ \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} x_{F0} \\ y_{F0} \\ z_{F0} \end{bmatrix} + \mathbf{T} \begin{bmatrix} \bar{x} \\ \bar{y} \\ \bar{z} \end{bmatrix} \qquad (12) $$
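As a sketch of how (11) and (12) can be coded, the helper below builds $\mathbf{T}$ from the Euler angles and applies the transformation. The sign placement inside the rotation factors follows standard right-handed rotation matrices, which is an assumption, since the printed equation lost its signs in extraction.

```python
import numpy as np

def film_to_fixed_T(psi, theta, phi):
    """T of Equation (11): yaw (psi), pitch (theta), and roll (phi) rotations,
    followed by the constant axis permutation between the film chip axes and
    the fixed axes. Angles are in radians."""
    Rz = np.array([[np.cos(psi), -np.sin(psi), 0.0],
                   [np.sin(psi),  np.cos(psi), 0.0],
                   [0.0, 0.0, 1.0]])
    Ry = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(phi), -np.sin(phi)],
                   [0.0, np.sin(phi),  np.cos(phi)]])
    Pm = np.array([[0.0, 0.0, 1.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])          # permutation factor of Equation (11)
    return Rz @ Ry @ Rx @ Pm

def film_to_fixed(p_film, O_F, T):
    """Equation (12): a film-chip-coordinate point expressed in the fixed frame."""
    return np.asarray(O_F) + T @ np.asarray(p_film)
```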
In Figure 1, let $\mathbf{L}$ be the lens center position, where $f$ is the focal length of the camera and

$$ \mathbf{L} \triangleq (0\ 0\ f)_F = (x_L\ y_L\ z_L) \qquad (13) $$
Substituting (13) into (12), the lens center position in the fixed coordinate becomes
$$ \begin{bmatrix} x_L \\ y_L \\ z_L \end{bmatrix} = \begin{bmatrix} x_{F0} \\ y_{F0} \\ z_{F0} \end{bmatrix} + \mathbf{T} \begin{bmatrix} 0 \\ 0 \\ f \end{bmatrix} \qquad (14) $$
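Continuing the sketch above, (9) and (14) translate directly into code. The values of $\bar{y}_M$ and $\bar{z}_M$ match the camera specification quoted later in Section 3, while the mount height and focal length below are illustrative placeholders.

```python
# Units are millimeters; H and f are assumed values for illustration only.
H, y_M, z_M, f = 500.0, 27.25, 2.53, 8.0

T = film_to_fixed_T(np.radians(10.0), 0.0, 0.0)                  # yaw-only example
O_F = np.array([0.0, 0.0, H]) - T @ np.array([0.0, y_M, z_M])    # Equation (9)
L = O_F + T @ np.array([0.0, 0.0, f])                            # Equation (14)
```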
The CCD array, or the film chip of a camera, consists of CCD cells that sense light energy to form an image collected by the corresponding pixels. Figure 2 sketches the configuration of the CCD array, where $U$ is the width of the CCD array, $V$ is its height, $U_c$ is the width of a CCD cell, $V_c$ is its height, and $N_u$ and $N_v$ are the numbers of CCD cells on the film chip in the width and height directions, respectively. Let $(P_x\ P_y)_{Pxy}^T$, in which $P_x = 1, 2, \ldots, N_u$ and $P_y = 1, 2, \ldots, N_v$, denote the index of a CCD cell on the chip; $(P_x\ P_y)_{Pxy}^T$ is indeed the same as the pixel index of the image. Let $\mathbf{P}_{Pxy} \triangleq (x_{Pxy}\ y_{Pxy}\ z_{Pxy})^T$ stand for the position vector at the center of the chip cell for the pixel indexed by $P_x$ and $P_y$ in the image. Then, the chip cell center position is as follows:
$$ \mathbf{P}_{Pxy} = \begin{pmatrix} x_{Pxy} \\ y_{Pxy} \\ z_{Pxy} \end{pmatrix} = \begin{pmatrix} \dfrac{(U - U_c)(2P_x - N_u - 1)}{2(N_u - 1)} \\[2ex] \dfrac{(V - V_c)(2P_y - N_v - 1)}{2(N_v - 1)} \\[2ex] 0 \end{pmatrix}_F, \qquad (15) $$
Substituting (15) into (12), the position at the center of the chip cell in the fixed coordinate in terms of the pixel index of the image becomes
$$ \begin{bmatrix} x_{Pxy} \\ y_{Pxy} \\ z_{Pxy} \end{bmatrix} = \begin{bmatrix} x_{F0} \\ y_{F0} \\ z_{F0} \end{bmatrix} + \mathbf{T} \begin{bmatrix} \dfrac{(U - U_c)(2P_x - N_u - 1)}{2(N_u - 1)} \\[2ex] \dfrac{(V - V_c)(2P_y - N_v - 1)}{2(N_v - 1)} \\[2ex] 0 \end{bmatrix} \qquad (16) $$
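A direct transcription of (15) and (16) maps a pixel index to the corresponding chip cell center; this continues the sketches above under the same assumptions and uses the same numpy import.

```python
def cell_center_film(Px, Py, U, V, Uc, Vc, Nu, Nv):
    """Equation (15): film-coordinate center of the chip cell behind pixel
    (Px, Py), with Px in 1..Nu and Py in 1..Nv (dimensions as in Figure 2)."""
    x_bar = (U - Uc) * (2 * Px - Nu - 1) / (2 * (Nu - 1))
    y_bar = (V - Vc) * (2 * Py - Nv - 1) / (2 * (Nv - 1))
    return np.array([x_bar, y_bar, 0.0])

def cell_center_fixed(Px, Py, O_F, T, U, V, Uc, Vc, Nu, Nv):
    """Equation (16): the same cell center expressed in the fixed coordinate."""
    return O_F + T @ cell_center_film(Px, Py, U, V, Uc, Vc, Nu, Nv)
```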
This section has derived the positions of the significant points represented in the fixed coordinate from the film chip coordinate via the coordinate transformation in (12). According to (16), the chip cell center position indexed by $(P_x\ P_y)_{Pxy}^T$, which is also the pixel index of the image, follows directly. Based on the pinhole model of a camera, an object in the scene mapped onto the CCD array, or the image, can be formulated as a line equation that yields two equations in three-dimensional (3D) space; the object in the scene, the lens center, and the center position of the corresponding chip cell mapped from the object must be collinear. If there are n objects, there are n lines, or 2n equations. In the case that the evaluated objects lie on a regular curve (e.g., a line, a circle, etc.), n objects introduce n + 2 unknowns, including the two regular curve parameters. Theoretically, the position evaluations of the objects are feasible, since the number of equations added exceeds the number of unknowns added. The Euler angles of the camera are not intrinsic parameters; they are sometimes fixed for images taken by the same camera in applications. However, even a slight misalignment of the camera attitude will make the position evaluations inaccurate if the Euler angles are not accounted for.

3. The Evaluation of Camera Parameters and Object Positions

To exemplify the proposed approach, this paper considers a case wherein the evaluated objects in the scene are collinear on the ground. According to the pinhole phenomenon, a point $\mathbf{P}$ of the objects, which are collinear with the equation $y = mx + y_0$ on the ground ($z = 0$), maps onto the CCD chip cell at the pixel indexed by $(P_x\ P_y)_{Pxy}^T$. Therefore, the collinear points $\mathbf{P} = (x\ \ mx + y_0\ \ 0)^T$ satisfy the line equation as follows:

$$ \frac{x - x_L}{x_{Pxy} - x_L} = \frac{mx + y_0 - y_L}{y_{Pxy} - y_L} = \frac{-z_L}{z_{Pxy} - z_L} \qquad (17) $$
or
$$ (x - x_L)(y_{Pxy} - y_L) - (mx + y_0 - y_L)(x_{Pxy} - x_L) = 0, \qquad (18) $$
$$ (x - x_L)(z_{Pxy} - z_L) + z_L (x_{Pxy} - x_L) = 0, \qquad (19) $$
where $x_L$, $y_L$, $z_L$, $x_{Pxy}$, $y_{Pxy}$, and $z_{Pxy}$ are defined in (14) and (15). If a set of points $\mathbf{P}_i = (x_i\ \ mx_i + y_0\ \ 0)^T$ in the fixed coordinate maps onto a set of pixels $(P_{x,i}\ P_{y,i})_{Pxy}^T$ in the image, or on the CCD array, where $i = 1, \ldots, n$, there are 2n equations according to (18) and (19). In this evaluation approach, $\psi$, $\theta$, $\phi$, and $H$ are fixed variables for an image, and the line parameters $m$ and $y_0$ are also fixed; only $x_i$ varies from point to point. Each pixel in an image thus produces two equations and introduces one variable. Therefore, for objects on a regular curve, at least $N_F$ pixels are theoretically required in an image, so that the $2N_F$ equations can solve the $N_F$ additional variables together with the $N_F$ fixed variables. In Figure 1, $\bar{y}_M$, $\bar{z}_M$, and the CCD geometry are known. If there are more than $N_F$ pixels, the evaluation of the camera parameters and object positions can apply the least-squares approach with a quadratic performance index. That is, if there are $N_j$ ($N_j \geq 2$) pixels on the j-th line (i.e., $m_j$ and $y_{0j}$ are both constants for any $j = 1, 2, \ldots, N_L$, where $N_L$ denotes the number of regular lines), the least-squares approach can be defined as $\min J$, where $J$ is the quadratic performance index and
$$ J = \sum_{j=1}^{N_L} \sum_{i=1}^{N_j} \left\{ \left[ (x_i - x_L)(y_{Pxy,i} - y_L) - (m_j x_i + y_{0j} - y_L)(x_{Pxy,i} - x_L) \right]^2 + \left[ (x_i - x_L)(z_{Pxy,i} - z_L) + z_L (x_{Pxy,i} - x_L) \right]^2 \right\} \qquad (20) $$
The optimal evaluated values of the unknown variables, both the fixed and the additional ones, are obtained when J is minimized. The minimization of (20) might not converge to a unique solution by numerical methods or mathematical algorithms; the solution depends strongly on the initial guesses of the variables. There are two ways to improve the solution accuracy of (20) in calculations: one is to increase the number of objects on the same line, and the other is to choose more accurate initial guesses of the variables.
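As one way to carry out the minimization of (20) numerically, the sketch below feeds the residuals of (18) and (19) to scipy.optimize.least_squares, reusing the helpers from the earlier snippets. For brevity, the attitude and height are held at measured values (as the gyro and laser meter provide for verification in this section) and only one line is fitted; the focal length, mount height, and initial guesses are illustrative assumptions, and extending the parameter vector with $\psi$, $\theta$, $\phi$, and $H$ recovers the full problem.

```python
import numpy as np
from scipy.optimize import least_squares

# Camera constants from Section 3, in millimeters; f and H are assumed values.
U, V, Uc, Vc, Nu, Nv = 6.4, 4.8, 0.0042, 0.0042, 800, 600
y_M, z_M, f = 27.25, 2.53, 8.0
psi, theta, phi, H = 0.0, 0.0, 0.0, 500.0    # case (a) attitude; H illustrative

T = film_to_fixed_T(psi, theta, phi)
O_F = np.array([0.0, 0.0, H]) - T @ np.array([0.0, y_M, z_M])    # Equation (9)
xL, yL, zL = O_F + T @ np.array([0.0, 0.0, f])                   # Equation (14)

def residuals(params, pixels):
    """Stacked residuals of Equations (18)-(19) for one lane line.
    params = [m, y0, x_1, ..., x_n]; pixels is an (n, 2) array of (Px, Py)."""
    m, y0 = params[:2]
    xs = params[2:]
    res = []
    for (Px, Py), x in zip(pixels, xs):
        xP, yP, zP = cell_center_fixed(Px, Py, O_F, T, U, V, Uc, Vc, Nu, Nv)
        res.append((x - xL) * (yP - yL) - (m * x + y0 - yL) * (xP - xL))  # Eq. (18)
        res.append((x - xL) * (zP - zL) + zL * (xP - xL))                 # Eq. (19)
    return np.array(res)

# Pixel indices of marks A-E in case (a) of Table 1; the crude initial guess
# is deliberate -- as noted above, the converged solution is sensitive to it.
pixels = np.array([[191, 501], [223, 470], [246, 448], [263, 430], [276, 416]])
x0 = np.concatenate([[0.0, 100.0], np.linspace(200.0, 1000.0, 5)])
fit = least_squares(residuals, x0, args=(pixels,))
```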
Figure 3 shows an evaluation system with a DH-HV2003UC camera; a 3DM-GX1 gyro is mounted on the camera to measure its exact attitude, or Euler angles. For the verification of the proposed approach, this paper compares the exact attitudes with the evaluated ones. A laser distance meter is additionally used to measure the height from the ground (platform) to the mounted position of the camera on its stand more precisely. According to the specifications of this camera, the CCD chip parameters are as follows: $U = 6.4$ mm, $V = 4.8$ mm, $U_c = 4.2\ \mu$m, $V_c = 4.2\ \mu$m, $N_u = 800$, $N_v = 600$, $\bar{y}_M = 27.25$ mm, and $\bar{z}_M = 2.53$ mm. These parameters are crucial to the evaluation of the positions of objects by the proposed approach, since they play the key role in its accuracy. As for the camera distortion calibrations, this paper takes into account only the radial distortion of the camera, rather than other types of distortion such as decentering distortion, thin prism distortion, etc. Based on the images taken by the camera, the radial distortion of this camera is of the barrel type, with a negative distortion constant equal to −8.3042 × 10⁻⁷ in pixels.
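The paper reports only the distortion constant, so the sketch below adopts the common single-coefficient radial model (scaling about the principal point by $1 + k r^2$) and assumes the principal point sits at the image center; both are conventions chosen here, not details from the text.

```python
import numpy as np

K_RADIAL = -8.3042e-7                  # barrel distortion constant from above
CENTER = np.array([400.0, 300.0])      # assumed principal point, 800 x 600 image

def distort(p_ideal, k=K_RADIAL, center=CENTER):
    """Single-coefficient radial model: a negative k pulls points toward the
    center, i.e., barrel distortion. p_ideal is an undistorted pixel position."""
    d = np.asarray(p_ideal, float) - center
    return center + d * (1.0 + k * (d @ d))

def undistort(p_obs, k=K_RADIAL, center=CENTER, iters=5):
    """Invert the model by fixed-point iteration, a common practical choice."""
    p = np.asarray(p_obs, float)
    for _ in range(iters):
        d = p - center
        p = center + (np.asarray(p_obs, float) - center) / (1.0 + k * (d @ d))
    return p
```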
The position evaluation of lane marks is a practical example, relevant, for instance, to the lane departure warning systems of vehicles [15,16,17]. Figure 4 shows the downscaled simulation scenario for the position evaluation of the lane marks, and the geometry of the platform is illustrated in Figure 4b. In this example, $m_1 = m_2 = 0$. Four cases are used to verify the feasibility of the proposed approach: (a) $\psi = 0.0^\circ$, $\theta = 0.0^\circ$, $\phi = 0.0^\circ$; (b) $\psi = 10.0^\circ$, $\theta = 0.0^\circ$, $\phi = 0.0^\circ$; (c) $\psi = 0.0^\circ$, $\theta = 10.0^\circ$, $\phi = 0.0^\circ$; (d) $\psi = 10.0^\circ$, $\theta = 10.0^\circ$, $\phi = 15.0^\circ$. The nonzero Euler angles simulate position evaluations of the lane marks on the road when the attitude of the camera equipped in a vehicle is misaligned. Figure 5 shows the pictures taken by the camera for these four cases. The specified points of Figure 4b, with their corresponding pixel indices, can be picked out of the pictures after appropriate image processing. Table 1 lists the corresponding pixel indices, or $P_x$ and $P_y$ values, of the assigned lane marks in the four pictures. The axes defined in the pixel coordinate conform to those defined in images; for instance, the y-axes of these coordinates coincide but point in opposite directions in real space because of the pinhole effect. Therefore, the directions of the x-axis and y-axis are conventionally rightward and downward in pictures, but rightward and upward in the film coordinate in front view. Figure 6 sketches the position evaluation results of the simulations in the case studies. The solid lines and the dashed lines stand for the exact positions and the evaluated positions of the assigned lane marks, respectively. The evaluated positions of the marks are all in arithmetic progressions and can be solved recursively by numerical methods. These results show that the proposed approach can evaluate the positions of specified collinear objects even when there is a slight misalignment of the camera attitude. However, errors remain between the exact positions and the evaluated ones. The errors may come from the accuracy of the camera geometry, the aspherical lens of the camera, the accuracy of the pixel indices chosen from the result pictures, etc.
The accuracy of the position evaluations depends on the mapped positions of the object pixels. Vehicle vibrations from rough roads and external image disturbances from adverse environments (rain, light, shadow, etc.) are indeed the key factors that affect the recognition accuracy. In practice, the effects of vehicle vibrations are relatively small because of the shock absorbers on the vehicle; a camera stabilizer could be equipped for the sake of capturing high-quality images if the vehicle vibrations are tremendous. The image disturbances from the environment are among the most significant issues. Some image processing may overcome the image noise and disturbance problems, such as the median filter, vector median filter, vector directional filter, adaptive median filter, adaptive nearest neighbor filter, etc.
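As a minimal example of the first filter in this list, OpenCV's median filter can be applied before the lane-mark pixels are picked out; the file name and kernel size below are placeholders to adapt per application.

```python
import cv2

image = cv2.imread("lane_scene.png")      # placeholder input image path
denoised = cv2.medianBlur(image, 5)       # 5x5 median filter; tune the kernel size
```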

4. Conclusions

This paper presents an approach to evaluating the positions of objects with a single camera. An image provides the information of the objects captured on the CCD chip cells: each pixel produces two equations and one additional variable, while some variables of the camera are fixed. If the objects are in regular geometry, a limited amount of pixel information can form enough equations to solve the position evaluation problem via the coordinate transformation. The properties of the CCD array serve as the main reference dimensions for evaluating the positions of objects in an image. The accuracy of the position evaluations depends on the object pixels picked out of an image, and it is sometimes not easy to discern the exact pixels of objects in the image. That is, if the image position of a point is not exact, the results that depend on its image coordinate will introduce errors into the position evaluations of objects. The image disturbances from vehicle vibrations or the image background are also significant for the position evaluation accuracy. It is suggested to equip a camera stabilizer if the vibrations are serious, and to apply an image filter to reduce the effect of image disturbances before the position evaluation. The initial guesses of the evaluated variables in (20) are crucial and affect the converged solutions. Future study can focus on the number of pixels selected in an image so as to abate the effects of the initial guesses on the position evaluations of objects.

Acknowledgments

This research work was sponsored by the Ministry of Science and Technology under Grants NSC 100-2221-E-018-017 and MOST 103-2221-E-018-23.

Author Contributions

The author contributed all of this work.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Rankin, A.L.; Huertas, A.; Matthies, L.H. Stereo vision based terrain mapping for off-road autonomous navigation. In Proceedings of SPIE 7332, Unmanned Systems Technology XI, Orlando, FL, USA, 30 April 2009; pp. 733210–733217.
2. Chiang, M.-H.; Lin, H.-T.; Hou, C.-L. Development of a stereo vision measurement system for a 3D three-axial pneumatic parallel mechanism robot arm. Sensors 2011, 11, 2257–2281.
3. Richa, R.; Poignet, P.; Liu, C. Three-dimensional motion tracking for beating heart surgery using a thin-plate spline deformable model. Int. J. Robot. Res. 2010, 29, 218–230.
4. Dornaika, F.; Chung, C.R. Stereo geometry from 3-D ego-motion streams. IEEE Trans. Syst. Man Cybern. 2003, 33, 308–323.
5. Luna, C.A.; Lázaro, J.L.; Mazo, M.; Cano, A. Sensor for high speed, high precision measurement of 2-D positions. Sensors 2009, 9, 8810–8823.
6. Zhou, G. Geo-referencing of video flow from small low-cost civilian UAV. IEEE Trans. Autom. Sci. Eng. 2010, 7, 156–166.
7. Tsai, R.Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–343.
8. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
9. Weng, J.; Cohen, P.; Herniou, M. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 965–980.
10. Maybank, S.J.; Faugeras, O.D. A theory of self-calibration of a moving camera. Int. J. Comput. Vis. 1992, 8, 123–152.
11. Gibbins, D.; Roberts, P.; Swierkowski, L. A video geo-location and image enhancement tool for small unmanned air vehicles (UAVs). In Proceedings of the 2004 Intelligent Sensors, Sensor Networks and Information Processing Conference, Melbourne, Australia, 14–17 December 2004; pp. 469–473.
12. Young, J.-S. Object positioning determined by the characteristics of image capturing devices and numerical methodology. Taiwan Patent I424373, 21 January 2014.
13. Zhang, Z. Camera calibration with one-dimensional objects. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 161–174.
14. Heyden, A.; Pollefeys, M. Multiple view geometry. In Emerging Topics in Computer Vision; Medioni, G., Kang, S.B., Eds.; Prentice Hall: Upper Saddle River, NJ, USA, 2003; pp. 45–108.
15. Wang, Y.; Teoh, E.K.; Shen, D. Lane detection and tracking using B-snake. Image Vis. Comput. 2004, 22, 269–280.
16. Nedevschi, S.; Schmidt, R.; Graf, T.; Danescu, R.; Frentiu, D.; Marita, T.; Oniga, F.; Pocol, C. 3D lane detection system based on stereovision. In Proceedings of the IEEE Intelligent Transportation Systems Conference, Washington, DC, USA, 3–6 October 2004.
17. Jung, C.R.; Kelber, C.R. A lane departure warning system based on a linear-parabolic lane model. In Proceedings of the 2004 IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 891–895.
Figure 1. Definitions of the fixed coordinate and the film chip coordinate.
Figure 2. The configuration of the chip cells for the film chip (CCD chip) in front view.
Figure 3. The verification of the evaluation for the measurement system with gyroscope.
Figure 4. The downscaled simulations for the position evaluation of the lane marks. (a) The scenario platform; (b) the geometry of the 10 assigned lane marks, labeled alphabetically, on the scenario platform.
Figure 5. The resulting pictures of the 4 scenarios: (a) $\psi = 0.0^\circ$, $\theta = 0.0^\circ$, $\phi = 0.0^\circ$; (b) $\psi = 10.0^\circ$, $\theta = 0.0^\circ$, $\phi = 0.0^\circ$; (c) $\psi = 0.0^\circ$, $\theta = 10.0^\circ$, $\phi = 0.0^\circ$; (d) $\psi = 10.0^\circ$, $\theta = 10.0^\circ$, $\phi = 15.0^\circ$.
Figure 6. The evaluation results of the 4 scenarios: (a) $\psi = 0.0^\circ$, $\theta = 0.0^\circ$, $\phi = 0.0^\circ$; (b) $\psi = 10.0^\circ$, $\theta = 0.0^\circ$, $\phi = 0.0^\circ$; (c) $\psi = 0.0^\circ$, $\theta = 10.0^\circ$, $\phi = 0.0^\circ$; (d) $\psi = 10.0^\circ$, $\theta = 10.0^\circ$, $\phi = 15.0^\circ$.
Table 1. $P_x$ and $P_y$ values of the 10 assigned lane marks in the four pictures of the 4 scenarios.

Mark       A    B    C    D    E    F    G    H    I    J
(a)  Px  191  223  246  263  276  571  539  514  497  484
     Py  501  470  448  430  416  509  476  452  434  419
(b)  Px  301  334  360  377  389  666  638  618  602  590
     Py  495  465  441  425  411  511  475  450  432  418
(c)  Px  194  227  249  266  279  565  534  511  496  481
     Py  403  370  344  327  313  413  378  351  333  318
(d)  Px  334  356  374  385  394  691  653  626  606  590
     Py  427  387  356  334  317  344  316  296  283  269
