Article

Autonomous Docking Based on Infrared System for Electric Vehicle Charging in Urban Areas

Joshué Pérez, Fawzi Nashashibi, Benjamin Lefaudeux, Paulo Resende and Evangeline Pollard
IMARA Team at INRIA Research Center, Domaine de Voluceau-Rocquencourt, BP 105, 78153 Le Chesnay, France
* Author to whom correspondence should be addressed.
Sensors 2013, 13(2), 2645-2663; https://doi.org/10.3390/s130202645
Submission received: 11 December 2012 / Revised: 24 January 2013 / Accepted: 5 February 2013 / Published: 21 February 2013
(This article belongs to the Special Issue New Trends towards Automatic Vehicle Control and Perception Systems)

Abstract

Electric vehicles are progressively being introduced in urban areas, because of their ability to reduce air pollution, fuel consumption and noise nuisance. Nowadays, some big cities are launching the first electric car-sharing projects to clear traffic jams and enhance urban mobility, as an alternative to classic public transportation systems. However, some problems related to energy storage, electric charging and autonomy remain to be solved. In this paper, we present an autonomous docking system for electric vehicle recharging, based on an on-board infrared camera that detects infrared beacons installed in the infrastructure. A visual servoing system coupled with an automatic controller allows the vehicle to dock accurately to the recharging booth in a street parking area. The results show the good behavior of the implemented system, which is currently deployed as a real prototype in the city of Paris.

1. Introduction

Today, a wide variety of Advanced Driver Assistance Systems (ADAS) are available in conventional vehicles. These systems provide multiple improvements in driving assistance and some partial control, such as blind spot detection [1], lane departure warning [2], speed limit warning [3], pedestrian collision avoidance [4] and parking assistance [5], among others. Nevertheless, other intelligent transportation systems (ITS) topics are still being improved by research groups around the world. The European Union, specifically the Directorate-General for Mobility and Transport of the European Commission (EC), develops transport policies by integrating citizen needs, environmental policy and competitiveness [6].

One of the main objectives of the EC is to decrease the use of gas-propelled vehicles by 2050, reducing transport-sector greenhouse gas (GHG) emissions by about 60%. Electric vehicles (EVs) can therefore improve urban transportation, because of their efficiency and the absence of CO2 emissions and noise. In fact, some fully automated electric vehicles are already in use in airports, private tracks and pedestrian zones in urban areas [7,8]. However, the market penetration of EVs depends on the improvement of electric vehicle batteries, in terms of battery costs and operational autonomy, and on the distribution of charging points in cities.

With the growth of the EV industry, more charging points will appear at motorway service stations and in major cities. This year, in the United States, there are more than eight thousand public charging stations [9]. However, this solution still has shortcomings, due to slow charging times and parking problems. Some authors have applied inductive power transfer (IPT) techniques to EV recharging [10]. Although this approach offers a safe, convenient and reliable solution, its implementation depends on the performance of the power pads, and the technology is not available for all types of EVs. Other works focus on fast-charging electric issues, performing simulations of a recharging station on different platforms [11]. These first results suggest a quick rollout of charging stations for EVs in urban and inter-urban scenarios.

In the last few years, autonomous vehicles, chiefly electric ones, have gradually improved in terms of safety and redundancy. Cybercars are a good example of this evolution, since they provide fully autonomous driving capabilities in specific scenarios in order to offer an on-demand, door-to-door service [7]. These vehicles use a GPS sensor for positioning and wireless communications to interact with other vehicles and the infrastructure [12]. The IMARA group of INRIA (National Institute for Research in Computer Science and Control, France) is working on the development of perception and control strategies for Cybercars [8,13].

Other works propose the control of autonomous EVs via mathematical modeling of the motion dynamics and drivability control, to optimize the operating freedom of the two power trains in hybrid electric vehicles [14]. In [15], a real vehicle modified with a steer-by-wire system and a global positioning system (GPS) for localization is proposed. Moreover, artificial intelligence (AI) techniques, such as fuzzy logic [12] and neural networks (NN) [16], have been used to control real vehicles in urban and highway scenarios, based on human experience and cooperative GPS and inertial systems [17].

Regarding the localization problem in autonomous vehicles, the limitations of GPS, caused by outages and interference in urban and indoor scenarios (due to buildings, trees, bridges and parking structures, among others), are widely known [18]. For this reason, other approaches focusing on perception solutions for localization and environment mapping have been suggested, such as SLAM and SLAMMOT [19,20]. A survey of the most important algorithms for autonomous vehicles based on vision and laser, proposed in the last decade, is presented in [21]. The authors claim that, even if many navigation systems are based on heterogeneous sensor data fusion, the most robust algorithms are based on visual target tracking, since the position and velocity of the vehicle and the relative position of the target can be established by processing the image streams of the cameras.

Autonomous charging is a well-identified issue for electric autonomous systems. This problem has historically been addressed in robotics [22], and several approaches have been proposed based on a wide range of techniques, such as range lights [23] or vision and artificial landmarks [24]. It is closely related to the problem of docking autonomous underwater vehicles (AUVs) for charging purposes [25]. Concerning autonomous urban vehicles, systems using inductive charging have already been proposed [26], but they are not really energy efficient, even if they are easy to handle. Systems that consider docking for charging electric vehicles require highly accurate localization and control (a few centimeters), which is uncommonly treated in the literature [27,28]. The aim of this research is to design and develop a control system for automatic recharge docking for EVs in urban parking areas. The vehicle is equipped with an infrared camera able to detect infrared diodes placed in the infrastructure. These diodes are used as landmarks to provide highly accurate position and velocity information to the control stage. The camera is placed behind the rear-view mirror (looking ahead), and the vehicle is an electric Citroën C1, instrumented to enable autonomous driving.

This paper is organized as follows: a description of the system architecture and of the AMARE project objectives are provided in Section 2. The perception algorithms, signal filtering and control stages, followed by an explanation on the control strategies used in the lateral control law, are explained in Section 3. Experimental demonstrations and results obtained with the real facilities are described in Section 4. The paper ends with conclusions and future work in Section 5.

2. System Architecture

The automatic docking, recharging, billing and payment system proposed in this paper is composed of three main elements: an automated vehicle, a docking and recharging station and a wireless communication system.

Once the vehicle is properly parked by the driver a few meters from the station, the perception system identifies the infrared LEDs placed in the recharging station, and then, the connection procedure is initiated by the vehicle. The first connection is performed by wireless communications. The vehicle sends to the station its intention to park and to recharge its batteries. Once accepted by the station, the vehicle autonomously docks with the station, and the recharging starts without any human intervention (Figure 1). When the vehicle intends to leave the station, billing is calculated given the energy consumed by the vehicle and the total parking time. The payment of the charge can be performed via a contactless payment system or sent to the driver's billing address for an a posteriori payment.
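The session described above amounts to a small vehicle-side state machine. The following minimal Python sketch illustrates one plausible sequencing; the states, method names and the station interface are hypothetical, since the paper does not specify the wire protocol.

```python
from enum import Enum, auto

class DockingState(Enum):
    IDLE = auto()
    REQUESTING = auto()   # wireless request sent to the station
    DOCKING = auto()      # autonomous approach using the IR beacons
    CHARGING = auto()     # plugged in, consumed energy registered
    BILLING = auto()      # contactless payment or a posteriori invoice

def docking_session(vehicle, station):
    """Hypothetical sequencing; `vehicle` and `station` stand for the
    wireless interfaces, which the paper leaves unspecified."""
    if not station.accept(vehicle.request_to_dock()):
        return DockingState.IDLE                  # request rejected
    vehicle.dock_autonomously()                   # perception + control (Section 3)
    energy_kwh, hours = station.charge(vehicle)   # metered while charging
    station.bill(energy_kwh, hours)               # energy + parking time
    return DockingState.BILLING
```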

2.1. Automated Vehicle

The vehicle used is an electrified Citroën C1 instrumented for autonomous driving. Figure 2 shows the different components of the autonomous vehicle. The vehicle is equipped with two docking plugs, front and back, for battery charging and wired communications. This design allows the connection of several vehicles in series to a single recharging station, reducing the number of recharging stations needed. Moreover, the physical link between the vehicles can be used to move them all in a platoon configuration with a single driver, thus facilitating the redistribution of vehicles between stations [29].

A wireless communication link is established before the docking procedure with the recharging station. The automated vehicle uses the information from the installed infrared camera and the odometry to guide the vehicle into its final parking position or docking spot. The perception system (Section 3.1) starts by estimating the pose of the vehicle relative to the pattern of infrared LEDs in the recharging station.

The control of the vehicle and the driving task are handled by the on-board automation system until the vehicle reaches its docking spot. The throttle and brake pedals, with integrated potentiometers, are commanded by the longitudinal controller, and the electric power-assisted steering actuator is commanded by the lateral controller (Section 3.2).

Information about the automatic docking procedure is provided to the driver via the on-board HMI (Human Machine Interface) and stored on a remote server. Once the docking is established and the vehicle is plugged into the automated arm, a wired connection is established between the vehicle and the station. At the same time, the recharging of the vehicle batteries starts and the consumed energy is registered. This information is used later by the billing process, together with the parking time costs.

2.2. Recharging and Docking Station

The station, equipped with a docking arm, can charge up to five vehicles in series. Communication with the vehicle prior to the plug-in connection is done via wireless (WiFi). The station accepts or rejects connection requests from vehicles that want to dock and recharge. The status of the station can easily be read from its light interface: green, the station is available; blinking yellow, the docking arm is being deployed; red, the station is occupied. Once the plug is connected to a vehicle, the power supply is activated and the energy consumption is registered (Figure 3). An infrared LED pattern installed on the station is detected by the vehicle's on-board camera in order to determine its relative position. The station controller manages the electronic interface that controls the docking arm, the power supply and the infrared LED pattern. The controller is connected directly to a payment back office through a local network (intranet) and handles the communications with the vehicle.

2.3. Communication System

The action coordination between the vehicle automation (supervisor) and the station controller is performed via an IPv6 wireless link based on embedded Linux boxes (4G Cubes) [8]. This communication system is a Vehicle Web Service Communication Framework (VWSCF) that handles service discovery, exposure and fetching of data through the network. For practical reasons, the payment procedure is performed via a different wireless connection using a standard highway contactless payment system. Once the vehicle is plugged into the docking arm, a wired connection is established, and diagnostic data are exchanged between the vehicle and the station.

3. Onboard Algorithms: Autonomous Docking

Figure 4 shows the control scheme of the autonomous vehicle docking proposed in this work. It uses an infrared camera for the localization of the vehicle in the reference frame of the charging station. After pattern processing, the relative position is passed to the control stage. This position is then filtered and translated to the front control point of the vehicle to improve the control accuracy. Finally, a reference command is sent to the actuation stage. Each module is described below.

3.1. Perception

A standard charge-coupled device (CCD) camera equipped with an IR filter, provided by our industrial partner in the project, was used in this work. It is placed behind the rear-view mirror, looking forward. Infrared LEDs were used on the docking station instead of visible beacons, to simplify their detection against the background and to make the system invisible to passers-by.

The docking station is equipped with eight infrared LEDs, whose positions in the station reference frame are precisely known. This rather high number of LEDs was chosen to allow the detection of several patterns, in case one or several lights were obstructed or failing. Our experiments showed that six LEDs were enough in practice to accurately determine the vehicle position with regard to its docking station.

Based on the camera information, the perception stage computes the position, in Cartesian coordinates, and the heading with respect to the reference line, and then sends them to the control stage.

3.1.1. Vision Detection Algorithms

This section describes the steps used in the vision pipeline to obtain the relative position and the information needed by the control node. From the input picture, the following steps are applied:

  • Maxima selection. We assume that the LED candidates in the picture are among the brightest points and that they correspond to local maxima. This is a very common assumption for the detection of bright features in a picture.

  • Region growing. From the previously selected maxima, region growing is applied to extract each bright area. The halting criteria are based on the brightness gradient and the absolute brightness level. Since the LED models are known beforehand, a fast model-based selection is used to remove an initial set of outliers, i.e., bright areas physically too big to be our LEDs. This rejection effectively handles major light sources, such as car lights, sunlight or most specular reflections (a minimal sketch of these first two steps is given after this list).

  • Model fitting. A list of vertical and horizontal lines stems from the set of LED candidates previously detected. Knowing the 3D base model of the station, simple heuristics are used to remove candidates leading to an improper form factor. In our case, several constraints on the LEDs' relative positions (only two LEDs on top) make it easy to remove trivially misfitting candidates. Moreover, this step can be simplified for subsequent detections, since a rough initial position for the projected pattern is given by the previous iteration. Several LED sub-models can be tracked on the station for extra robustness against occlusions. In our case, three sub-models (defined by six to eight LEDs) can be used while keeping the POSIT algorithm running, while a rough position can be computed from the top four LEDs (Figure 5).

  • POSIT algorithm (summarized in Figure 6). This algorithm, detailed in Section 3.1.2, provides an estimation of the six-degree-of-freedom pose (position and attitude) of the model from its projection onto the camera plane. This gives a complete determination of the car attitude. POSIT can be run separately on each of the four models detected.
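As a rough illustration of the first two steps (not the authors' implementation), the following Python/OpenCV sketch thresholds the brightest pixels and keeps only blobs of a plausible LED size; region growing is approximated by connected-component extraction, and all threshold values are invented placeholders that would need tuning for the actual camera and IR filter.

```python
import cv2
import numpy as np

# Placeholder thresholds: the real values depend on the camera,
# IR filter and LED power used in the AMARE setup.
BRIGHTNESS_MIN = 200          # minimum gray level for an LED pixel
AREA_MIN, AREA_MAX = 4, 400   # plausible LED blob sizes, in pixels

def detect_led_candidates(gray):
    """Return centroids of bright blobs that could be station LEDs."""
    # Maxima selection: keep only the brightest pixels.
    _, mask = cv2.threshold(gray, BRIGHTNESS_MIN, 255, cv2.THRESH_BINARY)
    # Region growing approximated by connected components on the mask.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    candidates = []
    for i in range(1, n):  # label 0 is the background
        # Model-based rejection: blobs physically too big (car lights,
        # sunlight, specular reflections) cannot be our LEDs.
        if AREA_MIN <= stats[i, cv2.CC_STAT_AREA] <= AREA_MAX:
            candidates.append(centroids[i])
    return np.array(candidates)
```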

3.1.2. The POSIT Algorithm

This algorithm was first published in [30] by DeMenthon and Davis; its purpose is to find the pose of an object with regard to the camera referential from a single image. This is not a simple task, due to the loss of information resulting from the projection of the 3D model onto the picture plane. A full pose has six degrees of freedom, which are not directly observable after the projection onto a 2D plane during the imaging process. In other words, the projection matrix stemming from the standard pinhole camera model is not invertible.

It is therefore necessary to estimate the pose with an approach robust enough to handle these ambiguities gracefully. Several methods have been developed over time (see the references for an extensive review), but the POSIT algorithm is now commonly used for this task, due to its very low coding and computing complexity and its iterative nature. POSIT does not require an initial pose estimate and can run in real-time on low-power hardware.

Summing up its key ideas, POSIT can be split into two steps: pose computation from an approximated scaled orthographic projection and an iteration procedure that allows the algorithm to converge without an initial guess. The scaled orthographic projection is close to a perspective projection, but differs in that the depth coordinates of all model features get the same value in the projection computation, thus neglecting intra-object depth differences compared to the camera-to-object distance. This effectively linearizes the projection process. The iteration procedure consists of computing the mismatch between the observation (the "true" projection of the 3D model onto the image plane) and the computed scaled orthographic projection, which gives the pose correction step.

In practice, POSIT converges within 10 iterations, and its reliability can be assessed by re-projecting the model features onto the image plane from the computed pose, the camera pinhole model and the known geometry of the model. A limit of the POSIT approximations can, however, be observed at very close range, when the model depth dimension is no longer negligible compared to the camera-to-object distance.
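To make the two steps concrete, here is a compact NumPy sketch of the POSIT iteration following the formulation of DeMenthon and Davis [30]; the variable names, the fixed iteration count and the absence of the re-projection quality check are our own simplifications, not the authors' code.

```python
import numpy as np

def posit(model_pts, image_pts, f, n_iter=10):
    """POSIT sketch: pose of a known 3D model from one image [30].
    model_pts: (N, 3) LED positions in the station frame (row 0 = reference).
    image_pts: (N, 2) detections in pixels, principal point subtracted.
    f: focal length in pixels.  Returns rotation R and translation T."""
    A = model_pts[1:] - model_pts[0]   # vectors from the reference LED
    B = np.linalg.pinv(A)              # solves the linear SOP system
    eps = np.zeros(len(A))             # depth corrections, 0 under pure SOP
    for _ in range(n_iter):
        # Scaled orthographic projection (SOP) equations, closed form.
        x = image_pts[1:, 0] * (1 + eps) - image_pts[0, 0]
        y = image_pts[1:, 1] * (1 + eps) - image_pts[0, 1]
        I, J = B @ x, B @ y
        s = 0.5 * (np.linalg.norm(I) + np.linalg.norm(J))  # s = f / Z0
        i, j = I / np.linalg.norm(I), J / np.linalg.norm(J)
        k = np.cross(i, j)             # third rotation axis
        eps = (A @ k) * s / f          # refine the SOP depth corrections
    R = np.vstack((i, j, k))
    T = np.array([image_pts[0, 0], image_pts[0, 1], f]) / s
    return R, T
```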

Figure 7 shows a typical view from the perception system. The docking station is correctly identified and positioned, as shown by the back-projected features of a six-LED model (white circles) superimposed on the detected LEDs (ends of the white lines). This scene shows a difficult, "worst-case" situation, because of the low position of the sun (in front of the camera) and some detected reflections. Following the proposed algorithm, an initial region growing step restricted to a reasonable LED size produces a list of LED candidates, pictured as red squares in Figure 7. The known relative positions of the LEDs are then used to remove improbable LED configurations; the recognized configuration is depicted by the white segments in the figure. Finally, the POSIT algorithm is applied to this recognized projected pattern, in this case the six-LED sub-model, and the 3D configuration is back-projected onto the picture, as shown by the white disks. The correspondence between the detected LED positions and the 3D projections from the known model and pose is used as a quality-check criterion.

3.1.3. Filtering

Since the information coming from the camera is noisy, signal filtering is required. To this end, a digital filter based on classical finite impulse response filtering and numerical differentiation is used. This technique was developed in the framework of the ALIEN project (http://raweb.inria.fr/rapportsactivite/RA2010/alien/uidl.html), which is devoted to the study and development of new techniques in identification and estimation [31].

The signal coming from the camera is approximated by a Taylor expansion truncated at order N, around t = 0:

$$x(t) = \sum_{i=0}^{N} x^{(i)}(0)\,\frac{t^{i}}{i!} \qquad (1)$$

Then, each processed signal can be represented by a polynomial function, and the derivative coefficients can be calculated using the Laplace transform. Here, the x(t) and y(t) positions over time are locally represented as first-order polynomial functions, ∀(a0, a1) ∈ ℝ2:

$$x(t) = a_0 + a_1 t \qquad (2)$$

In order to smooth the signal, the coefficient a0 must be estimated for both the x(t) and y(t) signals. Using the Laplace transform and successive algebraic manipulations, Equation (2) can be expressed in the Laplace domain as:

$$\frac{a_0}{s^3} = \frac{2X(s)}{s^2} + \frac{1}{s}\,\frac{dX(s)}{ds} \qquad (3)$$
where X(s) is the operational expression of x(t) (and, respectively, Y(s) of y(t)). Using classical operational-to-time-domain transformation rules and the Cauchy formula for repeated integration, the estimation of the coefficient a0 reduces to a single integral:
$$a_0 = \frac{2}{T^2}\int_0^T (2T - 3\delta)\,x(\delta)\,d\delta \qquad (4)$$
where T is the length of the integration window. More details on this technique are provided in [31–33].
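Equation (4) maps directly onto a fixed-window FIR filter. The sketch below is a minimal NumPy version, assuming uniformly sampled data; a quick sanity check is that the taps sum to one, so a constant signal passes through unchanged.

```python
import numpy as np

def estimate_a0(x, T, dt):
    """Sliding-window implementation of Equation (4):
    a0 = (2 / T^2) * integral_0^T (2T - 3*delta) x(delta) d(delta).
    Assumes uniform sampling with period dt; delta runs over the window."""
    n = int(round(T / dt))
    delta = (np.arange(n) + 0.5) * dt             # midpoint rule
    taps = (2.0 / T**2) * (2 * T - 3 * delta) * dt
    # np.convolve reverses the kernel, so the newest sample in each
    # window is weighted by taps[0]; the taps sum to ~1 (unit DC gain).
    return np.convolve(x, taps, mode="valid")
```

Applied to the raw x(t) and y(t) streams, this yields the smoothed positions fed to the control stage.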

3.2. Control

The environment and operating conditions (speed and available data, among others) determine the control law used for autonomous vehicles. Since a real vehicle is a complex multipart system, some works use detailed models or AI techniques to control it [34,35]. However, under stringent conditions, such as low constant speed and the absence of dynamic forces (lateral acceleration is zero), a simple kinematic model can be used. Moreover, it is well accepted in the literature to separate the control into lateral (steering wheel) and longitudinal (throttle and brake) controllers for driverless vehicles, both in hardware and software. Consequently, each system can run independently.

For the longitudinal controller, we used a proportional-integral (PI) controller to reach the reference speed and then to reduce the speed as the vehicle approaches the docking point. Both controllers, lateral and longitudinal, were tested in previous simulations, showing good results [29]. However, in the real implementation, only the longitudinal controller worked appropriately, since the camera information it relies on is always available. The bang-bang control law proposed for the lateral control in [29] was discarded, because the maximum vision range of the camera is limited to [−20, 15] degrees and no odometry is integrated in the vehicle. Moreover, that simulation did not consider the footpath where the charging station is placed; the overshoot of this control law could therefore crash the front right wheel into the infrastructure. In this section, a new solution for the lateral control in autonomous docking for electric vehicles is presented.

3.2.1. Kinematic Model

Due to the low speed of our application, the centrifugal force is considered negligible; wheel slip and the forces transferred between wheels on the same axle are approximated to zero. Moreover, the radius of curvature is assumed to be much larger than the wheelbase. Therefore, the kinematics are described by the standard bicycle or Ackermann model [36,37], assuming that the two front wheels turn at the same speed and that the rotation center is the midpoint between them. The differential equations describing the movement in a Cartesian plane (x, y) are as follows:

$$\frac{dX}{dt} = V(t)\cos\theta \qquad (5)$$
$$\frac{dY}{dt} = V(t)\sin\theta \qquad (6)$$
$$\frac{d\theta}{dt} = \frac{V(t)}{L}\tan\alpha \qquad (7)$$
where θ is the orientation angle with respect to the XY plane, α is the steering angle of the front wheel, L is the wheelbase and V(t) is the longitudinal speed. The point (X, Y) is defined with respect to the center of the rear axle of the vehicle. The simulation presented in [29] shows good results when controlling this rear point (Figure 8). However, due to the high precision needed in our application (the vehicle has to reach the docking point with an error of ±5 cm), it is necessary to translate the control point to the front. The bottom left part of Figure 8 shows a block diagram with the input variables used in the control stage, as well as the steering angle output, which drives the vehicle to the docking point. The next module explains the considerations to this end.
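For simulation or prediction purposes, Equations (5)-(7) can be integrated with a simple Euler step, as in the following sketch; the wheelbase value is a placeholder, since the paper does not report the dimensions of the C1.

```python
import math

def bicycle_step(x, y, theta, v, alpha, L=2.34, dt=0.05):
    """One Euler step of the kinematic model, Equations (5)-(7).
    v: longitudinal speed (m/s), alpha: front steering angle (rad),
    L: wheelbase (m, placeholder value), dt: integration step (s)."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / L) * math.tan(alpha) * dt   # valid for |alpha| < pi/2
    return x, y, theta
```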

3.2.2. Front Control Point

Information coming from the camera provides the position (in Cartesian coordinates) and the angular error from the reference line (in radians) at the camera location (Figure 8). The aim of this module is to compute the coordinates of the control point from the coordinates of the camera and to correct the angular error accordingly.

Table 1 shows the parameters, measured from the docking point to the "reference LED", used to calculate the position (Figure 8). Ang_offset is the angular offset of the camera with respect to the reference line, since the camera is slightly turned to capture more LEDs on the right side of the vehicle. Dist_target is the distance from the camera (behind the rear-view mirror) to the nose of the vehicle (where the front control point is placed). X_offset and Y_offset are the offset distances from the reference LED to the docking point (Figure 8).

The new control point and the angular error are calculated as follows:

$$\mathrm{Angle}_{new} = \mathrm{Angle}_{camera} - \mathrm{Ang}_{offset}\,\frac{\pi}{180} \qquad (8)$$
$$X_{front} = X_{camera} - \cos(\mathrm{Angle}_{new})\,\mathrm{Dist}_{target} - X_{offset} \qquad (9)$$
$$Y_{front} = Y_{camera} + \sin(\mathrm{Angle}_{new})\,\mathrm{Dist}_{target} - Y_{offset} \qquad (10)$$
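In code, the translation of Equations (8)-(10) is a one-line trigonometric correction per coordinate. The sketch below uses the Table 1 values; note that the sign conventions follow the reconstruction above and should be checked against the actual vehicle frame.

```python
import math

# Parameters from Table 1 (meters and degrees).
DIST_TARGET = 1.17
X_OFFSET, Y_OFFSET = 1.0, 1.36
ANG_OFFSET = 2.3

def front_control_point(x_cam, y_cam, angle_cam):
    """Translate the camera measurement to the front control point,
    Equations (8)-(10).  angle_cam is in radians."""
    angle_new = angle_cam - math.radians(ANG_OFFSET)  # remove camera bias
    x_front = x_cam - math.cos(angle_new) * DIST_TARGET - X_OFFSET
    y_front = y_cam + math.sin(angle_new) * DIST_TARGET - Y_OFFSET
    return x_front, y_front, angle_new
```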

3.2.3. Lateral Control

The two control variables used in the lateral control law are the lateral and angular errors, as proposed in previous works [37,38]. The lateral error is computed at the front control point (in meters) and the angular error with respect to the reference line (in degrees). K1 and K2 are gains tuned manually on the vehicle. The first has a proportional effect on the control action, since it is associated with the error in Y, while K2 has a derivative influence on the control behavior, acting on dY/dt. From Equation (6), two facts can be assumed: the speed is constant in our experiments, and the orientation angle θ is small (a constraint imposed by the camera information). Then, Equation (6) can be rewritten as follows:

$$\frac{dY}{dt} = V\,\theta \qquad (11)$$
where θ is proportional to dY/dt (the angular error); therefore, K2 has a derivative action in our system. As in classic PD-controlled systems, K1 reduces the lateral error (in meters), while K2 helps to avoid oscillations and allows a faster, smoother output. The final values used (for the non-normalized steering wheel output) are 700 and 45, respectively. Finally, an explicit form of the control law, showing the proportional and derivative terms with respect to the reference line (Lat_error), is written as follows:
$$U(t) = K_1\,\mathrm{Lat}_{error} + \frac{K_2}{V}\,\frac{d\,\mathrm{Lat}_{error}}{dt} \qquad (12)$$
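A direct transcription of Equation (12) with the reported gains might look as follows; the derivative of the lateral error would in practice come from the filtered camera data, and the sign convention of the steering output is an assumption.

```python
def lateral_command(lat_error, lat_error_rate, v, k1=700.0, k2=45.0):
    """PD steering command of Equation (12), non-normalized steering
    wheel output.  v is the (constant, nonzero) longitudinal speed."""
    return k1 * lat_error + (k2 / v) * lat_error_rate
```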

4. Results and Discussion

After the validation of the perception system, a validation of the entire system implemented in our electric vehicle is described (Figure 1). The experiments illustrate the performance from different X and Y starting points (from 3 to 5 m and from 0 to 50 cm, respectively). Due to the footpath, negative values on the Y axis are not considered in the real implementation; however, one experiment was completed from −25 cm to validate our control architecture. All the experiments in this subsection were carried out in the INRIA facilities with the same vehicle, charging station and perception system described in Section 2. Figure 9 shows four different validation tests. Every experiment was executed three times from approximately the same starting reference. This figure shows the position in Cartesian coordinates, coming from the front control point module described in Section 3.2.2. In the lower middle part of the same figure, a reference square shows that the vehicle arrives at the docking point with a small error (≤±50 mm).

The upper plot in Figure 10 shows the steering wheel control output for each experiment. The light blue curve (departure point of 5 m and 50 cm on the X and Y axes, respectively) shows that the steering wheel turns to around −400 degrees and then softly returns to the center. The trajectory is continuous and without overshoot, thanks to the filtering of the input variables (Section 3.1.3). The middle and lower plots show the evolution of both input variables: the lateral and angular errors. Both tend to zero, and the final lateral and angular (yaw) errors are small (Table 2), yielding a good docking between the vehicle and the infrastructure charging arm. The lateral error was measured with an external laser distance meter in order to obtain ground-truth values for the distance between the vehicle and the docking point.

Table 2 shows the departure and arrival points of every experiment in millimeters, as well as the yaw of the vehicle. Both the lateral and longitudinal controllers stayed within the maximum error permitted in our application. The averages of the lateral and longitudinal errors over the set of experiments are 24.7 and 9.61 millimeters, respectively. Both errors are low, and the vehicle docks inside the valid range (≤±50 mm). Moreover, the arrival yaw is also low (the root mean square error is 1.05 degrees), which is important for a good docking with the charging station.

Finally, an experiment from a greater distance, both in X and Y axes (7.5 and 1.25 m, respectively), has been performed. Figure 11 shows the position in Cartesian coordinates from the perception system, filtered and translated to the front control point. As in the previous experiments, the vehicle reaches the docking point with an error lower than 5 cm.

Figure 12 shows the evolution of the steering position and of both input variables. The control action is soft and continuous, and the vehicle never crosses the reference line (zero on the Y axis). Around 20 s into the experiment, the vehicle arrived at the center of the docking line, but was not yet completely straight; the angular action then turned the steering wheel until the vehicle reached the docking point (at 35 s) with lateral and angular errors of 2 cm and 0.4 degrees, respectively. At that point, the automatic charging arm is ready to charge the batteries.

5. Conclusions and Future Work

In this work, a control architecture for autonomous docking, based on a perception system embedded in an autonomous electric vehicle and a recharging station for urban parking areas, is presented. Our approach has been developed in the framework of the AMARE project, using the information provided by an infrared camera and diodes installed in the recharging station. The information from this sensor is processed and filtered and then sent to the control stage for the automatic docking of the vehicle. The proposed architecture is easily adaptable to any commercial electric vehicle.

Different experiments, departing from different points, show the good behavior of the proposed system. Both lateral and longitudinal errors are lower than the limits of the charging station. The proposed controller is easy and intuitive to tune, and the gains can be adapted to different vehicles' characteristics. This technology assists human drivers in the charging and docking process of electric vehicles in cities.

The system presented in this paper is currently operating in a real scenario in the city of Paris (http://www.modulowatt.com/Modulowatt_video_Mobilier_Urbain_Intelligent_fr.html) as a permanent demonstrator of the AMARE project.

The proposed work relies solely on the information from the camera on board the vehicle. When the charging station is out of range, the camera is obstructed, or the signal is too noisy or lost (e.g., if the steering wheel turns sharply), the autonomous docking maneuver is stopped until the signal is perceived again. For this reason, other sensors and data sources, such as CAN frame data or odometry, may be added to the proposed control architecture in future work, in order to increase the redundancy and robustness of the system. Moreover, actions on the gear shift can be considered for more constrained scenarios.

Acknowledgments

This work was supported by the project AMARE, financed by ADEME (Agence de l'Environnement et de la Maîtrise de l'Energie).

References

  1. Lin, B.F.; Chan, Y.M.; Fu, L.C.; Hsiao, P.Y.; Chuang, L.A.; Huang, S.S. Incorporating Appearance and Edge Features for Vehicle Detection in the Blind-Spot Area. Proceedings of the 13th International IEEE Conference on Intelligent Transportation Systems (ITSC), Madeira Island, Portugal, 19–22 September 2010; pp. 869–874.
  2. Bansal, M.; Das, A.; Kreutzer, G.; Eledath, J.; Kumar, R.; Sawhney, H. Vision-based Perception for Autonomous Urban Navigation. Proceedings of the IEEE International Conference on Intelligent Transportation Systems (ITSC), Beijing, China, 12–15 October 2008; pp. 434–440.
  3. Puthon, A.S.; Nashashibi, F.; Bradai, B. A Complete System to Determine the Speed Limit by Fusing a GIS and a Camera. Proceedings of 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011; pp. 1686–1691.
  4. Llorca, D.F.; Milanés, V.; Alonso, I.P.; Gavilán, M.; Daza, I.G.; Pérez, J.; Sotelo, M.A. Autonomous pedestrian collision avoidance using a fuzzy steering controller. IEEE Trans. Intell. Transp. Syst. 2011, 12, 390–401. [Google Scholar]
  5. Paromtchik, I.; Laugier, C. Motion Generation and Control for Parking an Autonomous Vehicle. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Minneapolis, MN, USA, 22–28 April 1996; Volume 4, pp. 3117–3122.
  6. Final Report: Study on Clean Transport Systems; Technical Report; European Commission, Directorate-General for Mobility and Transport: Athens, Greece, 2011.
  7. Xia, T.; Yang, M.; Yang, R.; Wang, C. CyberC3: A prototype cybernetic transportation system for urban applications. IEEE Trans. Intell. Transp. Syst. 2011, 11, 142–152. [Google Scholar]
  8. Bouraoui, L.; Boussard, C.; Charlot, F.; Holguin, C.; Nashashibi, F.; Parent, M.; Resende, P. An On-Demand Personal Automated Transport System: The CityMobil Demonstration in La Rochelle. Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 1086–1091.
  9. Harris, A. Charge of the electric car-[power electric vehicles]. Eng. Technol. 2009, 4, 52–53. [Google Scholar]
  10. Budhia, M.; Covic, G.; Boys, J. A New IPT Magnetic Coupler for Electric Vehicle Charging Systems. Proceedings of the 36th Annual Conference on IEEE Industrial Electronics Society (IECON), Glendale, AZ, USA, 7–10 November 2010; pp. 2487–2492.
  11. Etezadi-Amoli, M.; Choma, K.; Stefani, J. Rapid-charge electric-vehicle stations. IEEE Trans. Power Deliv. 2010, 25, 1883–1887. [Google Scholar]
  12. Naranjo, J.; Bouraoui, L.; Garcia, R.; Parent, M.; Sotelo, M. Interoperable control architecture for cybercars and dual-mode cars. IEEE Trans. Intell. Transp. Syst. 2009, 10, 146–154. [Google Scholar]
  13. Bouraoui, L.; Petti, S.; Laouiti, A.; Fraichard, T.; Parent, M. Cybercar Cooperation for Safe Intersections. Proceedings of the IEEE International Conference on Intelligent Transportation Systems Conference (ITSC), Toronto, Canada, 17–20 September 2006; pp. 456–461.
  14. Moriwaki, K. Mathematical Modeling and Control of an Autonomous Electric Vehicle for Navigation and Guidance. Proceedings of the IEEE International Electric Vehicle Conference (IEVC), Greenville, SC, USA, 4–8 March 2012; pp. 1–8.
  15. Yih, P.; Gerdes, J.C. Modification of vehicle handling characteristics via steer-by-wire. IEEE Trans. Control Syst. Technol. 2005, 13, 965–976. [Google Scholar]
  16. Pérez, J.; Gajate, A.; Milanés, V.; Onieva, E.; Santos, M. Design and Implementation of a Neuro-Fuzzy System for Longitudinal Control of Autonomous Vehicles. Proceedings of the IEEE World Congress on Computational Intelligence WCCI 2010, Barcelona, Spain, 18–23 July 2010; pp. 1–5.
  17. Milanés, V.; Naranjo, J.; Gonzalez, C.; Alonso, J.; de Pedro, T. Autonomous vehicle based in cooperative GPS and inertial systems. Robotica 2008, 26, 627–633. [Google Scholar]
  18. Mao, X.; Wada, M.; Hashimoto, H. Nonlinear GPS Models for Position Estimate Using Low-Cost GPS Receiver. Proceedings of the IEEE Intelligent Transportation Systems Conference (ITSC), Shanghai, China, 12–15 October 2003; Volume 1, pp. 637–642.
  19. Xie, J.; Nashashibi, F.; Parent, M.; Favrot, O. A Real-Time Robust Global Localization for Autonomous Mobile Robots in Large Environments. Proceedings of the 11th International Conference on Control Automation Robotics Vision (ICARCV), Singapore, 7–10 December 2010; pp. 1397–1402.
  20. Zhang, X.; Rad, A.; Wong, Y. Sensor fusion of monocular cameras and laser rangefinders for line-based simultaneous localization and mapping (SLAM) tasks in autonomous mobile robots. Sensors 2012, 12, 429–452. [Google Scholar]
  21. Jia, Z.; Balasuriya, A.; Challa, S. Autonomous vehicles navigation with visual target tracking: Technical approaches. Algorithms 2008, 1, 153–182. [Google Scholar]
  22. Silverman, M.; Nies, D.; Jung, B.; Sukhatme, G. Staying Alive: A Docking Station for Autonomous Robot Recharging. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Washington, DC, USA, 11–15 May 2002; Volume 1, pp. 1050–1055.
  23. Cassinis, R.; Tampalini, F.; Bartolini, P.; Fedrigotti, R. Docking and Charging System for Autonomous Mobile Robots; Department of Electronics for Automation, University of Brescia: Brescia, Italy, 2005. [Google Scholar]
  24. Luo, R.; Liao, C.; Su, K.; Lin, K. Automatic Docking and Recharging System for Autonomous Security Robot. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Edmonton, AB, Canada, 2–6 August 2005; pp. 2953–2958.
  25. Singh, H.; Bellingham, J.; Hover, F.; Lemer, S.; Moran, B.; von der Heydt, K.; Yoerger, D. Docking for an autonomous ocean sampling network. IEEE J. Ocean. Eng. 2001, 26, 498–514. [Google Scholar]
  26. Bleijs, C.; Normand, O. A Fully Automatic Station Using Inductive Charging Techniques. Proceedings of the Thirteenth International Symposium on Electric Vehicles, Osaka, Japan, 13–16 October 1996.
  27. Wong, J.; Nejat, G.; Fenton, R.; Benhabib, B. A neural-network approach to high-precision docking of autonomous vehicles/platforms. Robotica 2007, 25, 479–492. [Google Scholar]
  28. Martin, J. Design for Implementation: Fully Integrated Charging & Docking Infrastructure Used in Mobility-on-Demand Electric Vehicle Fleets. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2012. [Google Scholar]
  29. Petrov, P.; Boussard, C.; Ammoun, S.; Nashashibi, F. A Hybrid Control for Automatic Docking of Electric Vehicles for Recharging. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), St. Paul, MN, USA, 14–18 May 2012; pp. 2966–2971.
  30. DeMenthon, D.F.; Davis, L.S. Model-based object pose in 25 lines of code. Int. J. Comput. Vision 1995, 15, 123–141. [Google Scholar]
  31. Fliess, M.; Join, C.; Sira-Ramirez, H. Non-linear estimation is easy. Int. J. Model. Identif. Control 2008, 4, 12–27. [Google Scholar]
  32. Fliess, M.; Sira-Ramírez, H. An algebraic framework for linear identification. ESAIM Control Optim. Calc. Var. 2003, 9, 151–168. [Google Scholar]
  33. Mboup, M.; Join, C.; Fliess, M. Numerical differentiation with annihilators in noisy environment. Numer. Algorithms 2009, 50, 439–467. [Google Scholar]
  34. Villagrá, J.; Milanes, V.; Pérez, J.; Godoy, J. Smooth path and speed planning for an automated public transport vehicle. Robot. Auton. Syst. 2012, 60, 252–265. [Google Scholar]
  35. Pérez, J.; Milanés, V.; Onieva, E. Cascade architecture for lateral control in autonomous vehicles. IEEE Trans. Intell. Transp. Syst. 2011, 12, 73–82. [Google Scholar]
  36. Ackermann, J.; Bünte, T. Robust prevention of limit cycles for robustly decoupled car steering dynamics. Kybernetika 1999, 35, 105–116. [Google Scholar]
  37. Sotelo, M.A. Lateral control strategy for autonomous steering of Ackerman-like vehicles. Robot. Auton. Syst. 2003, 45, 223–233. [Google Scholar]
  38. Pérez, J.; Milanes, V.; de Pedro, T.; Vlacic, L. Autonomous Driving Manoeuvres in Urban Road Traffic Environment: A Study on Roundabouts. Proceedings of the 18th World Congress of the International Federation of Automatic Control (IFAC), Milano, Italy, 28 August–2 September 2011; pp. 1–5.
Figure 1. Elements of the system and docking maneuver of the AMARE project.
Figure 2. Automated vehicle on the AMARE project.
Figure 3. Recharging and docking station on the AMARE project.
Figure 4. Control architecture for autonomous vehicles based on IR camera information.
Figure 5. Detection algorithms using different models.
Figure 6. Summary of the perception pipeline.
Figure 7. Typical view from the system, on the side of a busy road.
Figure 8. Variables used in the autonomous docking and experiments.
Figure 9. Validation tests: positioning.
Figure 10. Validation tests: input variables and action lateral controller.
Figure 11. Positions given from the perception system in the final experiment.
Figure 12. Lateral command and variables of the control in the last experiment.
Table 1. Parameters to calculate the front control point.

Parameter     Value
Dist_target   1.17 m
X_offset      1 m
Y_offset      1.36 m
Ang_offset    2.3 degrees
Table 2. Departure and arrival points in different situations (actual positions and yaw).

Experiment            Departure                        Arrival
                      X (mm)    Y (mm)    Yaw (deg)    X (mm)   Y (mm)   Yaw (deg)
X = 5 m, Y = 50 cm    4,988.6   481.4     −2.8         27.1     26.0     −3.2
                      4,963.7   479.2     0.6          37.0     −8.2     −2.1
                      4,980.4   442.1     4.9          31.5     −2.9     −2.0
X = 5 m, Y = 25 cm    5,010.5   308.9     −0.2         17.5     −5.1     −1.8
                      5,011.6   299.3     2.8          33.7     6.7      −1.1
                      4,996.7   250.2     −1.8         40.0     −3.9     −1.9
X = 5 m, Y = 0 cm     5,018.9   −14.5     1.4          20.8     14.5     −0.6
                      5,002.2   −125.2    1.1          −25.8    16.2     −0.2
                      5,004.0   −24.1     2.2          37.1     28.9     1.1
X = 5 m, Y = −25 cm   5,022.0   −251.4    1.3          24.0     32.9     0.7
                      5,012.9   −202.7    −2.7         17.1     21.3     −0.1
                      5,015.5   −260.8    −0.9         23.4     25.3     0.3
X = 3 m, Y = 25 cm    3,026.4   191.7     0.6          40.0     −1.0     −1.4
                      3,041.8   261.1     1.1          24.5     9.4      −2.2
                      3,122.5   260.7     −1.1         22.8     −15.9    −2.6
