
Automatic Vehicle License Plate Extraction Using Region-Based Convolutional Neural Networks and Morphological Operations

JongBae Kim

Department of Computer and Software, Sejong Cyber University, Seoul 04992, Korea
Symmetry 2019, 11(7), 882; https://doi.org/10.3390/sym11070882
Submission received: 25 May 2019 / Revised: 13 June 2019 / Accepted: 24 June 2019 / Published: 5 July 2019

Abstract

The number and range of candidate vehicle license plate (VLP) regions directly affect the result of VLP extraction. Selecting many candidate VLP regions therefore improves the extraction rate, but the processing time grows in proportion. In this paper, we propose a method for detecting vehicle license plates in real time. To do this, the proposed method makes use of the region-based convolutional neural network (R-CNN) and morphological operations. R-CNN is a deep learning method that selects a large number of candidate regions from an input image and classifies whether each contains an object of interest; however, this approach is too slow for real-time processing. To address this limitation, the proposed method narrows the selection range of candidate vehicle regions based on the size and position of vehicles in the input image, so that processing can be performed quickly. A vehicle license plate is then detected by applying morphological operations to the edge pixel distribution of the detected vehicle region. Experimental results show that the detection rate of vehicles is approximately 92% in real road environments, and the detection rate of vehicle license plates is approximately 83%.

1. Introduction

Recently, IT technology has been widely applied to autonomous vehicle safety, and the application of many safety technologies to autonomous vehicles means that the safety of the driver can be better guaranteed. The number of autonomous vehicles has grown recently, due to rapid technological development and investment. Autonomous driving technology is divided into five levels (0–4) [1,2]. At level 0, the driver performs 100% of the driving operation. Level 1 provides supporting functions during operation, such as the blind zone alarm, lane keeping assistance, adaptive cruise control, and the emergency braking assistance system [3,4,5,6]. Level 2 means that semiautonomous driving is possible by combining such functions; for example, combining safe driving support features yields advanced driver assistance system (ADAS) support, highway driving assistance, and advanced smart cruise control [7]. Level 3 is a self-driving stage, except in emergency situations. Finally, at level 4 the vehicle is 100% autonomous, fully automated, and responsible for safe operation; the ADAS function, driving assistance function, and vehicle distance maintenance function are all included at this level [8]. However, most of these functions are currently only possible by combining hardware sensors and vision sensors. Hardware sensors include laser, ultrasonic, infrared, and Lidar sensors. Improved environmental perception is possible by attaching these sensors to the vehicle; however, they increase the price of vehicles, and they frequently fail or need to be replaced after minor contact accidents [9]. Therefore, if measures more effective than the current vision-sensor-based vehicle detection function can be applied to the ADAS in various ways, more drivers will be able to benefit from safe driving support. In recent years, it has also become common to install a black box (video recording device) in vehicles.
According to a survey [10] on vehicle black boxes, 84.2% of respondents in 2013, 91.5% in 2015, and 93.2% in 2017 said they consider a black box necessary in a vehicle. Given the increasing number of drivers installing black boxes, a method to prevent vehicular accidents using current black-box devices, which today are used only to review footage after a traffic accident, can be considered. A safe driving support system based on a black box is more economical than one based on sensor hardware. Therefore, it is worthwhile to provide a driver with a variety of safe driving support information in advance by adding artificial intelligence functions to a black box that currently focuses only on image recording. Research has been conducted on image-processing-based safe driving support functions such as vehicle detection, lane recognition, pedestrian recognition, traffic light recognition, and distance estimation [4]. Safe driving support information obtained through vehicle detection during road driving has various applications, such as maintaining distance between vehicles, platooning connected vehicles, and avoiding collisions. In addition, by recognizing the license plates of vehicles, communication between vehicles can be made safer. In connected cars, false safe driving support information shared between vehicles can endanger other vehicles. To prevent such a problem, inaccurate information can be kept from being shared if the identity of the vehicle providing the safe driving information can be recognized. In other words, it is essential to collect reliable driving information transmitted from connected cars on the road.
To collect reliable road information, it is important to identify the information carriers, so that accurate information can be provided as safe driving support to other vehicles. As the license plate information of a vehicle belongs to an object rather than a person, its collection is not expected to conflict with laws on collecting a driver's personal information. It would therefore be advantageous to autonomous driving services to identify the license plates of vehicles running on the road and provide this information as autonomous driving support.
For example, when receiving driving-related recognition information from surrounding vehicles in connected driving, it is necessary to establish a criterion to determine whether a vehicle provides reliable information. In this case, by combining the vehicle's license plate identification information with the received driving information, the trustworthiness of the information can be ascertained. Many researchers have studied license plate detection and recognition, and license plate detection is especially important: recognizing a license plate requires an accurate detection of the plate to yield effective results. However, license plate detection faces many difficulties, such as lighting, weather, distance, and motion.
In the proposed method, the R-CNN [11,12,13,14] is used to detect vehicles in real time in the input image, and the license plate region is detected using the edge distribution and morphological operations within the detected vehicle area. R-CNN finds windows of various sizes that connect pixels of similar texture, color, and brightness through a selective search. To speed up the selection of R-CNN candidates in the proposed method, preliminary information on the location, shape, and size of the vehicle in the input image is provided in advance. The proposed method can be applied to autonomous vehicles by detecting the omnidirectional vehicles on the road and extracting their license plate areas in real time, and it can be applied to black-box systems to help prevent traffic accidents.
The remainder of this paper is organized as follows: In Section 2, we describe research related to vehicle and license plate detection, and in Section 3, we describe the proposed method. Section 4 presents the experimental results, and Section 5 concludes the paper.

2. Related Works

Various computer vision algorithms have been used to detect vehicle license plates. W. Nie et al. [15] used a taxi license plate detection method based on the Adaboost algorithm: in complex urban environments, the license plate area is detected through adaptive thresholding, the Adaboost method, and morphological operations. C. M. Wang and J. H. Liu [16] used a neural network to detect candidate regions of the license plate and performed detection through pixel color changes and binarization. N. Wang et al. [17] used a back-propagation neural network to segment the license plate and recognize its characters. In their method, edges are detected in the input image, a morphological operation is applied to the detected edge pixels, and license plate candidate areas are selected; the Back-Propagation Neural Network (BPNN) then determines whether a license plate is included in each candidate region. C. T. Nguyen et al. [18] filtered horizontal and vertical edge pixels through an edge detector and selected license plate candidate areas according to the distribution of the filtered edge pixels. They proposed a method to detect license plates effectively, even when rotated at various angles, by acquiring images with a pan-tilt camera. Y. Luo et al. [19] applied a single-shot multibox detector (SSD) to the input image to detect the vehicle license plate region; the SSD trains a VGG-based learning model to select license plates using a multistage detector. Various other approaches have been used to detect license plates, such as vertical edges, block-based processing, wavelet transform, local Haar-like features with AdaBoost, and texture and color features [20,21,22,23,24]. These methods attempt to detect the license plate directly, without first detecting the vehicle in the input image. Whether a license plate is present can be determined more accurately if the plate area is detected after confirming that a vehicle is included in the image. In addition, as license plates are acquired under various environments depending on the characteristics of the input image, it is necessary to build a detector that is robust to the external environment.

3. Proposed Methods

To detect vehicles in the input image, a rectangular area including the vehicle is detected using the R-CNN classifier and Support Vector Machines (SVMs). A vehicle detection model is created in advance through an R-CNN-based learning process. For the R-CNN learning, we use training images in which the position of the vehicle in road images has been selected manually in advance. The method creates rectangular candidate regions through a selective search of regions where a vehicle is likely to be included in the input image. The candidate rectangles are then resized to the same size, and the R-CNN object classifier determines whether a vehicle is included in each rectangular area. Finally, an SVM is used to detect the final rectangular area containing the vehicle, combining the rectangles classified as vehicles into one. Each detected vehicle region is then tracked through a pixel matching process. From each tracked vehicle rectangle, the license plate area is extracted from the distribution of edge pixels by performing morphological operations. Generally, a vehicle license plate is composed of a color clearly distinguishable from the background, to improve the readability and visibility of the characters, and the plate area is extracted using this characteristic. Figure 1 is a flow chart of the proposed method, which consists of a vehicle detection step, a detected vehicle tracking step, and a license plate detection step. In addition, R-CNN learning is used in advance to generate a vehicle detector applied in the vehicle detection process.

3.1. Vehicle Detection

To detect the vehicle in the input image, a vehicle detector is generated in advance from training images containing vehicles. The position of the vehicle in the training images is specified in advance using the ground-truth method. The vehicle area is then detected using the detector generated by the R-CNN learning machine. The R-CNN learning machine extracts region proposals from the input image using a selective search method and computes the characteristics of each region proposal with a CNN. For each proposal region, the CNN classification result and bounding box regression are calculated, and the final rectangular area containing the vehicle is selected using an SVM.
In the vehicle detection step, preprocessing is performed to reduce the size of the input image, remove noise, and improve image quality. In the proposed method, the input image is acquired from a black-box device installed in the vehicle. Generally, an HD-quality image is recorded so that objects in the image acquired by the black box can be read accurately. The disadvantage is that applying the vehicle detection step to the full 1920 × 1080 pixel input image takes a long time. In this step, the area where the vehicle is located must be detected quickly, with acceptable accuracy. To do this, we reduce the size of the image to 900 × 505 pixels using bilinear interpolation and perform noise reduction by applying a 3 × 3 pixel median filter. Figure 2 shows the result of preprocessing in the vehicle detection step.
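As a rough illustration only, this preprocessing step can be sketched in Python with OpenCV (the paper's implementation is in Matlab; the function name and OpenCV calls here are our own):

```python
import cv2

def preprocess(frame):
    """Sketch of the Section 3.1 preprocessing: bilinear downscaling
    of the 1920 x 1080 black-box frame, then a 3 x 3 median filter."""
    # Downscale to 900 x 505 pixels using bilinear interpolation.
    resized = cv2.resize(frame, (900, 505), interpolation=cv2.INTER_LINEAR)
    # Suppress impulse noise with a 3 x 3 median filter.
    return cv2.medianBlur(resized, 3)
```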
Searching the entire preprocessed image to select the area containing the vehicle is time consuming; that is, it takes a long time for the R-CNN to analyze the whole input image to select candidate regions. When detecting vehicles with the R-CNN, the detection processing time increases in proportion to the search range of the input image, so the search range must be reduced to a minimum while the vehicle detection success rate is maintained. Therefore, the calculation time is reduced by predefining the area of the input that can include a vehicle. When the input image is divided into five horizontal strips, there is no vehicle in the top and bottom strips; likewise, when the input image is divided into five vertical strips, the vehicle is assumed not to be included in the left and right strips, and only the middle portion of the input image is regarded as the candidate region. Figure 3 shows the result of specifying the candidate region for vehicle detection. Only vehicles located in front of the black-box camera need to be selected in order to detect the vehicle license plate: in our experiments, it was very difficult to detect the license plates of vehicles on the left or right side of the input frame, because the license plate region is too small or too distorted. Therefore, the proposed method considers only the case where the license plate is located at the center of the input frame. The vehicle region is detected only when it spans more than four subwindows, and the license plate extraction process is performed only when the detected vehicle region is at least 120 × 120 pixels. Thus, to detect the vehicle, the input frame is divided into 5 × 5 windows and only vehicles located in the middle of the frame are detected, as sketched in the code below.
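One plausible reading of this candidate-region rule, sketched in Python; the exact strip boundaries used in the paper are not given, so the central 3 × 3 block of the grid is assumed here:

```python
def candidate_region(image, grid=5):
    """Keep only the central block of a 5 x 5 grid over the frame.

    The top/bottom horizontal strips and left/right vertical strips
    are assumed vehicle-free, per Section 3.1.
    """
    h, w = image.shape[:2]
    cell_h, cell_w = h // grid, w // grid
    roi = image[cell_h:(grid - 1) * cell_h, cell_w:(grid - 1) * cell_w]
    return roi, (cell_w, cell_h)  # region and its offset in the frame

def plate_extraction_allowed(box_w, box_h):
    """Plate extraction is attempted only for detections of at least
    120 x 120 pixels, as stated in the text."""
    return box_w >= 120 and box_h >= 120
```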
The vehicle area is detected using the R-CNN vehicle detector in the candidate area. For training, rectangular areas containing vehicles were specified by the ground-truth method on road images containing 2340 vehicles. The R-CNN detector generated through the learning process uses the edge boxes algorithm [11] to generate region proposals that are likely to contain vehicles. Each region is then extracted and resized to the same size, and the CNN categorizes each area as a vehicle or nonvehicle area. Finally, the region proposal bounding boxes are refined by an SVM trained on CNN features. Figure 4 shows the R-CNN-based vehicle detection process. Figure 5 shows the R-CNN structure and the result of manually positioning the vehicle area in the input image.
For learning, the input image is reduced to 900 × 505 pixels, about 50% of the original size. The input of the deep learning network designed for the vehicle detector is 32 × 32, the filter size of the convolution layer is 5 × 5, the number of filters is 32, the stride is 2, and the padding is set to 2. The pool size of the max pooling layer is 3 × 3 with a stride of 2, and the learning machine is designed with 15 layers, as shown in Figure 5b. The network is trained with the stochastic gradient descent method with momentum. The learning rate is multiplied by 0.2 every epoch, the maximum number of epochs is 20, the minibatch size for the weight updates of the stochastic gradient descent algorithm is 32, and the initial learning rate is set to 0.00001. Figure 6 shows the filter weights of the first convolutional layer after training with the R-CNN learning machine, and Figure 6b shows the results of applying these filter weights. Each tile of the activation grid is the output of one channel of the first convolutional layer. White pixels show strong positive activation, black pixels show strong negative activation, and gray channels show no strong activation for the input image. The position of activated pixels in a channel corresponds to the same position in the original image: a white pixel at a position in a channel means that position is strongly activated.
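The description above translates roughly into the following PyTorch sketch. Only the first convolution and pooling stages are specified in the text; the classification head, the momentum value, and the two-class output are our assumptions, and the paper's full 15-layer Matlab network is not reproduced here:

```python
import torch
import torch.nn as nn

class VehicleNet(nn.Module):
    """32 x 32 input; 5 x 5 conv, 32 filters, stride 2, padding 2;
    3 x 3 max pooling with stride 2; placeholder classifier head."""
    def __init__(self, num_classes=2):  # vehicle vs. non-vehicle (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2),  # -> 32 x 16 x 16
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),                 # -> 32 x 7 x 7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        return self.classifier(torch.flatten(self.features(x), 1))

model = VehicleNet()
# SGD with momentum, initial learning rate 1e-5 (momentum 0.9 assumed).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5, momentum=0.9)
# Learning rate multiplied by 0.2 after every epoch, for up to 20 epochs;
# the minibatch size of 32 would be set in the DataLoader.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.2)
```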
Figure 7 shows the activations of channels 5, 16, and 28 of the first convolutional layer. The brighter white pixels of the 5th channel correspond to red regions in the original image, so this channel is activated by red pixels; the 16th channel responds to vertical edge pixels, and the 28th channel responds to edges with the same texture. To extract the vehicle's license plate, the vehicle area must be detected reliably, so to minimize errors in the R-CNN process, each detected vehicle area is verified by comparison with a template image obtained by averaging the vehicle areas used in the learning process. Figure 8 displays this average image of the vehicle area. A region is accepted as the final vehicle area if its similarity to the template is 0.4 or more, as sketched below. Figure 9 shows the results of R-CNN-based vehicle detection. In the input image, a vehicle is located on the right side but lies outside the predefined candidate region. To extract the license plate of a detected vehicle, the plate must be located in an area where it can be clearly distinguished; therefore, the detection process is performed on vehicles located at the center of the input image.
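The verification step can be sketched as follows; the paper does not name its similarity measure, so normalized cross-correlation is assumed here:

```python
import cv2

def verify_vehicle(region_gray, template_gray, threshold=0.4):
    """Compare a detected region with the averaged vehicle template
    (Figure 8) and accept it if the similarity is 0.4 or more."""
    region = cv2.resize(region_gray,
                        (template_gray.shape[1], template_gray.shape[0]))
    # With equal-sized inputs, matchTemplate returns a single score.
    score = cv2.matchTemplate(region, template_gray, cv2.TM_CCOEFF_NORMED)[0][0]
    return score >= threshold
```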

3.2. Vehicle Tracking

Vehicle tracking is performed through a feature matching process applied to the vehicle areas identified in the vehicle detection step. To determine the position of an identified vehicle in the next frame, an area with characteristics similar to those of the previously detected vehicle is searched within the predefined detection candidate area. Generally, the vehicle areas detected in adjacent frames are located near the previous detection area; therefore, the tracking process also performs well in terms of speed, because the feature comparison only needs to be executed around the detected area. To track the vehicle, its descriptors must be computed in real time. The scale invariant feature transform (SIFT) method has been widely used to compute such descriptors, but its computational complexity is a considerable disadvantage. To solve this problem, the proposed method uses the speeded-up robust features (SURF) detector to compute descriptors and track vehicles [25]. SURF describes the object of interest using the integral image and fast Hessian detection; consequently, the sign of the Laplacian computed from the Hessian matrix can be used to match objects quickly. Repeating the vehicle detection process for every successive frame wastes computation time; therefore, while a vehicle is present in the image frame, the detected vehicle is tracked, and no additional vehicle detection is performed in the area detected in the previous frame. Figure 10 shows the SURF features extracted from the detected vehicle and the SURF features used to track the vehicle area in the next frame.
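A minimal sketch of the SURF matching step is given below. Note that SURF is patented and, in OpenCV, lives in the contrib module xfeatures2d (a build with nonfree support may be required); the Hessian threshold and the ratio-test criterion are our assumptions, as the paper does not specify how matches are filtered:

```python
import cv2

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # threshold assumed
matcher = cv2.BFMatcher(cv2.NORM_L2)

def match_vehicle(prev_vehicle_gray, next_frame_gray):
    """Match SURF descriptors of the detected vehicle against the
    next frame to locate the tracked vehicle area."""
    kp1, des1 = surf.detectAndCompute(prev_vehicle_gray, None)
    kp2, des2 = surf.detectAndCompute(next_frame_gray, None)
    if des1 is None or des2 is None:
        return []
    # Lowe's ratio test keeps only distinctive correspondences.
    pairs = matcher.knnMatch(des1, des2, k=2)
    return [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
```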

3.3. License Plate Extraction

The distribution of edge pixels and the position of the license plate attached to the rear of the vehicle are used to extract license plates from detected vehicle areas. Typically, the license plate is located at the bottom of the vehicle's rear, so only the lower half of the detected rear-view image of the vehicle is selected. Vertical edge pixels are then extracted by filtering with the Sobel edge mask [−1 0 1; −2 0 2; −1 0 1] and normalized. Next, pixel binarization is performed by applying an adaptive threshold scheme. The horizontal histogram of the binary pixels is then calculated, and only the rows where the edge pixel concentration is 70% or more are retained. Dilation and erosion are repeated in the morphological operation to remove unnecessary regions, and the final license plate region is detected by extracting the pixel region of maximum size. Table 1 shows the process for extracting the license plate, and the license plate extraction results obtained through these processing steps are shown in Figure 11.
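For illustration, the Table 1 pipeline can be approximated in Python/OpenCV as follows. The mask sizes and the 0.7 threshold come from the text; the interpretation of the 70% rule as a row-wise histogram cut, and the OpenCV equivalents of the Matlab operations, are our assumptions:

```python
import cv2
import numpy as np

def extract_plate(vehicle_bgr):
    """Sketch of the license plate extraction steps of Table 1."""
    gray = cv2.cvtColor(vehicle_bgr, cv2.COLOR_BGR2GRAY)
    lower = gray[gray.shape[0] // 2:, :]        # lower half of the vehicle
    # Vertical-edge response with the Sobel mask [-1 0 1; -2 0 2; -1 0 1],
    # then normalization to [0, 1].
    edges = np.abs(cv2.Sobel(lower, cv2.CV_32F, 1, 0, ksize=3))
    edges = cv2.normalize(edges, None, 0.0, 1.0, cv2.NORM_MINMAX)
    # Otsu binarization (step 5).
    _, binary = cv2.threshold((edges * 255).astype(np.uint8), 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # Keep rows whose edge concentration reaches 70% of the maximum (steps 6-7).
    hist = binary.sum(axis=1).astype(np.float32)
    binary[hist < 0.7 * hist.max(), :] = 0
    # Vertical/horizontal dilation and their overlap (steps 8-9).
    v = cv2.dilate(binary, np.ones((80, 4), np.uint8))
    h = cv2.dilate(binary, np.ones((4, 80), np.uint8))
    overlap = cv2.bitwise_and(v, h)
    # Fill holes, then keep the largest connected region (steps 10-11).
    overlap = cv2.morphologyEx(overlap, cv2.MORPH_CLOSE,
                               np.ones((4, 10), np.uint8))
    n, _, stats, _ = cv2.connectedComponentsWithStats(overlap)
    if n < 2:
        return None
    i = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    x, y, w, hgt = stats[i, :4]
    return lower[y:y + hgt, x:x + w]            # step 12's 5-pixel margin omitted
```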

4. Experimental Results

The experiments were conducted on a Windows PC (3.2 GHz CPU, 32 GB RAM, 8 GB GPU) using the Matlab tool. The detector was trained on 2340 images containing vehicles. The training images were reduced from 1920 × 1080 HD black-box images to 900 × 505 pixels to reduce learning and detection processing time. Experimental results show that it takes about 0.52 s to detect a vehicle and 0.21 s to detect a license plate. However, to recognize a license plate in a later stage, a resolution above a certain size must be maintained, which limits how far the input image can be reduced. Reducing the input image yields faster vehicle detection, but a readable resolution must be maintained for license plate character recognition. Therefore, a promising approach is to detect the position of the license plate area in a reduced-resolution image and then map the detected license plate location back to the original image to acquire a high-resolution region for character recognition. Experimental results of the proposed method are shown in Table 2.
Vehicle detection was evaluated as successful if the detected area overlapped the ground-truth area by 50% or more, and license plate extraction was evaluated as successful if the overlap was 70% or more. Experiments were conducted on highway and city road sections. License plate extraction was performed on preacquired still images in which the plate of the vehicle had high readability. There were two main causes of vehicle detection failure: misdetection when two or more vehicles overlapped, and large vehicles such as trucks or buses. License plate detection relied on edge regions, so strong diffuse reflection of sunlight on vehicles acquired during the daytime caused failures. In addition, advertisement stickers or LED brake lights attached to the rear of the vehicle produced dense edge pixels, so the plate area was sometimes erroneously detected there. Table 3 compares the extraction performance of several methods to evaluate the proposed license plate extraction method. The first method is extraction using only morphological operations, without the vehicle detection process [26]. The second is a vehicle license plate detection method using maximally stable extremal regions (MSERs) and an SVM [27]. Applying only the morphological operations to license plate detection yielded a detection accuracy about 9.4% lower than that of the proposed method, and that method also had the disadvantage of a very high misdetection rate, because it attempts detection based on the edge distribution of the whole image. The extraction of the vehicle license plate using the MSER and SVM showed relatively high accuracy; however, its execution time for license plate detection was about 1.96 times that of the proposed method.
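For reference, the overlap criterion is typically computed as intersection-over-union; the exact formula is not stated in the paper, so the standard definition is assumed in this sketch:

```python
def overlap_ratio(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes: >= 0.5 counts
    a vehicle detection as successful, >= 0.7 a plate extraction."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0
```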
Figure 12 shows the results of vehicle detection and license plate area extraction using the proposed method. Experiments on black-box images obtained on real roads show that the proposed method can detect vehicles and extract the license plate area. However, while the vehicle can still be detected as the distance to the forward vehicle increases, errors in the detection of the license plate area begin to occur. Emblems, stickers, and LED brake lights located on the rear of the vehicle have especially high brightness values, which caused errors during detection. Therefore, in future research, we intend to estimate the license plate area using a deep learning machine for robust license plate area detection.

5. Conclusions

In this paper, we proposed a method that uses black-box images from a vehicle to detect other vehicles in its vicinity and extract their license plate areas. In our method, vehicles are detected using the R-CNN deep learning method, and the license plate area is extracted based on the distribution of edge pixels. The proposed method is intended for detecting and recognizing vehicle license plates so that reliable road environment recognition information can be exchanged between autonomous vehicles. It can also be applied to tasks such as vehicle tracking, surveillance, and the detection of vehicles violating traffic rules, and in particular it can be utilized in the field of Vehicle to Infrastructure (V2I) communication [28]. In future work, we plan to study large vehicles such as buses and trucks, and to consider robust sparse detectors based on edge and color comparison for license plate area detection.

Funding

This work was supported by the Ministry of Education, Science and Technology (NRF-2016R1D1A1B03931986).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Donald, L.F.; Maura, L.; David, M.; Eric, D.N.; John, K.P. Humans and Intelligent Vehicles: The Hope, the Help, and the Harm. IEEE Trans. Intell. Veh. 2016, 1, 56–67.
  2. Todd, L. Autonomous Vehicle Implementation Predictions: Implications for Transport Planning; Victoria Transport Policy Institute: Victoria, BC, Canada, 2019.
  3. Ryosuke, O.; Yuki, K.; Kazuaki, T. A survey of technical trend of ADAS and autonomous driving. In Proceedings of the International Symposium on VLSI Technology, Systems and Application, Hsinchu, Taiwan, 28–30 April 2014; pp. 1–4.
  4. Vipin, K.K.; Jordan, T.; Sudeep, P.; Thomas, B. Advanced Driver-Assistance Systems: A Path Toward Autonomous Vehicles. IEEE Consum. Electron. Mag. 2018, 7, 18–25.
  5. Seyed, M.I.; Hossein, N.M.; Hadi, K.; Yaser, P.F. An Adaptive Forward Collision Warning Framework Design Based on Driver Distraction. IEEE Trans. Intell. Transp. Syst. 2018, 19, 3925–3934.
  6. Vijay, G.; Shashikant, L. Lane Departure Identification for Advanced Driver Assistance. IEEE Trans. Intell. Transp. Syst. 2015, 16, 910–918.
  7. Kim, J.B. Efficient Vanishing Point Detection for Advanced Driver Assistance System. Adv. Sci. Lett. 2017, 23, 4115–4118.
  8. Kim, J.B. Detection and recognition of road markings for advanced driver assistance system. LNEE 2015, 354, 325–331.
  9. Kim, J.B. Traffic Lights Detection Based on Visual Attention and Spot-Lights Regions Detection. J. Inst. Electron. Eng. Korea 2014, 51, 1260–1270.
  10. Vehicle Black-Box Related Investigation. Available online: https://trendmonitor.co.kr/tmweb/trend/allTrend/detail.do?bIdx=1549&trendType=CKOREA (accessed on 1 July 2019).
  11. Zitnick, C.L.; Dollár, P. Edge Boxes: Locating Object Proposals from Edges. In Computer Vision—ECCV 2014; Springer: Cham, Switzerland, 2014; pp. 391–405.
  12. Ross, G.; Jeff, D.; Trevor, D.; Jitendra, M. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 580–587.
  13. Ross, G. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Las Condes, Chile, 11–18 December 2015; pp. 1440–1448.
  14. Shaoqing, R.; Kaiming, H.; Ross, G.; Jian, S. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
  15. Wenzhen, N.; Pengyu, L.; Kebin, J.; Huimin, L.; Xunping, H. Taxi License Plate Block Detection Based on Complex Environment. In Proceedings of the IEEE 3rd International Conference on Image, Vision and Computing, Chongqing, China, 27–29 June 2018; pp. 81–85.
  16. Chuin, M.W.; Jian, H.L. License plate recognition system. In Proceedings of the Conference on Fuzzy Systems and Knowledge Discovery, Zhangjiajie, China, 15–17 August 2015; pp. 1708–1710.
  17. Naiguo, W.; Xiangwei, Z.; Jian, Z. License Plate Segmentation and Recognition of Chinese Vehicle Based on BPNN. In Proceedings of the Conference on Computational Intelligence and Security, Wuxi, China, 16–19 December 2016; pp. 403–406.
  18. Chi, T.N.; Thanh, B.N.; Sun, T.C. Reliable detection and skew correction method of license plate for PTZ camera-based license plate recognition system. In Proceedings of the Conference on Information and Communication Technology Convergence, Jeju, Korea, 28–30 October 2015; pp. 1013–1018.
  19. Yiwen, L.; Yu, L.; Shaobing, H.; Fangjian, H. Multiple Chinese Vehicle License Plate Localization in Complex Scenes. In Proceedings of the IEEE 3rd International Conference on Image, Vision and Computing, Chongqing, China, 27–29 June 2018; pp. 745–749.
  20. Bai, H.; Liu, C. A hybrid license plate extraction method based on edge statistics and morphology. In Proceedings of the International Conference on Pattern Recognition, Cambridge, UK, 23–26 August 2004; pp. 831–834.
  21. Hsi, J.L.; Si, Y.C.; Shen, Z.W. Extraction and recognition of license plates of motorcycles and vehicles on highways. In Proceedings of the International Conference on Pattern Recognition, Cambridge, UK, 23–26 August 2004; pp. 356–359.
  22. Kaushik, D.; Hyun, U.C.; Kang, H.J. Vehicle license plate detection method based on sliding concentric windows and histogram. J. Comput. 2009, 4, 771–777.
  23. Kaushik, D.; Kang, H.J. A vehicle license plate detection method for intelligent transportation system applications. Int. J. Cybern. Syst. 2009, 40, 689–705.
  24. Chen, Z.X.; Liu, C.Y.; Chang, F.L.; Wang, G.Y. Automatic license plate location and recognition based on feature salience. IEEE Trans. Veh. Technol. 2009, 58, 3781–3785.
  25. Herbert, B.; Tinne, T.; Luc, V.G. SURF: Speeded Up Robust Features. LNCS 2006, 3951, 404–417.
  26. Anishiya, P.; Joans, S.M. Number plate recognition for Indian cars using morphological dilation and erosion with the aid of OCRs. Int. Conf. Inf. Netw. Technol. 2011, 4, 115–119.
  27. Kim, J.B. MSER and SVM-Based Vehicle License Plate Detection and Recognition System. In Communications in Computer and Information Science; Springer, 2012; Volume 310, pp. 529–535.
  28. Croce, A.I.; Musolino, G.; Rindone, C.; Vitetta, A. Transport system models and big data: Zoning and graph building with traditional surveys. ISPRS Int. J. Geo-Inf. 2019, 8, 187.
Figure 1. Process flow of the proposed method.
Figure 2. Results of preprocessing steps; (a) input, (b) resized, and (c) noise-removed image.
Figure 3. Results of specifying the vehicle detection candidate region.
Figure 4. Vehicle detection process based on R-CNN object detector.
Figure 5. Results of vehicle detection; (a) vehicle detection result, and (b) R-CNN model.
Figure 6. The first convolutional layer of the R-CNN: (a) the filter weights of the first convolutional layer, and (b) the results of applying the filter weights to the input image.
Figure 7. Activation results of the 5th, 16th, and 28th channels of the first convolutional layer of the R-CNN.
Figure 8. Template for verification of vehicle detection.
Figure 9. Results of vehicle detection.
Figure 10. Vehicle tracking results based on SURF feature information.
Figure 11. Process of vehicle license plate extraction using the morphological operation.
Figure 12. Extraction result of license plate area, (a) input image, (b) vehicle detection, (c) vehicle region, (d) candidate LP, (e) Sobel edge filtering, (f) normalization of (e), (g) binarization of (f), (h) thresholding of (g), (i) vertical morphology, (j) horizontal morphology, (k) common region with (i) and (j), (l) dilation of (k), (m) erosion of (l), and (n) LP region.
Table 1. Extraction process of vehicle license plates.
1. Convert the detected vehicle region into gray.
2. Select the lower half region of the detected vehicle region.
3. Calculate vertical edge pixels using a Sobel edge mask [−1 0 1; −2 0 2; −1 0 1].
4. Normalize the vertical edge pixels to [0 1].
5. Obtain the binary image using Otsu’s threshold method.
6. Calculate the horizontal edge histogram.
7. Threshold the horizontal edge histogram: set values of 0.7 or more to 1, otherwise 0.
8. Apply vertical and horizontal dilation using [80, 4] and [4, 80] rectangular masks.
9. Extract the overlapped region from the morphological operation results.
10. Fill holes using dilation with a [4, 10] rectangular mask and erosion with a 20-pixel line mask.
11. Extract the biggest binary region.
12. Extend the region by 5 pixels.
Table 2. Results of vehicle detection (VD) and license plate extraction (LPE).

Image | VD Success (%) | VD Miss (%) | LPE Success (%) | LPE Miss (%) | Time (sec.)
Highway | 94 | 12 | 85 | 15 | 0.89
City road | 91 | 18 | 83 | 19 | 0.91
Table 3. Results of the comparison of accuracy of the license plate extraction methods.

Methods | Accuracy (%) | Misdetection (%) | Time (sec.)
Morphology [26] | 78 | 69 | 0.31
MSER and SVM [27] | 90 | 12 | 1.43
Our Method | 83 | 17 | 0.73
