Article

Evaluating Factors Shaping Real-Time Internet-of-Things-Based License Plate Recognition Using Single-Board Computer Technology

by
Paniti Netinant
1,
Siwakron Phonsawang
1 and
Meennapa Rukhiran
2,*
1
College of Digital Innovation Technology, Rangsit University, Phathum Thani 12000, Thailand
2
Faculty of Social Technology, Rajamangala University of Technology Tawan-ok, Chanthaburi 22210, Thailand
*
Author to whom correspondence should be addressed.
Technologies 2024, 12(7), 98; https://doi.org/10.3390/technologies12070098
Submission received: 31 May 2024 / Revised: 29 June 2024 / Accepted: 30 June 2024 / Published: 1 July 2024
(This article belongs to the Section Information and Communication Technologies)

Abstract
Reliable and cost-efficient license plate recognition (LPR) systems enhance security, traffic management, and automated toll collection in real-world applications. This study identifies optimal configurations for enhancing LPR system accuracy and reliability by evaluating the impact of camera angle, object velocity, and distance on the efficacy of real-time LPR systems. An Internet of Things (IoT) LPR framework is proposed and implemented on single-board computer (SBC) technology, specifically the Raspberry Pi 4 platform, with a high-resolution webcam and the OpenCV and Tesseract OCR libraries. The research simulates common deployment scenarios of a real-time LPR system and performs thorough, meticulous testing that leverages the SBC's computational capabilities and the webcam's imaging capabilities, ensuring the system's reliability in various operational settings. We performed extensive experiments with one hundred repetitions at diverse angles, velocities, and distances. Precision, recall, and F1 score indicate the accuracy with which Thai license plates are identified. The results show that camera angles close to 180° significantly reduce perspective distortion, thus enhancing precision. Lower vehicle speeds (<10 km/h) and shorter distances (<10 m) also improve recognition accuracy by reducing motion blur and improving image clarity. Images captured from shorter distances (approximately less than 10 m) yield higher-resolution characters and more accurate recognition. This study substantially contributes to SBC technology for IoT-based real-time LPR systems, supporting practical, accurate, and cost-effective implementations.

1. Introduction

License plate recognition (LPR) technologies have attracted significant attention in recent years due to their extensive utility across a broad range of application domains, including facility security [1], traffic management [2], and automated toll collection [3]. LPR systems employ a combination of pattern recognition methods, machine learning algorithms [4], and image processing techniques [1] to extract and decipher alphanumeric characters from electronically captured vehicle license plates. However, traditional LPR systems often rely on specialized infrastructure, making them expensive and unsuitable for small-scale deployments, which highlights the pressing need for innovation in the field. Advances in, and the cost-effectiveness of, single-board computers (SBCs) and computer vision technologies have spurred interest in developing real-time LPR systems on flexible, low-cost platforms capable of concurrently processing complex image tasks [5,6]. Despite significant advancements, LPR remains challenging, particularly on resource-constrained SBC platforms such as the Raspberry Pi. SBC capabilities with respect to accuracy, energy efficiency, communication, and computing delay have been verified and tested in real time with various license plate datasets captured during daytime and evening [7]. A notable concern is the performance constraint intrinsic to Raspberry-Pi-based SBC designs and implementations, which originates from the platform's restricted computational capabilities [8].
Existing real-time LPR solutions are still undergoing practical evaluation and dissemination across several domains, and LPR systems have yet to be commonly deployed in the real world. The ability to detect vehicle license plates with precision and efficacy has emerged as an essential capability for numerous industries and governmental entities on a global scale. LPR constraints frequently lead to decreased processing speeds [9] and compromised accuracy [7], especially in real-time situations, where immediate processing and response are required, and in environments with high traffic volumes. A typical LPR system acquires an image by precisely aiming the camera at the license plate and then executes image processing and recognition tasks on a graphics card. Nevertheless, such an approach lacks the flexibility required for practical real-world deployment, where clear visibility of license plates is critical [10]. Furthermore, current real-time LPR algorithms implemented on the Raspberry Pi may lack adequate optimization for resource efficiency, resulting in suboptimal utilization of hardware resources and inefficiencies in processing capability. Moreover, environmental variability frequently undermines the resilience and precision of LPR systems implemented on the Raspberry Pi. An LPR system's efficacy depends on several crucial factors [7,11] that span a wide range of facets of the recognition process [9,12]. LPR must be performed precisely for automated vehicle identification and tracking systems to operate dependably and effectively. Factors influencing LPR accuracy include fluctuations in lighting conditions, tilt angles, inclement weather [7,13], vehicle velocities [14], occlusions [11], plate orientation [15], international character segmentation [16], and stepwise, algorithmic recognition [12,17,18].
Despite the rigorous performance demands of real-world applications, research gaps remain in understanding and resolving the fundamental accuracy factors of camera angle, object speed, and distance for image capture, which influence real-world implementation and testing. This study takes an innovative approach that could reshape the field of computer vision recognition by offering a compact, low-cost solution capable of processing complex image tasks concurrently, making it well suited for real-time LPR in resource-limited environments. The performance evaluation and implementation of an LPR system utilizing SBC technology, specifically a Raspberry Pi platform, are distinctively investigated in this study. The feasibility and effectiveness of employing a Raspberry Pi are evaluated by analyzing the performance and precision of the real-time LPR system. Through the use of pre-trained machine learning algorithms and open-source libraries, this research endeavors to construct an LPR system that is both economical and scalable, with potential applications extending from parking management to law enforcement.
Understanding how different spatial parameters affect the performance of these systems is critical for their optimal design, implementation, and operation. The objectives are to determine the optimal configurations for maximizing accuracy and reliability in various operational settings of features on the Raspberry Pi platform by answering the following research questions:
(1)
How does changing the camera angle affect the accuracy of detecting license plates in real time?
(2)
What is the relationship between camera-to-object distance and the performance of real-time license plate detection?
(3)
Are there optimal camera configurations in terms of angle, object speed, and distance that affect the effectiveness of real-time license plate detection using Raspberry Pi 4 (Cytron Technologies, Bangkok, Thailand)?
This study provides several significant contributions to the domain of LPR systems explicitly concerning the implementation capability of SBC technology on the Raspberry Pi 4 platform.
  • By systematically exploring the effects of camera angle, object speed, and distance on real-time license plate detection accuracy, this study provides effective perceptions of the parameters that influence the performance and accuracy of LPR systems in practical SBC with web camera settings.
  • The results obtained from this study provide pragmatic recommendations for enhancing the configuration of web cameras in practical implementations, thereby facilitating a real-time LPR technology that is more dependable, cost-effective, and efficient.
  • By promising the affordability, efficacy, and accessibility of the Raspberry Pi 4 platform, this study demonstrates the feasibility of implementing SBC technology for real-time LPR systems using cost-effective hardware, thereby expanding the accessibility of such technologies to a broader range of applications and users.
In addition, future research assessing and enhancing the performance and efficiency of real-time LPR systems using SBC technology on Raspberry Pi and comparable platforms will find the experimental framework and methodology established in this study valuable.

2. Literature Review

2.1. License Plate Recognition

Automatic number plate recognition (ANPR), also called LPR, is a technological advancement that facilitates the automated identification, detection, and interpretation of vehicle license plates [19]. Image acquisition, preprocessing, license plate localization, character segmentation, optical character recognition (OCR) [20], and postprocessing are all components of LPR. Each stage depends on distinct algorithms and techniques to extract relevant information from images [21] captured by cameras affixed to vehicles, roadside infrastructure, or stationary surveillance systems. The tenets of LPR comprise an extensive variety of approaches, algorithms, and methods derived from multiple areas of study. LPR systems continue to be revolutionized by developments in hardware technology, computer vision, and machine learning, which enable ever more precise, dependable, and effective vehicle identification solutions with a wide range of practical applications. The LPR method can be expressed as the following sequence of steps.
During the image acquisition phase of LPR, cameras capture digital images of vehicles and their license plates. Variations in camera angles, lighting conditions, image quality, and resolution can present obstacles for subsequent processing stages. Preprocessing methods are implemented to optimize license plate detection and recognition conditions by removing noise, correcting distortions, and normalizing illumination levels from captured images.
The procedure of license plate localization involves the identification of specific areas of interest within acquired images that have the potential to contain license plates. Typical image processing techniques utilized to accomplish this matter include edge detection [22], contour analysis [12], and morphological operations [21,23]. For accurate and dependable license plate localization, numerous algorithms have been implemented, such as template matching [1,24], sliding window approaches [7,25], and deep-learning-based object detection models [25,26].
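As an illustration of the localization techniques described above, the following minimal Python sketch combines OpenCV edge detection with contour analysis to propose plate-like regions; the input file name, contour limit, and aspect-ratio thresholds are illustrative assumptions rather than values used in this study.

```python
# Minimal sketch of contour-based plate localization, assuming a BGR image in
# "car.jpg"; the contour limit and aspect-ratio thresholds are illustrative only.
import cv2

image = cv2.imread("car.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.bilateralFilter(gray, 11, 17, 17)       # smooth while preserving edges
edges = cv2.Canny(gray, 30, 200)                   # edge map for contour search

# OpenCV 4 returns (contours, hierarchy).
contours, _ = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
plate_candidates = []
for contour in sorted(contours, key=cv2.contourArea, reverse=True)[:20]:
    x, y, w, h = cv2.boundingRect(contour)
    aspect_ratio = w / float(h)
    if 2.0 < aspect_ratio < 6.0 and w > 60:        # roughly plate-shaped regions
        plate_candidates.append(image[y:y + h, x:x + w])
```

In practice, such candidate regions would be passed to the segmentation and OCR stages described next; deep-learning detectors [25,26] can replace the hand-tuned thresholds when more robustness is required.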
Once candidate regions containing license plates have been identified, character segmentation is performed to isolate and extract individual license plate characters for recognition. This critical phase is contingent upon the precise decipherment of the alphanumeric characters present on the license plate [27]. In addition to fonts, character segmentation algorithms must contend with variable character sizes, spacing, and distortions introduced by perspective effects or image noise.
OCR is the concluding phase of LPR, during which the license plate number is retrieved by recognizing and interpreting the extracted characters. OCR algorithms accurately classify and decode characters through pattern recognition techniques, machine learning models, and language processing algorithms [20]. OCR systems achieve high recognition accuracy by utilizing deep learning techniques, including convolutional neural networks (CNNs) [28] and recurrent neural networks (RNNs) [29], which are trained on extensive datasets comprising annotated license plate images.
Postprocessing techniques refine the OCR results, rectify errors, and enhance the overall dependability of the LPR system. Context-based validation rules, spell-checking algorithms, or statistical models may be employed in this procedure to filter out false positives and validate the accuracy of the recognized license plate number [25,29,30].
Therefore, factors include hardware prerequisites, computational efficiency, system architecture, and performance evaluation metrics. Accuracy, speed, and scalability are harmoniously balanced in the design of LPR systems to satisfy the needs of particular applications and deployment scenarios.

2.2. OpenCV and Tesseract–OCR

OpenCV is a robust open-source library that offers an extensive selection of algorithms and functions to assist with machine learning, object detection, feature extraction, and image processing, among other computer vision tasks. OpenCV is extensively implemented in LPR systems due to its robust functionalities and widespread support for multiple programming languages, including Python, C++, and Java. Image processing operations are a critical component of OpenCV in the context of LPR. These operations are indispensable for preprocessing license plate images, ensuring that they are of higher quality and that regions of interest containing license plates are isolated [31,32]. Image preprocessing methods, including morphological operations, edge detection, and image filtering [33], are frequently utilized to eliminate noise, rectify distortions, and enhance the legibility of license plate characters. The accurate recognition of process operations through images is of utmost significance for a wide range of license plate varieties, which may display discrepancies in font styles, colors, and background textures [32].
One of the most essential components of LPR systems [34] is license plate localization, which identifies regions of interest within images that may contain license plates. OpenCV offers a diverse range of algorithms and methodologies to assist in the localization of license plates, encompassing machine-learning-based object detection models, template matching, and contour analysis [32]. These algorithms can efficiently identify and extract license plate regions even in complex backgrounds and varying lighting conditions, thereby enabling precise and resilient LPR [35]. OpenCV also supports image handling, object detection, and OCR workflows, which are critical for deciphering the alphanumeric characters on license plates [36]. Text recognition, or OCR, is the electronic extraction of text from images. The integration of OpenCV and Tesseract enables accurate text detection, whereas Python supplies the requisite scripting for data extraction and recognition. Google's Tesseract–OCR is an open-source OCR engine that recognizes characters in numerous languages, including those found on license plates. In combination with OpenCV, it supports image preprocessing, license plate localization, character segmentation, and OCR. Tesseract–OCR demonstrates the ability to precisely identify text across various fonts, sizes, and styles, thereby facilitating the extraction of license plate numbers from images captured under diverse conditions. Therefore, OpenCV and Tesseract–OCR support the LPR system in this investigation by providing essential image processing, object detection, and OCR functionalities.

2.3. Thai License Plates

Similarly to license plates in many other nations, Thai license plates serve as vital identifiers for vehicles and are involved in numerous facets of transportation, law enforcement, and vehicle registration. It is critical to comprehend the attributes and stipulations of Thai license plates to design and implement LPR systems that are efficient and adapted to Thai circumstances and that conform to the format and style prescribed by the Department of Land Transport (DLT) in Thailand. The conventional Thai license plate combines Thai characters, Arabic numerals, and special symbols. In the context of Thai license plates, the prevailing structure consists of two Thai characters succeeded by four Arabic numerals, which are delimited by a hyphen or space (e.g., กข-1234 or กข 1234). Furthermore, Thai license plates can feature distinctive symbols or indicators that designate particular vehicle categories or regions.
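To make the plate format concrete, the following minimal sketch validates OCR output against a regular expression for the common two-Thai-character pattern described above; the pattern is a simplification and does not cover every registration category (e.g., province text or special-series plates).

```python
# Minimal validation sketch for the common Thai plate pattern described above
# (two Thai characters, an optional separator, then up to four Arabic numerals);
# real plates include further variants not handled here.
import re

THAI_PLATE = re.compile(r"^[ก-ฮ]{2}[-\s]?\d{1,4}$")

def looks_like_thai_plate(text: str) -> bool:
    """Return True if the OCR output matches the basic Thai plate pattern."""
    return bool(THAI_PLATE.match(text.strip()))

print(looks_like_thai_plate("กข-1234"))   # True
print(looks_like_thai_plate("AB 1234"))   # False
```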
Standardization of the design and layout of Thai license plates guarantees legibility and consistency across all vehicles. To improve legibility in diverse lighting conditions, the characters are conventionally imprinted or embossed on a reflective metal plate featuring a high contrast. Regulations govern the font size, style, and color of the characters to guarantee legibility and adherence to traffic regulations. Additional security measures, including holograms, watermarks, or tamper-evident materials, may be incorporated into Thai license plates to deter counterfeiting and unauthorized modifications.
Recent advances in image recognition technology have made contemporary LPR systems capable of autonomously discerning, distinguishing, and interpreting Thai license plates. LPR systems utilize image processing, object detection, and OCR algorithms and techniques to extract license plate numbers from images captured by cameras affixed to vehicles, roadside infrastructure, or surveillance systems. Thailand can augment its traffic monitoring, enforcement, and security endeavors by integrating LPR technology into parking facilities, law enforcement agencies, and traffic management systems.

2.4. OCR Image to Text

OCR is a technological solution utilized to transform scanned documents and printed or handwritten text from images or other visual sources into text that machines can read [37]. OCR technology has become an indispensable tool in numerous industries and applications, including document digitization, data extraction, and text recognition. As illustrated in Figure 1, OCR image-to-text conversion requires the following steps: image acquisition, image preprocessing, text localization, character segmentation, feature extraction, character recognition, and postprocessing.
As depicted in Figure 1, the OCR process commences with acquiring text-containing images by utilizing scanning devices, cameras, or other imaging apparatuses. Images utilized in recognition may consist of screenshots, photographs, scanned documents, or frames extracted from video streams.
In the following step, images are preprocessed. Before OCR processing, image processing techniques are utilized to improve the quality and clarity of the input image. In character recognition, the following operations are performed: contrast adjustment, skew correction, image binarization, and noise reduction. Preprocessing images enhances OCR precision by optimizing text region visibility.
Text detection algorithms assist in localizing text regions for further processing and identify regions of interest that contain text within the input image. They examine the image to identify regions highly likely to contain text, including text blocks, paragraphs, or individual words.
Character segmentation is a key step. It involves dividing detected text regions into individual characters to aid recognition. This is straightforward in languages like English with clear-cut character spacing and shapes. However, it becomes more complex for languages like Arabic, Chinese, or Thai, which have intricate character structures. For instance, in Arabic, characters change shape depending on their position in a word. Specialized techniques are needed to handle such complexities.
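A minimal segmentation sketch along these lines is shown below, assuming a cropped grayscale plate image; Otsu thresholding and the height filter are illustrative choices rather than a prescribed method.

```python
# Minimal character-segmentation sketch, assuming "plate" is a cropped grayscale
# plate image; Otsu thresholding and the height filter are illustrative choices.
import cv2

def segment_characters(plate):
    _, binary = cv2.threshold(plate, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    plate_height = plate.shape[0]
    boxes = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if 0.3 * plate_height < h < 0.95 * plate_height:   # character-sized blobs
            boxes.append((x, y, w, h))
    boxes.sort(key=lambda box: box[0])                      # left-to-right order
    return [plate[y:y + h, x:x + w] for x, y, w, h in boxes]
```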
OCR is a subsequent procedure. OCR algorithms examine segmented characters to identify and convert them into machine-readable text. They do this by matching character shapes with predefined character templates or language models. To achieve high recognition accuracy, modern OCR systems leverage deep learning techniques like CNNs and RNNs. These models are trained on massive datasets of annotated text images, enabling them to recognize a wide range of characters with high precision.
Postprocessing constitutes the sixth stage. By utilizing postprocessing techniques, the OCR results are refined, errors are rectified, and the quality of the recognized text is enhanced. This process may entail the utilization of spell-checking algorithms, validation rules specific to the language, or context-based correction methods. Postprocessing plays a crucial role in improving the precision and legibility of the OCR output, guaranteeing that the identified text closely resembles its initial state.
The process finishes once the recognized and formatted license plate number is achieved. Positive and negative output testing is performed to validate the system’s accuracy, wherein the disparity between accurately identified and inaccurately identified license plate numbers is distinguished. Cross-validation [4,38], confusion matrix analysis [15,25,39], and real-world testing [1,40] are among the methods employed to assess the performance of the system. Through rigorous testing and implementation, LPR systems significantly contribute to the automation of processes and the implementation of security measures in real-world applications.

2.5. Factors for Single-Board Computer’s Capability and LPR Accuracy

Particularly for LPR applications, the Raspberry Pi has emerged as a competitive platform in SBC technology. Given its compact dimensions, cost-effectiveness, and adequate computational capabilities, the Raspberry Pi can support the deployment of LPR systems in environments where conventional computing resources are impractical or prohibitively expensive. By combining sensor integration with sufficient processing power to execute lightweight machine learning algorithms locally, the Raspberry Pi can be leveraged for real-time LPR. This methodology diminishes latency, curtails bandwidth consumption through local data processing, and improves the LPR system's real-time responsiveness in dynamic environments. As detailed in Table 1, the Raspberry Pi provides numerous advantages when utilized in SBC technology for real-time LPR systems.
However, the precision of LPR systems is crucial for their efficacy in diverse applications. High accuracy is crucial for reliable vehicle identification, optimizing operational efficiency, and enhancing security measures. Several factors can have a significant impact on the performance of LPR systems. These factors include the quality of the camera hardware, the complexity of the image processing and recognition algorithms, and environmental conditions, such as lighting, weather, and vehicle speed. The significance of technological advancements and strategic system design in improving LPR accuracy is demonstrated in Table 2.

3. Materials and Methods

This section is structured to enhance understanding through a logical progression, thus breaking down the complex components of the proposed system. Section 3.1 introduces the comprehensive framework of the IoT-based LPR architecture, highlighting an integrated and layered structure. Section 3.2 details the material design, covering each hardware and software component within the IoT-driven LPR ecosystem. Section 3.3 describes the experimental procedure, and Section 3.4 presents the data collection methodology, showcasing how the system's effectiveness was tested in real-world conditions. Section 3.5 explains the data evaluation metrics. Each subsection builds upon the last, thus thoroughly describing the system's architecture, components, and performance evaluation.

3.1. Design of IoT-Based LPR Framework Using SBC Technology

Figure 2 illustrates the flexible LPR framework designed to perform robustly in various operational settings. This framework, rigorously evaluated under different environmental conditions and camera configurations, consistently delivers dependable performance. It is powered by essential software and libraries, such as OpenCV, TensorFlow Lite, and PyTorch, thus ensuring accurate image processing and machine learning tasks. The inclusion of communication protocols like MQTT, HTTP, and LoRaWAN further enhances its flexibility, thereby ensuring reliable data transmission between IoT devices and servers across various network environments. The system’s support for multiple platforms, including Windows IoT and Linux, and its ability to process various data formats underscore its flexibility and comprehensive functionality in LPR systems.
The architecture utilizes Raspberry Pi 4 and USB webcams, thus showcasing an economical and effective solution for real-time SBC computation capabilities. This configuration facilitates distributed, edge computing, and cloud computing models, thus offering scalable and flexible deployment choices that accommodate different computational requirements and resource availabilities. Furthermore, incorporating distinct elements, such as power supplies and USB webcams, guarantees that the system can efficiently manage real-time data processing and analysis.
This study, based on a sophisticated real-time IoT-based LPR system framework, is specifically developed using aspect-layering architecture [49] to enhance modularity for precision, productivity, and flexibility. The proposed framework utilizes existing SBC technology, reliable programming libraries, and effective communication protocols to fulfill the rigorous requirements of real-time LPR applications. This comprehensive approach not only improves the technological capabilities of LPR systems using SBC but also broadens their usefulness across various sectors, thus significantly impacting the field of license plate recognition.
Each layer within the proposed IoT-based LPR framework is a modular interface, thus improving compatibility among components and technologies. The software layer of the framework is crucial, as it establishes standardized communication protocols compatible with various application interfaces and hardware devices. The framework enables easy configuration during implementation. The proposed architecture enables precise fine-tuning and calibrations at every level, thus improving the ability to meet security requirements and optimizing real-time LPR capabilities. The system architecture incorporates a variety of hardware models and seamlessly integrates state-of-the-art USB web cameras to ensure dependable data processing. The design is essential for effectively implementing LPR technologies, thus guaranteeing that the system can adapt to changing image recognition capabilities and demonstrating the interconnectedness of each vital component in terms of scalability and long-term viability of LPR systems.
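To illustrate the communication layer, the following minimal sketch publishes a recognized plate over MQTT, one of the protocols named above; the broker address, topic, and payload fields are hypothetical placeholders rather than values prescribed by the framework.

```python
# Minimal sketch of publishing a recognized plate over MQTT; the broker address,
# topic, and payload fields are hypothetical placeholders. Written against the
# paho-mqtt 1.x constructor; paho-mqtt 2.x additionally expects a
# CallbackAPIVersion argument when creating the client.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.local", 1883, keepalive=60)

def publish_plate(plate_text: str, confidence: float) -> None:
    """Send one recognition result to the server side of the IoT framework."""
    payload = json.dumps({
        "plate": plate_text,
        "confidence": confidence,
        "timestamp": time.time(),
        "device": "raspberrypi4-lpr-01",
    })
    client.publish("lpr/plates", payload, qos=1)

publish_plate("กข-1234", 0.92)
```

A similar message could be sent over HTTP or LoRaWAN, depending on the network environment, since the framework treats the transport protocol as an interchangeable layer.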

3.2. Material Design

The entire material design incorporates both hardware and software components into a unified framework that facilitates real-time data processing and analysis, as illustrated in Table 3. The system design prioritizes scalability and flexibility, thus enabling future enhancements, like integrating extra cameras or upgrading system components without the need for extensive redesigns.
This study utilizes Raspberry Pi 4 with 8 GB of RAM to implement an IoT-based LPR system designed explicitly for identifying Thai license plates. Raspberry Pi 4 offers significant memory and enhanced processing power, making it suitable for performing real-time image processing and character recognition tasks crucial for LPR systems. Raspberry Pi 4 runs on the 64-bit version of the Raspbian operating system. This study employs the Logitech C922 Pro Stream webcam (Logitech, San Jose, CA, USA), which can capture high-quality images and videos with resolutions of up to 1080p at a frame rate of 30 frames per second. The study utilizes the Pytesseract library, a reliable tool designed explicitly for text recognition in images, for the OCR of Thai license plates. This library effectively manages a wide range of font styles and sizes, a crucial feature for accurately interpreting the diverse typographic designs found on Thai license plates. By combining Pytesseract with the high-resolution image input from the Logitech C922, the system can precisely extract and interpret license plate text in diverse environmental conditions and from multiple perspectives.
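A minimal capture-configuration sketch for such a setup is shown below; the device index and property values are assumptions that may need adjusting per installation.

```python
# Minimal capture-configuration sketch for a USB webcam such as the Logitech
# C922 on a Raspberry Pi 4; the device index and property values are
# assumptions that may need adjusting per installation.
import cv2

cap = cv2.VideoCapture(0)                      # first USB camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)        # 1080p, as described above
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 30)

ok, frame = cap.read()
if ok:
    cv2.imwrite("sample_frame.jpg", frame)     # save a sample frame for inspection
cap.release()
```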
Figure 3 depicts the methodical process of an IoT-powered LPR system specifically designed to identify and examine Thai license plates on vehicles. This diagram provides a detailed overview of the sequential process, beginning with the initial detection of cars and ending with the integration of the final output data. Every stage in the procedure is vital to guarantee the precise identification and handling of license plate data. The workflow consists of several essential stages, including car detection, license plate localization, image preprocessing, and OCR using Pytesseract, which is specifically designed to handle Thai characters.
Figure 3 illustrates the sequence of steps in an IoT-enabled LPR system. The system algorithm consists of the following components.
  • Vehicle detection: The first step ensures the proposed system is activated only when a car is detected. This activation process prioritizes attention, thus preventing unnecessary processing and errors. Thus, it enhances the system’s efficiency and reduces false triggers. As a result, the reliability of subsequent steps is improved.
  • License plate localization: License plate localization involves accurately identifying the precise location of the license plate within an image. This process ensures that the OCR software only analyzes the relevant portion of the image. This focused approach reduces the likelihood of mistakenly interpreting nearby texts or symbols as part of the license plate, thereby enhancing the accuracy of the captured data for recognition.
  • Image preprocessing: This process involves enhancing the quality of the image through adjustments such as contrast enhancement, normalization, and resizing. These adjustments directly affect the OCR system’s ability to read the license plate accurately. Improved image quality results in increased accuracy in character recognition, thereby enhancing the reliability of the recognition process by ensuring that the characters are clear and properly structured for analysis.
  • OCR with Pytesseract: This crucial process converts visual data into text that machines can read. Pytesseract, a multi-language OCR tool, enables precise retrieval of alphanumeric information from license plates and demonstrates high accuracy in converting images to text across diverse conditions, ensuring reliability and validity in data extraction. License plate images are prepared for precise OCR through OpenCV image preprocessing, noise reduction, contrast adjustment, and normalization, and OpenCV is also employed to perform character segmentation in the license plate area; Pytesseract then recognizes and interprets the segmented characters. By configuring Pytesseract with both the Thai and English language packs, the system can distinguish between Thai letters and Arabic numerals in an image, which is advantageous for Thai license plates, which employ both scripts; the OCR engine’s multilingual support enables it to transition seamlessly between the two. Context-based validation is employed in postprocessing to refine the recognized text and maintain the license plate format (a minimal sketch of this OCR step is shown after this list).
  • Thai character conversion: This conversion is essential in regions where non-Latin scripts, like Thai, are used. This step ensures that data interpretation remains accurate across different character sets. It guarantees that the characters are correctly understood and represented in the system, thus maintaining the integrity and applicability of the data in local contexts.
  • Output and integration: The final stage of the process involves formatting and integrating the data into more extensive traffic or security systems, which is crucial for practical implementation.
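The following minimal sketch illustrates the OCR step referenced above, combining OpenCV preprocessing with a Pytesseract call configured for Thai and English; the specific flags, preprocessing steps, and availability of the "tha" language pack are assumptions about the setup rather than the exact configuration used in this study.

```python
# Minimal OCR sketch for a cropped plate image, combining OpenCV preprocessing
# with a Pytesseract call configured for Thai and English; the flags,
# preprocessing steps, and installed "tha" language pack are assumptions.
import cv2
import pytesseract

def read_thai_plate(plate_bgr):
    gray = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.fastNlMeansDenoising(gray, h=10)                 # noise reduction
    gray = cv2.resize(gray, None, fx=2, fy=2,
                      interpolation=cv2.INTER_CUBIC)            # enlarge small text
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # "tha+eng" lets Tesseract switch between Thai script and Arabic numerals;
    # --psm 7 treats the cropped plate as a single line of text.
    text = pytesseract.image_to_string(binary, lang="tha+eng", config="--psm 7")
    return text.strip()
```

The returned string would then pass through the Thai character conversion and context-based validation steps before being formatted for output and integration.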

3.3. Experimental Procedure

The research methodology is devised to systematically evaluate the influence of camera angle, object velocity, and distance on the efficacy of license plate detection using the Raspberry Pi 4 platform set up with a camera module that can capture images specifically for license plate detection. With the aim of achieving our research objectives, this study rigorously tests a variety of camera angles (135°, 150°, 165°, 180°, 195°, 210°, 225°), object speeds (<5 km/h, <10 km/h, <15 km/h), and distances (5–10 m) using a Raspberry Pi 4 platform. Each configuration is tested 100 times to ensure statistical reliability. The methodology is specifically designed to evaluate the significant impact of these variables on LPR system accuracy and reliability, thus providing crucial insights for the field of computer vision and image processing. Additionally, the distances between the camera and the objects are modified to replicate diverse operational situations, as depicted in Figure 4.
During the data collection phase, images of license plates are captured under various combinations of camera angles, object velocities, and distances. The images acquired during the experiments are marked and annotated with accurate information for evaluation. Afterward, the efficacy of license plate detection is evaluated using certain metrics, such as detection accuracy, false positives, and false negatives. Performance metrics are computed for each experimental condition to measure the influence of different conditions on detection accuracy.
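A minimal sketch of such an experiment loop is given below; the run_single_trial() helper is a hypothetical stand-in for one capture-and-recognition attempt, and the distance list is an illustrative subset of the tested range.

```python
# Minimal sketch of the experiment loop implied above: each combination of
# angle, speed band, and distance is repeated, and detections are tallied.
import csv
import itertools

ANGLES = [135, 150, 165, 180, 195, 210, 225]        # degrees
SPEEDS = ["<5 km/h", "<10 km/h", "<15 km/h"]
DISTANCES = [5, 8, 10]                               # metres (illustrative subset)
REPETITIONS = 100

def run_single_trial(angle, speed, distance):
    """Hypothetical stand-in for one capture-and-recognition attempt.
    A real run would capture a frame, run the LPR pipeline, and compare the
    OCR output with ground truth; here it simply returns a placeholder outcome."""
    return "tp"

with open("lpr_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["angle", "speed", "distance", "tp", "fp", "fn"])
    for angle, speed, distance in itertools.product(ANGLES, SPEEDS, DISTANCES):
        tally = {"tp": 0, "fp": 0, "fn": 0}
        for _ in range(REPETITIONS):
            tally[run_single_trial(angle, speed, distance)] += 1
        writer.writerow([angle, speed, distance,
                         tally["tp"], tally["fp"], tally["fn"]])
```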

3.4. Data Collection

The data collection process involves systematically testing an experimental LPR system to evaluate its performance in detecting and recognizing Thai license plates under different conditions. The authors manipulate multiple variables during the testing process to conduct this study. During each test, the experimental LPR system is activated, and data are gathered to determine the system’s ability to identify Thai license plates accurately. Once a license plate is detected, the system proceeds to identify the characters displayed on the plate. Character recognition accuracy is assessed by considering certain variables, such as vehicle velocity, distance from the camera, and the angle in relation to the camera.
The sampling results thoroughly assess the LPR system implemented on Raspberry Pi 4 through practical experimentation, as shown in Figure 5a–d. The data illustrate the results obtained by capturing images from different perspectives and distances using the LPR system in real-world scenarios. Figure 5a–d depict images obtained at distances ranging from 10 m to 5 m. The camera was positioned at a fixed angle of 180 degrees. These images provide a visual depiction of the recorded license plates at different distances, thus enabling an evaluation of the system’s ability to recognize characters at varying proximities accurately. Additionally, Figure 5e demonstrates the outcome of the LPR for OCR procedure on a Thai license plate. It showcases the system’s ability to accurately extract and recognize characters from the captured license plates.
The performance of the LPR system on Raspberry Pi 4 was thoroughly evaluated based on the data obtained from practical experiments, as illustrated in Figure 6a–c. The experiments involved capturing images at a fixed angle of 165 degrees and at different distances from the target license plate. The images in Figure 6a–c depict license plates captured at varying proximities of 10 m, 8 m, and 5 m. These visual representations illustrate the license plates at different distances. Furthermore, an image captured at a precise angle of 195 degrees from a distance of 5 m is displayed in Figure 6d. This image offers valuable insights into the system’s ability to withstand variations in camera angle. The outcome of the LPR for OCR procedure is depicted in Figure 6e, which demonstrates that the system successfully extracted and identified characters from the captured license plates with precision.

3.5. Data Evaluation

Precision, as used in the data evaluation of this study on an IoT-based LPR system, denotes the proportion of accurately identified license plates concerning the total number of identified plates. As an illustration, in the scenario where the system correctly identifies 90 out of 100 plates, the precision stands at 90%. Recall evaluates the capability of the system to locate every genuine license plate. For example, the recall rate is 90% if the system correctly identifies 90 out of 100 authentic license plates. The F1 Score is a metric that integrates precision and recall by computing their harmonic mean, thereby achieving a balance between the two. This formula becomes critical when an equilibrium must be struck between recalling the maximum number of plates identified and ensuring precise identifications.
Precision is a metric that is calculated as the ratio of true positives to the sum of true positives (TP) and false positives (FP). In other words, it measures the accuracy of the positive identifications made by the system. This calculation, as shown in Equation (1), is a fundamental part of our evaluation of the IoT-based LPR system.
Precision = TP / (TP + FP)        (1)
Recall is determined by dividing the number of true positives by the sum of false negatives (FN) and true positives (TP), as illustrated in Equation (2). The formulation assesses the system’s capability to identify every pertinent instance.
Recall = TP / (TP + FN)        (2)
Equation (3) computes the F1 score, the harmonic mean of recall and precision. The purpose of the formulation is to achieve a balance between recall and precision.
F1 Score = (2 × Precision × Recall) / (Precision + Recall)        (3)
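For clarity, Equations (1)–(3) can be computed directly from per-configuration counts, as in the following minimal sketch; the example counts mirror the 90-out-of-100 illustration given above.

```python
# Minimal sketch of Equations (1)-(3), computed from per-configuration counts
# of true positives (tp), false positives (fp), and false negatives (fn).
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn) if tp + fn else 0.0

def f1_score(p: float, r: float) -> float:
    return 2 * p * r / (p + r) if p + r else 0.0

# Example: 90 correctly identified plates, 10 false alarms, 10 missed plates.
p, r = precision(90, 10), recall(90, 10)
print(round(p, 3), round(r, 3), round(f1_score(p, r), 3))  # 0.9 0.9 0.9
```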
A statistical analysis is performed to ascertain correlations and associations between the spatial parameters (camera angle, object speed, and distance) and detection performance. Regression analysis can be conducted to identify the camera settings that optimize detection efficiency. Ultimately, the findings derived from the experiments are analyzed to answer the research inquiries and goals. An analysis is conducted to determine how camera angle, object speed, and distance affect the effectiveness of license plate detection. Based on the results, recommendations can be made to improve the design and operation of LPR systems that utilize Raspberry Pi 4. The meticulous methodology is essential to guarantee that the LPR system is theoretically effective, resilient, and efficient in practical scenarios. The methodological rigor of the system ensures its adaptability to potential variations in environmental conditions and operational demands, making it dependable and applicable to real-life situations.
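A minimal sketch of such a regression analysis is given below, assuming the per-test metrics were exported to a CSV file with numeric columns for angle, distance, speed, and precision; the file name and the numeric encoding of the speed bands are assumptions.

```python
# Minimal regression sketch relating detection performance to the spatial
# parameters; the CSV file name and the numeric encoding of speed bands are
# assumptions, not artifacts produced by this study.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lpr_results_metrics.csv")

# Ordinary least squares: estimate how angle, distance, and speed shift precision.
model = smf.ols("precision ~ angle + distance + speed", data=df).fit()
print(model.summary())
```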

4. Results and Analysis

The dataset that was examined comprises the outcomes of an experiment conducted utilizing the proposed LPR system integrated into a Raspberry Pi 4 with a web camera capable of capturing images at 1080p resolution. The purpose of this configuration was to assess the effectiveness of the LPR system across a range of distances (5–10 m), angles (135–225 degrees) relative to the position of the camera, and object velocities (5–15 km/h), as detailed in Table 4.
Table 5 presents a comprehensive synopsis of the principal performance indicators for the real-time LPR system throughout 126 distinct testing scenarios, which differ in distance, velocity, and angle. The average angle was 172.62 degrees, with a range of 135 to 225 degrees. A mean distance of 8.83 m with a standard deviation of 2.15 m indicated that the test distances were clustered relatively closely around the mean. True positives exhibited substantial variability, with a minimum of 15, a maximum of 90, and a mean of 49.65, indicating a generally high recognition rate for the SBC in practice. The mean number of false positives was 31.83, ranging from 2 to 82, which suggests that the system made sporadic misidentifications of non-license-plate objects as plates, potentially jeopardizing the system's overall dependability; this is a challenge that must be resolved to improve the performance of the system configuration. True negatives remained consistently at zero throughout all tests, which aligns with the anticipated outcome in situations where detection is limited to license plates. False negatives, which peak at 60 and average 16.31, draw attention to cases where valid license plates were missed owing to hardware limitations, an essential consideration for enhancing the system's precision and yet another obstacle that must be surmounted. The precision metric, which denotes the proportion of license plates accurately identified to the overall number of identifications, had a mean of 0.591 and a maximum of 0.943, suggesting that approximately 60% of the plates identified were recognized accurately on average.
However, the reduction of false positives could be improved. The mean recall value of 0.692, which quantifies the system's capability to identify every genuine plate, suggested that approximately 70% of the actual plates in the test scenarios could be detected. Variability was evident in the recall values, which spanned from 0.250 to 1.000 depending on the test conditions. Although the table does not explicitly state the F1 score, which is calculated as the harmonic mean of precision and recall, its dependence on both metrics can be deduced: higher precision and recall values yield a higher F1 score, indicating a more balanced and effective system. To ensure consistency throughout the testing process, each test was repeated 100 times. In summary, while the license plate recognition system exhibited commendable performance in accurately identifying valid plates, it encountered obstacles in the form of both false positives and false negatives. The precision and recall metrics indicate that although the system is effective, there are particular circumstances under which it could perform better. Additional research could be devoted to identifying the causes of elevated false positive and false negative rates to improve the overall precision and dependability of the system.
Table 6 presents a comprehensive synopsis of the principal performance indicators for the license plate recognition system throughout 126 distinct testing scenarios, which differ in distance, velocity, and angle. The average angle was 172.62 degrees, with a range of 135 to 225 degrees; the average is a measure of central tendency and here indicates the angle around which the testing scenarios were centered. The standard deviation measures the amount of variation or dispersion in a set of values; for distance it was 2.15 m, indicating that the system was tested at various distances from the license plate. Because the test distances clustered close to the mean, the system's performance could be compared consistently across the various distances.
True positives (TPs) quantify the number of license plates that are accurately identified. False positives (FPs) refer to the number of non-license-plate objects that are erroneously classified as license plates. True negatives (TNs) represent the count of non-license-plate objects that have been accurately classified as such. False negatives (FNs) represent the number of legitimate license plates that were overlooked. The tests were conducted by presenting the system with a variety of license plates and non-license-plate objects in different scenarios. The tests yielded true positives with a mean of 49.65, a minimum of 15, and a maximum of 90, indicating a high recognition rate on average, albeit with considerable variation. The data revealed a mean of 31.83 false positives, with a range of 2 to 82, suggesting that the system intermittently misclassified non-license-plate objects as license plates, potentially compromising the system's overall dependability. Consistently, there were no true negatives across all tests, which is to be expected in situations where detection is limited to license plates. False negatives, which averaged 16.31 and peaked at 60, draw attention to the situations in which valid license plates were overlooked, an essential consideration for enhancing the precision of the system.
The mean value of precision, which is calculated as the proportion of accurately identified license plates to the overall number of positive identifications, was 0.591, meaning that approximately 60% of the plates identified were recognized accurately on average. The maximum value recorded was 0.943, indicating that the system can achieve a high level of precision under certain conditions. However, further enhancements are required to minimize the occurrence of false positives. The mean recall value of 0.692, which quantifies the system's ability to identify every genuine plate, suggested that approximately 70% of the actual plates in the test scenarios could be detected, indicating that the system detects a significant portion of the license plates. Variability was evident in the recall values, which spanned from 0.250 to 1.000 depending on the test conditions.
As the harmonic mean of recall and precision, the F1 score is a single value that offers a balanced evaluation of the system's performance; in the context of license plate recognition systems, a higher F1 score indicates a more balanced and effective system. Although not explicitly stated in the table, its dependence on both metrics can be deduced: higher precision and recall values yield a higher F1 score. To ensure consistency throughout the testing process, each test was repeated 100 times.
In summary, the license plate recognition system demonstrated a noteworthy level of success in accurately identifying valid plates, thereby showcasing its resilient capabilities. However, it encountered obstacles in the form of false positives and false negatives. The precision and recall metrics indicate the system's effectiveness; nevertheless, there are particular circumstances that could be optimized to augment its overall performance. Additional investigation is needed to understand the circumstances that contribute to elevated rates of false positives and false negatives, thereby augmenting the overall precision and dependability of the system.
Figure 7 presents these findings and underscores their significance. The findings reveal that the real-time LPR system integrated with an SBC, such as a Raspberry Pi, achieves its peak performance at a viewing angle of 180 degrees. This angle, where the license plate is directly in front of the camera, provides the most accurate view, thus enhancing the system's license plate identification and detection capabilities. The significant drop in precision and recall at viewing angles of 135 and 210 degrees further emphasizes the viewing angle's critical role in the LPR system's accuracy and reliability.
These results underscore the potential of SBCs in image recognition tasks. The linear trend lines in the graph demonstrate that deviation from 180 degrees in viewing angle significantly decreases precision and recall, thus highlighting the importance of maintaining optimal camera positioning. Our statistical analysis provides further evidence that an SBC can efficiently handle complex image recognition tasks despite its limited computational capabilities, provided that the system configuration is appropriately optimized. The results suggest that an SBC can be a practical and cost-effective solution for license plate recognition tasks through meticulous configuration, such as maintaining an ideal viewing angle.
The mean precision and recall of the real-time LPR system at different distances, implemented on an SBC, are depicted in Figure 8. The mean recall for each distance is represented by the orange bars, whereas the mean precision is denoted by the blue bars, and dotted trend lines underscore overall performance trends. The chart illustrates a noteworthy finding: the optimal performance is observed at 5 m, with mean precision and recall values of around 0.607 and 0.823, respectively. This finding indicates that the SBC-based system successfully manages the trade-off between accurate license plate identification and comprehensive detection at the ideal distance, and it is critical to optimizing the system.
The chart’s linear trend lines underscore the significance of distance to system performance, illustrating a consistent decrease in both precision and recall with increasing distance. This phenomenon highlights the criticality of sustaining an ideal distance to optimize the performance of an LPR system based on SBC. Notwithstanding the restricted computational capabilities of SBCs, Raspberry Pi exhibits the potential to attain exceptional precision in LPR tasks by optimizing its system configuration. As mentioned earlier, the results underscore the significance of distance and physical configuration in optimizing the capabilities of SBCs for sophisticated image recognition tasks. SBCs must be configured appropriately, including maintaining an optimal distance, to function as dependable and cost-effective solutions for LPR systems in the real world.
The mean precision and recall of the real-time LPR system operating at various speeds on an SBC, such as Raspberry Pi, are illustrated in Figure 9. The mean precision is denoted by the blue bars, whereas the mean recall for each speed category (<5 km/h, <10 km/h, <15 km/h) is represented by the orange bars. The dotted trend lines depict overarching patterns in precision and recall with the progression of vehicle speed. The system attains its highest mean precision and recall values, approximately 0.607 and 0.823, respectively, at speeds below 5 km/h. This indicates that the SBC-based system is most effective at balancing comprehensive detection with accurate license plate identification at lower speeds.
The results emphasize the significance of vehicle velocity in determining the effectiveness of an SBC-based LPR system. As speed increases, there is a discernible decrease in both precision and recall. A slight decrease in precision is observed at speeds below 10 km/h, whereas recall maintains a relatively high value. Nevertheless, as speeds approach 15 km/h, precision and recall continue to decline, indicating that the system's performance deteriorates as the vehicle accelerates. This result is supported by the trend lines, which show precision and recall decreasing as speed increases. These results highlight the necessity for vehicle-speed-based system optimization. Notwithstanding the computational constraints of SBCs, the system attains elevated levels of precision while operating at reduced speeds. Through appropriate hardware optimization, these results show that SBCs can proficiently manage intricate image recognition tasks, yielding feasible and economical solutions for practical real-time LPR applications. Nevertheless, to preserve peak performance, it is imperative to consider the ramifications of vehicle velocity and to ensure that the system is configured to function efficiently under the anticipated circumstances.
Table 7 presents an analytical summary of the impact that angle, distance, and speed have on the precision and recall of an LPR system.
The table presented provides a summary of the outcomes of the analysis of variance (ANOVA) tests that were performed to ascertain the influence of different independent variables on the real-time LPR system’s performance metrics (precision and recall). Speed (kilometers per hour), angle (degrees), and distance (meters) are the independent variables that are examined. The dependent variable under investigation, the corresponding independent variable, the F-statistic, the p-value, and the statistical significance of the results are presented in each table row.
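A minimal sketch of such a one-way ANOVA is shown below, assuming the per-test results are available in a pandas DataFrame with columns for the factors and metrics; the file name is a hypothetical placeholder, and scipy's f_oneway is used as a stand-in for whichever statistical package was actually employed.

```python
# Minimal sketch of the one-way ANOVA described above; the CSV file name is a
# hypothetical placeholder, and "speed" is assumed to be encoded numerically
# (e.g., the upper limit of each speed band).
import pandas as pd
from scipy import stats

df = pd.read_csv("lpr_results_metrics.csv")

def anova_by_factor(frame: pd.DataFrame, factor: str, metric: str):
    """Group the metric by each level of the factor and run a one-way ANOVA."""
    groups = [group[metric].values for _, group in frame.groupby(factor)]
    return stats.f_oneway(*groups)

for factor in ("angle", "distance", "speed"):
    for metric in ("precision", "recall"):
        f_stat, p_value = anova_by_factor(df, factor, metric)
        print(f"{metric} ~ {factor}: F = {f_stat:.2f}, p = {p_value:.5f}")
```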
The impact of the license plate’s angle on precision, as indicated by the F-statistic of 10.25 and the p-value of 0.0001, is statistically significant. The F-statistic, serving as an indicator of the model’s overall significance, suggests that the license plate angle does indeed influence precision in a significant way. Similarly, recall is significantly affected by angle, as indicated by the F-statistic of 8.47 and the p-value of 0.0015. These results highlight the importance of angle in ensuring that the system accurately captures all license plates, a crucial insight for the development and optimization of real-time LPR systems.
Moreover, the distance between the camera and the license plate emerges as a pivotal performance factor for the system. The precision F-statistic of 12.56 and the p-value of 0.00001 for the distance variable indicate a highly significant effect. This finding suggests that specific distances can empower the camera to capture clear and detailed images, thereby boosting the system’s precision. Distance also significantly influences recall, as indicated by the F-statistic of 9.68 and the p-value of 0.0003. These findings reiterate the potential for substantial performance improvements by adjusting the distance at which the plates are captured, offering a promising avenue for system optimization.
Speed, which is classified as a numeric variable, exerts a substantial influence on the operational efficiency of the real-time LPR system. The system’s accuracy in identifying license plates is notably impacted by speed, as evidenced by an F-statistic of 7.89 and a p-value of 0.0021. This result suggests that the speed of vehicles is responsible for motion blur or other image distortions associated with velocity, which subsequently affect the system’s precision. Likewise, recall exhibits a substantial correlation with velocity, as evidenced by an F-statistic of 11.33 and a p-value of 0.00003; these values indicate that the system’s efficacy in accurately detecting all license plates is impacted by the speed at which vehicles travel.
The ANOVA results, presented with clarity in the table, strongly validate that the precision and recall of the LPR system are significantly influenced by angle, distance, and speed. These results, with their high statistical significance, not only carry weight but also have direct implications, suggesting that even small adjustments to these independent variables can lead to significant performance improvements for the SBC systems. This insight points to a promising direction for further investigation and advancement, potentially strengthening the real-time LPR system’s precision and dependability in practical applications.

5. Discussion

Our analysis, which focused on achieving a true positive rate of 70% or higher, has yielded significant insights into the performance of real-time LPR systems. This study demonstrates the system's effectiveness in correctly identifying over 70% of license plates, with practical implications for law enforcement, parking management, and other applications. The mean precision and recall values provide a comprehensive assessment, with higher precision indicating accurate identification and minimizing false positives. Improved recall confirms the system's capability to detect most real plates, ensuring dependability under favorable conditions. The study also highlights the potential of implementing LPR systems using SBCs like Raspberry Pi, showcasing its adaptability and cost-effectiveness. For instance, the Raspberry Pi consistently maintained a true positive rate exceeding 70% under diverse lighting conditions and license plate sizes, demonstrating its ability to manage complex image recognition tasks effectively despite its lower processing power compared to more costly systems. The adaptability and cost-effectiveness of SBCs make them viable for various applications where spatial and financial limitations are critical. This study successfully identified optimal configurations for real-time LPR using the Raspberry Pi 4 platform, contributing to the development of reliable and cost-effective LPR systems. Future work should address environmental variability and optimize algorithms for higher speeds and greater distances. The correlation between high true positive rates and enhanced precision and recall underscores the importance of optimal system configuration in overall performance, supporting the feasibility of using SBCs to develop effective LPR systems that maintain peak performance despite limited resources.

5.1. Theoretical Contributions

From a theoretical standpoint, the findings indicate that maintaining camera angles close to 180°, keeping vehicle speeds low, and placing cameras within 10 m of the target are crucial for optimal performance. These results support the research objectives and aid in developing more accurate and reliable LPR systems on low-cost platforms like the Raspberry Pi 4. Significant progress has been made in implementing real-time LPR systems within IoT frameworks using SBC technology, highlighting how LPR can enhance parking efficiency, traffic management, automated toll collection, and security protocols in smart cities. This research provides a detailed analysis of optimizing decision-making processes and adapting to changing urban environments through real-time data exchange in IoT infrastructures. Leveraging SBC technology, particularly the Raspberry Pi 4, for local data processing reduces latency and enhances system capabilities. This capability is vital for scenarios requiring real-time monitoring and immediate law enforcement actions, where centralized data processing is impractical due to time constraints. By optimizing resource utilization and system architecture within IoT ecosystems, the findings contribute to theoretical discussions on balancing on-device processing power with centralized data analysis. Additionally, this study advances machine learning approaches for improving image recognition accuracy under diverse operational conditions, addressing challenges such as variable lighting, varying vehicle speeds, and different license plate visibility angles, thereby enhancing the understanding of algorithmic behavior in complex visual data interpretation.

5.2. Practical Implications

The practical implementation of real-time LPR systems necessitates careful attention to several factors that directly influence their dependability and efficacy. Primary concerns are the system’s accuracy and dependability, which are critical for minimizing false positives and negatives to prevent legal and social repercussions from erroneous identification or omission. These systems must be able to adjust to varying environmental conditions, including weather scenarios and fluctuating light conditions, to maintain operational consistency. The efficacy of LPR systems also heavily depends on technology integration, with camera positioning and quality being vital for capturing clear, unobstructed images at high speeds from various angles. Equally important is the real-time processing capability, particularly for critical applications, such as law enforcement and traffic monitoring, which require resilient hardware and streamlined software to react promptly to unfolding events. Privacy and data security are paramount considerations from legal and ethical perspectives, necessitating rigorous security protocols to prevent unauthorized access and promote adherence to local privacy laws to maintain public confidence. Additionally, the cost-effectiveness of these systems is crucial, as the initial and ongoing operational investments, including hardware procurement, installation, upkeep, and data administration, must be justified by benefits, such as enhanced security, improved traffic management, and more effective law enforcement.

6. Conclusions

This study underscores the pivotal role of IoT in the evolution of LPR systems, marking a significant stride towards the establishment of advanced and integrated technological ecosystems in transportation and urban governance. This research not only showcases the practical viability of real-time LPR solutions enhanced with IoT using SBC technology but also signifies a substantial endeavor to enhance the role of SBC technology in fortifying security measures, streamlining operations, and simplifying routine tasks in an era when automated systems and connectivity integration are increasingly indispensable. The implications of this investigation transcend immediate outcomes, laying a robust foundation for future SBC technological advancements.
The Raspberry Pi 4, the key component in this study for real-time license plate detection, was evaluated against practical factors such as camera angles, object speed, and camera-to-object distances. The findings elucidate optimal configurations for real-world SBC applications, particularly those requiring robust and flexible surveillance. The study reveals that maintaining near-perpendicular camera angles optimizes license plate detection accuracy by minimizing distortion and occlusion, and that appropriate camera-to-object distances are crucial for image clarity and recognition effectiveness. Very short distances, although effective, may not always be feasible in practice. These insights underscore the need for well-tuned camera position setups to enhance accuracy and reliability on low-cost platforms like the Raspberry Pi 4.
Finally, the research indicates that real-time LPR performance on the Raspberry Pi 4 is optimized by minimizing speed variations, using near-perpendicular angles, and maintaining moderate distances. These configurations yield the sharpest images and the most reliable detection results, suggesting that these factors should be adjusted together rather than in isolation. The study also discusses the hardware constraints of the Raspberry Pi, such as processing speed and camera quality. Future studies on SBC technology for LPR should pursue advanced techniques such as motion deblurring, adaptive thresholding, and the integration of high-resolution cameras with infrared lighting to enhance image clarity. Deep learning models such as CNNs for feature extraction, environmental adaptation techniques, data augmentation, and synthetic data generation can bolster robustness and accuracy, while hardware acceleration, model optimization, efficient inference engines, parallel processing, and diverse testing environments can ensure real-time performance and continuous improvement. Moreover, the study underscores the crucial role of ethical considerations and public discourse in the responsible implementation and acceptance of surveillance technologies such as LPR systems.
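As a rough illustration of two of the enhancement techniques suggested above, the following sketch applies unsharp masking (a simple stand-in for motion deblurring) and adaptive thresholding with OpenCV before OCR; the parameter values are assumptions rather than settings validated in this study.

```python
# Illustrative preprocessing sketch: sharpening to counter mild motion blur
# and adaptive thresholding to cope with uneven illumination on the plate.
# Kernel sizes, weights, blockSize, and C are assumed example values.
import cv2
import numpy as np

def preprocess_plate(bgr_image: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)

    # Unsharp masking: subtract a blurred copy to emphasize character edges.
    blurred = cv2.GaussianBlur(gray, (0, 0), 3)
    sharpened = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)

    # Adaptive thresholding binarizes the plate despite uneven lighting.
    binary = cv2.adaptiveThreshold(
        sharpened, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 10)
    return binary
```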

Author Contributions

Conceptualization, P.N., M.R. and S.P.; methodology, P.N., M.R. and S.P.; software evaluation and modeling, P.N., M.R. and S.P.; validation, P.N., M.R. and S.P.; formal analysis, P.N. and M.R.; investigation, P.N., M.R. and S.P.; resources, P.N., M.R. and S.P.; data curation, P.N., M.R. and S.P.; writing—original draft preparation, P.N., M.R. and S.P.; writing—review and editing, P.N. and M.R.; visualization, P.N., M.R. and S.P.; supervision, P.N. and M.R.; project administration, P.N. and M.R.; funding acquisition, P.N. and S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data will be made available upon reasonable request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Rajebi, S.; Pedrammehr, S.; Mohajerpoor, R. A license plate recognition system with robustness against adverse environmental conditions using Hopfield’s neural network. Axioms 2023, 12, 424.
2. Al-batat, R.; Angelopoulou, A.; Premkumar, S.; Hemanth, J.; Kapetanios, E. An end-to-end automated license plate recognition system using YOLO based vehicle and license plate detection with vehicle classification. Sensors 2022, 22, 9477.
3. Lubna; Mufti, N.; Shah, S.A.A. Automatic number plate recognition: A detailed survey of relevant algorithms. Sensors 2021, 21, 3028.
4. Ghadage, S.S.; Khedkar, S.R. A review paper on automatic number plate recognition system using machine learning algorithms. Int. J. Eng. Res. Technol. 2019, 8, 12.
5. Rukhiran, M.; Wong-In, S.; Netinant, P. IoT-based biometric recognition systems in education for identity verification services: Quality assessment approach. IEEE Access 2023, 11, 22767–22787.
6. Rukhiran, M.; Sutanthavibul, C.; Boonsong, S.; Netinant, P. IoT-based mushroom cultivation system with solar renewable energy integration: Assessing the sustainable impact of the yield and quality. Sustainability 2023, 15, 13968.
7. Padmasiri, H.; Shashirangana, J.; Meedeniya, D.; Rana, O.; Perera, C. Automated license plate recognition for resource-constrained environments. Sensors 2022, 22, 1434.
8. Leng, J.; Chen, X.; Zhao, J.; Wang, C.; Zhu, J.; Yan, Y.; Zhao, J.; Shi, W.; Zhu, Z.; Jiang, X.; et al. A light vehicle license-plate-recognition system based on hybrid edge–cloud computing. Sensors 2023, 23, 8913.
9. Connie, L.; Kim, O.C.; Patricia, A. A review of automatic license plate recognition system in mobile-based platform. J. Telecommun. Electron. Comput. Eng. 2018, 10, 77–82.
10. Sung, J.-Y.; Yu, S.-B.; Korea, S.-H.P. Real-time automatic license plate recognition system using YOLOv4. In Proceedings of the 2020 IEEE International Conference on Consumer Electronics—Asia, Seoul, Republic of Korea, 1–3 November 2020.
11. Sultan, F.; Khan, K.; Shah, Y.A.; Shahzad, M.; Khan, U.; Mahmood, Z. Towards automatic license plate recognition in challenging conditions. Appl. Sci. 2023, 13, 3956.
12. Khan, M.M.; Ilyas, M.U.; Khan, I.R.; Alshomrani, S.M.; Rahardja, S. License plate recognition methods employing neural networks. IEEE Access 2023, 11, 73613–73646.
13. Abdel-Basset, M.; Hawash, H.; Chakrabortty, R.K.; Ryan, M.; Elhoseny, M.; Song, H. ST-DeepHAR: Deep learning model for human activity recognition in IoHT applications. IEEE Internet Things J. 2021, 8, 4969–4979.
14. Deshpande, M.; Veena, M.B.; Ferede, A.W. Auditory speech based alerting system for detecting dummy number plate via video processing data sets. Comput. Intell. Neurosci. 2022, 2022, 4423744.
15. Selmi, Z.; Halima, M.B.; Pal, U.; Alimi, M.A. DELP-DAR system for license plate detection and recognition. Pattern Recognit. Lett. 2020, 129, 213–223.
16. Castro-Zunti, R.D.; Yépez, J.; Ko, S.-B. License plate segmentation and recognition system using deep learning and OpenVINO. IET Intell. Transp. Syst. 2020, 14, 119–126.
17. Kundrotas, M.; Janutėnaitė-Bogdanienė, J.; Šešok, D. Two-step algorithm for license plate identification using deep neural networks. Appl. Sci. 2023, 13, 4902.
18. Weihong, W.; Jiaoyang, T. Research on license plate recognition algorithms based on deep learning in complex environment. IEEE Access 2020, 8, 91661–91675.
19. Tang, J.; Wan, L.; Schooling, J.; Zhao, P.; Chen, J.; Wei, S. Automatic number plate recognition (ANPR) in smart cities: A systematic review on technological advancements and application cases. Cities 2022, 129, 103833.
20. Xie, F.; Zhang, M.; Zhao, J.; Yang, J.; Liu, Y.; Yuan, X. A robust license plate detection and character recognition algorithm based on a combined feature extraction model and BPNN. J. Adv. Transp. 2018, 2018, 6737314.
21. Kumawat, K.; Jain, A.; Tiwari, N. Relevance of automatic number plate recognition systems in vehicle theft detection. Eng. Proc. 2023, 59, 185.
22. Ha, P.S.; Shakeri, M. License plate automatic recognition based on edge detection. In Proceedings of the 2016 Artificial Intelligence and Robotic, Qazvin, Iran, 9 April 2016.
23. Hezil, N.; Amrouche, A.; Bentrcia, Y. Vehicle license plate detection using morphological operations and deep learning. In Proceedings of the 2022 International Conference of Advanced Technology in Electronic and Electrical Engineering, M’sila, Algeria, 26–27 November 2022.
24. Yogheedha, K.; Nasir, A.S.A.; Jaafar, H.; Mamduh, S.M. Automatic vehicle license plate recognition system based on image processing and template matching approach. In Proceedings of the 2018 International Conference on Computational Approach in Smart Systems Design and Applications, Kuching, Malaysia, 15–17 August 2018.
25. Mahmood, Z.; Khan, K.; Khan, U.; Adil, S.H.; Ali, S.S.A.; Shahzad, M. Towards automatic license plate detection. Sensors 2022, 22, 1245.
26. Sharma, T.; Debaque, B.; Duclos, N.; Chehri, A.; Kinder, B.; Fortier, P. Deep learning-based object detection and scene perception under bad weather conditions. Electronics 2022, 11, 563.
27. Ahmed, A.A.; Ahmed, S. A real-time car towing management system using ML-powered automatic number plate recognition. Algorithms 2021, 14, 317.
28. Lee, Y.Y.; Abdul Halim, Z.; Ab Wahab, M.N. License plate detection using convolutional neural network–back to the basic with design of experiments. IEEE Access 2022, 10, 22577–22585.
29. Chow, V. Predicting auction price of vehicle license plate with deep recurrent neural network. Expert Syst. Appl. 2020, 142, 113008.
30. Yang, D.; Yang, L. A deep learning-based framework for vehicle license plate detection. Int. J. Adv. Comput. Sci. Appl. 2024, 15, 1009–1018.
31. Krocka, M.; Dakic, P.; Vranic, V. Automatic license plate recognition using OpenCV. In Proceedings of the 2022 12th International Conference on Advanced Computer Information Technologies, Ruzomberok, Slovakia, 26–28 September 2022.
32. Glasenapp, L.A.; Hoppe, A.F.; Wisintainer, M.A.; Sartori, A.; Stefenon, S.F. OCR applied for identification of vehicles with irregular documentation using IoT. Electronics 2023, 12, 1083.
33. Lin, H.; Zhao, J.; Li, S.; Qiu, G. License plate location method based on edge detection and mathematical morphology. In Proceedings of the 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference, Chongqing, China, 12–14 June 2020.
34. Shi, H.; Zhao, D. License plate localization in complex environments based on improved GrabCut algorithm. IEEE Access 2022, 10, 88495–88503.
35. Zhang, Y.; Qiu, M.; Ni, Y.; Wang, Q. A novel deep learning based number plate detect algorithm under dark lighting conditions. In Proceedings of the 2020 IEEE 20th International Conference on Communication Technology, Nanning, China, 28–31 October 2020.
36. Chalnewad, S.; Manjaramkar, A. Detection and classification of license plate by neural network classifier. In Proceedings of the 2023 International Conference on Intelligent Data Communication Technologies and Internet of Thing, Bengaluru, India, 5–7 January 2023.
37. Memon, J.; Sami, M.; Khan, R.A.; Uddin, M. Handwritten optical character recognition (OCR): A comprehensive systematic literature review (SLR). IEEE Access 2020, 8, 142642–142668.
38. Ohzeki, K.; Schneider, S.; Geigis, M. Leave a group out cross-validation (LAGOCV) to determine threshold for license plate detection in autonomous driving. Int. J. Adv. Comput. Sci. Appl. 2020, 17, 15–32.
39. Aqaileh, T.; Alkhateeb, F. Automatic Jordanian license plate detection and recognition system using deep learning techniques. J. Imaging 2023, 9, 201.
40. Khan, I.R.; Ali, S.T.A.; Siddiq, A.; Khan, M.M.; Ilyas, M.U.; Alshomrani, S.; Rahardja, S. Automatic license plate recognition in real-world traffic videos captured in unconstrained environment by a mobile camera. Electronics 2022, 11, 1408.
41. Ambrož, M. Raspberry Pi as a low-cost data acquisition system for human powered vehicles. Measurement 2017, 100, 7–18.
42. Kyaw, A.K. Low-cost computing using Raspberry Pi 2 Model B. J. Comput. 2018, 13, 287–299.
43. Long, M.M.; Diep, T.T.; Needs, S.H.; Ross, M.J.; Edwards, A.D. PiRamid: A compact Raspberry Pi imaging box to automate small-scale time-lapse digital analysis, suitable for laboratory and field use. HardwareX 2022, 12, e00377.
44. Khan, T. Ultra-low-power architecture for the detection and notification of wildfires using the internet of things. IoT 2023, 4, 1–26.
45. Rukhiran, M.; Phaokla, N.; Netinant, P. Adoption of environmental information chatbot services based on the internet of educational things in smart schools: Structural equation modeling approach. Sustainability 2022, 14, 15621.
46. Fakhar, S.; Saad, M.; Fauzan, A.; Affendi, R.; Aidil, M. Development of portable automatic number plate recognition (ANPR) system on Raspberry Pi. Int. J. Electr. Comput. Eng. 2019, 9, 1805.
47. Hamdi, A.; Chan, Y.K.; Koo, V.C. A new image enhancement and super resolution technique for license plate recognition. Heliyon 2021, 7, e08341.
48. Dalarmelina, N.D.V.; Teixeira, M.A.; Meneguette, R.I. A real-time automatic plate recognition system based on optical character recognition and wireless sensor networks for ITS. Sensors 2019, 20, 55.
49. Rukhiran, M.; Buaroong, S.; Netinant, P. Software development for educational information services using multilayering semantics adaptation. Int. J. Serv. Sci. Manag. Eng. Technol. 2022, 13, 1–27.
Figure 1. Processes of OCR image to text.
Figure 2. IoT-based real-time LPR framework using SBC technology.
Figure 3. Overall system architecture and design of the IoT-driven LPR system.
Figure 4. Research methodology with the camera settings at various angles.
Figure 5. Data collected from the LPR system on Raspberry Pi 4 in practical experiments of 180° angle from various distances: (a) 10 m, (b) 7 m, (c) 6 m, and (d) 5 m. (e) OCR result.
Figure 6. Data collected from the LPR system on Raspberry Pi 4 in practical experiments of 165° angle from various distances: (a) 10 m, (b) 8 m, (c) 5 m, and (d) 195° at 5 m. (e) OCR result.
Figure 7. Mean precision and recall for IoT-based LPR at various camera angles on Raspberry Pi.
Figure 8. Mean precision and recall for IoT-based LPR at various distances on Raspberry Pi.
Figure 9. Mean precision and recall for IoT-based LPR at various object speeds on Raspberry Pi.
Table 1. Benefits of Raspberry Pi in SBC technology for LPR systems.

| Benefit | Description |
|---|---|
| Cost-Effectiveness [5,41,42] | The affordability of the Raspberry Pi renders the device a cost-effective selection for LPR projects. |
| Compact Size [43] | The small size of the Raspberry Pi enables easy integration into a variety of settings without space issues. |
| Low Power Consumption [4,6,44] | The device is energy-efficient and ideal for continuous operation without high power costs. |
| Real-Time Processing [3,45] | Capable of local data processing, thus reducing response time and supporting faster decision making. |
| Flexibility [46] | Supports various programming languages and frameworks while allowing for tailored solutions. |
Table 2. Factors impacting the accuracy of LPR systems.

| Factor | Impact on LPR Accuracy |
|---|---|
| Image Quality [47] | Higher resolution and clarity enable more accurate recognition. |
| Environmental Conditions [4,10,47] | Lighting, weather, and obstructions can affect image quality. |
| Camera Angle and Distance [4,10] | Incorrect angles or excessive distance can blur or obscure plates. |
| Speed of Vehicles [7,8] | Higher speeds may result in blurred images, thus reducing accuracy. |
| Algorithm’s Efficiency [15,16,48] | The effectiveness of algorithms determines recognition success. |
Table 3. Summary of the key hardware and software components used in this study.

| Component | Specification | Description |
|---|---|---|
| Hardware: Raspberry Pi 4 | 8 GB RAM | Chosen for its strong processing capabilities and ample memory for real-time data processing; it serves as the central processing unit, overseeing the simultaneous management of multiple data streams. |
| Hardware: Logitech C922 Webcam | Full HD 1080p, autofocus | Enables high-resolution video recording, which is essential for capturing sharp and intricate images of license plates. The autofocus feature guarantees sharpness across different distances and lighting situations. |
| Operating System: Raspbian OS | 64-bit version | Optimized for Raspberry Pi hardware, supports efficient multitasking, and is compatible with numerous libraries for image processing and OCR. |
| Libraries and Tools: Pytesseract | OCR for diverse text recognition | Identifies and analyzes textual content extracted from images and is specifically tailored to handle the intricacies of Thai script found on license plates, maintaining character recognition precision despite changing environmental circumstances. |
| Libraries and Tools: OpenCV | Image processing and machine learning | Used for the initial processing of video and image data, providing tools for object detection, including license plates, and for preprocessing images to enhance OCR accuracy. |
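To make the interplay of these components concrete, the following minimal sketch shows how a capture-and-OCR loop could be assembled from the hardware and software listed in Table 3. It is an illustration rather than the study’s full pipeline; the Tesseract page-segmentation setting and the assumption that the Thai and English traineddata files are installed are ours.

```python
# Minimal capture-and-OCR sketch for the components in Table 3
# (Raspberry Pi 4 + Logitech C922 + OpenCV + pytesseract).
import cv2
import pytesseract

cap = cv2.VideoCapture(0)                      # C922 webcam on the Pi
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)        # Full HD capture
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Thai + English traineddata covers Thai plates with digits;
    # --psm 7 treats the crop as a single text line (assumed setting).
    text = pytesseract.image_to_string(gray, lang="tha+eng", config="--psm 7")
    if text.strip():
        print("Detected plate text:", text.strip())
    cv2.imshow("LPR preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```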
Table 4. Performance metrics for IoT-based LPR on Raspberry Pi under varying angles, speeds, and distances.

| Angle (Degrees) | Speed (km/h) | Distance (Meters) | True Positives (TP) | False Positives (FP) | True Negatives (TN) | False Negatives (FN) | Tests | Precision | Recall | F1 Score |
|---|---|---|---|---|---|---|---|---|---|---|
| 180 | <5 | 5 | 83 | 12 | 0 | 5 | 100 | 0.874 | 0.943 | 90.71% |
| 180 | <10 | 5 | 74 | 18 | 0 | 8 | 100 | 0.804 | 0.902 | 85.06% |
| 180 | <15 | 5 | 53 | 34 | 0 | 13 | 100 | 0.609 | 0.803 | 69.28% |
| 165 | <5 | 5 | 78 | 17 | 0 | 5 | 100 | 0.821 | 0.940 | 87.64% |
| 165 | <10 | 5 | 72 | 19 | 0 | 9 | 100 | 0.791 | 0.889 | 83.72% |
| 165 | <15 | 5 | 66 | 22 | 0 | 12 | 100 | 0.750 | 0.846 | 79.52% |
| 150 | <5 | 5 | 61 | 31 | 0 | 8 | 100 | 0.663 | 0.884 | 75.78% |
| 150 | <10 | 5 | 58 | 33 | 0 | 9 | 100 | 0.637 | 0.866 | 73.42% |
| 150 | <15 | 5 | 57 | 30 | 0 | 13 | 100 | 0.655 | 0.814 | 72.61% |
| 135 | <5 | 5 | 59 | 32 | 0 | 9 | 100 | 0.648 | 0.868 | 74.21% |
| 135 | <10 | 5 | 48 | 40 | 0 | 12 | 100 | 0.545 | 0.800 | 64.86% |
| 135 | <15 | 5 | 47 | 40 | 0 | 13 | 100 | 0.540 | 0.783 | 63.95% |
| 195 | <5 | 5 | 46 | 48 | 0 | 6 | 100 | 0.489 | 0.885 | 63.01% |
| 195 | <10 | 5 | 43 | 48 | 0 | 9 | 100 | 0.473 | 0.827 | 60.14% |
| 195 | <15 | 5 | 41 | 46 | 0 | 13 | 100 | 0.471 | 0.759 | 58.16% |
| 210 | <5 | 5 | 44 | 49 | 0 | 7 | 100 | 0.473 | 0.863 | 61.11% |
| 210 | <10 | 5 | 41 | 50 | 0 | 9 | 100 | 0.451 | 0.820 | 58.16% |
| 210 | <15 | 5 | 40 | 50 | 0 | 10 | 100 | 0.444 | 0.800 | 57.14% |
| 225 | <5 | 5 | 39 | 52 | 0 | 9 | 100 | 0.429 | 0.813 | 56.12% |
| 225 | <10 | 5 | 38 | 55 | 0 | 7 | 100 | 0.409 | 0.844 | 55.07% |
| 225 | <15 | 5 | 37 | 54 | 0 | 9 | 100 | 0.407 | 0.804 | 54.01% |
| 225 | <5 | 10 | 39 | 52 | 0 | 9 | 100 | 0.429 | 0.813 | 56.12% |
| 225 | <10 | 10 | 38 | 55 | 0 | 7 | 100 | 0.409 | 0.844 | 55.07% |
| 225 | <15 | 10 | 37 | 54 | 0 | 9 | 100 | 0.407 | 0.804 | 54.01% |
Table 5. Descriptive statistics of key performance indicators based on confusion metrics.

| Statistic | Angle (Degrees) | Distance (Meters) | True Positives (TP) | False Positives (FP) | True Negatives (TN) | False Negatives (FN) | Tests | Precision | Recall |
|---|---|---|---|---|---|---|---|---|---|
| Count | 126 | 126 | 126 | 126 | 126 | 126 | 126 | 126 | 126 |
| Mean | 172.62 | 8.83 | 49.65 | 31.83 | 0 | 16.31 | 100 | 0.591 | 0.692 |
| SD | 27.50 | 2.15 | 18.52 | 16.76 | 0 | 10.52 | 0 | 0.209 | 0.165 |
| Min | 135 | 5 | 15 | 2 | 0 | 0 | 100 | 0.091 | 0.250 |
| 25th | 150 | 7 | 37 | 18 | 0 | 10 | 100 | 0.433 | 0.585 |
| Median | 180 | 10 | 48 | 28 | 0 | 15 | 100 | 0.588 | 0.697 |
| 75th | 180 | 10 | 63 | 45 | 0 | 23 | 100 | 0.747 | 0.815 |
| Max | 225 | 15 | 90 | 82 | 0 | 60 | 100 | 0.943 | 1.000 |
Table 6. Results of factors affecting the performance of IoT-based LPR on Raspberry Pi.

| Factor | Minimum Value | Maximum Value | Optimal Value | Impact on Performance |
|---|---|---|---|---|
| Camera angle (degrees) | 135 | 225 | 180 | Angles closer to 180 degrees yield higher accuracy by minimizing perspective distortion. |
| Vehicle speed (km/h) | 5 | 15 | ≤10 | Lower speeds enhance image clarity and reduce motion blur, thus improving recognition accuracy. |
| Distance (meters) | 5 | 10 | ≤10 | Shorter distances provide clearer images, which are crucial for high-resolution character recognition. |
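As an illustrative convenience (not part of the study), the recommended ranges in Table 6 can be encoded as a simple deployment check; the ±15° tolerance around the optimal 180° angle is our assumption, while the speed and distance thresholds follow the table.

```python
# Hypothetical helper derived from the recommended operating ranges in Table 6.
def is_recommended_setup(angle_deg: float, speed_kmh: float, distance_m: float) -> bool:
    """Return True when a deployment falls inside the ranges Table 6 reports as optimal."""
    near_perpendicular = abs(angle_deg - 180) <= 15   # assumed tolerance around 180 degrees
    slow_enough = speed_kmh <= 10                     # <=10 km/h per Table 6
    close_enough = distance_m <= 10                   # <=10 m per Table 6
    return near_perpendicular and slow_enough and close_enough

print(is_recommended_setup(180, 8, 7))    # True:  within all recommended ranges
print(is_recommended_setup(135, 15, 10))  # False: oblique angle and higher speed
```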
Table 7. ANOVA results of angle, distance, and speed on precision and recall for IoT-based LPR on Raspberry Pi.

| Dependent Variable | Independent Variable | F-Statistic | p-Value | Statistical Significance |
|---|---|---|---|---|
| Precision | Angle (degrees) | 10.25 | 0.0001 | Significant |
| Recall | Angle (degrees) | 8.47 | 0.0015 | Significant |
| Precision | Distance (meters) | 12.56 | 0.00001 | Significant |
| Recall | Distance (meters) | 9.68 | 0.0003 | Significant |
| Precision | Speed (numeric) | 7.89 | 0.0021 | Significant |
| Recall | Speed (numeric) | 11.33 | 0.00003 | Significant |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
