Resource-Efficient Design and Implementation of Real-Time Parking Monitoring System with Edge Device
Abstract
1. Introduction
- Modular design of the proposed system: The proposed system comprises three general modules that are compatible with all AI-based architectures rather than being tied to a specific use case. The first module, the IP camera frame capture thread, provides stable and reliable image capture and quality management through FFmpeg-based data processing, ensuring real-time performance. The second module uses an AI model to determine whether each parking space is occupied by a vehicle; it can determine the status of multiple parking spaces from a single image and is compatible with various AI models, ensuring flexibility and scalability. The last module transmits data to the central server only when the recognized current state of a parking space differs from its previous state, thereby reducing the network load and minimizing transmission costs.
- Efficient data processing and reduced computational load: We devised a novel dual-trigger algorithm that combines a parking space mask trigger with a periodic trigger to use computing resources efficiently. The mask trigger detects parking space status changes sensitively in real time, reducing the frequency of AI model calls and thereby the computational load on the edge device.
- Field verification and performance evaluation: The proposed system was operated for 4 months in an actual parking lot with a total of 12 parking spaces, exhibiting stable performance in various scenarios.
- Economical and scalable solution: The aforementioned contributions have significant implications in real-world parking management scenarios, demonstrating the economic effectiveness and practicality of the system. The proposed system, built using only edge devices and IP cameras, has remarkably reduced initial installation costs and can be deployed faster than traditional sensor-based solutions, demonstrating real scalability for smart city implementations.
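The change-based transmission logic of the third module above can be sketched in a few lines; the function and variable names here are illustrative, not taken from the paper:

```python
def diff_states(prev, curr):
    """Return only the parking spaces whose occupancy changed since the last check."""
    return {sid: occ for sid, occ in curr.items() if prev.get(sid) != occ}

# Example: three spaces, two of which changed state since the previous decision.
prev = {"A1": 0, "A2": 1, "A3": 1}
curr = {"A1": 1, "A2": 1, "A3": 0}
changed = diff_states(prev, curr)
# Only `changed` would be transmitted to the central server,
# which is what keeps network load and transmission costs low.
```

Transmitting the delta rather than the full state each cycle is what makes the network cost proportional to parking activity instead of to the polling rate.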
2. Related Research
2.1. Parking Occupancy Detection
Reference | Region | Purpose | Dataset | Method | Results |
---|---|---|---|---|---|
[15] | Brazil | Detecting parking occupancy using texture descriptors (local binary patterns and local phase quantization) | RGB images (Data: 695,899 images captured from digital camera; PKLot) | Local binary pattern (LBP) and local phase quantization (LPQ) | Accuracy of 89% |
[16] | Italy | Distributed parking lot occupancy detection using color histograms and linear support vector machines (SVMs) | Normalized hue histograms extracted from labeled data | Hue histogram, binary linear SVM classifier | Accuracy of 87% |
[17] | USA | Efficient video processing algorithm-based vehicle occupancy detection for video-based real-time parking occupancy detection system | RGB images (Data: 1800 frames captured from a digital camera in various environmental conditions) | Background estimation and subtraction, motion detection, occlusion detection, and localization | Accuracy of 93.9% |
[18] | Europe | Use a trained CNN for each parking space to determine occupancy status | RGB images (Data: PKLot, 12,584 images captured from a digital camera; CNRPark) | CNN (mAlexNet and mLeNet) | Accuracy of 89.9% |
[19] | Indonesia | Real-time parking occupancy detection using only IP cameras using multiple deep learning architectures | RGB images (Data: CNRPark-Ext) | CNN (LeNet, AlexNet, mLeNet, and mAlexNet) | Accuracy of 93.15% |
[20] | Australia | Identify parking space occupancy using features extracted from pre-trained deep CNN and detect parking occupancy using the SVM classifier | RGB images (Data: PKLot, 24,300 images captured from DSLR camera; Barry Street) | Pre-trained deep CNN and SVM | Accuracy of 96.7% |
[23] | Czech Republic | Distributed a wireless camera system for determining parking space occupancy based on information from multiple cameras | RGB images (Data: about 1000 images collected in different daytimes and weather conditions) | Histogram of gradient (HOG)-based classifier and SVM | Accuracy of 91% |
[24] | China | Using edge computing for parking occupancy detection using real-time video feeds | RGB images (Data: Pascal VOC, MIO-TCD) | SSD, background (BG) modeling detector, SORT, and fusion | Accuracy of 95.6% |
[27] | Pakistan | Predicting parking locations using the deep extreme learning machine (DELM) approach to determine appropriate parking zones for drivers | RGB images (Data: 34,718 images from the PKLot) | DELM | Accuracy of 91.25% |
[28] | Sweden | Real-time vehicle occupancy measurement using thermal imaging cameras and deep learning technology | Thermal images (Data: 600 frames captured using a thermal camera based on motion detection in various environmental conditions) | YOLOv2, YOLO-Conv, GoogleNet, ResNet18, and ResNet50 | Accuracy of 96.16% |
2.2. Edge Device-Based Object Detection
3. Proposed System Design
3.1. IP Camera Frame Capture Thread
3.2. Trigger Algorithm
Algorithm 1: Parking Space Mask Trigger Process
Require: Streaming frame F, parking spaces P = {p_1, …, p_n}, user-defined threshold T, stabilization frame count N
1. Input: streaming frame F
2. Initial setup: configure parking space masks based on P
3. Step 1: Verification queue
4. For each parking space p_i (i = 1, …, n):
5.   For each mask m_ij (j = 1, …, k):
6.     v_ij = 1 if the measured change in m_ij > T, else v_ij = 0
7. Step 2: Stabilization queue
8. For each parking space p_i:
9.   For each mask m_ij:
10.    s_ij = 1 if v_ij = 1 for N consecutive frames, else s_ij = 0
11. Step 3: Final trigger decision
12. For each parking space p_i:
13.    d_i = 1 if any s_ij = 1, else d_i = 0
14. Return D = {d_1, …, d_n}
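A minimal Python sketch of the two-stage trigger above for a single mask, assuming the per-frame input is a precomputed change ratio for that mask (the class name, default threshold, and queue length are illustrative, not from the paper):

```python
from collections import deque

class MaskTrigger:
    """Two-stage mask trigger: per-frame verification followed by a
    stabilization queue that must fill before the trigger fires."""

    def __init__(self, threshold=0.1, stabilization_frames=5):
        self.threshold = threshold                        # verification threshold T
        self.history = deque(maxlen=stabilization_frames) # stabilization queue (N frames)

    def update(self, change_ratio):
        # Step 1 (verification): flag the frame if the mask's change
        # measure exceeds the threshold.
        self.history.append(1 if change_ratio > self.threshold else 0)
        # Steps 2-3 (stabilization + decision): fire only once the flag
        # has persisted for the full stabilization window.
        return int(len(self.history) == self.history.maxlen and all(self.history))
```

The stabilization queue is what suppresses one-frame flicker (passing pedestrians, exposure changes) so that the AI model is only invoked on sustained changes.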
Algorithm 2: Periodic Trigger Process
Require: Periodic trigger interval P
1. Initialize: t_last ← 0, d ← 0
2. Repeat for every timestamp t:
3.   if t − t_last ≥ P:
4.     d ← 1  ▷ Set trigger decision to active
5.     t_last ← t  ▷ Update the last trigger timestamp
6.   else:
7.     d ← 0
8. Return d
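The periodic trigger translates almost directly into code; the class and method names below are illustrative:

```python
class PeriodicTrigger:
    """Fires whenever the configured interval has elapsed since the last trigger."""

    def __init__(self, interval):
        self.interval = interval  # periodic trigger interval P
        self.last = 0.0           # last trigger timestamp, initialized to 0

    def check(self, now):
        # Fire when the interval has elapsed; otherwise stay inactive.
        if now - self.last >= self.interval:
            self.last = now       # update the last trigger timestamp
            return 1              # trigger decision active
        return 0
```

In the dual-trigger scheme, this periodic path serves as a safety net that guarantees a fresh AI decision even when the mask trigger misses a change.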
3.3. Parking Decision Module
Algorithm 3: Parking Decision Process
Require: Triggered image I, parking spaces P = {p_1, …, p_n}, overlap threshold T
1. Input: triggered image I, parking spaces P
2. Detection: use the AI model to detect all bounding boxes B = {b_1, …, b_m} in I
3. For each parking space p_i (i = 1, …, n):
4.   For each bounding box b_j (j = 1, …, m):
5.     if overlap(p_i, b_j) > T:
6.       o_i ← 1  (occupied)
7.     else:
8.       o_i ← 0  (vacant)
9. Return O = {o_1, …, o_n}
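The overlap test in Algorithm 3 can be sketched with a standard intersection-over-union (IoU) check. The paper does not specify the exact overlap measure, so IoU is an assumption here, and all names are illustrative:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def decide_occupancy(spaces, detections, threshold=0.5):
    """Mark each space occupied (1) if any detected box overlaps it above threshold."""
    return {sid: int(any(iou(box, d) > threshold for d in detections))
            for sid, box in spaces.items()}
```

For example, a space at (0, 0, 10, 10) with a detection at (1, 1, 9, 9) would be marked occupied, while a space with no overlapping detection stays vacant.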
4. Experimental Setting
5. Results and Discussion
5.1. Edge Device-Based AI Model Performance Evaluation
5.2. Trigger Algorithm Performance Evaluation
5.3. Field Validation with Dual Trigger and SSD-MobileNetv2
5.4. Comparison with Other Research
6. Conclusions
- Enhancing Computational Efficiency: As the number of cameras and data volume increase, ensuring real-time processing becomes crucial. This can be achieved through hardware performance enhancement, such as utilizing more powerful AI accelerators or deploying multiple edge devices for distributed processing. Additionally, model optimization techniques, including ONNX conversion and FP16 quantization, can reduce computational load while maintaining high detection accuracy.
- Vehicle-Based Detection Instead of License Plate Recognition: The current system relies on license plate detection, which requires cameras to be positioned at angles where plates are visible. However, an alternative approach is vehicle-based detection using a top-down perspective, allowing a single camera to monitor multiple parking spaces simultaneously. This method eliminates the dependency on license plate visibility, making the system more scalable and flexible. This approach is currently being explored in ongoing research on outdoor parking monitoring, utilizing top-view cameras to detect vehicles based on their position and movement patterns.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Hedges & Company. How Many Cars Are There in the World? Hedges & Company Blog. Available online: https://www.whichcar.com.au/news/how-many-cars-are-there-in-the-world (accessed on 31 December 2024).
- Towne Park. Parking Statistics. Towne Park. Available online: https://www.townepark.com/parking-statistics/ (accessed on 31 December 2024).
- Cao, K.; Liu, Y.; Meng, G.; Sun, Q. An overview on edge computing research. IEEE Access 2020, 8, 85714–85728. [Google Scholar] [CrossRef]
- Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge computing: Vision and challenges. IEEE Internet Things J. 2016, 3, 637–646. [Google Scholar] [CrossRef]
- Yang, P.A.; Hu, X.; Li, R.; Zhou, Z.; Gui, Y.; Sun, R.; Wu, D.; Wang, X.; Bian, X. Flexible magnetoelectric sensors with enhanced output performance and response time for parking spaces detection systems. Sens. Actuators A Phys. 2025, 382, 116161. [Google Scholar] [CrossRef]
- Allbadi, Y.; Shehab, J.N.; Jasim, M.M. The smart parking system using ultrasonic control sensors. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2021; Volume 1076, p. 012064. [Google Scholar]
- Al-Turjman, F.; Malekloo, A. Smart parking in IoT-enabled cities: A survey. Sustain. Cities Soc. 2019, 49, 101608. [Google Scholar] [CrossRef]
- Lin, T.; Rivano, H.; Le Mouël, F. A survey of smart parking solutions. IEEE Trans. Intell. Transp. Syst. 2017, 18, 3229–3253. [Google Scholar] [CrossRef]
- Zhang, Z.; Li, X.; Yuan, H.; Yu, F. A street parking system using wireless sensor networks. Int. J. Distrib. Sensor Netw. 2013, 9, 107975. [Google Scholar]
- Ratti, S.A.; Pirzada, N.; Shah, S.M.A.; Naveed, A. Intelligent Car Parking System Using WSN. In Proceedings of the 2023 Global Conference on Wireless and Optical Technologies (GCWOT), Malaga, Spain, 24–27 January 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–9. [Google Scholar]
- Zhang, Z.; Tao, M.; Yuan, H. A parking occupancy detection algorithm based on AMR sensor. IEEE Sens. J. 2015, 15, 1261–1269. [Google Scholar] [CrossRef]
- Jeon, Y.; Ju, H.-I.; Yoon, S. Design of an LPWAN communication module based on secure element for smart parking application. In Proceedings of the IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 12–14 January 2018; pp. 1–2. [Google Scholar] [CrossRef]
- Lou, L.; Zhang, J.; Xiong, Y.; Jin, Y. An improved roadside parking space occupancy detection method based on magnetic sensors and wireless signal strength. Sensors 2019, 19, 2348. [Google Scholar] [CrossRef]
- Yamada, S.; Watanabe, Y.; Kanamori, R.; Sato, K.; Takada, H. Estimation method of parking space conditions using multiple 3D-LiDARs. Int. J. ITS Res. 2022, 20, 422–432. [Google Scholar] [CrossRef]
- De Almeida, P.R.; Oliveira, L.S.; Britto, A.S., Jr.; Silva, E.J., Jr.; Koerich, A.L. PKLot–A robust dataset for parking lot classification. Expert Syst. Appl. 2015, 42, 4937–4949. [Google Scholar] [CrossRef]
- Baroffio, L.; Bondi, L.; Cesana, M.; Redondi, A.E.; Tagliasacchi, M. A visual sensor network for parking lot occupancy detection in smart cities. In Proceedings of the IEEE 2nd World Forum Internet Things (WF-IoT), Milan, Italy, 14–16 December 2015; pp. 745–750. [Google Scholar] [CrossRef]
- Bulan, O.; Loce, R.P.; Wu, W.; Wang, Y.; Bernal, E.A.; Fan, Z. Video-based real-time on-street parking occupancy detection system. J. Electron. Imaging 2013, 22, 041109. [Google Scholar] [CrossRef]
- Amato, G.; Carrara, F.; Falchi, F.; Gennaro, C. Vairo Car parking occupancy detection using smart camera networks and deep learning. In Proceedings of the 2016 IEEE Symposium on Computers and Communication, Messina, Italy, 27–30 June 2016; pp. 1212–1217. [Google Scholar] [CrossRef]
- Farley, A.; Ham, H. Real time IP camera parking occupancy detection using deep learning. Procedia Comput. Sci. 2021, 179, 606–614. [Google Scholar] [CrossRef]
- Acharya, D.; Yan, W.; Khoshelham, K. Real-time image-based parking occupancy detection using deep learning. Res. Locate 2018, 4, 33–40. [Google Scholar]
- Xie, Z.; Wei, X. Automatic parking space detection system based on improved YOLO algorithm. In Proceedings of the 2021 2nd International Conference on Computer Science and Management Technology (ICCSMT), Shanghai, China, 12–14 November 2021; pp. 279–285. [Google Scholar]
- Carrasco, D.P.; Rashwan, H.A.; García, M.Á.; Puig, D. T-YOLO: Tiny Vehicle Detection Based on YOLO and Multi-Scale Convolutional Neural Networks. IEEE Access 2023, 11, 22430–22440. [Google Scholar]
- Vítek, S.; Melničuk, P. A distributed wireless camera system for the management of parking spaces. Sensors 2017, 18, 69. [Google Scholar] [CrossRef] [PubMed]
- Ke, R.; Zhuang, Y.; Pu, Z.; Wang, Y. A smart, efficient, and reliable parking surveillance system with edge artificial intelligence on IoT devices. IEEE Trans. Intell. Transp. Syst. 2020, 22, 4962–4974. [Google Scholar] [CrossRef]
- Falaschetti, L.; Manoni, L.; Palma, L.; Pierleoni, P.; Turchetti, C. Embedded Real-Time Vehicle and Pedestrian Detection Using a Compressed Tiny YOLO v3 Architecture. IEEE Trans. Intell. Transp. Syst. 2024, 25, 19399–19414. [Google Scholar]
- Ming, P.Y.K.; Khan, N.A.; Asirvatham, D.A.; Tayyab, M.; Balakrishnan, S.A.; Kumar, D. Detecting Street Parking Occupancy Using Image Recognition with Yolo. In Proceedings of the 2024 International Conference on Emerging Trends in Networks and Computer Communications (ETNCC), Windhoek, Namibia, 23–25 July 2024; pp. 1–7. [Google Scholar]
- Siddiqui, S.Y.; Khan, M.A.; Abbas, S.; Khan, F. Smart occupancy detection for road traffic parking using deep extreme learning machine. J. King Saud Univ.-Comp. Inf. Sci. 2022, 34, 727–733. [Google Scholar] [CrossRef]
- Paidi, V.; Fleyeh, H.; Nyberg, R.G. Deep learning-based vehicle occupancy detection in an open parking lot using thermal camera. IET Intell. Transp. Syst. 2020, 14, 1295–1302. [Google Scholar] [CrossRef]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
- Ren, S. Faster R-CNN: Towards real-time object detection with region proposal networks. arXiv 2015, arXiv:1506.01497. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the European Conference Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar] [CrossRef]
- Alqahtani, D.K.; Cheema, M.A.; Toosi, A.N. Benchmarking deep learning models for object detection on edge computing devices. In Proceedings of the International Conference on Service-Oriented Computing, Tunis, Tunisia, 3–6 December 2024; Springer Nature: Singapore, 2024; pp. 142–150. [Google Scholar]
- Tensorflow Hub. Available online: https://www.kaggle.com/models/tensorflow/ssd-mobilenet-v2/tensorFlow2/ssd-mobilenet-v2 (accessed on 22 December 2024).
- Park, S.W.; Park, Y.J.; Choi, H.W.; Ha, S.H.; Do, Y.S. Real-time Object Detection Model for Raspberry Pi. In Proceedings of the Annual Conference of KIPS, Gwangju, Republic of Korea, 31 October–2 November 2024; Korea Information Processing Society: Seoul, Republic of Korea, 2024; pp. 944–945. [Google Scholar]
- Ling, X.; Sheng, J.; Baiocchi, O.; Liu, X.; Tolentino, M.E. Identifying parking spaces & detecting occupancy using vision-based IoT devices. In Proceedings of the Global Internet Things Summit (GIoTS), Geneva, Switzerland, 6–9 June 2017; pp. 1–6. [Google Scholar] [CrossRef]
- Nieto, R.M.; García-Martín, Á.; Hauptmann, A.G.; Martínez, J.M. Automatic vacant parking places management system using multicamera vehicle detection. IEEE Trans. Intell. Transp. Syst. 2019, 20, 1069–1080. [Google Scholar] [CrossRef]
- Nguyen, T.; Tran, T.; Mai, T.; Le, H.; Le, C.; Pham, D.; Phung, K.H. An adaptive vision-based outdoor car parking lot monitoring system. In Proceedings of the 2020 IEEE Eighth International Conference on Communications and Electronics (ICCE), Phu Quoc Island, Vietnam, 13–15 January 2021; pp. 445–450. [Google Scholar] [CrossRef]
Item | Specifications |
---|---|
CPU | Quad-core Cortex-A73/Dual-core Cortex-A53 |
RAM | 4 GB LPDDR4-3200 SDRAM |
GPU | Mali-G52 GPU |
Storage | SD Card 32 GB |
OS | Debian-installer-11-netboot-amd64 (20210731 + deb11u10) |
Size | 90 mm × 90 mm × 17 mm |
Weight | Approximately 132 g |
Model | Inference Time (s/frame) | Accuracy (%) | Precision (%) | Recall (%) | F1-Score |
---|---|---|---|---|---|
YOLOv8n | 1.58 | 99.97 | 100 | 99.94 | 0.9997 |
YOLOv8n (ONNX + FP16) | 1.88 | 99.95 | 99.95 | 99.94 | 0.9994 |
SSD-MobileNet v2 | 0.32 | 99.65 | 99.73 | 99.72 | 0.9972 |
Trigger Type | Scenario | Total Triggers | TP | FP | FN | Recall (%) | CPU Usage (%) | Memory Usage (%) |
---|---|---|---|---|---|---|---|---|
Periodic trigger | Weekend | 34,560 | 10 | 34,550 | 0 | 100 | 69.21 | 36.10 |
Periodic trigger | Weekday | 34,560 | 210 | 34,350 | 0 | 100 | 71.13 | 39.44 |
Parking space mask trigger | Weekend | 32 | 10 | 22 | 0 | 100 | 42.98 | 56.20 |
Parking space mask trigger | Weekday | 548 | 207 | 338 | 3 | 98.57 | 43.62 | 56.78 |
Dual trigger | Weekend | 608 | 10 | 598 | 0 | 100 | 55.16 | 55.93 |
Dual trigger | Weekday | 1124 | 210 | 914 | 0 | 100 | 59.05 | 56.57 |
Research Work | Ling et al. 2017 [36] | Nieto et al. 2019 [37] | Ke et al. 2020 [24] | Nguyen et al. 2021 [38] | This Study |
---|---|---|---|---|---|
System input | Single video | Multiple videos (IP camera) | Single video | Single video | Multiple videos (IP camera) |
Development environment | IoT devices | Desktop | IoT devices | IoT devices | IoT devices |
Pipeline logic | Classification | Detection | Detection | Classification | Detection |
Applied algorithms | Haar and F-test | Faster R-CNN and fusion | SSD, BG, SORT, and fusion | mAlexNet | Trigger algorithm and SSD-MobileNet v2 |
Algorithm operation cycle | NA | NA | NA | Every 30 s | When trigger occurs |
Training data | 469 frames | 6616 frames | 127,125 frames | 11,760 patches | 137,550 frames |
Validation data | 90 detections | 2200 images | 3-month real-world validation and 10 parking lots (outdoor) + 6 parking lots (indoor) | Three-day real-world validation and 24 parking lots (outdoor) | 4-month real-world validation and 12 parking spaces (indoor) |
Test environment | Outdoor | Outdoor | Outdoor and indoor | Outdoor | Indoor |
Image processing speed | 1 frame per 5 s | NA | 1 frame per 1 s | 1 frame per 0.743 s | 1 frame per 0.32 s |
System accuracy | 91% | >91% | 95.6% | >97% | 98.48% |
Kim, J.; Jeong, I.; Jung, J.; Cho, J. Resource-Efficient Design and Implementation of Real-Time Parking Monitoring System with Edge Device. Sensors 2025, 25, 2181. https://doi.org/10.3390/s25072181