Artificial Vision System for Autonomous Mobile Platform Used in Intelligent and Flexible Indoor Environment Inspection
Abstract
1. Introduction
1.1. Automated Indoor Inspection
1.2. Autonomous Indoor Inspection Systems
- (a)
- (b)
- (c) Energy autonomy, i.e., practical solutions for energy management for the prolonged operation of robots;
- (d)
- (e) Data privacy and security must address issues related to people’s privacy in indoor spaces and protection against cyber-attacks [4].
2. Materials and Methods
- (a) The development and implementation of a mobile platform (hardware) capable of remote control and autonomous navigation based on sensor data;
- (b) The development and integration of a computer vision system (hardware) to enhance the perception capabilities of the mobile platform;
- (c) The design and implementation of an object detection module (software) utilizing artificial intelligence techniques;
- (d) The deployment of the object detection module on the mobile platform, creating an artificial vision system;
- (e) The development and implementation of an autonomous navigation algorithm leveraging object detection for enhanced mobility and decision making;
- (f) The development of a software module for managing the inspection process in indoor environments, ensuring flexible missions and efficient data collection;
- (g) The comprehensive testing and validation of the artificial vision system to assess its accuracy and operational efficiency under real-world conditions.
2.1. Description of the Inspection Process
2.2. Structure of the Inspection System
2.3. Hardware Subsystem
2.3.1. Initial Structure of the Mobile Platform
2.3.2. Modified Structure of the Mobile Platform
2.3.3. Artificial Vision System
2.4. Software Subsystem
2.4.1. Platform Autonomous Navigation Software
2.4.2. Object Detection and Decision Software
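In outline, this module maps per-frame detections to platform actions. A minimal sketch, assuming the Ultralytics prediction API cited in the references; the weight path and the decision mapping are illustrative, not the authors' exact implementation, while the class names match the dataset described in Section 2.4.5:

```python
from ultralytics import YOLO

model = YOLO("best.pt")  # trained weights; path is illustrative

def decide(frame):
    """Map one camera frame's detections to a navigation decision."""
    result = model.predict(frame, conf=0.5, verbose=False)[0]
    for box in result.boxes:
        name = model.names[int(box.cls)]
        if name == "Stop_Sign":
            return "stop"                    # halt the platform
        if name.startswith("Inspection_"):
            return f"inspect:{name}"         # trigger an inspection task
    return "continue"                        # keep following the route
```

In the deployed system the network itself runs as a compiled HEF model on the Hailo-8L accelerator (Section 3.3) rather than through Ultralytics; only the decision mapping is sketched here.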
2.4.3. AI Models Used for Object Detection
2.4.4. AI Model Analysis and Selection
Model | Release Year | Accuracy (mAP) % | Speed (FPS) | Model Size (MB) | Parameters (Million) | Key Features and Improvements |
---|---|---|---|---|---|---|
YOLOv1 | 2015 | 63.4 | 45 | 193 | 62 | Introduced real-time object detection with a single neural network |
YOLOv2 | 2016 | 76.8 | 40 | 193 | 67 | Improved accuracy with batch normalization, high-resolution classifier, and anchor boxes |
YOLOv3 | 2018 | 78.6 | 30 | 236 | 62 | Added feature pyramid networks for better detection at different scales |
YOLOv4 | 2020 | 82 | 62 | 244 | 64 | Incorporated CSPDarknet53, PANet, Mish activation, and Mosaic data augmentation |
YOLOv5 | 2020 | 88 | 140 | 27 | 7.5 | Focused on ease of use, smaller model sizes, and faster inference |
YOLOv6 | 2022 | 52 | 123 | 17 | 15.9 | Optimized for industrial applications and improved training strategies |
YOLOv7 | 2022 | 56.8 | 161 | 72 | 36.9 | Introduced extended efficient layer aggregation networks |
YOLOv8 | 2023 | 53.9 | 111 | 68 | 43.7 | Featured a new backbone and neck architecture and improved benchmark performance |
YOLOv9 | 2024 | 54 | 112 | 69 | 44 | Further optimized architecture and enhanced feature extraction |
YOLOv10 | 2024 | 55 | 115 | 70 | 45 | Better handling of small objects and refined training procedures |
YOLOv11 | 2024 | 57 | 120 | 72 | 46 | Enhanced feature extraction and optimized for efficiency and speed |
2.4.5. Dataset Used for Training, Validation, and Testing
2.4.6. Management Software for Indoor Environment Inspection
3. Results
3.1. Model Performance Evaluation
- True Positive (TP) if the model correctly detects an object (IoU ≥ threshold) and classifies it into the correct category;
- False Positive (FP) if a bounding box is predicted but it does not sufficiently overlap with a ground truth box (IoU < threshold), or the class prediction is incorrect;
- False Negative (FN) if an object is present in the ground truth box, but the model fails to detect it, or the predicted bounding box is of the wrong class;
- True Negative (TN) if the model correctly identifies an image region as not containing any object of interest.

From these counts, precision = TP/(TP + FP), recall = TP/(TP + FN), and the F1-score (their harmonic mean) are derived, as shown in the sketch below.
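A minimal, runnable sketch of these definitions, assuming axis-aligned bounding boxes in (x1, y1, x2, y2) pixel coordinates; the function names and example counts are illustrative and not taken from the paper's code:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1-score from TP/FP/FN counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# A detection counts as TP only when IoU meets the threshold:
print(iou((0, 0, 10, 10), (5, 5, 15, 15)) >= 0.5)  # False (IoU ~ 0.143)
print(detection_metrics(tp=90, fp=10, fn=5))       # (0.9, ~0.947, ~0.923)
```

The mAP@0.5 reported below is the mean over classes of the average precision at an IoU threshold of 0.5, while mAP@0.5:0.95 further averages over IoU thresholds from 0.5 to 0.95 in steps of 0.05.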
3.2. Training, Validation, and Testing of YOLO Models
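A minimal sketch of how such a training run could be launched with the Ultralytics API cited in the references; the dataset.yaml path, image size, and starting weights are assumptions, while the epoch counts of 50 and 58 match the runs reported below:

```python
from ultralytics import YOLO

# Start from pretrained weights and fine-tune on the 9-class dataset.
model = YOLO("yolov8n.pt")
model.train(
    data="dataset.yaml",  # assumed config describing the 725/208/100 split
    epochs=50,            # the paper reports runs of 50 and 58 epochs
    imgsz=640,            # assumed input resolution
)
metrics = model.val()     # evaluate on the validation subset
print(metrics.box.map50, metrics.box.map)  # mAP@0.5 and mAP@0.5:0.95
```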
3.3. Exporting YOLO Models to ONNX and Converting to HEF
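A minimal sketch of the first step, ONNX export through the Ultralytics API (see the cited export documentation); the subsequent ONNX-to-HEF compilation is performed offline with Hailo's Dataflow Compiler toolchain, as described in the cited Cytron tutorial, and is not reproduced here:

```python
from ultralytics import YOLO

model = YOLO("best.pt")                  # trained weights; path is illustrative
onnx_path = model.export(format="onnx")  # writes an .onnx file, returns its path
print("Exported:", onnx_path)
# The .onnx file is then compiled to .hef with Hailo's Dataflow Compiler
# (a separate offline toolchain targeting the Hailo-8L), per the cited tutorial.
```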
3.4. Testing and Validation of the Artificial Vision System in Real-World Conditions Using the Autonomous Mobile Platform
4. Discussion
4.1. AI Model Selection
4.2. Image Dataset Preparation
4.3. Model Performance After Training and Validation
4.4. Model Performance in Real-World Tests
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Sanchez-Cubillo, J.; Del Ser, J.; Martin, J.L. Toward Fully Automated Inspection of Critical Assets Supported by Autonomous Mobile Robots, Vision Sensors, and Artificial Intelligence. Sensors 2024, 24, 3721. [Google Scholar] [CrossRef]
- de M. Santos, R.C.C.; Silva, M.C.; Santos, R.L.; Klippel, E.; Oliveira, R.A.R. Towards Autonomous Mobile Inspection Robots Using Edge AI. In Proceedings of the 25th International Conference on Enterprise Information Systems (ICEIS 2023), Prague, Czech Republic, 24–26 April 2023; SCITEPRESS–Science and Technology Publications, Lda.: Setúbal, Portugal, 2023; Volume 1, pp. 555–562. [Google Scholar]
- Pearson, E.; Szenher, P.; Huang, C.; Englot, B. Mobile Manipulation Platform for Autonomous Indoor Inspections in Low-Clearance Areas. In Proceedings of the ASME 2023 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference IDETC/CIE2023, Boston, MA, USA, 20–23 August 2023. [Google Scholar]
- Halder, S.; Afsari, K. Robots in Inspection and Monitoring of Buildings and Infrastructure: A Systematic Review. Appl. Sci. 2023, 13, 2304. [Google Scholar] [CrossRef]
- Macaulay, M.O.; Shafiee, M. Machine learning techniques for robotic and autonomous inspection of mechanical systems and civil infrastructure. Auton. Intell. Syst. 2022, 2, 8. [Google Scholar] [CrossRef]
- Dai, Y.; Kim, D.; Lee, K. An Advanced Approach to Object Detection and Tracking in Robotics and Autonomous Vehicles Using YOLOv8 and LiDAR Data Fusion. Electronics 2024, 13, 2250. [Google Scholar] [CrossRef]
- Hütten, N.; Alves Gomes, M.; Hölken, F.; Andricevic, K.; Meyes, R.; Meisen, T. Deep Learning for Automated Visual Inspection in Manufacturing and Maintenance: A Survey of Open-Access Papers. Appl. Syst. Innov. 2024, 7, 11. [Google Scholar] [CrossRef]
- Rane, N. YOLO and Faster R-CNN Object Detection for Smart Industry 4.0 and Industry 5.0: Applications, Challenges, and Opportunities. 25 October 2023. Available online: https://ssrn.com/abstract=4624206 (accessed on 17 January 2025). [CrossRef]
- Toman, R.; Rogala, T.; Synaszko, P.; Katunin, A. Robotized Mobile Platform for Non-Destructive Inspection of Aircraft Structures. Appl. Sci. 2024, 14, 10148. [Google Scholar] [CrossRef]
- Rea, P.; Ottaviano, E. Hybrid Inspection Robot for Indoor and Outdoor Surveys. Actuators 2023, 12, 108. [Google Scholar] [CrossRef]
- Bai, C.; Bai, X.; Wu, K. A Review: Remote Sensing Image Object Detection Algorithm Based on Deep Learning. Electronics 2023, 12, 4902. [Google Scholar] [CrossRef]
- Song, Q.; Zhou, Z.; Ji, S.; Cui, T.; Yao, B.; Liu, Z. A Multiscale Parallel Pedestrian Recognition Algorithm Based on YOLOv5. Electronics 2024, 13, 1989. [Google Scholar] [CrossRef]
- TSCINBUNY. Available online: https://tscinbuny.com/products/tscinbuny-esp32-robot-for-arduino-uno-starter-kit-programmable-robot-educational-kit-4wd-60mm-omni-directional-wheel-chassis-with-wifi-app-obstacle-avoidance-line-tracking-smart-car-set (accessed on 17 January 2025).
- Raspberry Pi 5. Available online: https://www.raspberrypi.com/products/raspberry-pi-5/ (accessed on 17 January 2025).
- Raspberry Pi Camera Module 3. Available online: https://www.raspberrypi.com/products/camera-module-3/ (accessed on 17 January 2025).
- Raspberry Pi AI Kit (Hailo-8L) vs. Coral USB Accelerator vs. Coral M.2 Accelerator with Dual Edge TPU. Available online: https://www.seeedstudio.com/blog/2024/07/16/raspberry-pi-ai-kit-vs-coral-usb-accelerator-vs-coral-m-2-accelerator-with-dual-edge-tpu/ (accessed on 17 January 2025).
- Raspberry Pi AI Kit. Available online: https://www.raspberrypi.com/products/ai-kit/ (accessed on 17 January 2025).
- Lin, L.; Guo, J.; Liu, L. Multi-scene application of intelligent inspection robot based on computer vision in power plant. Sci. Rep. 2024, 14, 10657. [Google Scholar] [CrossRef]
- Serey, J.; Alfaro, M.; Fuertes, G.; Vargas, M.; Durán, C.; Ternero, R.; Rivera, R.; Sabattin, J. Pattern Recognition and Deep Learning Technologies, Enablers of Industry 4.0, and Their Role in Engineering Research. Symmetry 2023, 15, 535. [Google Scholar] [CrossRef]
- Pavel, M.I.; Tan, S.Y.; Abdullah, A. Vision-Based Autonomous Vehicle Systems Based on Deep Learning: A Systematic Literature Review. Appl. Sci. 2022, 12, 6831. [Google Scholar] [CrossRef]
- Li, Z.; Wang, Y.; Zhang, N.; Zhang, Y.; Zhao, Z.; Xu, D.; Ben, G.; Gao, Y. Deep Learning-Based Object Detection Techniques for Remote Sensing Images: A Survey. Remote Sens. 2022, 14, 2385. [Google Scholar] [CrossRef]
- Trigka, M.; Dritsas, E. A Comprehensive Survey of Machine Learning Techniques and Models for Object Detection. Sensors 2025, 25, 214. [Google Scholar] [CrossRef]
- Carranza-García, M.; Torres-Mateo, J.; Lara-Benítez, P.; García-Gutiérrez, J. On the Performance of One-Stage and Two-Stage Object Detectors in Autonomous Vehicles Using Camera Data. Remote Sens. 2021, 13, 89. [Google Scholar] [CrossRef]
- Ramalingam, B.; Hayat, A.A.; Elara, M.R.; Gómez, B.F.; Yi, L.; Pathmakumar, T.; Rayguru, M.M.; Subramanian, S. Deep Learning Based Pavement Inspection Using Self-Reconfigurable Robot. Sensors 2021, 21, 2595. [Google Scholar] [CrossRef]
- Alotaibi, A.; Alatawi, H.; Binnouh, A.; Duwayriat, L.; Alhmiedat, T.; Alia, O.M. Deep Learning-Based Vision Systems for Robot Semantic Navigation: An Experimental Study. Technologies 2024, 12, 157. [Google Scholar] [CrossRef]
- Rane, L.N.; Choudhary, P.S.; Rane, J. YOLO and Faster R-CNN Object Detection in Architecture, Engineering and Construction (AEC): Applications, Challenges, and Future Prospects. SSRN Electron. J. 2023. [Google Scholar] [CrossRef]
- Zi, X.; Chaturvedi, K.; Braytee, A.; Li, J.; Prasad, M. Detecting Human Falls in Poor Lighting: Object Detection and Tracking Approach for Indoor Safety. Electronics 2023, 12, 1259. [Google Scholar] [CrossRef]
- Wang, K.; Zhou, H.; Wu, H.; Yuan, G. RN-YOLO: A Small Target Detection Model for Aerial Remote-Sensing Images. Electronics 2024, 13, 2383. [Google Scholar] [CrossRef]
- What is YOLO? The Ultimate Guide. 2025. Available online: https://blog.roboflow.com/guide-to-yolo-models (accessed on 17 January 2025).
- Zhu, P.; Chen, B.; Liu, B.; Qi, Z.; Wang, S.; Wang, L. Object Detection for Hazardous Material Vehicles Based on Improved YOLOv5 Algorithm. Electronics 2023, 12, 1257. [Google Scholar] [CrossRef]
- Terven, J.; Córdova-Esparza, D.-M.; Romero-González, J.-A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
- Yunusov, N.; Bappy, S.; Abdusalomov, A.; Kim, W. Robust Forest Fire Detection Method for Surveillance Systems Based on You Only Look Once Version 8 and Transfer Learning Approaches. Processes 2024, 12, 1039. [Google Scholar] [CrossRef]
- Raspberry Pi AI Kit: Custom Object Detection with Hailo8L. Available online: https://www.cytron.io/tutorial/raspberry-pi-ai-kit-custom-object-detection-with-h (accessed on 11 October 2024).
- Model Training with Ultralytics YOLO. Available online: https://docs.ultralytics.com/modes/train/ (accessed on 17 January 2025).
- Model Prediction with Ultralytics YOLO. Available online: https://docs.ultralytics.com/modes/predict/ (accessed on 17 January 2025).
- Model Export with Ultralytics YOLO. Available online: https://docs.ultralytics.com/modes/export/ (accessed on 17 January 2025).
- Raspberry Pi AI Kit: ONNX to HEF Conversion. Available online: https://www.cytron.io/tutorial/raspberry-pi-ai-kit-onnx-to-hef-conversion (accessed on 11 October 2024).
- Deepview-Validator 3.3.1. Available online: https://pypi.org/project/deepview-validator/ (accessed on 4 February 2025).
- Hailort. Available online: https://github.com/hailo-ai/hailort (accessed on 4 February 2025).
- Hailo Application Code Examples. Available online: https://github.com/hailo-ai/Hailo-Application-Code-Examples (accessed on 4 February 2025).
- Aktouf, L.; Shivanna, Y.; Dhimish, M. High-Precision Defect Detection in Solar Cells Using YOLOv10 Deep Learning Model. Solar 2024, 4, 639–659. [Google Scholar] [CrossRef]
- Computer Vision Model Leaderboard. Available online: https://leaderboard.roboflow.com/ (accessed on 4 February 2025).
Feature | Coral USB Accelerator | Coral M.2 Accelerator with Dual Edge TPU | Raspberry Pi AI Kit |
---|---|---|---|
Release Year | 2020 | 2021 | 2024 |
Form Factor | USB 3.0 Type-C | M.2-2230-D3-E | M.2 2242 |
AI Chip | Edge TPU | Dual Edge TPU | Hailo-8L |
Performance | 4 TOPS | 8 TOPS | 13 TOPS |
Connectivity | USB 3.0 Type-C | Two PCIe Gen2 × 1 interfaces | PCIe 3.0 (Single Lane, 8 Gbps) |
Compatibility | Linux, macOS, Windows | Debian-based Linux, Windows 10 | Raspberry Pi 5 |
AI Applications | AI NVR systems, home automation, vision projects | Industrial AI, edge AI development, traffic systems | Computer vision, object detection, smart home automation |
Price | USD 59.99 | USD 39.99 | USD 70 |
Feature | One-Stage (OS) | Two-Stage (TS) |
---|---|---|
Accuracy (mAP) | Good, but lower than TS | Higher than OS |
Speed (FPS) | Fast–Very fast | Very slow–Medium |
Computational Cost | Low–Medium | Medium–High |
Complexity | Medium | High |
Recommended devices | Edge, Embedded | With GPU |
Recommended applications | Real-time apps, autonomous driving, video surveillance, robotics | High-precision object detection, medical imaging |
| Model Type | Algorithm | Accuracy (mAP) | Speed (FPS) | Computational Cost | Complexity |
|---|---|---|---|---|---|
| Two-stage | R-CNN | High | Very slow | High | Complex |
| | Fast R-CNN | High | Faster | Moderate | Moderate |
| | Faster R-CNN | Very high | Moderate | Moderate | More complex than Fast R-CNN |
| | Cascade R-CNN | Highest | Slower | High | Most complex |
| One-stage | YOLO | High | Very fast | Low | Relatively simple |
| | SSD | Moderate | Fast | Moderate | Moderate |
| | RetinaNet | High | Moderate | Moderate | Moderate |
| | EfficientDet | High | High | Moderate | Moderate |
Class Number | Class Name | Training Count | Validation Count | Testing Count | Total Count |
---|---|---|---|---|---|
0 | Door_Open_DI3 | 81 | 23 | 11 | 115 |
1 | Fire_extinguisher | 56 | 16 | 8 | 80 |
2 | Inspection_1 | 56 | 16 | 8 | 80 |
3 | Inspection_2 | 56 | 16 | 8 | 80 |
4 | Inspection_3 | 57 | 16 | 7 | 80 |
5 | Inspection_4 | 56 | 16 | 8 | 80 |
6 | Stop_Sign | 56 | 16 | 8 | 80 |
7 | Window_Open_DI3 | 140 | 40 | 20 | 200 |
8 | Window_Open_Hall | 167 | 49 | 22 | 238 |
TOTAL images | | 725 | 208 | 100 | 1033
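As a quick check, the totals above correspond to an approximately 70/20/10 training/validation/testing split:

```python
total = 1033
for subset, n in [("training", 725), ("validation", 208), ("testing", 100)]:
    print(f"{subset}: {n}/{total} = {n / total:.1%}")
# training: 70.2%, validation: 20.1%, testing: 9.7%
```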
Model Type | Number of Layers | Number of Parameters | Number of Gradients | Computational Complexity
---|---|---|---|---|
YOLOv8n | 225 | 3,012,603 | 3,012,587 | 8.2 GFLOPs |
YOLOv8s | 225 | 11,139,083 | 11,139,067 | 28.7 GFLOPs |
YOLOv10n | 385 | 2,710,550 | 2,710,534 | 8.4 GFLOPs |
YOLOv10s | 402 | 8,073,318 | 8,073,302 | 24.8 GFLOPs |
Class Name | Total Images | TP | FP | FN | Precision | Recall | F1-Score | Accuracy |
---|---|---|---|---|---|---|---|---|
Door_Open_DI3 | 23 | 23 | 0 | 0 | 1.00 | 1.00 | 1.00 | 100.00% |
Fire_extinguisher | 16 | 16 | 0 | 0 | 1.00 | 1.00 | 1.00 | 100.00% |
Inspection_1 | 16 | 16 | 0 | 0 | 1.00 | 1.00 | 1.00 | 100.00% |
Inspection_2 | 16 | 16 | 0 | 0 | 1.00 | 1.00 | 1.00 | 100.00% |
Inspection_3 | 16 | 16 | 0 | 0 | 1.00 | 1.00 | 1.00 | 100.00% |
Inspection_4 | 16 | 16 | 0 | 0 | 1.00 | 1.00 | 1.00 | 100.00% |
Stop_Sign | 16 | 16 | 0 | 0 | 1.00 | 1.00 | 1.00 | 100.00% |
Window_Open_DI3 | 40 | 40 | 0 | 0 | 1.00 | 1.00 | 1.00 | 100.00% |
Window_Open_Hall | 49 | 49 | 0 | 0 | 1.00 | 1.00 | 1.00 | 100.00% |
TOTAL | 208 | 208 | 0 | 0 | 1.00 | 1.00 | 1.00 | 100.00%
| Model Type | Dataset | Total Images | TP | FP | FN | Precision | Recall | F1-Score | Accuracy | mAP@0.5 | mAP@0.5:0.95 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| YOLOv8n | Training | 725 | 725 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 0.9989 | 0.9868 |
| | Validation | 208 | 208 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 1.0000 | 0.9783 |
| | Testing | 100 | 100 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 1.0000 | 0.9850 |
| | TOTAL | 1033 | 1033 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 0.9989 | 0.9843 |
| YOLOv8s | Training | 725 | 725 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 0.9989 | 0.9874 |
| | Validation | 208 | 208 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 1.0000 | 0.9783 |
| | Testing | 100 | 100 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 1.0000 | 0.9830 |
| | TOTAL | 1033 | 1033 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 0.9989 | 0.9846 |
| YOLOv10n | Training | 725 | 721 | 0 | 4 | 1.000 | 0.996 | 0.998 | 99.638% | 0.9934 | 0.9812 |
| | Validation | 208 | 208 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 1.0000 | 0.9805 |
| | Testing | 100 | 100 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 1.0000 | 0.9854 |
| | TOTAL | 1033 | 1029 | 0 | 4 | 1.000 | 0.997 | 0.999 | 99.746% | 0.9955 | 0.9810 |
| YOLOv10s | Training | 725 | 724 | 0 | 1 | 1.000 | 0.998 | 0.999 | 99.802% | 0.9967 | 0.9815 |
| | Validation | 208 | 208 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 1.0000 | 0.9789 |
| | Testing | 100 | 100 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 1.0000 | 0.9815 |
| | TOTAL | 1033 | 1032 | 0 | 1 | 1.000 | 0.999 | 0.999 | 99.861% | 0.9967 | 0.9797 |
| Model Type | Dataset | Total Images | TP | FP | FN | Precision | Recall | F1-Score | Accuracy | mAP@0.5 | mAP@0.5:0.95 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| YOLOv8n (50 epochs) | Training | 725 | 725 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 0.9989 | 0.9588 |
| | Validation | 208 | 208 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 1.0000 | 0.9496 |
| | Testing | 100 | 100 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 1.0000 | 0.9644 |
| | TOTAL | 1033 | 1033 | 0 | 0 | 1.000 | 1.000 | 1.000 | 100.000% | 0.9989 | 0.9561 |
| YOLOv8s (58 epochs) | Training | 725 | 718 | 7 | 0 | 0.988 | 1.000 | 0.994 | 98.611% | 0.9945 | 0.9645 |
| | Validation | 208 | 207 | 1 | 0 | 0.993 | 1.000 | 0.997 | 99.306% | 1.0000 | 0.9590 |
| | Testing | 100 | 97 | 3 | 0 | 0.967 | 1.000 | 0.980 | 95.833% | 0.9817 | 0.9519 |
| | TOTAL | 1033 | 1022 | 11 | 0 | 0.987 | 1.000 | 0.993 | 98.472% | 0.9934 | 0.9610 |
| YOLOv10n (58 epochs) | Training | 725 | 721 | 0 | 4 | 1.000 | 0.996 | 0.998 | 99.580% | 0.9928 | 0.9564 |
| | Validation | 208 | 207 | 1 | 0 | 0.993 | 1.000 | 0.997 | 99.306% | 0.9919 | 0.9386 |
| | Testing | 100 | 99 | 0 | 1 | 1.000 | 0.995 | 0.997 | 99.495% | 0.9945 | 0.9583 |
| | TOTAL | 1033 | 1027 | 1 | 5 | 0.999 | 0.997 | 0.998 | 99.519% | 0.9929 | 0.9520 |
| YOLOv10s (50 epochs) | Training | 725 | 712 | 1 | 12 | 0.998 | 0.978 | 0.988 | 97.622% | 0.9706 | 0.9356 |
| | Validation | 208 | 205 | 0 | 3 | 1.000 | 0.983 | 0.991 | 98.333% | 0.9817 | 0.9451 |
| | Testing | 100 | 99 | 0 | 1 | 1.000 | 0.984 | 0.991 | 98.413% | 0.9835 | 0.9503 |
| | TOTAL | 1033 | 1016 | 1 | 16 | 0.999 | 0.980 | 0.989 | 97.848% | 0.9740 | 0.9374 |
| Model Type | Total Images | TP | FP | FN | Precision | Recall | F1-Score | Accuracy |
|---|---|---|---|---|---|---|---|---|
| YOLOv8n (50 epochs) | 1033 | 800 | 169 | 64 | 0.782 | 0.898 | 0.822 | 72.827% |
| YOLOv8s (58 epochs) | 1033 | 691 | 253 | 89 | 0.701 | 0.811 | 0.738 | 63.432% |
| YOLOv10n (58 epochs) | 1033 | 595 | 236 | 202 | 0.691 | 0.661 | 0.649 | 52.266% |
| YOLOv10s (50 epochs) | 1033 | 664 | 172 | 197 | 0.744 | 0.727 | 0.723 | 59.478% |