Applied AI with PLC and IRB1200
Abstract
Featured Application
1. Introduction
2. Materials and Methods
3. System Configuration
3.1. System Configuration—Testbed Description at Sii
- S7-1512C controller running the instructions needed to read data from the camera connected directly to the NPU module.
- NPU module based on the Intel Movidius Myriad™ X chip (according to the manufacturer, the chip contains 16 low-power programmable SHAVE (Streaming Hybrid Architecture Vector Engine) cores and a dedicated hardware accelerator for deep neural network structures).
- ET 200S distributed I/O island with the IM 151-3 PN ST interface module, located at the Sii head office (these components are necessary to control the operation of the robot arm).
- Intel RealSense camera, which captures images and communicates with the NPU module via USB 3.1.
- ABB IRB 1200 robot arm, which reacts to each detected color in a defined way.
- Workspace: the AI model was trained on colored squares on a white background, so for the most effective testing of the network, the colors were printed on a white sheet of paper.
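The recognition step of this setup can be sketched in a few lines. This is a minimal illustration only: it assumes frames arrive as H × W × 3 RGB arrays (e.g., from the Intel RealSense camera) and classifies a patch by nearest reference color, whereas the actual system runs a trained CNN on the NPU module; the `REFERENCE_COLORS` values and `classify_patch` helper are assumptions introduced for this example.

```python
import numpy as np

# Illustrative reference RGB values for the five trained classes.
REFERENCE_COLORS = {
    "yellow": (255, 255, 0),
    "blue":   (0, 0, 255),
    "green":  (0, 255, 0),
    "red":    (255, 0, 0),
    "black":  (0, 0, 0),
}

def classify_patch(patch: np.ndarray) -> str:
    """Return the class whose reference color is nearest to the patch's mean RGB."""
    mean_rgb = patch.reshape(-1, 3).mean(axis=0)
    return min(
        REFERENCE_COLORS,
        key=lambda name: float(np.linalg.norm(mean_rgb - REFERENCE_COLORS[name])),
    )
```

For example, `classify_patch(np.full((32, 32, 3), (0, 255, 0)))` returns `"green"`. The nearest-mean rule stands in for the CNN inference; on the white test sheet, the mean over the detected square dominates the decision.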
3.2. ANN Study—Image Recognition
4. Results
- Part 1: The artificial intelligence model (reading from the Intel RealSense camera) recognizes five colors: yellow, blue, green, red, and black.
- Part 2: The IRB 1200 robot reacts to each recognized color in the specific way described in Table 2.
4.1. Results Part 1—AI
4.2. Results Part 2—Testbed
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef] [Green Version]
- Savva, M.; Kong, N.; Chhajta, A.; Fei-Fei, L.; Agrawala, M.; Heer, J. Revision: Automated classification, analysis and redesign of chart images. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, USA, 16–19 October 2011; pp. 393–402. [Google Scholar]
- Dessí, D.; Osborne, F.; Recupero, D.R.; Buscaldi, D.; Motta, E. SCICERO: A deep learning and NLP approach for generating scientific knowledge graphs in the computer science domain. Knowl. Based Syst. 2022, 258, 109945. [Google Scholar] [CrossRef]
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
- Pathak, A.R.; Pandey, M.; Rautaray, S. Application of deep learning for object detection. Procedia Comput. Sci. 2018, 132, 1706–1717. [Google Scholar] [CrossRef]
- Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput. Intell. Neurosci. 2018, 13, 7068349. [Google Scholar] [CrossRef]
- Lee, K.T.; Chee, P.S.; Lim, E.H.; Lim, C.C. Artificial intelligence (AI)-driven smart glove for object recognition application. Mater. Today Proc. 2022, 64, 1563–1568. [Google Scholar] [CrossRef]
- Wang, L.; Lu, H.; Ruan, X.; Yang, M.H. Deep Networks for Saliency Detection via Local Estimation and Global Search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3183–3192. [Google Scholar]
- Dai, B.; Ye, W.; Zheng, J.; Chai, Q.; Yao, Y. RETRACTED: Deep network for visual saliency prediction by encoding image composition. J. Vis. Commun. Image Represent. 2018, 55, 789–794. [Google Scholar] [CrossRef]
- Kong, X.; Meng, Z.; Nojiri, N.; Iwahori, Y.; Meng, L.; Tomiyama, H. A HOG-SVM Based Fall Detection IoT System for Elderly Persons Using Deep Sensor. Procedia Comput. Sci. 2019, 147, 276–282. [Google Scholar] [CrossRef]
- Arora, P.; Srivastava, S.; Arora, K.; Bareja, S. Improved Gait Recognition Using Gradient Histogram Gaussian Image. Procedia Comput. Sci. 2015, 58, 408–413. [Google Scholar] [CrossRef] [Green Version]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Işın, A.; Direkoğlu, C.; Şah, M. Review of MRI-based Brain Tumor Image Segmentation Using Deep Learning Methods. Procedia Comput. Sci. 2016, 102, 317–324. [Google Scholar] [CrossRef]
- Qureshi, I.; Yan, J.; Abbas, Q.; Shaheed, K.; Riaz, A.B.; Wahid, A.; Khan, M.W.J.; Szczuko, P. Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends. Inf. Fusion 2023, 90, 316–352. [Google Scholar] [CrossRef]
- Tontiwachwuthikul, P.; Chan, C.W.; Zeng, F.; Liang, Z.; Sema, T.; Chao, M. Recent progress and new developments of applications of artificial intelligence (AI), knowledge-based systems (KBS), and Machine Learning (ML) in the petroleum industry. Spec. Issue Artif. Intell. Pet. J. 2020, 6, 319–320. [Google Scholar] [CrossRef]
- Yang, Y.; Li, R.; Xiang, Y.; Lin, D.; Yan, A.; Chen, W.; Li, Z.; Lai, W.; Wu, X.; Wan, C.; et al. Standardization of collection, storage, annotation, and management of data related to medical artificial intelligence. Intell. Med. 2021; in press. [Google Scholar]
- Hegde, S.; Ajila, V.; Zhu, W.; Zeng, C. Review of the Use of Artificial Intelligence in Early Diagnosis and Prevention of Oral Cancer. Asia-Pac. J. Oncol. Nurs. 2022, 9, 100133. [Google Scholar] [CrossRef]
- Ahmad, T.; Zhang, D.; Huang, C.; Zhang, H.; Dai, N.; Song, Y.; Chen, H. Artificial intelligence in sustainable energy industry: Status Quo, challenges and opportunities. J. Clean. Prod. 2021, 289, 125834. [Google Scholar] [CrossRef]
- Abioye, S.O.; Oyedele, L.O.; Akanbi, L.; Ajayi, A.; Delgado, J.M.D.; Akinade, O.O.; Ahmed, A. Artificial intelligence in the construction industry: A review of present status, opportunities and future challenges. J. Build. Eng. 2021, 44, 103299. [Google Scholar] [CrossRef]
- Zhang, F.; Qiao, Q.; Wang, J.; Liu, P. Data-driven AI emergency planning in process industry. J. Loss Prev. Process Ind. 2022, 76, 104740. [Google Scholar] [CrossRef]
- Niewiadomski, P.; Stachowiak, A.; Pawlak, N. Knowledge on IT Tools Based on AI Maturity-Industry 4.0 Perspective. Procedia Manuf. 2019, 39, 574–582. [Google Scholar] [CrossRef]
- Kumpulainen, S.; Terziyan, V. Artificial General Intelligence vs. Industry 4.0: Do They Need Each Other. Procedia Comput. Sci. 2022, 200, 140–150. [Google Scholar] [CrossRef]
- Parisi, L.; Ma, R.; RaviChandran, N.; Lanzillotta, M. hyper-sinh: An accurate and reliable function from shallow to deep learning in TensorFlow and Keras. Mach. Learn. Appl. 2021, 6, 100112. [Google Scholar] [CrossRef]
- Haghighat, E.; Juanes, R. SciANN: A Keras/TensorFlow wrapper for scientific computations and physics-informed deep learning using artificial neural networks. Comput. Methods Appl. Mech. Eng. 2021, 373, 113552. [Google Scholar] [CrossRef]
- Janardhanan, P.S. Project repositories for machine learning with TensorFlow. Procedia Comput. Sci. 2020, 171, 188–196. [Google Scholar] [CrossRef]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Michele, A.; Colin, V.; Santika, D.D. MobileNet Convolutional Neural Networks and Support Vector Machines for Palmprint Recognition. Procedia Comput. Sci. 2019, 157, 110–117. [Google Scholar] [CrossRef]
- Korbel, J.J.; Siddiq, U.H.; Zarnekow, R. Towards Virtual 3D Asset Price Prediction Based on Machine Learning. J. Theor. Appl. Electron. Commer. Res. 2022, 17, 924–948. [Google Scholar] [CrossRef]
- Arena, P.; Baglio, S.; Fortuna, L.; Manganaro, G. Self-organization in a two-layer CNN. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 1998, 45, 157–162. [Google Scholar] [CrossRef]
- Shafi, I.; Mazahir, A.; Fatima, A.; Ashraf, I. Internal defects detection and classification in hollow cylindrical surfaces using single shot detection and MobileNet. Measurement 2022, 202, 111836. [Google Scholar] [CrossRef]
- MathWorks: Introduction to Deep Learning—What Are Convolutional Neural Networks? Available online: https://www.mathworks.com/videos/introduction-to-deep-learning-what-are-convolutional-neural-networks--1489512765771.html (accessed on 14 November 2022).
- Convolutional Neural Networks Explained. Available online: https://towardsdatascience.com/convolutional-neural-networks-explained-9cc5188c4939 (accessed on 24 October 2022).
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
- OpenVINO Documentation: Converting a TensorFlow Model. Available online: https://docs.openvino.ai/2020.1/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html (accessed on 14 November 2022).
- Sii Company Website. Available online: https://sii.pl/ (accessed on 24 October 2022).
- Solowjow, E.; Ugalde, I.; Shahapurkar, Y.; Aparicio, J.; Mahler, J.; Satish, V.; Goldberg, K.; Claussen, H. Industrial Robot Grasping with Deep Learning Using a Programmable Logic Controller (PLC). In Proceedings of the 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Online, 20–21 August 2020; pp. 97–103. [Google Scholar]
- Siemens SIMATIC S7-1500 TM NPU Module. Available online: https://new.siemens.com/global/en/products/automation/systems/industrial/plc/simatic-s7-1500/simatic-s7-1500-tm-npu.html (accessed on 12 October 2022).
- Result Pictures on the Koło Naukowe HMI Facebook Page. Available online: https://www.facebook.com/profile.php?id=100049758506515 (accessed on 25 October 2022).
- Video of the IRB 1200 Robot Running with the NPU Module and Camera. Available online: https://pl-pl.facebook.com/people/Ko%C5%82o-Naukowe-HMI/100049758506515/ (accessed on 2 November 2022).
- Zocca, V.; Spacagna, G.; Slater, D.; Roelants, P. Deep Learning. Uczenie Głębokie z Językiem Python. Sztuczna Inteligencja i Sieci Neuronowe [Python Deep Learning]; Helion: Gliwice, Poland, 2017. [Google Scholar]
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
Color | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---
Black | | | | |
Blue | | | | |
Green | | | | |
Red | | | | |
Yellow | | | | |
Color | Yellow | Blue | Green | Red | Black
---|---|---|---|---|---
Action | One dot | Two dots | Three dots | Plus sign (+) | No reaction
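The color-to-action assignment above can be expressed as a small dispatch table. This is an illustrative sketch only: the `ACTIONS` dictionary and `action_for` helper are names introduced here, not the authors' PLC program or the robot's RAPID code, and the action strings simply mirror Table 2.

```python
# Illustrative mapping of detected color to the robot reaction from Table 2.
ACTIONS = {
    "yellow": "one dot",
    "blue":   "two dots",
    "green":  "three dots",
    "red":    "plus sign (+)",
    "black":  "no reaction",
}

def action_for(color: str) -> str:
    """Return the Table 2 reaction for a detected color (case-insensitive).

    Unknown labels fall back to "no reaction", matching the behavior for black.
    """
    return ACTIONS.get(color.strip().lower(), "no reaction")
```

For example, `action_for("Green")` returns `"three dots"`. Defaulting unknown labels to "no reaction" is a design choice for the sketch: the robot should stay idle rather than move on an unrecognized classification.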
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Rybczak, M.; Popowniak, N.; Kozakiewicz, K. Applied AI with PLC and IRB1200. Appl. Sci. 2022, 12, 12918. https://doi.org/10.3390/app122412918