D-STGCN: Dynamic Pedestrian Trajectory Prediction Using Spatio-Temporal Graph Convolutional Networks
Abstract
1. Introduction
1.1. Trajectory Prediction Methods with RNNs
1.2. Trajectory Prediction Methods with CNNs
- Improved trajectory prediction by increasing the number of convolutional layers in the model, which yields a significant improvement over the Social-STGCNN [9] architecture.
- Performed an ablation study to reveal the contribution of the different components of the presented model to the prediction problem and to validate the performance of the attention-based ST-GCNN and TXP-CNN.
- Illustrated qualitative results on different scenarios from the ETH, UCY, and SDD datasets.
2. Related Work
3. Problem Description
4. The Proposed Method
4.1. Graph Representation of Pedestrian Trajectory
4.2. Spatio-Temporal Graph Convolutional Neural Network
4.3. Time-Extrapolator Convolutional Neural Network
Algorithm 1. Dynamic Graph Convolutional Networks

Input: real-world pedestrian coordinates X = (x, y);
Output: the evaluation metrics, i.e., the average displacement error (ADE) and the final displacement error (FDE);

1:  for … do
2:      …
3:      …;
4:  end for
5:  Compute … by using Equation (2);
6:  Generate the Laplacian matrix by using Equation (3);
7:  for … do
8:      for all … do
9:          Compute the output … of the model by using Equation (1);
10:         …
11:     end for
12: end for
13: Collect all of the predicted locations and the real locations for each pedestrian;
14: Compute the ADE and FDE with the formulas from Equations (5) and (6);
15: Return the ADE and FDE.
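The graph-construction steps of Algorithm 1 (lines 5–6) can be sketched as follows. This is a minimal illustration, not the paper's code: it assumes an inverse-distance kernel for the weighted adjacency, as used by Social-STGCNN [9], and the symmetrically normalized Laplacian of Kipf and Welling [31]; the function names are hypothetical.

```python
import numpy as np

def weighted_adjacency(positions):
    """Inverse-distance kernel between pedestrians at one time step.
    positions: (N, 2) array of (x, y) coordinates."""
    n = len(positions)
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                d = np.linalg.norm(positions[i] - positions[j])
                # Closer pedestrians get larger edge weights.
                a[i, j] = 1.0 / d if d > 0 else 0.0
    return a

def normalized_laplacian(a):
    """Symmetrically normalized Laplacian L = I - D^{-1/2} A_hat D^{-1/2},
    where A_hat adds self-loops to the adjacency matrix."""
    a_hat = a + np.eye(len(a))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.eye(len(a)) - d_inv_sqrt @ a_hat @ d_inv_sqrt
```

The inverse-distance choice makes nearby pedestrians influence each other more strongly; the normalization keeps the spectrum of the graph operator bounded, which stabilizes stacked graph convolutions.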
5. Experiments
5.1. Implementation Details
5.2. Datasets
5.3. Evaluation Metrics
- The ADE represents the average distance between the ground truth and the predicted trajectories over all future time steps, as shown in Equation (5).
- The FDE represents the distance between the final positions of the ground truth and the predicted trajectories, as shown in Equation (6).
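Both metrics can be computed directly from the predicted and ground-truth coordinates. A minimal sketch, assuming trajectories are stored as arrays of shape (time steps, pedestrians, 2); the function names are illustrative, not from the paper's implementation.

```python
import numpy as np

def ade(pred, gt):
    """Average displacement error: mean Euclidean distance between
    prediction and ground truth over all time steps and pedestrians.
    pred, gt: arrays of shape (T, N, 2)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def fde(pred, gt):
    """Final displacement error: mean Euclidean distance between the
    predicted and ground-truth positions at the last time step."""
    return float(np.mean(np.linalg.norm(pred[-1] - gt[-1], axis=-1)))
```

ADE rewards accuracy along the whole predicted path, while FDE only penalizes where the pedestrian ends up, so a method can trade one off against the other.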
6. Results
6.1. Ablation Study
- One ST-GCNN layer and three TXP-CNN layers for the ETH-UCY datasets.
- Three ST-GCNN layers and two TXP-CNN layers for the SDD dataset.
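The depth configurations varied in the ablation can be mimicked with a stack of generic graph-convolution layers. This is a hedged sketch, not the paper's implementation: it assumes the standard propagation rule ReLU(A_norm H W) and uses random weights purely to show how the layer count is configured.

```python
import numpy as np

def graph_conv(h, a_norm, w):
    """One graph-convolution step: ReLU(A_norm @ H @ W)."""
    return np.maximum(a_norm @ h @ w, 0.0)

def run_stack(h, a_norm, n_layers, rng):
    """Apply n_layers graph-conv layers, mirroring how the ablation
    varies the ST-GCNN depth between one and four layers.
    h: (N, F) node features; a_norm: (N, N) normalized adjacency."""
    for _ in range(n_layers):
        # Random square weights keep the feature dimension fixed;
        # a trained model would learn these instead.
        w = rng.standard_normal((h.shape[1], h.shape[1])) * 0.1
        h = graph_conv(h, a_norm, w)
    return h
```

Because each layer mixes information one hop further along the graph, adding layers widens the social neighborhood each pedestrian attends to, which is why the best depth differs between the sparse ETH-UCY scenes and the denser SDD scenes.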
6.2. Visualization
6.3. Comparison with the State-of-the-Art Methods
7. Discussion and Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Lefèvre, S.; Vasquez, D.; Laugier, C. A survey on motion prediction and risk assessment for intelligent vehicles. Robomech J. 2014, 1, 1. [Google Scholar] [CrossRef] [Green Version]
- WHO. Global Status Report on Road Safety 2018; WHO: Geneva, Switzerland, 2018; p. 11. [Google Scholar]
- ITF. Pedestrian Safety, Urban Space and Health; OECD Publishing: Paris, France, 2012. [Google Scholar]
- Gálvez-Pérez, D.; Guirao, B.; Ortuño, A.; Picado-Santos, L. The Influence of Built Environment Factors on Elderly Pedestrian Road Safety in Cities: The Experience of Madrid. Int. J. Environ. Res. Public Health 2022, 19, 2280. [Google Scholar] [CrossRef] [PubMed]
- Winkle, T. Safety benefits of automated vehicles: Extended findings from accident research for development, validation, and testing. In Autonomous Driving; Springer: Berlin/Heidelberg, Germany, 2016; pp. 335–364. [Google Scholar]
- Moussaïd, M.; Perozo, N.; Garnier, S.; Helbing, D.; Theraulaz, G. The walking behaviour of pedestrian social groups and its impact on crowd dynamics. PLoS ONE 2010, 5, e10047. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Sharma, N.; Dhiman, C.; Indu, S. Pedestrian Intention Prediction for Autonomous Vehicles: A Comprehensive Survey. Neurocomputing 2022, 508, 120–152. [Google Scholar] [CrossRef]
- Alahi, A.; Goel, K.; Ramanathan, V.; Robicquet, A.; Fei-Fei, L.; Savarese, S. Social LSTM: Human Trajectory Prediction in Crowded Spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 961–971. [Google Scholar]
- Mohamed, A.; Qian, K.; Elhoseiny, M.; Claudel, C. Social-STGCNN: A Social Spatio-Temporal Graph Convolutional Neural Network for Human Trajectory Prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 14412–14420. [Google Scholar]
- Pellegrini, S.; Ess, A.; Schindler, K.; Van Gool, L. You’ll never walk alone: Modeling social behavior for multi-target tracking. In Proceedings of the IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 261–268. [Google Scholar]
- Lerner, A.; Chrysanthou, Y.; Lischinski, D. Crowds by example. Comput. Graph. Forum 2007, 26, 655–664. [Google Scholar] [CrossRef]
- Robicquet, A.; Sadeghian, A.; Alahi, A.; Savarese, S. Learning Social Etiquette: Human Trajectory Understanding in Crowded Scenes. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Volume 9912. [Google Scholar]
- Fernando, T.; Denman, S.; Sridharan, S.; Fookes, C. Soft + Hardwired Attention: An LSTM framework for human trajectory prediction and abnormal event detection. Neural Netw. 2018, 108, 466–478. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Xue, X.; Huynh, D.; Reynolds, M. SS-LSTM: A Hierarchical LSTM Model for Pedestrian Trajectory Prediction. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 1186–1194. [Google Scholar]
- Zhang, P.; Ouyang, W.; Zhang, P.; Xue, J.; Zheng, N. SR-LSTM: State Refinement for LSTM Towards Pedestrian Trajectory Prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 12077–12086. [Google Scholar]
- Zhang, P.; Xue, J.; Zhang, P.; Zheng, N.; Ouyang, W. Social-Aware Pedestrian Trajectory Prediction via States Refinement LSTM. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 2742–2759. [Google Scholar] [CrossRef] [PubMed]
- Jain, A.; Zamir, A.R.; Savarese, S.; Saxena, A. Structural-RNN: Deep Learning on Spatio-Temporal Graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 5308–5317. [Google Scholar]
- Gupta, A.; Johnson, J.; Fei-Fei, L.; Savarese, S.; Alahi, A. Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 2255–2264. [Google Scholar]
- Amirian, J.; Hayet, J.; Pettré, J. Social Ways: Learning Multi-Modal Distributions of Pedestrian Trajectories with GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 2964–2972. [Google Scholar]
- Sadeghian, A.; Kosaraju, V.; Hirose, N.; Rezatofighi, H.; Savarese, S. SoPhie: An Attentive GAN for Predicting Paths Compliant to Social and Physical Constraints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 1349–1358. [Google Scholar]
- Díaz Berenguer, A.; Alioscha-Perez, M.; Oveneke, M.C.; Sahli, H. Context-Aware Human Trajectories Prediction via Latent Variational Model. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 1876–1889. [Google Scholar] [CrossRef]
- Nikhil, N.; Tran Morris, B. Convolutional neural network for trajectory prediction. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018. [Google Scholar]
- Zhao, T.; Xu, Y.; Monfort, M.; Choi, W.; Baker, C.; Zhao, Y.; Wang, Y.; Wu, Y.N. Multi-Agent Tensor Fusion for Contextual Trajectory Prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 12118–12126. [Google Scholar]
- Chandra, R.; Bhattacharya, U.; Bera, A.; Manocha, D. TraPHic: Trajectory Prediction in Dense and Heterogeneous Traffic Using Weighted Interactions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 8475–8484. [Google Scholar]
- Liang, J.; Jiang, L.; Niebles, J.; Hauptmann, A.; Fei-Fei, L. Peeking into the Future: Predicting Future Person Activities and Locations in Videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 2960–2963. [Google Scholar]
- Salzmann, T.; Ivanovic, B.; Chakravarty, P.; Pavone, M. Trajectron++: Multi-agent generative trajectory forecasting with heterogeneous data for control. In Proceedings of the 16th European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 683–700. [Google Scholar]
- Zamboni, S.; Kefato, Z.; Girdzijauskas, S.; Noren, C.; Dal Col, L. Pedestrian trajectory prediction with convolutional neural networks. Pattern Recognit. 2022, 121, 108252. [Google Scholar] [CrossRef]
- Battaglia, P.W.; Hamrick, J.B.; Bapst, V.; Sanchez-Gonzalez, A.; Zambaldi, V.; Malinowski, M.; Tacchetti, A.; Raposo, D.; Santoro, A.; Faulkner, R.; et al. Relational inductive biases, deep learning, and graph networks. arXiv 2018, arXiv:1806.01261. [Google Scholar]
- Bruna, J.; Zaremba, W.; Szlam, A.; LeCun, Y. Spectral networks and locally connected networks on graphs. arXiv 2013, arXiv:1312.6203. [Google Scholar]
- Defferrard, M.; Bresson, X.; Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 3844–3852. [Google Scholar]
- Kipf, T.; Welling, M. Semi-supervised classification with graph convolutional networks. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017. [Google Scholar]
- Hamilton, W.; Ying, R.; Leskovec, J. Inductive representation learning on large graphs. In Proceedings of the Annual Conference on Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 1024–1034. [Google Scholar]
- Yan, S.; Xiong, Y.; Lin, D. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; pp. 7444–7452. [Google Scholar]
- Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lió, P.; Bengio, Y. Graph attention networks. In Proceedings of the 6th International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Isola, P.; Zhu, J.; Zhou, T.; Efros, A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976. [Google Scholar]
- Kosaraju, V.; Sadeghian, A.; Martin-Martin, R.; Reid, I.; Rezatofighi, S.; Savarese, S. Social-bigat: Multimodal trajectory forecasting using bicycle-gan and graph attention networks. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; pp. 137–146. [Google Scholar]
- Huang, Y.; Bi, H.; Li, Z.; Mao, T.; Wang, Z. STGAT: Modeling Spatial-Temporal Interactions for Human Trajectory Prediction. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6271–6280. [Google Scholar]
- Vemula, A.; Muelling, K.; Oh, J. Social attention: Modeling attention in human crowds. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–26 May 2018; pp. 4601–4607. [Google Scholar]
- Li, J.; Ma, H.; Tomizuka, M. Conditional Generative Neural System for Probabilistic Trajectory Prediction. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 4–8 November 2019; pp. 6150–6156. [Google Scholar]
- Sighencea, B.I.; Stanciu, R.I.; Căleanu, C.D. A Review of Deep Learning-Based Methods for Pedestrian Trajectory Prediction. Sensors 2021, 21, 7543. [Google Scholar] [CrossRef] [PubMed]
- Bai, S.; Kolter, J.Z.; Koltun, V. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling. arXiv 2018, arXiv:1803.01271. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1026–1034. [Google Scholar]
- Zou, X.; Sun, B.; Zhao, D.; Zhu, Z.; Zhao, J.; He, Y. Multi-Modal Pedestrian Trajectory Prediction for Edge Agents Based on Spatial-Temporal Graph. IEEE Access 2020, 8, 83321–83332. [Google Scholar] [CrossRef]
- Huang, L.; Zhuang, J.; Cheng, X.; Xu, R.; Ma, H. STI-GAN: Multimodal Pedestrian Trajectory Prediction Using Spatiotemporal Interactions and a Generative Adversarial Network. IEEE Access 2021, 9, 50846–50856. [Google Scholar] [CrossRef]
- Dendorfer, P.; Ošep, A.; Leal-Taixé, L. Goal-GAN: Multimodal Trajectory Prediction Based on Goal Position Estimation. In Proceedings of the Asian Conference on Computer Vision, Kyoto, Japan, 30 November–4 December 2020; Volume 12623, pp. 405–420. [Google Scholar]
- Chai, R.; Tsourdos, A.; Savvaris, A.; Chai, S.; Xia, Y.; Chen, C.L.P. Multiobjective Overtaking Maneuver Planning for Autonomous Ground Vehicles. IEEE Trans. Cybern. 2021, 51, 4035–4049. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Sighencea, B.I.; Stanciu, R.I.; Căleanu, C.D. Pedestrian Trajectory Prediction in Graph Representation Using Convolutional Neural Networks. In Proceedings of the IEEE 16th International Symposium on Applied Computational Intelligence and Informatics (SACI), Timisoara, Romania, 25–28 May 2022; pp. 000243–000248. [Google Scholar]
- Sighencea, B.I.; Stanciu, R.I.; Sorândaru, C.; Căleanu, C.D. The Alpha-Beta Family of Filters to Solve the Threshold Problem: A Comparison. Mathematics 2022, 10, 880. [Google Scholar] [CrossRef]
- Caesar, H.; Bankiti, V.; Lang, A.; Vora, S.; Liong, V.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. Nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11618–11628. [Google Scholar]
ST-GCNN layers | TXP-CNN layers | SDD | ETH | HOTEL | UNIV | ZARA1 | ZARA2 | AVG
---|---|---|---|---|---|---|---|---
1 | 1 | 17.03/28.32 | 0.68/1.32 | 0.52/0.86 | 0.47/0.83 | 0.41/0.61 | 0.34/0.54 | 0.48/0.83 |
1 | 2 | 20.47/36.21 | 0.72/1.23 | 0.54/1.01 | 0.47/0.83 | 0.38/0.64 | 0.32/0.50 | 0.48/0.84 |
1 | 3 | 18.98/29.46 | 0.63/1.03 | 0.40/0.65 | 0.50/0.89 | 0.37/0.60 | 0.32/0.50 | 0.44/0.73 |
1 | 4 | 19.69/34.28 | 0.74/1.26 | 0.37/0.58 | 0.47/0.85 | 0.35/0.57 | 0.29/0.48 | 0.44/0.74 |
2 | 1 | 15.82/25.50 | 0.69/1.33 | 0.58/0.91 | 0.46/0.78 | 0.38/0.56 | 0.34/0.51 | 0.49/0.81 |
2 | 2 | 18.65/31.05 | 0.99/1.80 | 0.52/0.93 | 0.52/0.96 | 0.37/0.58 | 0.37/0.57 | 0.55/0.96 |
2 | 3 | 24.32/44.00 | 0.77/1.50 | 0.43/0.72 | 0.50/0.92 | 0.36/0.59 | 0.36/0.56 | 0.48/0.85 |
2 | 4 | 17.56/31.53 | 0.81/1.64 | 0.67/1.23 | 0.50/0.90 | 0.38/0.62 | 0.34/0.55 | 0.54/0.98 |
3 | 1 | 25.09/29.93 | 0.69/1.34 | 0.53/0.91 | 0.62/1.10 | 0.42/0.70 | 0.39/0.58 | 0.53/0.92 |
3 | 2 | 15.18/25.93 | 0.70/1.26 | 0.58/0.97 | 0.57/1.03 | 0.44/0.67 | 0.35/0.53 | 0.52/0.89 |
3 | 3 | 43.46/62.67 | 0.77/1.34 | 0.66/1.22 | 0.49/0.88 | 0.40/0.60 | 0.39/0.56 | 0.54/0.92 |
3 | 4 | 23.86/42.04 | 0.73/1.42 | 0.48/0.78 | 0.51/0.96 | 0.44/0.70 | 0.32/0.53 | 0.49/0.82 |
4 | 1 | 18.65/31.84 | 0.99/1.74 | 0.53/0.74 | 0.57/0.95 | 0.50/0.86 | 0.35/0.53 | 0.58/0.96 |
4 | 2 | 21.54/31.21 | 0.94/1.94 | 0.64/1.03 | 0.53/0.84 | 0.51/0.89 | 0.35/0.51 | 0.59/1.04 |
4 | 3 | 19.18/33.29 | 0.75/1.29 | 1.13/2.04 | 0.53/0.98 | 0.39/0.63 | 0.39/0.55 | 0.63/1.09 |
4 | 4 | 26.88/43.56 | 0.71/1.25 | 0.89/1.56 | 0.56/0.99 | 0.45/0.69 | 0.35/0.54 | 0.59/1.00 |
Methods | SDD | ETH | HOTEL | UNIV | ZARA1 | ZARA2 | AVG
---|---|---|---|---|---|---|---
S-LSTM [8] | 31.19/56.97 | 1.09/2.35 | 0.79/1.76 | 0.67/1.40 | 0.47/1.00 | 0.56/1.17 | 0.71/1.53 |
Social-STGCNN [9] | n/a | 0.64/1.11 | 0.49/0.85 | 0.44/0.79 | 0.34/0.53 | 0.30/0.48 | 0.44/0.75 |
SR-LSTM [15] | n/a | 0.63/1.25 | 0.37/0.74 | 0.51/1.10 | 0.41/0.90 | 0.32/0.70 | 0.44/0.93 |
SR-LSTM-2 [16] | n/a | 0.58/1.13 | 0.31/0.62 | 0.50/1.10 | 0.41/0.90 | 0.33/0.73 | 0.43/0.89 |
S-GAN-P [18] | 27.23/41.44 | 0.87/1.62 | 0.67/1.37 | 0.76/1.52 | 0.35/0.68 | 0.42/0.84 | 0.61/1.20 |
S-Ways [19] | n/a | 0.39/0.64 | 0.39/0.66 | 0.55/1.31 | 0.44/0.64 | 0.51/0.92 | 0.45/0.83 |
SoPhie [20] | 16.27/29.38 | 0.70/1.43 | 0.76/1.67 | 0.54/1.24 | 0.30/0.63 | 0.38/0.78 | 0.53/1.15 |
SSALVM (20) [21] | n/a | 0.61/1.09 | 0.28/0.51 | 0.59/1.24 | 0.30/0.64 | 0.37/0.78 | 0.43/0.85 |
MATF-GAN [23] | 27.82/59.31 | 1.33/2.49 | 0.51/0.95 | 0.56/1.19 | 0.44/0.93 | 0.34/0.73 | 0.64/1.26 |
PIF [25] | n/a | 0.73/1.65 | 0.30/0.59 | 0.60/1.27 | 0.38/0.81 | 0.31/0.68 | 0.46/1.00 |
Social BiGAT [36] | n/a | 0.69/1.29 | 0.49/1.01 | 0.55/1.32 | 0.30/0.62 | 0.36/0.75 | 0.47/0.99 |
STGAT [37] | n/a | 0.65/1.12 | 0.35/0.66 | 0.52/1.10 | 0.34/0.69 | 0.29/0.60 | 0.43/0.83 |
CGNS [39] | 15.84/25.17 | 0.62/1.40 | 0.70/0.93 | 0.48/1.22 | 0.32/0.59 | 0.35/0.71 | 0.49/0.97 |
Our method | 15.18/25.50 | 0.63/1.03 | 0.37/0.58 | 0.46/0.78 | 0.35/0.56 | 0.29/0.48 | 0.42/0.68 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Sighencea, B.I.; Stanciu, I.R.; Căleanu, C.D. D-STGCN: Dynamic Pedestrian Trajectory Prediction Using Spatio-Temporal Graph Convolutional Networks. Electronics 2023, 12, 611. https://doi.org/10.3390/electronics12030611