CAPNet: Context and Attribute Perception for Pedestrian Detection
Abstract
1. Introduction
- We analyze the prevalent problems in current pedestrian detection and propose a one-stage anchor-free pedestrian detector, named CAPNet (Context and Attribute Perception Network).
- By extracting features with consistent semantics and detail, mining useful contextual semantics, and bringing hand-crafted features into the CNN, we design the network structure to cope with these problems.
- Experimental results demonstrate that the proposed CAPNet achieves new state-of-the-art performance on both the Caltech and CityPersons datasets.
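For readers unfamiliar with the one-stage anchor-free formulation named above, the sketch below illustrates a center-and-scale prediction head in the spirit of CSP [26], the family of detectors CAPNet builds on. The class name, channel widths, and the fixed 0.41 pedestrian aspect ratio are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a CSP-style anchor-free pedestrian detection head.
# Layer names and channel sizes are assumptions for illustration only.
import torch
import torch.nn as nn

class AnchorFreeHead(nn.Module):
    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.reduce = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Per-pixel probability that a pedestrian center falls in this cell.
        self.center = nn.Conv2d(256, 1, kernel_size=1)
        # Log-height regression; width follows from a fixed aspect ratio (~0.41).
        self.scale = nn.Conv2d(256, 1, kernel_size=1)
        # Sub-stride offset of the true center within each cell.
        self.offset = nn.Conv2d(256, 2, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        x = self.reduce(feats)
        return torch.sigmoid(self.center(x)), self.scale(x), self.offset(x)

head = AnchorFreeHead()
center, scale, offset = head(torch.randn(1, 256, 160, 80))
print(center.shape, scale.shape, offset.shape)  # (1,1,160,80) (1,1,160,80) (1,2,160,80)
```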
2. Related Work
2.1. Scale-Aware Detectors
2.2. Context-Based Detectors
2.3. Attribute-Based Detectors
3. Method
3.1. Overview of Network Structure
3.2. Feature Extraction Module
3.3. Global Feature Mining and Aggregation Network
- (1) Global Feature Modeling
- (2) Local Feature Aggregation
- (3) Feature Normalization
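As a rough illustration of the three stages listed above, the following sketch pairs each with a plausible PyTorch building block: a GCNet-style global context block for global feature modeling (GCNet is cited in the references), a small convolutional mixer for local feature aggregation, and resolution normalization by either bilinear interpolation or a transposed convolution (the two options compared in the GFMA ablation table later in the paper). All class names and layer choices are assumptions for illustration, not the paper's code.

```python
# Illustrative sketch of the three GFMA stages; layer layout is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalContext(nn.Module):
    """(1) Global feature modeling: GCNet-style global attention pooling."""
    def __init__(self, c: int, r: int = 4):
        super().__init__()
        self.attn = nn.Conv2d(c, 1, kernel_size=1)
        self.transform = nn.Sequential(
            nn.Conv2d(c, c // r, 1),
            nn.LayerNorm([c // r, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(c // r, c, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        w_attn = self.attn(x).view(b, 1, h * w).softmax(dim=-1)       # (B,1,HW)
        ctx = torch.bmm(w_attn, x.view(b, c, h * w).transpose(1, 2))  # (B,1,C)
        ctx = ctx.transpose(1, 2).view(b, c, 1, 1)
        return x + self.transform(ctx)  # broadcast-add global context everywhere

class LocalAggregation(nn.Module):
    """(2) Local feature aggregation: mix each location with its 3x3 neighborhood."""
    def __init__(self, c: int):
        super().__init__()
        self.mix = nn.Conv2d(c, c, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.relu(x + self.mix(x))

def normalize_resolution(x, out_hw, deconv=None):
    """(3) Feature normalization: bring a feature map to a common resolution,
    by bilinear interpolation or by a learned transposed convolution."""
    if deconv is not None:
        return deconv(x)
    return F.interpolate(x, size=out_hw, mode="bilinear", align_corners=False)
```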
3.4. Attribute-Guided Multiple Receptive Field Module
3.5. Training
3.6. Inference
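At inference, detectors of this family typically decode boxes from the predicted maps without anchors, keeping local maxima of the center heatmap via a max-pool trick and reading scale/offset at those locations, in the spirit of the cited CenterNet/CSP line of work. The threshold, stride, and offset-channel order below are assumptions, not the authors' settings.

```python
# Hedged sketch of keypoint-style decoding from center/scale/offset maps.
import torch
import torch.nn.functional as F

def decode(center, scale, offset, stride=4, thresh=0.3, aspect=0.41):
    # A 3x3 max-pool keeps only local maxima of the center heatmap.
    keep = (F.max_pool2d(center, 3, stride=1, padding=1) == center).float()
    heat = center * keep
    ys, xs = torch.nonzero(heat[0, 0] > thresh, as_tuple=True)
    boxes = []
    for y, x in zip(ys.tolist(), xs.tolist()):
        h = scale[0, 0, y, x].exp().item() * stride  # height from log-scale map (assumed)
        w = aspect * h                               # fixed pedestrian aspect ratio
        cx = (x + offset[0, 0, y, x].item()) * stride
        cy = (y + offset[0, 1, y, x].item()) * stride
        boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2,
                      heat[0, 0, y, x].item()))
    return boxes

# Toy usage on random maps (a stride-4 grid for a 640x320 image):
boxes = decode(torch.rand(1, 1, 160, 80), torch.zeros(1, 1, 160, 80),
               torch.zeros(1, 2, 160, 80))
print(len(boxes))
```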
4. Experiments and Results
4.1. Experiment Settings
4.1.1. Datasets
4.1.2. Implementation Details
4.1.3. Evaluation Metric
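Caltech and CityPersons are conventionally evaluated with the log-average miss rate (MR⁻²) of Dollár et al. [8]: the miss rate is sampled at nine false-positives-per-image (FPPI) points spaced evenly in log space over [10⁻², 10⁰] and averaged geometrically. A minimal sketch, assuming the miss-rate/FPPI curve has already been computed and that `fppi` is sorted ascending:

```python
# Sketch of the standard log-average miss rate (MR^-2) used on Caltech and
# CityPersons [8]; inputs are assumed to be precomputed, parallel curve arrays.
import numpy as np

def log_average_miss_rate(fppi: np.ndarray, miss_rate: np.ndarray) -> float:
    refs = np.logspace(-2.0, 0.0, num=9)  # nine points from 1e-2 to 1e0
    samples = []
    for r in refs:
        below = np.where(fppi <= r)[0]
        # If the curve never reaches this FPPI, fall back to its highest miss rate.
        samples.append(miss_rate[below[-1]] if below.size else miss_rate.max())
    # Geometric mean, guarding log(0).
    return float(np.exp(np.mean(np.log(np.maximum(samples, 1e-10)))))

# Toy usage with a monotone curve (illustrative numbers only):
fppi = np.logspace(-3, 1, 50)
mr = np.clip(1.0 / (1.0 + 10 * fppi), 0, 1)
print(log_average_miss_rate(fppi, mr))
```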
4.2. Evaluation of Results
4.2.1. Results on CityPersons
4.2.2. Results on Caltech
4.3. Ablation Study
4.3.1. Feature Extraction Module
4.3.2. Global Feature Mining and Aggregation Network
4.3.3. Attribute-Guided Multiple Receptive Field Module
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Li, P.; Chen, X.; Shen, S. Stereo R-CNN based 3D Object Detection for Autonomous Driving. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7636–7644.
2. Tseng, B.L.; Lin, C.Y.; Smith, J.R. Real-time video surveillance for traffic monitoring using virtual line analysis. In Proceedings of the IEEE International Conference on Multimedia and Expo, Lausanne, Switzerland, 26–29 August 2002; pp. 541–544.
3. Bergmann, P.; Meinhardt, T.; Leal-Taixé, L. Tracking without bells and whistles. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 941–951.
4. Zhang, L.; Lin, L.; Liang, X.; He, K. Is Faster R-CNN Doing Well for Pedestrian Detection? In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 443–457.
5. Zhang, S.; Li, S.Z. Occlusion-aware R-CNN: Detecting Pedestrians in a Crowd. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 657–674.
6. Li, J.; Liang, X.; Shen, S.; Xu, T.; Feng, J.; Yan, S. Scale-Aware Fast R-CNN for Pedestrian Detection. IEEE Trans. Multimed. 2018, 20, 985–996.
7. Liu, S.; Huang, D.; Wang, Y. Adaptive NMS: Refining pedestrian detection in a crowd. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 6452–6461.
8. Dollár, P.; Wojek, C.; Schiele, B.; Perona, P. Pedestrian detection: An evaluation of the state of the art. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 743–761.
9. Zhang, S.; Benenson, R.; Schiele, B. CityPersons: A Diverse Dataset for Pedestrian Detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4457–4465.
10. Cao, J.; Pang, Y.; Xie, J.; Khan, F.S.; Shao, L. From Handcrafted to Deep Features for Pedestrian Detection: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 4913–4934.
11. Mao, J.; Xiao, T.; Jiang, Y.; Cao, Z. What Can Help Pedestrian Detection? In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6034–6043.
12. Song, T.; Sun, L.; Xie, D.; Sun, H.; Pu, S. Small-scale Pedestrian Detection Based on Somatic Topology Localization and Temporal Feature Aggregation. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 554–569.
13. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
14. Liu, W.; Liao, S.; Hu, W.; Liang, X.; Chen, X. Learning Efficient Single-stage Pedestrian Detectors by Asymptotic Localization Fitting. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 634–659.
15. Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. Deep High-Resolution Representation Learning for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 3349–3364.
16. Hu, J.; Jin, L.; Gao, S. FPN++: A Simple Baseline for Pedestrian Detection. In Proceedings of the IEEE International Conference on Multimedia and Expo, Shanghai, China, 8–12 July 2019; pp. 1138–1143.
17. Cai, Z.; Fan, Q.; Feris, R.S.; Vasconcelos, N. A Unified Multi-scale Deep Convolutional Neural Network for Fast Object Detection. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Volume 9908, pp. 354–370.
18. Wang, X.; Shen, C.; Li, H.; Xu, S. Human Detection Aided by Deeply Learned Semantic Masks. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 2663–2673.
19. Jiang, H.; Liao, S.; Li, J.; Prinet, V.; Xiang, S. Urban scene based Semantical Modulation for Pedestrian Detection. Neurocomputing 2022, 474, 1–12.
20. Zhang, C.; Kim, J. Object Detection With Location-Aware Deformable Convolution and Backward Attention Filtering. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9452–9461.
21. Zhang, S.; Yang, J.; Schiele, B. Occluded Pedestrian Detection Through Guided Attention in CNNs. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 6995–7003.
22. Zhang, S.; Bauckhage, C.; Klein, D.A.; Cremers, A.B. Exploring Human Vision Driven Features for Pedestrian Detection. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1709–1720.
23. Cao, J.; Pang, Y.; Li, X. Pedestrian Detection Inspired by Appearance Constancy and Shape Symmetry. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1316–1324.
24. Zhang, S.; Bauckhage, C.; Cremers, A.B. Informed Haar-Like Features Improve Pedestrian Detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 947–954.
25. Xu, Z.; Li, B.; Yuan, Y.; Dang, A. Beta R-CNN: Looking into Pedestrian Detection from Another Perspective. In Proceedings of the Advances in Neural Information Processing Systems, Online, 6–12 December 2020; Volume 33, pp. 19953–19963.
26. Liu, W.; Liao, S.; Ren, W.; Hu, W.; Yu, Y. High-level semantic feature detection: A new perspective for pedestrian detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5182–5191.
27. Yang, F.; Chen, H.; Li, J.; Li, F.; Wang, L.; Yan, X. Single Shot Multibox Detector with Kalman Filter for Online Pedestrian Detection in Video. IEEE Access 2019, 7, 15478–15488.
28. Yan, Y.; Li, J.; Qin, J.; Bai, S.; Liao, S.; Liu, L.; Zhu, F.; Shao, L. Anchor-Free Person Search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 7690–7699.
29. Dong, W.; Zhang, Z.; Song, C.; Tan, T. Instance Guided Proposal Network for Person Search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2582–2591.
30. Zhao, C.; Chen, Z.; Dou, S.; Qu, Z.; Yao, J.; Wu, J.; Miao, D. Context-Aware Feature Learning for Noise Robust Person Search. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 7047–7060.
31. Jaffe, L.; Zakhor, A. Gallery Filter Network for Person Search. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2–7 January 2023; pp. 1684–1693.
32. Yang, F.; Choi, W.; Lin, Y. Exploit All the Layers: Fast and Accurate CNN Object Detector with Scale Dependent Pooling and Cascaded Rejection Classifiers. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2129–2137.
33. Zhang, H.; Wang, K.; Tian, Y.; Gou, C.; Wang, F. MFR-CNN: Incorporating Multi-Scale Features and Global Information for Traffic Object Detection. IEEE Trans. Veh. Technol. 2018, 67, 8019–8030.
34. Cao, J.; Pang, Y.; Han, J.; Gao, B.; Li, X. Taking a Look at Small-Scale Pedestrians and Occluded Pedestrians. IEEE Trans. Image Process. 2020, 29, 3143–3152.
35. Wang, L.; Xu, L.; Yang, M. Pedestrian detection in crowded scenes via scale and occlusion analysis. In Proceedings of the IEEE International Conference on Image Processing, Phoenix, AZ, USA, 25–28 September 2016; pp. 1210–1214.
36. Zhou, X.; Wang, D.; Krähenbühl, P. Objects as Points. arXiv 2019, arXiv:1904.07850.
37. Zhang, J.; Lin, L.; Zhu, J.; Li, Y.; Chen, Y.; Hu, Y.; Hoi, S.C.H. Attribute-Aware Pedestrian Detection in a Crowd. IEEE Trans. Multimed. 2021, 23, 3085–3097.
38. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
39. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2019, arXiv:1905.11946.
40. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
41. Cao, Y.; Xu, J.; Lin, S.; Wei, F.; Hu, H. GCNet: Non-local networks meet squeeze-excitation networks and beyond. In Proceedings of the IEEE International Conference on Computer Vision Workshop, Seoul, Republic of Korea, 27–28 October 2019; pp. 1971–1980.
42. Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; Wei, Y. Deformable Convolutional Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 764–773.
43. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 318–327.
44. Law, H.; Deng, J. CornerNet: Detecting Objects as Paired Keypoints. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 765–781.
45. Wang, W. Detection of panoramic vision pedestrian based on deep learning. Image Vis. Comput. 2020, 103, 986–993.
46. Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic Differentiation in PyTorch. In Proceedings of the Advances in Neural Information Processing Systems Workshop on Autodiff, Long Beach, CA, USA, 4–9 December 2017.
47. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015.
48. Wang, X.; Xiao, T.; Jiang, Y.; Shao, S.; Sun, J.; Shen, C. Repulsion Loss: Detecting Pedestrians in a Crowd. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7774–7783.
49. Xie, J.; Cholakkal, H.; Anwer, R.M.; Khan, F.S.; Pang, Y.; Shao, L.; Shah, M. Count- and Similarity-Aware R-CNN for Pedestrian Detection. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Volume 12362, pp. 88–104.
50. Luo, Z.; Fang, Z.; Zheng, S.; Wang, Y.; Fu, Y. NMS-Loss: Learning with Non-Maximum Suppression for Crowded Pedestrian Detection. In Proceedings of the International Conference on Multimedia Retrieval, Taipei, Taiwan, 12 July 2021; pp. 481–485.
51. Li, Q.; Su, Y.; Gao, Y.; Xie, F.; Li, J. OAF-Net: An Occlusion-Aware Anchor-Free Network for Pedestrian Detection in a Crowd. IEEE Trans. Intell. Transp. Syst. 2022, 23, 21291–21300.
52. Luo, W.; Li, Y.; Urtasun, R.; Zemel, R. Understanding the Effective Receptive Field in Deep Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; Volume 29, pp. 4898–4906.
Branch | General Conv. Kernel | Dilated Conv. Kernel | Dilation Rate | RF Size | RF Aspect Ratio
---|---|---|---|---|---
1 | | | | | 0.33
2 | | | | | 0.43
3 | | | | | 0.38
4 | | | | | 0.41
5 | | | | | 0.39
… | … | … | … | … | …
n | | | | |
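The kernel and dilation entries of this table did not survive extraction, but the receptive field (RF) arithmetic behind the last two columns is standard: a stride-1 convolution with kernel k and dilation d adds d·(k−1) to the RF, so tall kernels and anisotropic dilations yield the pedestrian-like RF aspect ratios (width/height) listed above. A small sketch with hypothetical kernel and dilation values:

```python
# Standard receptive-field bookkeeping for stacked stride-1 convolutions;
# the concrete kernels below are hypothetical, since the table's entries
# were lost in extraction.
def receptive_field(layers):
    """layers: iterable of ((kh, kw), (dh, dw)) for stride-1 convolutions."""
    rf_h = rf_w = 1
    for (kh, kw), (dh, dw) in layers:
        rf_h += dh * (kh - 1)  # effective extent of a dilated kernel: d*(k-1)+1
        rf_w += dw * (kw - 1)
    return rf_h, rf_w

# Hypothetical tall-kernel branch: a 3x3 conv followed by a 5x3 conv with
# dilation 3 (vertical) x 2 (horizontal) -> RF 15x7, aspect ratio ~0.47.
rf_h, rf_w = receptive_field([((3, 3), (1, 1)), ((5, 3), (3, 2))])
print(rf_h, rf_w, round(rf_w / rf_h, 2))
```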
Comparison with state-of-the-art methods on the CityPersons validation set (values are MR⁻² in %; lower is better):

Method | Backbone | Reasonable | Heavy | Partial | Bare | Small | Medium | Large | Test Time |
---|---|---|---|---|---|---|---|---|---|
FRCNN [9] | VGG-16 | 15.4 | - | - | - | 25.6 | 7.2 | 7.9 | - |
FRCNN+Seg [9] | VGG-16 | 14.8 | - | - | - | 22.6 | 6.7 | 8.0 | - |
OR-CNN [5] | VGG-16 | 12.8 | - | - | - | - | - | - | - |
RepLoss [48] | ResNet-50 | 13.2 | 56.9 | 16.8 | 7.6 | - | - | - | - |
TLL [12] | ResNet-50 | 15.5 | 53.6 | 17.2 | 10.0 | - | - | - | - |
TLL+MRF [12] | ResNet-50 | 14.4 | 52.0 | 15.9 | 9.2 | - | - | - | - |
ALFNet [14] | ResNet-50 | 12.0 | 51.9 | 11.4 | 8.4 | 19.0 | 5.7 | 6.6 | 0.27 s/img |
CSP [26] | ResNet-50 | 11.0 | 49.3 | 10.4 | 7.3 | 16.0 | 3.7 | 6.5 | 0.33 s/img |
Adaptive-NMS [7] | ResNet-50 | 10.8 | 54.0 | 11.4 | 6.2 | - | - | - | - |
Beta R-CNN [25] | ResNet-50 | 10.6 | 47.1 | 10.3 | 6.4 | - | - | - | - |
Adapted-CSP [45] | ResNet-101 | 9.3 | 46.8 | 8.7 | 5.6 | - | - | - | 0.42 s/img |
CaSe [49] | VGG-16 | 9.6 | 48.2 | - | - | - | - | - | - |
NMS-Loss [50] | ResNet-50 | 10.1 | - | - | - | - | - | - | - |
APD [37] | DLA-34 | 8.8 | 46.6 | 8.8 | 5.8 | - | - | - | 0.16 s/img |
OAF-Net [51] | HRNet-32 | 9.4 | 43.1 | 8.3 | 5.6 | - | - | - | 0.25 s/img |
CAPNet (ours) | HRNet-40 | 8.7 | 46.1 | 7.4 | 6.0 | 10.0 | 3.0 | 5.6 | 0.31 s/img |
Comparison with state-of-the-art methods on the Caltech test set (values are MR⁻² in %; lower is better):

Method | Backbone | Reasonable | All | Occ |
---|---|---|---|---|
FRCNN [9] | VGG-16 | 8.7 | 62.6 | 53.1 |
HyperLearner [11] | VGG-16 | 5.5 | 61.5 | 48.7 |
OR-CNN [5] | VGG-16 | 4.1 | 58.8 | 45.0 |
RepLoss [48] | ResNet-50 | 5.0 | 59.0 | 47.9 |
TLL [12] | ResNet-50 | 8.5 | 40.0 | - |
ALFNet [14] | ResNet-50 | 6.1 | 51.9 | 51.0 |
CSP [26] | ResNet-50 | 4.5 | 56.9 | 45.8 |
NMS-Loss [50] | ResNet-50 | 5.9 | - | - |
OAF-Net [51] | HRNet-32 | 3.8 | 54.9 | 32.1 |
CAPNet (ours) | HRNet-40 | 2.8 | 53.4 | 39.2 |
Ablation of the feature extraction module backbone (MR⁻², %; lower is better; Parameters = model size):

Method | Backbone | Reasonable | Heavy | Partial | Bare | Small | Medium | Large | Parameters
---|---|---|---|---|---|---|---|---|---
Ours | ResNet-50 | 10.7 | 49.3 | 9.9 | 7.1 | 15.9 | 3.7 | 6.5 | 40.0 MB
Ours | ResNet-101 | 10.2 | 48.8 | 9.8 | 6.8 | 13.9 | 3.6 | 6.1 | 84.0 MB
Ours | EfficientNet-b0 | 12.1 | 51.8 | 13.3 | 7.5 | 18.7 | 4.0 | 6.9 | 11.2 MB
Ours | EfficientNet-b6 | 10.3 | 49.0 | 9.6 | 6.9 | 14.5 | 3.8 | 5.9 | 51.7 MB
Ours | HRNet-40 | 9.8 | 48.0 | 8.8 | 6.5 | 13.2 | 3.5 | 5.8 | 53.6 MB
Ablation of the global feature mining and aggregation network (MR⁻², %; GFM: global feature modeling; LFA: local feature aggregation; feature normalization via bilinear interpolation or transposed convolution):

Method | With GFM | With LFA | Feature Norm. (Interpolate) | Feature Norm. (Trans. Conv.) | Reasonable | Heavy | Partial | Bare
---|---|---|---|---|---|---|---|---
Ours | ✓ | | | | 9.8 | 48.0 | 8.8 | 6.5
Ours | ✓ | ✓ | | | 9.3 | 47.3 | 8.0 | 6.4
Ours | ✓ | ✓ | ✓ | | 9.6 | 46.7 | 8.4 | 6.5
Ours | ✓ | ✓ | | ✓ | 9.2 | 47.0 | 8.0 | 6.3
Ablation of the multiple receptive field (MRF) module design (MR⁻², %; lower is better):

Method | Reasonable | Heavy | Partial | Bare
---|---|---|---|---|
baseline | 9.2 | 47.0 | 8.0 | 6.3 |
with SMRF | 9.0 | 46.0 | 8.2 | 6.1 |
with TMRF | 9.6 | 46.3 | 9.1 | 6.3 |
with AMRF (ours) | 8.7 | 46.1 | 7.4 | 6.0 |
Effect of the number of AMRF branches (MR⁻², %; lower is better):

Number of Branches | Reasonable | Heavy | Partial | Bare
---|---|---|---|---|
baseline | 9.2 | 47.0 | 8.0 | 6.3 |
1 | 8.8 | 46.1 | 8.0 | 6.1 |
2 | 9.0 | 46.8 | 7.7 | 6.3 |
3 (ours) | 8.7 | 46.1 | 7.4 | 6.0 |
4 | 9.4 | 47.3 | 8.6 | 6.4 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhu, Y.; Huang, H.; Yu, H.; Chen, A.; Zhao, G. CAPNet: Context and Attribute Perception for Pedestrian Detection. Electronics 2023, 12, 1781. https://doi.org/10.3390/electronics12081781