Improved Real-Time Fire Warning System Based on Advanced Technologies for Visually Impaired People
Abstract
1. Introduction
- Fire image collection and labeling for indoor fire-detection datasets are challenging owing to the lack of open-access fire image datasets for real-world scenarios, and a large amount of labeled training data is key to the success of any deep learning model.
- Fires have no fixed shape or size and can even vary in color, so achieving high detection accuracy is difficult; the data annotation process is also time-consuming.
- Distinguishing a real fire from an artificial fire or a fire-like scene (e.g., sunlight or indoor lighting) is also challenging and is a frequent cause of false alarms in real-time detection, because sunlight and lighting pixel values lie very close to fire color intensities even when no real fire is present.
- Ensuring real-time fire detection and notification alongside indoor navigation is a further challenge, because a rapidly spreading fire must not be allowed to endanger the health and lives of blind users.
- Most traditional methods use sensor-based technologies to detect fire scenes [3,4,5,6]; however, these technologies are sensitive to environmental and illumination changes. Further research has shown that camera-based fire-detection systems achieve much better results, with high prediction accuracy, low cost, and reduced processing time, thereby enhancing fire safety [7,8,9,10].
- AI-based fire detection is a potentially powerful approach for detecting flames and warning building occupants in different indoor environments because it is highly distinctive, does not depend on color, size, or shape, and is robust to illumination changes [11,12]. However, traditional computer vision and image processing approaches have been applied only to simple fires and are appropriate only under certain conditions [13,14,15,16].
- We propose a fire detection and classification system that monitors and predicts home fire emergencies to give BVI people early warning and prevent or reduce the risk to life.
- Smart glasses used as an assistive technology enable blind users to self-evacuate with minimal assistance during emergencies such as fires and floods. Furthermore, this assistive technology can be easily adopted in many applications for BVI people, such as indoor or outdoor navigation, obstacle avoidance, education, and traveling.
- The goal of this research is to converge Internet-of-Things devices and AI methods in the field of fire prevention, safety, and indoor navigation for BVI people. Smart glasses paired with a security camera can improve the lives of blind users as they perform important daily tasks, such as cooking and heating.
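The color-ambiguity problem noted above can be made concrete with a minimal hand-crafted rule of the kind used in classical approaches. The specific rule (`R > G > B` plus a brightness threshold) and the sample pixel values below are illustrative assumptions, not the method proposed in this paper:

```python
import numpy as np

def fire_candidate_mask(img, r_thresh=190):
    """Classic hand-crafted color rule: a pixel is a fire candidate when
    R > G > B and R exceeds a fixed brightness threshold. Illustrative only."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (r > r_thresh) & (r > g) & (g > b)

# Synthetic 1x3 "image": a flame-like pixel, a sunlight-like pixel, a grey wall.
img = np.array([[[255, 120, 30],     # flame: bright red-orange
                 [250, 210, 160],    # direct sunlight: also warm and bright
                 [128, 128, 128]]],  # grey wall
               dtype=np.uint8)

mask = fire_candidate_mask(img)
print(mask[0])  # [ True  True False] -> the sunlight pixel is a false positive
```

The sunlight pixel satisfies the same color ordering as the flame pixel, which is exactly why learned detectors are preferred over fixed color rules.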
2. Proposed Fire-Detection and Notification Method
2.1. System Overview
2.2. Dataset
2.3. Fire Detection and Notification
2.3.1. Model Selection
2.3.2. Fire Detection
3. Experimental Results
Qualitative Evaluation
4. Conclusions and Future Directions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Korean Statistical Information Service. Available online: http://kosis.kr (accessed on 10 August 2021).
- Ahrens, M.; Maheshwari, R. Home Structure Fires; National Fire Protection Association: Quincy, MA, USA, 2021. [Google Scholar]
- Ba, R.; Chen, C.; Yuan, J.; Song, W.; Lo, S. SmokeNet: Satellite Smoke Scene Detection Using Convolutional Neural Network with Spatial and Channel-Wise Attention. Remote Sens. 2019, 11, 1702. [Google Scholar] [CrossRef]
- Barmpoutis, P.; Stathaki, T.; Dimitropoulos, K.; Grammalidis, N. Early Fire Detection Based on Aerial 360-Degree Sensors, Deep Convolution Neural Networks and Exploitation of Fire Dynamic Textures. Remote Sens. 2020, 12, 3177. [Google Scholar] [CrossRef]
- Khan, F.; Xu, Z.; Sun, J.; Khan, F.M.; Ahmed, A.; Zhao, Y. Recent Advances in Sensors for Fire Detection. Sensors 2022, 22, 3310. [Google Scholar] [CrossRef]
- Muhammad, K.; Khan, S.; Elhoseny, M.; Ahmed, S.H.; Baik, S.W. Efficient Fire Detection for Uncertain Surveillance Environment. IEEE Trans. Ind. Inform. 2019, 15, 3113–3122. [Google Scholar] [CrossRef]
- Li, J.; Yan, B.; Zhang, M.; Zhang, J.; Jin, B.; Wang, Y.; Wang, D. Long-Range Raman Distributed Fiber Temperature Sensor with Early Warning Model for Fire Detection and Prevention. IEEE Sens. J. 2019, 19, 3711–3717. [Google Scholar] [CrossRef]
- Valikhujaev, Y.; Abdusalomov, A.; Cho, Y.I. Automatic fire and smoke detection method for surveillance systems based on dilated CNNs. Atmosphere 2020, 11, 1241. [Google Scholar] [CrossRef]
- Avazov, K.; Mukhiddinov, M.; Makhmudov, F.; Cho, Y.I. Fire Detection Method in Smart City Environments Using a Deep Learning-Based Approach. Electronics 2021, 11, 73. [Google Scholar] [CrossRef]
- Li, P.; Zhao, W. Image fire detection algorithms based on convolutional neural networks. Case Stud. Therm. Eng. 2020, 19, 100625. [Google Scholar] [CrossRef]
- Abdusalomov, A.; Baratov, N.; Kutlimuratov, A.; Whangbo, T.K. An improvement of the fire detection and classification method using YOLOv3 for surveillance systems. Sensors 2021, 21, 6519. [Google Scholar] [CrossRef]
- Mukhiddinov, M.; Abdusalomov, A.B.; Cho, J. Automatic Fire Detection and Notification System Based on Improved YOLOv4 for the Blind and Visually Impaired. Sensors 2022, 22, 3307. [Google Scholar] [CrossRef]
- Toulouse, T.; Rossi, L.; Celik, T.; Akhloufi, M. Automatic fire pixel detection using image processing: A comparative analysis of rule-based and machine learning-based methods. Signal Image Video Process. 2016, 10, 647–654. [Google Scholar] [CrossRef]
- Jiang, Q.; Wang, Q. Large space fire image processing of improving Canny edge detector based on adaptive smoothing. In Proceedings of the 2010 International Conference on Innovative Computing and Communication and 2010 Asia-Pacific Conference on Information Technology and Ocean Engineering, Macao, China, 30–31 January 2010; pp. 264–267. [Google Scholar]
- Zhang, Z.; Zhao, J.; Zhang, D.; Qu, C.; Ke, Y.; Cai, B. Contour based forest fire detection using FFT and wavelet. In Proceedings of the 2008 International Conference on Computer Science and Software Engineering, Wuhan, China, 12–14 December 2008; pp. 760–763. [Google Scholar]
- Celik, T.; Demirel, H.; Ozkaramanli, H.; Uyguroglu, M. Fire detection using statistical color model in video sequences. J. Vis. Commun. Image Represent. 2007, 18, 176–185. [Google Scholar] [CrossRef]
- Kuldoshbay, A.; Abdusalomov, A.; Mukhiddinov, M.; Baratov, N.; Makhmudov, F.; Cho, Y.I. An improvement for the automatic classification method for ultrasound images used on CNN. Int. J. Wavelets Multiresolut. Inf. Process. 2022, 20, 2150054. [Google Scholar]
- Umirzakova, S.; Abdusalomov, A.; Whangbo, T.K. Fully Automatic Stroke Symptom Detection Method Based on Facial Features and Moving Hand Differences. In Proceedings of the 2019 International Symposium on Multimedia and Communication Technology (ISMAC), Quezon City, Philippines, 19–21 August 2019; pp. 1–5. [Google Scholar] [CrossRef]
- Abdusalomov, A.; Whangbo, T.K. An improvement for the foreground recognition method using shadow removal technique for indoor environments. Int. J. Wavelets Multiresolut. Inf. Process. 2017, 15, 1750039. [Google Scholar] [CrossRef]
- Abdusalomov, A.; Whangbo, T.K. Detection and Removal of Moving Object Shadows Using Geometry and Color Information for Indoor Video Streams. Appl. Sci. 2019, 9, 5165. [Google Scholar] [CrossRef]
- Xu, R.; Lin, H.; Lu, K.; Cao, L.; Liu, Y. A Forest Fire Detection System Based on Ensemble Learning. Forests 2021, 12, 217. [Google Scholar] [CrossRef]
- Wang, K.; Liew, J.H.; Zou, Y.; Zhou, D.; Feng, J. PANet: Few-shot image semantic segmentation with prototype alignment. In Proceedings of the IEEE International Conference on Computer Vision (ICCV 2019), Seoul, Korea, 20–26 October 2019; pp. 9197–9206. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Sharma, A. Training the YOLOv5 Object Detector on a Custom Dataset. 2022. Available online: https://pyimg.co/fq0a3 (accessed on 15 August 2022).
- Mukhiddinov, M.; Cho, J. Smart Glass System Using Deep Learning for the Blind and Visually Impaired. Electronics 2021, 10, 2756. [Google Scholar] [CrossRef]
- Abdusalomov, A.; Mukhiddinov, M.; Djuraev, O.; Khamdamov, U.; Whangbo, T.K. Automatic salient object extraction based on locally adaptive thresholding to generate tactile graphics. Appl. Sci. 2020, 10, 3350. [Google Scholar] [CrossRef]
- Makhmudov, F.; Mukhiddinov, M.; Abdusalomov, A.; Avazov, K.; Khamdamov, U.; Cho, Y.I. Improvement of the end-to-end scene text recognition method for “text-to-speech” conversion. Int. J. Wavelets Multiresolut. Inf. Process. 2020, 18, 2050052. [Google Scholar] [CrossRef]
- Mukhriddin, M.; Jeong, R.; Cho, J. Saliency cuts: Salient region extraction based on local adaptive thresholding for image information recognition of the visually impaired. Int. Arab J. Inf. Technol. 2020, 17, 713–720. [Google Scholar]
- Redmon, J. Darknet: Open-Source Neural Networks in C. 2013–2016. Available online: http://pjreddie.com/darknet/ (accessed on 22 August 2021).
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Abdusalomov, A.; Whangbo, T.K.; Djuraev, O. A Review on various widely used shadow detection methods to identify a shadow from images. Int. J. Sci. Res. Publ. 2016, 6, 2250–3153. [Google Scholar]
- Akmalbek, A.; Djurayev, A. Robust shadow removal technique for improving image enhancement based on segmentation method. IOSR J. Electron. Commun. Eng. 2016, 11, 17–21. [Google Scholar]
- Farkhod, A.; Abdusalomov, A.; Makhmudov, F.; Cho, Y.I. LDA-Based Topic Modeling Sentiment Analysis Using Topic/Document/Sentence (TDS) Model. Appl. Sci. 2021, 11, 11091. [Google Scholar] [CrossRef]
- Kutlimuratov, A.; Abdusalomov, A.; Whangbo, T.K. Evolving Hierarchical and Tag Information via the Deeply Enhanced Weighted Non-Negative Matrix Factorization of Rating Predictions. Symmetry 2020, 12, 1930. [Google Scholar] [CrossRef]
- Ayvaz, U.; Gürüler, H.; Khan, F.; Ahmed, N.; Whangbo, T.; Abdusalomov, A. Automatic Speaker Recognition Using Mel-Frequency Cepstral Coefficients Through Machine Learning. CMC-Comput. Mater. Contin. 2022, 71, 5511–5521. [Google Scholar] [CrossRef]
- Park, M.; Ko, B.C. Two-Step Real-Time Night-Time Fire Detection in an Urban Environment Using Static ELASTIC-YOLOv3 and Temporal Fire-Tube. Sensors 2020, 20, 2202. [Google Scholar] [CrossRef]
- Shakhnoza, M.; Sabina, U.; Sevara, M.; Cho, Y.-I. Novel Video Surveillance-Based Fire and Smoke Classification Using Attentional Feature Map in Capsule Networks. Sensors 2022, 22, 98. [Google Scholar] [CrossRef]
- Zhang, S.G.; Zhang, F.; Ding, Y.; Li, Y. Swin-YOLOv5: Research and Application of Fire and Smoke Detection Algorithm Based on YOLOv5. Comput. Intell. Neurosci. 2022, 2022, 6081680. [Google Scholar] [CrossRef]
- Saponara, S.; Elhanashi, A.; Gagliardi, A. Real-time video fire/smoke detection based on CNN in antifire surveillance systems. J. Real-Time Image Proc. 2021, 18, 889–900. [Google Scholar] [CrossRef]
- Xue, Z.; Lin, H.; Wang, F. A Small Target Forest Fire Detection Model Based on YOLOv5 Improvement. Forests 2022, 13, 1332. [Google Scholar] [CrossRef]
- Shi, F.; Qian, H.; Chen, W.; Huang, M.; Wan, Z. A Fire Monitoring and Alarm System Based on YOLOv3 with OHEM. In Proceedings of the 2020 39th Chinese Control Conference (CCC), Shenyang, China, 27–29 July 2020; pp. 7322–7327. [Google Scholar] [CrossRef]
- Jakhongir, N.; Abdusalomov, A.; Whangbo, T.K. 3D Volume Reconstruction from MRI Slices based on VTK. In Proceedings of the 2021 International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Korea, 19–21 October 2021; pp. 689–692. [Google Scholar] [CrossRef]
- Nodirov, J.; Abdusalomov, A.B.; Whangbo, T.K. Attention 3D U-Net with Multiple Skip Connections for Segmentation of Brain Tumor Images. Sensors 2022, 22, 6501. [Google Scholar] [CrossRef] [PubMed]
- Imran; Iqbal, N.; Ahmad, S.; Kim, D.H. Towards Mountain Fire Safety Using Fire Spread Predictive Analytics and Mountain Fire Containment in IoT Environment. Sustainability 2021, 13, 2461. [Google Scholar] [CrossRef]
Dataset | Flame Frames | | | | Non-Flame Frames | | | | Total
---|---|---|---|---|---|---|---|---|---
 | GitHub | Kaggle | Flickr | | GitHub | Kaggle | Flickr | |
Indoor environment | 1572 | 1693 | 855 | 741 | 962 | 987 | 327 | 258 | 7395
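As a quick sanity check, the per-source frame counts in the table sum to the reported total; the grouping of the eight counts into flame and non-flame sources below follows the table layout:

```python
# Per-source frame counts transcribed from the dataset table.
flame = [1572, 1693, 855, 741]      # flame frames by source
non_flame = [962, 987, 327, 258]    # non-flame frames by source

total = sum(flame) + sum(non_flame)
print(sum(flame), sum(non_flame), total)  # 4861 2534 7395
```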
Models | Backbone | Image Size | AP | AP50 | Speed (ms) | FLOPs | Params (M) | Epochs
---|---|---|---|---|---|---|---|---
YOLOv5n | CSPDarknet-53 | 640 × 640 | 28.0% | 45.8% | 1.7 | 4.5 | 1.9 | 300
YOLOv5s | CSPDarknet-53 | 640 × 640 | 37.4% | 57.6% | 2.1 | 16.5 | 7.2 | 300
YOLOv5m | CSPDarknet-53 | 640 × 640 | 45.4% | 63.9% | 2.8 | 49.1 | 21.2 | 300
YOLOv5l | CSPDarknet-53 | 640 × 640 | 49.1% | 65.5% | 3.7 | 110 | 46.5 | 300
YOLOv5x | CSPDarknet-53 | 640 × 640 | 50.6% | 67.4% | 5.9 | 205.7 | 86.7 | 300
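Model selection is a trade-off between accuracy and latency. The sketch below, with figures transcribed from the table above, picks the most accurate YOLOv5 variant that fits a per-image inference budget; the budget values themselves are illustrative assumptions:

```python
# (AP %, GPU speed ms, params M) per YOLOv5 variant, transcribed from the table.
variants = {
    "YOLOv5n": (28.0, 1.7, 1.9),
    "YOLOv5s": (37.4, 2.1, 7.2),
    "YOLOv5m": (45.4, 2.8, 21.2),
    "YOLOv5l": (49.1, 3.7, 46.5),
    "YOLOv5x": (50.6, 5.9, 86.7),
}

def pick_model(budget_ms):
    """Return the most accurate variant whose per-image time fits the budget."""
    fitting = {name: v for name, v in variants.items() if v[1] <= budget_ms}
    if not fitting:
        return None
    return max(fitting, key=lambda name: fitting[name][0])

print(pick_model(4.0))   # YOLOv5l: best AP among variants under 4 ms
print(pick_model(10.0))  # YOLOv5x: everything fits, take the most accurate
```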
Hardware | Detailed Specifications |
---|---|
Graphics Processing Unit | GeForce RTX 2080 Ti 11 GB (2 are installed)
Central Processing Unit | 9th Gen Intel Core i7-9700K (4.90 GHz)
Random Access Memory | DDR4 16 GB (4 are installed) |
Storage | SSD: 512 GB HDD: 2 TB (2 are installed) |
Motherboard | ASUS PRIME Z390-A |
Operating System | Ubuntu Desktop |
Local Area Network | Internal port: 10/100 Mbps; external port: 10/100 Mbps
Power | 1000 W (+12 V Single Rail) |
Hardware | Detailed Specifications |
---|---|
Processor | Broadcom BCM2837B0 chipset, 1.4 GHz quad-core ARM Cortex-A53 (64-bit)
Graphics Processing Unit | Dual-core VideoCore IV multimedia co-processor
Memory | 1 GB LPDDR2 SDRAM
Connectivity Wireless LAN | 2.4 GHz and 5 GHz IEEE 802.11 b/g/n/ac, maximum range of 100 m
Connectivity Bluetooth | IEEE 802.15 Bluetooth 4.2, BLE, maximum range of 50 m
Connectivity Ethernet | Gigabit Ethernet over USB 2.0 (maximum throughput 300 Mbps)
Video and Audio Output | 1 × full-size HDMI, 3.5 mm audio output jack, 4 × USB 2.0 ports
Camera | 15-pin MIPI Camera Serial Interface (CSI-2) |
Operating System | Boots from Micro SD card, running a version of the Linux operating system or Windows 10 IoT |
SD Card Support | Micro SD format for loading operating system and data storage |
Power | 5 V/2.5 A DC via micro-USB connector |
Algorithms | P | R | FM | IoU | Average
---|---|---|---|---|---|
ELASTIC-YOLOv3 [36] | 0.956 | 0.969 | 0.937 | 0.901 | 0.940 |
Capsule Networks [37] | 0.864 | 0.816 | 0.892 | 0.922 | 0.873 |
Swin-YOLOv5 [38] | 0.948 | 0.936 | 0.942 | 0.958 | 0.943 |
YOLOv2 CNN [39] | 0.834 | 0.716 | 0.762 | 0.882 | 0.801 |
YOLOv5 Improvement [40] | 0.937 | 0.942 | 0.943 | 0.941 | 0.940 |
Dilated CNNs [8] | 0.971 | 0.974 | 0.982 | 0.957 | 0.971 |
Improved YOLOv3 [11] | 0.968 | 0.982 | 0.985 | 0.967 | 0.975 |
Improved YOLOv4 [9] | 0.976 | 0.958 | 0.979 | 0.982 | 0.979 |
YOLOv3 + OHEM [41] | 0.866 | 0.778 | 0.892 | 0.863 | 0.845 |
Improved YOLOv4 BVI [12] | 0.968 | 0.981 | 0.974 | 0.974 | 0.977 |
Proposed Method | 0.982 | 0.997 | 0.997 | 0.991 | 0.982 |
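The metrics in the table follow the standard definitions. A minimal sketch of how precision, recall, F-measure, and bounding-box IoU are typically computed is shown below; the detection counts and box coordinates are hypothetical, chosen only for illustration:

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from true/false positives and false negatives."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f = 2 * p * r / (p + r)  # F-measure: harmonic mean of precision and recall
    return p, r, f

def box_iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

p, r, f = precision_recall_f1(tp=97, fp=3, fn=5)  # hypothetical counts
print(round(p, 3), round(r, 3), round(f, 3))      # 0.97 0.951 0.96
print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))    # 25 / 175 overlap
```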
Transmission and Image Processing | Average Frame Processing Time (s) |
---|---|
Bluetooth transmission | 0.11 |
5G/Wi-Fi transmission | 0.32 |
Fire detection and notification | 0.83 |
Total | 1.26 |
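From the stage latencies above, the end-to-end budget and the resulting effective frame rate follow directly; the dictionary keys are shorthand for the table rows:

```python
# Average per-frame latency of each pipeline stage, in seconds (from the table).
stages = {
    "bluetooth_transmission": 0.11,   # smart glasses to companion device
    "5g_wifi_transmission": 0.32,     # companion device to server
    "detection_and_notification": 0.83,
}

total_s = sum(stages.values())
fps = 1.0 / total_s  # effective end-to-end frames per second
print(round(total_s, 2), round(fps, 2))  # 1.26 0.79
```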
Criterion | Improved YOLOv3 [11] | Improved YOLOv4 [9] | Improved YOLOv4 BVI [12] | Proposed Method |
---|---|---|---|---|
Scene Independence | standard | robust | standard | robust |
Object Independence | standard | robust | robust | standard |
Robust to Noise | powerless | robust | standard | robust |
Robust to Color | standard | standard | powerless | robust |
Small Fire Detection | robust | standard | robust | robust |
Multiple Fire Identification | standard | powerless | powerless | robust |
Processing Time | powerless | standard | robust | robust |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Abdusalomov, A.B.; Mukhiddinov, M.; Kutlimuratov, A.; Whangbo, T.K. Improved Real-Time Fire Warning System Based on Advanced Technologies for Visually Impaired People. Sensors 2022, 22, 7305. https://doi.org/10.3390/s22197305