Article

Design of a Mixed Reality System for Simulating Indoor Disaster Rescue

1
Department of Computer Science and Engineering, Sunmoon University, Asan-si 31460, Republic of Korea
2
Department of Artificial Intelligence and Software Technology, Sunmoon University, Asan-si 31460, Republic of Korea
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(7), 4418; https://doi.org/10.3390/app13074418
Submission received: 28 February 2023 / Revised: 27 March 2023 / Accepted: 28 March 2023 / Published: 30 March 2023

Abstract

Modern buildings are large and complex, and as more time is spent inside them, the risk of indoor disasters such as fires and gas leaks increases. In the event of such a disaster, the success of the rescue operation depends on the ability of the rescue team to navigate and respond to the complex environment. To address this challenge, we designed a mixed reality (MR)-based system for simulating indoor disaster rescue. The system uses augmented indoor maps and MR technology to help rescue teams respond quickly and effectively to unexpected environmental variables and carry out rescue activities. To test the effectiveness of the system, we created a virtual disaster scenario and evaluated rescue and escape performance within a “golden time” shortened from 5 min to 2 min to reflect the simplified virtual environment. The results showed that the system is particularly effective at increasing the speed of rescue activities, and indicated the potential for further improvement through additional research. By applying this system to disaster rescue simulations and training, the safety of firefighters and rescuers can be improved by reducing the risk of injury during rescue operations.

1. Introduction

People now spend 80–90% of their time within large and elaborate modern buildings [1]. However, as the number of people indoors increases, building navigation becomes more complex. If an indoor disaster occurs due to a fire or gas leak, a large number of casualties and severe property damage can occur [2]; thus, the importance of rescue teams being properly prepared in the event of an indoor disaster has increased. Rescue operations in such situations can be challenging; the performance of various tasks in a complex environment can lead to fatigue among rescuers. Additionally, indoor disaster environments can pose difficulties such as obstacles, blockages caused by fire, and unreliable communication conditions [3]. Rescue teams must make decisions and take action quickly to perform life-saving work; however, making judgments in this type of environment is difficult, and the need to overcome disruptions in communication with the command-and-control unit, for example, can lead to rescue delays.
Current rescue operations are much more advanced than in the past [4] due to the development of new protective equipment and communication technology for rescue teams. However, few technologies are available to improve rescue efficiency during lifesaving events. In particular, if an indoor space is blocked by an obstacle or fire, it may take time for rescuers to decide how to move through it. Mixed reality (MR) technology can be applied to minimize delays in the rescue response by allowing rescue teams to practice decision-making under various scenarios.
Mixed reality combines aspects of virtual reality (VR) and augmented reality (AR): it is possible to interact with virtual objects in the real world, and with real objects in a virtual world [5]. This increases user immersion and allows the important features of digital objects to convey information efficiently. We exploited these characteristics when designing our MR-based indoor disaster rescue system.
Most firefighting studies related to MR have focused on training. In this research, we designed an MR system to simulate indoor disaster rescue. The system emulates disaster scenarios, and the user navigates to the space where the disaster occurred. It then provides augmented indoor maps for navigating to specific points to rescue victims in various situations, before ultimately escaping the disaster area via various routes to complete the lifesaving exercise. We evaluated the performance of our technology based on the efficiency of the rescue process with respect to the “golden time” [5], i.e., the time before smoke inhalation becomes fatal.

2. Related Work

In this section, we summarize MR-based research on disaster safety and present example disaster case studies.

2.1. Disaster Safety

Disasters include events such as earthquakes and fires, which often cause damage that is sufficiently severe and widespread to exceed management capabilities. These events often result in injury, property damage, and even loss of life, and thus require an immediate and effective response (especially as they tend to happen unexpectedly). There have been numerous studies on safety in disaster situations. In recent years, the huge amount of data generated during disasters, i.e., “big data,” has led to the development of new safety technologies aimed at improving disaster responses. For example, blockchain technology can be used to verify and store data (i.e., improve disaster data management) to identify the cause of a disaster [6,7]. Lee et al. conducted a study that analyzed and classified the characteristics of “cold wave” disasters using daily report data [8]. Munawar et al. evaluated the safety of facilities during natural disasters by accumulating big data [9]. Kaur and colleagues used a system that collects big data from Internet of Things (IoT) sensors for disaster detection and prompt evacuation [10].
Big data collection, analysis, and visualization are all important for disaster management. Alabdulaali et al. aimed to make disaster-related big data more understandable [11]. As research on disaster-related big data has increased, so too has the use of artificial intelligence in disaster research. Ketmaneechairat and colleagues applied artificial intelligence to convert voice data into text for automatic disaster classification [12]. Elangovan et al. used recurrent neural network models and contextual data from speech to analyze disasters [13]. Xie et al. applied a word embedding model to evaluate disasters by analyzing text data from multiple news articles and social media posts [14]. Duggal et al. analyzed disaster occurrences using sensor data and machine learning models [15]. Zhao and colleagues used an artificial intelligence model to identify the optimal evacuation route during a disaster, based on user location information obtained from sensors [16]. Salmi et al. achieved earthquake detection using seismic vibration data in conjunction with convolutional neural network-long short-term memory models [17]. Majid and colleagues conducted a study that used convolutional neural network models to predict fires through smoke image analysis [18]. Magherini et al. applied a convolutional neural network model, ResNet50, to determine whether a person was wearing a mask by analyzing images captured by a camera [19]. Ahn et al. used closed-circuit television (CCTV) footage to detect fires in advance during fire disasters [20]. Khan et al. developed a fire detection system based on deep learning models to classify wildfire images as “fire” or “normal” by training the model on wildfire footage data [21]. Huang et al. used the YOLOv3 model to classify dangerous areas based on image data collected via CCTV [22]. Yu et al. developed a deep learning model to detect underground disasters [23]. 
Thus, artificial intelligence plays a key role in disaster management and mitigation efforts. In a disaster situation, it is crucial to understand the current status of the surrounding environment. Therefore, many disaster management systems collect and analyze various forms of environmental data, and then use that data for disaster preparation training. A growing number of studies are using IoT for disaster management systems.
IoT technology enables the collection of data from various sensors in real time, for a more effective and prompt disaster response. Anuradha and colleagues developed a disaster safety monitoring system that uses IoT sensors to collect data on, and monitor, environmental variables such as temperature, rainfall, and vibration under normal conditions, as a way to prepare for potential disasters [24]. Chhetri et al. applied IoT sensors to detect potential disasters by monitoring electrical energy, which can cause fires [25]. Dai et al. created a system for managing marine environments using IoT sensors in the sea [26]. Nguyen et al. employed IoT sensor technology to detect missing people in disaster situations [27]. Brar et al. designed a disaster situation decision-making system using IoT sensors to enhance rescue workers’ safety during disasters [28]. The above studies describe some of the many disaster management and safety measures that rely on real-time data and early warning systems to support and assist rescue workers and victims.

2.2. Mixed Reality

Mixed reality lies on a continuum between real and virtual environments, and thus involves both real and virtual objects; it is an extension of existing VR technology [25]. In other words, VR and AR are fused: a virtual object can be embedded in a fully real-world environment, and a digitized real-world object can be brought into a virtual environment [26]. Figure 1 illustrates the concept of mixed reality.
In one study on mixed reality, Acampora et al. trained police officers to recognize criminals via a mixed-reality game [5]. Gattullo et al. successfully used mixed-reality applications to support lectures [27]. Kabuye et al. used mixed reality to establish a surgical system that closely resembled real surgery [28]. Wu et al. used mixed reality to ensure the safety of construction workers by making them aware of the dangers associated with construction site equipment and materials [29]. Choi et al. used mixed reality to ensure a safe distance between robots and humans during human–robot collaboration [30]. Mourtzis et al. worked with managers and field workers in remote locations and applied mixed reality to real-time factory work [31]. Ulloa et al. used mixed reality for remote robotic manipulation [32]. Franzò et al. employed mixed reality to derive a 3D virtual model for real-time visualization of exercise posture, which enhanced training and rehabilitation [33]. Thus, mixed reality has many applications; we considered that mixed reality might aid disaster rescue.

3. Materials and Methods

In this section, we describe the scenario and design method for our MR system for indoor disaster response and collaboration. The MR equipment used in our system collects spatial information using a built-in localization and mapping sensor. The mapping is driven by an algorithm built into the MR device. Based on the mapped indoor space, the user completes a simulation involving the rescue of trapped individuals in an indoor disaster situation. In the simulation, it is assumed that the user is a firefighter. After the rescue operation, the user moves to the exit and escapes to complete the simulation. The scenario-based rescue operations were designed for two or more users. Figure 2 shows a schematic of the MR system simulating indoor disaster rescue.

3.1. Simulated Scenarios

This section describes the indoor disaster response scenarios simulated by the MR system. After starting the system, the user navigates to the space where the indoor disaster occurred by following the navigation guidance, and the status window is activated. Virtual objects are used to represent fire disasters. The user rescues the victims of the fire disaster. Figure 3 provides an overview of a simulated rescue operation.
The MR system simulates three scenarios, as follows:
  • Scenario 1 (S1): A virtual disaster occurs in one location, and individuals trapped in that space are evacuated through the central stairway.
  • Scenario 2 (S2): A virtual disaster occurs in two locations, and trapped individuals are again evacuated through the central stairway.
  • Scenario 3 (S3): A virtual disaster occurs in one location, and trapped individuals should be evacuated through the central stairway. However, due to an unexpected fire in another corridor, evacuation ultimately proceeds through another exit.
The system uses a map of a university building. The user’s location is the “starting point”. Figure 4 provides an overview of the map and scenarios.
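The three scenarios above can be summarized as configuration data. The sketch below is illustrative only (not the authors' implementation); the names and structure are hypothetical, but each entry encodes what Section 3.1 states: the number of disaster sites, the planned exit, and whether S3's unexpected fire forces a reroute.

```python
# Hypothetical configuration for the three scenarios described in Section 3.1.
SCENARIOS = {
    "S1": {"disaster_sites": 1, "exit": "central_stairway", "reroute": False},
    "S2": {"disaster_sites": 2, "exit": "central_stairway", "reroute": False},
    "S3": {"disaster_sites": 1, "exit": "central_stairway", "reroute": True,
           "alternate_exit": "other_exit"},
}

def planned_exit(scenario):
    """Return the exit actually used, accounting for mid-run rerouting (S3)."""
    cfg = SCENARIOS[scenario]
    return cfg["alternate_exit"] if cfg["reroute"] else cfg["exit"]
```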

3.2. System Operation

In this section, the system operation is described. First, the user collects spatial information to ascertain their current location. This information is then used to create a virtual indoor disaster scenario, and to display the user’s current location. When the system is started, it maps the current location. The current and stored mapping information are then compared, and the user’s current location is displayed on the indoor map. Using the mapping information, a virtual indoor disaster scenario is created in a random space. The user then moves to the location of the indoor disaster. A navigation function is then activated; it uses arrows and a status window to indicate distance and direction. When the user arrives at the location of the indoor disaster, they must search for and rescue survivors. Once all survivors have been rescued, the user can move to the central stairway to escape. Figure 5 illustrates the sequence of operations of the MR system.
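The sequence above can be read as an ordered state machine, where each phase completes before the next begins. The following is a minimal sketch under that reading; the state names are hypothetical and do not come from the system itself.

```python
# High-level sketch of the operation sequence in Section 3.2 (hypothetical
# state names): map the space, locate the user, generate the disaster,
# navigate, rescue, and escape.
STATES = ["map_space", "locate_user", "spawn_disaster",
          "navigate_to_site", "rescue_survivors", "escape"]

def next_state(current):
    """Advance to the following phase; the final phase ends the run."""
    i = STATES.index(current)
    return STATES[i + 1] if i + 1 < len(STATES) else None
```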

3.3. System Design

This section describes how we designed the MR system using the Unity game engine, in which objects rotate and move within a left-handed coordinate system. On system startup, an icon representing the user’s location is placed on the indoor map. The camera is then rotated 90° about the X-axis and 180° about the Y-axis, with the Z-axis fixed, so that the indoor map and user position icon are viewed from the front. This design is used because Y-axis rotation would otherwise rotate the map itself; the X-axis governs movement, and the Z-axis rotates the map according to the direction of the user position icon (based on the X-axis value). To accurately place the user’s location icon on the fixed indoor map user interface, the system divides the X and Z coordinates by a position-correction value of 5.0 and adds the results to the starting-point coordinates. This calibration is necessary because the coordinates calculated by the hardware and software differ: five pixels on the screen correspond to a movement of 1.0 units in the Unity coordinate system. Figure 6 shows how the system’s user position icon is displayed on the indoor map.
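The calibration step above (divide the X and Z coordinates by the correction value 5.0, then offset by the starting point) can be sketched as follows. This is not the authors' Unity code; the function and parameter names are hypothetical, and only the arithmetic reflects the description in the text.

```python
# Sketch of the icon-calibration step: world-space (X, Z) coordinates are
# divided by a correction value of 5.0 and added to the starting point to
# obtain the icon's position on the fixed indoor map. Names are hypothetical.
CORRECTION = 5.0  # position-correction value stated in Section 3.3

def world_to_map(world_x, world_z, start_x, start_z):
    """Map a user's world-space (X, Z) position onto the indoor map UI."""
    map_x = start_x + world_x / CORRECTION
    map_z = start_z + world_z / CORRECTION
    return map_x, map_z
```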
Once the system is started and the status window is activated, the user sees their current direction, the next direction, the distance to the next destination, and the time required to reach it. Unity’s thread-processing constraints must be handled because the status window must be updated whenever newly calculated values arrive. Because the socket thread cannot access the status window directly, it makes requests to the main thread: a queue is created and shared between the two threads, the socket thread enqueues data, and Unity’s main thread periodically checks the queue and updates the status window whenever new data are available. Figure 7 shows the status window processes.
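The queue-based handoff described above is a standard producer-consumer pattern. The sketch below illustrates it in Python rather than Unity/C#; the worker thread stands in for the socket thread, `main_thread_tick` stands in for Unity's per-frame update, and all names are hypothetical.

```python
# Producer-consumer sketch of the status-window update path: the socket
# thread may not touch the UI, so it enqueues updates that only the main
# thread applies. Illustrative only; not the authors' implementation.
import queue
import threading

updates = queue.Queue()   # shared between the socket thread and main thread
status_window = {}        # stand-in for the UI status window

def socket_thread():
    # The socket thread only enqueues data; it never modifies the UI.
    updates.put({"next_direction": "left", "distance_m": 12.5})

def main_thread_tick():
    # Called periodically on the main thread (analogous to Unity's Update()).
    while not updates.empty():
        status_window.update(updates.get())

t = threading.Thread(target=socket_thread)
t.start()
t.join()
main_thread_tick()
print(status_window["next_direction"])  # left
```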
When the MR system is activated, a virtual indoor disaster situation, including a virtual victim, is randomly generated. An escape point is created upon rescue completion. The navigation function displays an arrow in front of the user. To guide the user to the victim’s location, the arrow rotates according to the angle of the axis toward the victim. A correction value of 0.5 is applied so that the arrow remains fixed in the front position while pointing at the victim; the arrow only rotates in place, which aids interpretation. For the escape point created after victim rescue, the arrow rotates according to the angle of the designated escape route. Because exits generally involve stairs, the simulation is designed such that escape is complete when the Y-axis value is <0. Figure 8 describes the navigation function.
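The arrow rotation and the Y < 0 escape condition can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the arrow's yaw is taken as the angle from the user to the target on the X-Z plane, and escape is flagged once the vertical (Y) coordinate drops below zero, as when descending the exit stairs.

```python
# Hypothetical sketch of the navigation-arrow logic described above.
import math

def arrow_yaw_degrees(user_x, user_z, target_x, target_z):
    """Angle (degrees, from the forward Z-axis) the arrow rotates to point
    at the target; atan2 handles all quadrants."""
    return math.degrees(math.atan2(target_x - user_x, target_z - user_z))

def has_escaped(user_y):
    """Escape completes when the user descends below the starting floor."""
    return user_y < 0.0
```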
As stated above, the MR system simulates disaster situations requiring the rescue of victim/s. Pre-mapped spatial data are stored in the device’s memory. When activated, the system randomly generates virtual disaster sites and victims asynchronously. A button corresponds to the victim’s location; when the button is clicked, the victim object is deactivated (i.e., they are rescued). Figure 9 shows a schematic of the indoor virtual disaster.
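The rescue interaction above (click a victim's button to deactivate the victim object; escape guidance begins once all victims are inactive) can be sketched as follows. The class and function names are hypothetical; only the deactivate-on-click behavior comes from the text.

```python
# Hypothetical sketch of the rescue interaction: clicking a victim's
# button deactivates the victim object (i.e., the victim is rescued),
# and the escape phase begins once all victims are inactive.
class Victim:
    def __init__(self):
        self.active = True   # an active object represents an unrescued victim

def click_rescue(victim):
    victim.active = False    # deactivating the object marks the rescue

def all_rescued(victims):
    return all(not v.active for v in victims)
```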
The system is configured to run three scenarios, as shown in Figure 10. The situation varies among the scenarios and users can select the scenario of their choice. Figure 11 shows the system specification.

3.4. System Client

This section describes the system client (see also Section 3.3). When the system is started, the user’s location is initialized. The status window, indoor map, user location icon, and navigation arrow are displayed. Figure 12 shows the features activated upon system startup. Number 1 in Figure 12 represents the user’s location on the indoor map. Number 2 represents the navigation function, which guides the user to the disaster location. Number 3 represents the function that displays distances and the estimated time to escape. Number 4 represents virtual disaster sites. Finally, number 5 represents the victim. The scenarios (S1–S3) are described in Section 3.1.
When the user follows the navigation arrow, they arrive at the victim’s location. The number of victims varies from one to four and is randomly determined. When the user arrives at the victim’s location and clicks the button at the bottom of the victim, the victim is saved in scenarios S1–S3. Once the user rescues all victims, navigation guidance to the exit is activated. Figure 13 shows the process of rescuing victims.
After rescuing a victim, the user moves toward the exit by following the navigation guidance. In a typical building, most exits involve stairs. Thus, based on the scenario described in Section 3.2, descending from the 4th to the 3rd floor completes the escape process and shuts down the system. This corresponds to scenarios S1 and S2. Figure 14 shows the process of rescuing the victim and moving toward the exit.
While rescuing the victim and moving toward the exit, an unexpected fire occurs in the middle corridor. At this point, the navigation arrow guides the user from the currently targeted exit toward another exit. This corresponds to scenario S3; Figure 15 shows this process.
A full video of the experiment can be found in the Supplementary Materials Section.

4. Experimental Results

This section describes the system environment and performance evaluation. The proposed MR system uses Unity, as stated above, which can handle VR/AR systems. Table 1 and Table 2 list the specifications of the MR system and device, respectively.
Time is the most crucial factor when rescuing survivors. Five minutes is the golden time for rescuing survivors of indoor disasters such as fires [5], as >60% of fire deaths are caused by suffocation due to gas and smoke [34]. Rescue operations in real-world disaster environments, however, involve many complex variables: a rescuer may find a person but first need to remove obstacles or extinguish a fire before reaching them. In our virtual disaster environment, these operations are simplified, as victims are rescued with a single touch. We therefore assumed that obstacle removal and fire extinguishing would take about 1 min 30 s, and reduced the 5 min golden time to 2 min for the rescue simulations. A total of ten participants (eight men and two women) evaluated the system under this time limit; three had experienced MR environments (VR or AR) at least once, while the other seven had no experience with these environments. Each participant completed one experiment per scenario (three experiments in total). We evaluated the performance of our MR system by recording the time each participant took to rescue the victim(s) and escape. The evaluation indices for the scenarios are listed in Table 3; S1, S2, and S3 focus on speed, accuracy, and the ability to handle unexpected situations, respectively.
Figure 16 shows the average escape times for the three scenarios. After completing the experiments, participants were asked to provide feedback to guide future research aimed at system improvements.
In S1, rescue and escape were achieved with a reduction in the golden time of 39.3%, indicating that the navigation guidance provided by the system can enable the rapid rescue of victims and subsequent escape to safety of the rescuer. In S2, the golden time reduction was 11.9%. Finally, the golden time reduction was about 20.6% in S3. Thus, the re-exploration time required for escape following an unexpected situation was reduced. The mean reduction in golden time was 23.9%. Thus, the MR system facilitated indoor disaster rescue. Ways to potentially improve the prototype system are summarized in Table 4 based on participant feedback.
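The reported mean follows directly from the three per-scenario reductions; a quick check:

```python
# Recomputing the mean golden-time reduction from the per-scenario
# values for S1-S3 (39.3%, 11.9%, and 20.6%).
reductions = [39.3, 11.9, 20.6]
mean_reduction = round(sum(reductions) / len(reductions), 1)
print(mean_reduction)  # 23.9
```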
Based on the participants’ feedback, support for multiple devices should be the top priority for MR system improvement. In the system verification stage, an individual standing next to the person wearing the equipment interacted with the virtual objects by hand. We hope to improve the system such that it can be used in conjunction with many MR-compatible devices. Other suggested improvements included addressing overlap of virtual objects, improving rescue operations, and minimizing spatial position error. For the designed system to be applied for training in the response to indoor disaster situations, the equipment used for actual rescues needs to be incorporated as virtual objects for realistic simulations. Finally, the cumulative error in spatial locations in indoor space should be improved. The system is equipped with sensors that allow for navigation through space and the projection of virtual objects into real space. However, as the system is used, position error (i.e., discrepancies in the position of the virtual object and actual space coordinates) accumulates. Therefore, methods and algorithms that can reduce this error are needed.

5. Conclusions

We present an MR-based system for simulating indoor disaster rescue, designed to ensure a safe, smooth, and rapid rescue response. Virtual disaster situations in which rescuers had to complete rescue operations within a golden time of 2 min were used to evaluate system performance. The participants were assessed in terms of speed, accuracy, and the ability to react to unexpected situations; reductions in the golden time of 39.3%, 11.9%, and 20.6% were achieved in S1–S3, respectively (average improvement of 23.9%). These results confirm the efficiency of the proposed system. The participants also recommended improvements, including system compatibility with multiple devices; additionally, object overlap and cumulative spatial mapping error should be addressed. The proposed MR system simulates rescue activities to promote a more rapid disaster response while also ensuring the safety of rescuers and firefighters.

Supplementary Materials

A video of the system running during the experiments can be downloaded at https://drive.google.com/drive/folders/1a0-uhdnLITE6DNcdxZUkXQWBCamKS7gN?usp=share_link (accessed on 27 February 2022).

Author Contributions

Conceptualization, Y.-J.C., J.-H.K. and S.-W.H.; formal analysis, Y.-J.C., J.-H.K., Y.-J.C. and Y.-Y.P.; investigation, Y.-J.C., J.-H.K., Y.-J.C. and Y.-Y.P.; project administration, Y.-J.C. and H.-W.L.; resources, S.-W.H. and H.-W.L.; software, Y.-J.C.; supervision, J.-H.K., S.-W.H. and Y.-Y.P.; validation, Y.-J.C., J.-H.K., H.-W.L. and Y.-Y.P.; writing—original draft, Y.-J.C.; writing—review and editing, Y.-J.C., J.-H.K. and H.-W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a National Research Foundation of Korea grant funded by the Korean government (NRF2021R1A2C1004651).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jang, K.H.; Cho, S.B.; Cho, Y.S.; Son, S.N. Development of Fire Engine Travel Time Estimation Model for Securing Golden Time. J. Korea Inst. Intell. Transp. Syst. 2020, 19, 1–13. [Google Scholar] [CrossRef]
  2. Lee, H.W.; Lee, K.O.; Bae, J.H.; Kim, S.Y.; Park, Y.Y. Using Hybrid Algorithms of Human Detection Technique for Detecting Indoor Disaster Victims. Computation 2022, 10, 197. [Google Scholar] [CrossRef]
  3. Munawar, H.S.; Mojtahedi, M.; Hammad, A.W.; Kouzani, A.; Mahmud, M.P. Disruptive technologies as a solution for disaster risk management: A review. Sci. Total Environ. 2022, 806, 151351. [Google Scholar] [CrossRef]
  4. Myomin, T.; Lim, S. The emergence of multiplex dynamics between information provision ties and rescue collaboration ties: A longitudinal network analytic approach to flooding cases in Myanmar. Nat. Hazards 2022, 114, 645–663. [Google Scholar] [CrossRef]
  5. Acampora, G.; Trinchese, P.; Trinchese, R.; Vitiello, A. A Serious Mixed-Reality Game for Training Police Officers in Tagging Crime Scenes. Appl. Sci. 2023, 13, 1177. [Google Scholar] [CrossRef]
  6. National Fire Agency. Available online: http://www.nfa.go.kr/ (accessed on 27 October 2022).
  7. AlAbdulaali, A.; Asif, A.; Khatoon, S.; Alshamari, M. Designing Multimodal Interactive Dashboard of Disaster Management Systems. Sensors 2022, 22, 4292. [Google Scholar] [CrossRef]
  8. Ketmaneechairat, H.; Maliyaem, M. Natural language processing for disaster management using conditional random fields. J. Adv. Inf. Technol. 2020, 11, 97–102. [Google Scholar] [CrossRef]
  9. Elangovan, A.; Sasikala, S. A Multi-label Classification of Disaster-Related Tweets with Enhanced Word Embedding Ensemble Convolutional Neural Network Model. Informatica 2022, 46, 131–144. [Google Scholar] [CrossRef]
  10. Xie, S.; Hou, C.; Yu, H.; Zhang, Z.; Luo, X.; Zhu, N. Multi-label disaster text classification via supervised contrastive learning for social media data. Comput. Electr. Eng. 2022, 104, 108401. [Google Scholar] [CrossRef]
  11. Duggal, R.; Gupta, N.; Pandya, A.; Mahajan, P.; Sharma, K.; Angra, P. Building structural analysis based Internet of Things network assisted earthquake detection. Internet Things 2022, 19, 100561. [Google Scholar] [CrossRef]
  12. Zhao, H.; Schwabe, A.; Schläfli, F.; Thrash, T.; Aguilar, L.; Dubey, R.K.; Karjalainen, J.; Hölscher, C.; Helbing, D.; Schinazi, V.R. Fire evacuation sup-ported by centralized and decentralized visual guidance systems. Saf. Sci. 2022, 145, 105451. [Google Scholar] [CrossRef]
  13. Salmi, S.; Oughdir, L. CNN-LSTM Based Approach for Dos Attacks Detection in Wireless Sensor Networks. Int. J. Adv. Comput. Sci. Appl. 2022, 13, 835–842. [Google Scholar] [CrossRef]
  14. Majid, S.; Alenezi, F.; Masood, S.; Ahmad, M.; Gündüz, E.S.; Polat, K. Attention based CNN model for fire detection and localization in real-world images. Expert Syst. Appl. 2022, 189, 116114. [Google Scholar] [CrossRef]
  15. Magherini, R.; Mussi, E.; Servi, M.; Volpe, Y. Emotion recognition in the times of COVID19: Coping with face masks. Intell. Syst. Appl. 2022, 15, 200094. [Google Scholar] [CrossRef]
  16. Ahn, Y.; Choi, H.; Kim, B.S. Development of early fire detection model for buildings using computer vision-based CCTV. J. Build. Eng. 2022, 65, 105647. [Google Scholar] [CrossRef]
  17. Khan, S.; Khan, A. FFireNet: Deep Learning Based Forest Fire Classification and Detection in Smart Cities. Symmetry 2022, 14, 2155. [Google Scholar] [CrossRef]
  18. Huang, T.; Zou, X.; Wang, Z.; Wu, H.; Wang, Q. RGBD image based human detection for electromechanical equipment in underground coal mine. In Proceedings of the 2022 IEEE International Conference on Mechatronics and Automation (ICMA), Guilin, China, 7–10 August 2022; IEEE: New York, NY, USA, 2022; pp. 952–957. [Google Scholar]
  19. Yu, H.; Guo, Z. Design of Underground Space Intelligent Disaster Prevention System Based on Multisource Data Deep Learning. Wirel. Commun. Mob. Comput. 2022, 2022, 3706392. [Google Scholar] [CrossRef]
  20. Anuradha, B.; Abinaya, C.; Bharathi, M.; Janani, A.; Khan, A. IoT Based Natural Disaster Monitoring and Prediction Analysis for Hills Area Using LSTM Network. In Proceedings of the 2022 8th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 25–26 March 2022; IEEE: New York, NY, USA, 2022; Volume 1, pp. 1908–1913. [Google Scholar]
Figure 1. The concept of mixed reality.
Figure 2. Overview of the mixed reality (MR) system simulating indoor disaster rescue.
Figure 3. Simulated rescue scenario.
Figure 4. Map used for the simulated scenarios.
Figure 5. Flowchart of the MR system simulating indoor disaster rescue.
Figure 6. The user’s location is displayed as an icon on the indoor map.
Figure 7. Status window processes.
Figure 8. Navigation function guiding the user toward the victim and exit.
Figure 9. Schematic of the indoor virtual disaster.
Figure 10. Scenario manager for the MR system.
Figure 11. Specification of the MR system.
Figure 12. System functions: ① user position, ② navigation function, ③ status window, ④ virtual disaster site, and ⑤ victim location.
Figure 13. Process of rescuing disaster victims.
Figure 14. Escape process after victim rescue.
Figure 15. Movement toward another exit.
Figure 16. Average escape times under each scenario.
Table 1. Mixed reality system specifications.
Laptop
CPU: AMD Ryzen 9 5900H, 3.30 GHz
RAM: 16 GB
OS: Windows 11 Pro
GPU: NVIDIA GeForce RTX 3060
Tool: Unity 2020.3.35 LTS
Framework: MRTK 2.7.3
CPU: central processing unit; RAM: random access memory; OS: operating system; GPU: graphics processing unit.
Table 2. Mixed reality device specifications.
Microsoft HoloLens 2
CPU: Qualcomm Snapdragon 850
RAM: 4 GB
OS: Windows 10 Pro
Storage: 64 GB
Function: 6DoF, 96.1° FOV
Table 3. Evaluation indices for the three scenarios.
Scenario 1: Speed
Scenario 2: Accuracy
Scenario 3: Handling an unexpected situation
Table 4. Potential system improvements cited by the participants.
Potential Improvement: Count
Inclusion of complex scenarios with multiple variables: 1
Address overlap of virtual objects: 2
Improve victim rescue operations: 2
Address cumulative indoor spatial position error: 2
Support for multiple devices: 3
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chae, Y.-J.; Lee, H.-W.; Kim, J.-H.; Hwang, S.-W.; Park, Y.-Y. Design of a Mixed Reality System for Simulating Indoor Disaster Rescue. Appl. Sci. 2023, 13, 4418. https://doi.org/10.3390/app13074418


