Review

Real-Time Semantic Data Integration and Reasoning in Life- and Time-Critical Decision Support Systems

by Andreas Soularidis 1,*, Konstantinos I. Kotis 1,* and George A. Vouros 2

1 Intelligent Systems Lab, Department of Cultural Technology and Communication, University of the Aegean, 81100 Mytilene, Greece
2 AI Lab, Department of Digital Systems, University of Piraeus, 18534 Piraeus, Greece
* Authors to whom correspondence should be addressed.
Electronics 2024, 13(3), 526; https://doi.org/10.3390/electronics13030526
Submission received: 30 December 2023 / Revised: 19 January 2024 / Accepted: 25 January 2024 / Published: 28 January 2024
(This article belongs to the Special Issue New Challenges of Decision Support Systems)

Abstract:
Natural disasters such as earthquakes, floods, and forest fires create critical situations in which human lives and infrastructures are in jeopardy. People are often injured and/or trapped and cannot be reached by first responders in time. Moreover, in most cases, the harsh environment endangers first responders, significantly increasing the difficulty of their mission. In such scenarios, time is crucial and often of vital importance. First responders must have a clear and complete view of the current situation every few seconds/minutes to tackle emerging challenges efficiently and in a timely manner, ensuring the safety of both victims and personnel. Advances in related technology, including robots, drones, and Internet of Things (IoT)-enabled equipment, have increased their usability and importance in life- and time-critical decision support systems such as those designed and developed for Search and Rescue (SAR) missions. Such systems depend on their ability to efficiently integrate large volumes of heterogeneous, streaming data and to reason with these data in (near) real time. In addition, real-time critical data integration and reasoning need to be performed on edge devices residing near the mission site, instead of on cloud infrastructure. The aim of this paper is twofold: (a) to review technologies and approaches related to real-time semantic data integration and reasoning on IoT-enabled collaborative entities and edge devices in life- and time-critical decision support systems, with a focus on systems designed for SAR missions, and (b) to identify open issues and challenges on this specific topic. In addition, this paper proposes a novel approach that goes beyond the state of the art in efficiently recognizing time-critical high-level events, supporting commanders and first responders with meaningful and life-critical insights about the current and predicted state of the environment in which they operate.

1. Introduction

Large-scale natural disasters such as wildfires, hurricanes, floods, and earthquakes put human lives, animals, properties, and infrastructures in danger. Even if a disastrous event can be predicted (e.g., hurricanes), the post-disaster situation in the affected areas is often unpredictable, especially when it comes to the loss of human life. In such scenarios, time is of vital importance and, therefore, an effective and urgent response is imperative. For example, after a mega-earthquake incident, a time window of 72 h determines the difference between life and death for the victims trapped in debris. However, in most cases, victims are trapped in an area to which rescue teams cannot easily establish access. Moreover, unpredicted events such as the presence of fire, leakage of gas or chemical substances, etc., constitute a hazardous environment that endangers first responders and greatly increases the difficulty of their mission. Technological advances in the fields of Robotics, IoT, and Unmanned Aerial Vehicles (UAVs) have led to the use of a variety of entities in SAR missions [1], within a highly interconnected network of sensors and actuators. The use of these entities in such environments significantly determines the outcome of a SAR mission by providing real-time monitoring of the affected areas and reducing the response time [2,3].
However, such environments are highly dynamic since enormous volumes of heterogeneous and fast/continuously changing data are constantly produced (sensed). In disaster scenarios, the incident status changes very fast; thus, the challenge of time-critical analysis of streaming data emerges. Moreover, the heterogeneity in these data streams, both at the structural and semantic levels, constitutes another challenge. To tackle this problem, ontologies are used, not only to bridge the heterogeneity in different data sources/streams but also to facilitate the implementation of intelligent algorithms that can automatically infer new (unseen) knowledge related to the incidents. In addition, to address the problem of streaming data, stream reasoning approaches emerged, enabling the implementation of real-time services [4]. Knowledge graphs (KGs) have gained much attention since they provide a way to organize, manage, and represent large-scale structured data and also support reasoning tasks [5,6]. Apart from the extraction of new (inferred) knowledge, reasoning over KGs can detect inconsistencies in definitions of entities or even new relations (connections) between entities that further enrich the knowledge in the graph [7].
Taking into account the aforementioned technologies, let us assume the following SAR scenario, which occurs after a disastrous mega-earthquake incident in which life- and time-critical decisions must be made. This scenario is focused on the use of sensor-based IoT-enabled collaborative entities, including UAVs, robots, weather stations, social media (e.g., tweets), related applications, wearables (e.g., smart-watches), etc. The key challenge in this scenario is the efficient fusion of heterogeneous big data, supporting real-time automated reasoning for the recognition of high-level events and thus efficient decision-making in time- and life-critical situations. Such a scenario includes at least the following hardware entities: (a) a multi-sensory leader drone recording location, video, sound, air quality (e.g., CO2, gas), temperature, and humidity; (b) a swarm of reconfigurable/foldable drones under the command of the leader drone, recording location and video; (c) a ground weather station recording the environmental conditions (temperature, humidity, wind direction/speed, etc.); (d) a ground robot (e.g., Spot) recording location and video; and (e) one or more ground base units/controllers on edge devices, operating as an interoperability middleware, e.g., a Raspberry Pi controller. In addition, the following software entities are involved: (a) social media channels, e.g., Telegram (https://web.telegram.org/a/ (accessed on 24 January 2024)), Twitter-X (https://twitter.com/ (accessed on 24 January 2024)), and FB (https://www.facebook.com/ (accessed on 24 January 2024)), which generate related posts/messages from users located in (or nearby) the affected area, and (b) data collected from applications (e.g., health monitoring apps) run on the wearables of people involved in the incident.
Following the scenario, the leader drone is equipped with a processor for low-level event recognition (such as temperature/humidity/CO2 variations in the air) while it is gathering real-time context-aware data (video, audio, weather, environmental). The sensor data gathered from the related IoT-enabled entities are transmitted to ground middleware units, i.e., edge devices with a semantic data processing and reasoning engine, with the capability to query knowledge that is modeled in a Graph Neural Network (GNN). The received (stream/dynamic) sensor data are integrated with static data, e.g., mission plans, data about the involved people and objects, and building information models (BIM). An image, audio, and/or video analysis module is used to analyze and detect entities of interest (EoI). An EoI is defined as a "target" for the mission entity, such as a trapped/injured victim, a dangerous object such as a gas vessel, etc. The detected EoI, combined with other sensor data, are used to analyze and recognize real-time low-level events (e.g., gas at specific lat/long/alt coordinates, obstacles/objects/shapes at specific lat/long/alt coordinates via video analysis, human life/voice recognized), and then recognize (via automated reasoning) high-level events, recommending actions to be taken and decisions to be made about time- and life-critical events, such as (a) for a sign of life recognized at specific lat/long/alt coordinates, send the nearest available first responders and inform the nearest hospital to be prepared for handling the situation, and (b) abort the SAR mission due to the computation of high-risk factors, including inference on possible additional human losses (putting first responders in real danger). More complex event handling must also be supported, such as the following:
IF a house on fire is recognized AND people are screaming “help” from this house, THEN send an alert to the nearby fire station that people are in danger at a specific set of latitude and longitude coordinates AND send a flying drone agent to these coordinates for further investigation of the situation AND send to these coordinates a spot robot agent for assisting a human agent (fireman) in entering the house that is on fire.
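To make the rule above concrete, it could be encoded declaratively and evaluated over the mission KG. The following is a minimal sketch, not the authors' implementation, that expresses the rule as a SPARQL CONSTRUCT query executed with Python's rdflib; all namespace IRIs, classes, and properties (ex:Building, ex:onFire, ex:HelpScream, ex:requestsAgent, etc.) are illustrative assumptions rather than terms from a published SAR ontology.

    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/sar#")
    g = Graph()
    g.bind("ex", EX)

    # Toy facts: a burning building with a detected "help" scream.
    g.add((EX.house1, RDF.type, EX.Building))
    g.add((EX.house1, EX.onFire, Literal(True)))
    g.add((EX.house1, EX.hasLat, Literal(39.107)))
    g.add((EX.house1, EX.hasLong, Literal(26.555)))
    g.add((EX.scream1, RDF.type, EX.HelpScream))
    g.add((EX.scream1, EX.locatedAt, EX.house1))

    # The IF/THEN rule as a CONSTRUCT query: burning building + scream
    # at that building => alert + two robotic agents requested there.
    RULE = """
    PREFIX ex: <http://example.org/sar#>
    CONSTRUCT {
      ?b ex:triggers      ex:FireStationAlert ;
         ex:requestsAgent ex:FlyingDroneAgent , ex:SpotRobotAgent .
    }
    WHERE {
      ?b a ex:Building ; ex:onFire true .
      ?s a ex:HelpScream ; ex:locatedAt ?b .
    }
    """
    for triple in g.query(RULE):
        g.add(triple)  # materialize the inferred high-level event

In a deployed system, the materialized triples would be handed to an action dispatcher that actually notifies the fire station and tasks the flying drone and Spot robot agents.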
Based on the high-level events recognized and the corresponding planned decisions to be made, drones operating in a swarm are notified with recommended alternatives, e.g., an alternative mission plan, sending a Spot robot to specific lat/long coordinates for further investigation, etc. Concurrently, the KG-based GNN is used to make predictions about the evolution of the state in the next few seconds/minutes, recommending alternative responses to the commanders/decision makers (e.g., reinforcing the SAR team with more ground robots) to enhance awareness and the efficiency of responding, ensuring the safety of both victims and personnel.
The contributions of this paper are summarized as follows:
  • We review technologies and approaches related to real-time semantic data integration and reasoning on IoT-enabled collaborative entities and edge devices in life- and time-critical decision support systems, focusing on SAR missions and disaster management scenarios.
  • We identify and discuss open issues and challenges focusing on the specific topic of semantic data integration and reasoning with SAR-related knowledge in life- and time-critical decision support systems.
  • We propose a novel approach that goes beyond the state-of-the-art in efficiently recognizing time-critical high-level events, supporting commanders and first responders with meaningful and life-critical insights about the current and predicted states of the environment in which they operate.
The structure of this paper is as follows: Section 2 describes the basic concepts of IoT and edge devices, UAVs and swarms, semantic interoperability, KGs, semantic reasoning, and GNNs. Section 3 describes the survey methodology and the decisions made for the final selection of the related papers. Section 4 presents the related work that was reviewed regarding sensor-based IoT applications, semantic modeling and KGs, and decision support using reasoning in time- and life-critical situations such as SAR missions. Section 5 discusses the open issues and challenges, and Section 6 describes the proposed framework’s architectural design. Finally, Section 7 concludes this paper and reports future work.

2. Preliminaries

2.1. IoT and Edge Computing

Technological advances and upgrades in sensor technologies have led to the development of sensor networks connected to the Internet, generating large volumes of stream data and enabling IoT-based systems to spread rapidly. The term IoT generally refers to objects with computing capabilities that are able to gather (sense), process, and share data with each other [8]. The main goal of IoT technologies is to perform related tasks more effectively and efficiently, simplifying processes and procedures in different fields and domains aimed toward improving quality of life. Thus, IoT technologies constitute a major cornerstone of the fourth and fifth industrial revolutions [9]. Advancements in sensors and related electronic components, user-friendly software solutions, network connectivity, and low energy consumption constitute some of the key factors for the widespread application of IoT in various domains such as industry, smart cities, agriculture, healthcare, transportation, etc.
The widespread application of IoT-based systems has triggered a decentralization of cloud-based systems, leading to the emergence of a new computing layer between IoT devices and the cloud, namely, edge computing [10]. Edge devices such as the Raspberry Pi work as a middleware between sensor-based IoT entities and cloud servers, addressing challenges regarding the gathering and processing of the voluminous and heterogeneous data generated by IoT devices, low-latency data transmission and computation in support of time-critical applications, and enhanced security. Moreover, to alleviate problems related to the heterogeneous nature of the data generated by IoT entities and the resulting interoperability issues, edge devices are suitable for semantic data annotation, integration, and enrichment, enabling the development of advanced applications with reasoning capabilities.
Thanks to the development of wireless networking, sensors, and edge IoT devices (Raspberry Pi, etc.), users and applications can gather, track, and share information. The capabilities provided by the existing infrastructure generate a huge amount of streaming data containing information about locations in real time. The availability of environmental, weather, vision, sensor, and location data allows for the development of context-aware applications and services. Such applications often perform data analysis and predictive analytics to provide real-time information and responses/recommendations for effectively managing possible risks and unexpected situations in the field of operations [11,12,13]. Interaction among different IoT devices/systems can further improve the produced results, as in most cases, the sensed data are not exploited to their full extent, while in some cases, interoperability is a mandatory enabler for capturing their maximum potential value [14].

2.2. UAVs

UAVs, commonly known as drones, are aerial robots without a crew or passengers that can fly manually, autonomously (using onboard sensors, processors, and appropriate algorithms), or semi-autonomously (using a hybrid approach combining the aforementioned methods) [15]. Drones can be classified based on size, equipment, and range [16]; they can be grouped into five categories based on wing type and the number of rotors, namely, fixed-wing, fixed-wing hybrid, single-rotor, multi-rotor [17], and foldable drones [18]. The key features concern flight range (endurance), speed, hovering and VTOL (vertical take-off and landing) capabilities, weight, and cost, while the primary limitation regards battery consumption during flight and mission operations such as sensing, data transmission/processing, communication, etc. These factors determine whether a particular type of drone is suitable for a task (e.g., delivery of goods, SAR missions, etc.) or not.
On the other hand, a swarm of drones is a group of drones that cooperate toward a common objective. Swarms of drones can be categorized into two main categories based on the degree of flexibility each entity (drone) has [16]. The first category comprises single-layered swarms, in which each drone acts autonomously, while the second comprises multi-layered swarms. The latter have a hierarchical form consisting of layers: each layer contains several drones acting as leaders for the drones at the lower layer while being, at the same time, under the control of drones at a higher layer. Each drone is able to communicate with its leader drone and with the other drones in the same layer, as sketched below.
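As a minimal illustration of this hierarchy, the following Python sketch models a multi-layered swarm as a tree of leader/follower drones; the Drone class and its fields are illustrative assumptions, not a standard swarm API.

    from dataclasses import dataclass, field

    @dataclass
    class Drone:
        drone_id: str
        layer: int = 0                     # 0 = top-level leader layer
        leader: "Drone | None" = None      # the drone one layer above
        followers: list["Drone"] = field(default_factory=list)

        def add_follower(self, follower: "Drone") -> None:
            follower.leader = self
            follower.layer = self.layer + 1
            self.followers.append(follower)

        def peers(self, swarm: list["Drone"]) -> list["Drone"]:
            # A drone may talk to its leader and to drones in its own layer.
            return [d for d in swarm if d.layer == self.layer and d is not self]

    leader = Drone("leader-1")
    scout_a, scout_b = Drone("scout-a"), Drone("scout-b")
    leader.add_follower(scout_a)
    leader.add_follower(scout_b)
    print([d.drone_id for d in scout_a.peers([leader, scout_a, scout_b])])  # ['scout-b']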
Technological advances in the field of drones, low cost, and great flexibility are some of the key factors that have led to their use in a plethora of domains and applications [19], such as security and surveillance [20], disaster management and SAR missions [21], delivery of goods [21], and precision agriculture [19]. The use of drones or swarms of drones leads to effective, timely, and highly accurate results. Especially in the field of SAR missions, their use attracts more and more attention as they minimize human involvement, thus reducing the risk and the total cost of missions.
Drones in a SAR mission act as sensing IoT entities by recording the affected site and transmitting/processing the data, thus helping commanders to obtain a clear picture of the current situation [22], detecting the precise location of lost/trapped/injured persons (victims) [23], and/or delivering supplies (e.g., food, water, medicines) [24], or even acting as aerial base stations for rapid telecommunication recovery after a massive disaster [19].

2.3. Semantic Interoperability

The term interoperability refers to the ability of the involved IoT entities to exchange data in a standardized way, sharing a common meaning [25]. Networks of sensor-based IoT entities are highly dynamic, producing (sensing) and exchanging enormous volumes of data. In such dynamic environments, great challenges emerge regarding the efficient analysis of heterogeneous streaming data. To address these challenges, advanced methodologies and techniques for data modeling are required. The concept of interoperability is divided into separate layers, namely, technical, syntactic, and semantic interoperability, each of which solves a particular problem [26]. Semantic Web technologies are the cornerstone for implementing the above layers of interoperability [27]. Ontologies are used not only to alleviate the problem of heterogeneity among different data sources/streams but also to enable the implementation of algorithms that can reason over existing knowledge to infer new knowledge.
In the field of SAR missions and disaster management, in which a plethora of entities and organizations are involved, the absence of semantic interoperability has a negative impact and can adversely affect disaster response efforts [28]. During a post-disaster SAR scenario, a huge amount of (raw) data is generated by a plethora of sensors in the field of operations, involving images, video streams, social media data, text data, etc. Gathering and exploiting such data is of vital importance for situation awareness and the decision-making process. However, the heterogeneous nature of these data constitutes a major problem, making their exploitation difficult. Moreover, in such a scenario, several diverse responders from various organizations, such as firefighters, healthcare services, police, etc., are involved, each using their own technical vocabulary, resulting in misunderstandings and a lack of information sharing among them. The aforementioned problems can be mitigated by providing a related ontology that not only conceptualizes and organizes heterogeneous data via semantic annotation and semantic integration functionality but also establishes a common understanding of the shared data, eventually achieving the necessary interoperability among the different entities/actors.
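As a minimal illustration of such semantic annotation, the following Python/rdflib sketch expresses one raw temperature reading using the W3C SOSA/SSN ontology, so that readings from different devices share a common, machine-understandable schema; the sensor, property, and feature-of-interest IRIs are illustrative assumptions.

    from datetime import datetime, timezone
    from rdflib import Graph, Literal, Namespace, RDF, XSD

    SOSA = Namespace("http://www.w3.org/ns/sosa/")
    EX = Namespace("http://example.org/sar#")

    g = Graph()
    g.bind("sosa", SOSA)
    g.bind("ex", EX)

    # One air-temperature reading from the leader drone, expressed as a
    # sosa:Observation so that any SOSA-aware consumer can interpret it.
    obs = EX["obs-0001"]
    g.add((obs, RDF.type, SOSA.Observation))
    g.add((obs, SOSA.madeBySensor, EX["leader-drone-temp-sensor"]))
    g.add((obs, SOSA.observedProperty, EX.airTemperature))
    g.add((obs, SOSA.hasFeatureOfInterest, EX["affected-site"]))
    g.add((obs, SOSA.hasSimpleResult, Literal(42.5, datatype=XSD.double)))
    g.add((obs, SOSA.resultTime,
           Literal(datetime.now(timezone.utc).isoformat(), datatype=XSD.dateTime)))

    print(g.serialize(format="turtle"))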

2.4. KGs

KGs constitute a graph-based data model used to capture knowledge in various domains by integrating diverse data sources at a large scale [29]. In other words, a knowledge graph is a structured representation of real-world facts, consisting of entities (nodes), relations (edges) between those entities, and semantic descriptions, where each entity represents a real-world object or an abstract concept. Ontologies and rules [30] constitute the cornerstones of KGs, defining the schema and enabling the inference of new knowledge. By populating an ontology with related data, a KG is generated. KGs have gained much attention since they provide a way to organize, manage, and represent large-scale data, and they also support reasoning tasks [31]. Beyond the extraction of new (inferred) knowledge, reasoning over KGs can detect errors, inconsistencies, or even new relations (connections) between entities, further enriching the graph [18]. Compared with traditional reasoning methods, reasoning over KGs is more flexible and its methods are more diverse [7,32].
In time-critical scenarios, such as SAR missions, timely access to related information, e.g., sensor data, social media data, open data, etc., is imperative. However, as mentioned earlier, the heterogeneity in data sources and formats results in interoperability issues. Ontologies provide semantic annotation and semantic integration capabilities to these streaming and heterogeneous data, resulting in the engineering of the corresponding KG. KGs can effectively organize and represent these voluminous data, modeling the related interlinked concepts and describing the knowledge available during the current life- and time-critical situations in the field of operations.

2.5. Semantic Reasoning

Semantic reasoning is the process of inferring new knowledge from existing knowledge (facts) using inference rules and axioms. The explosive use of KGs in recent years has made reasoning over KGs a hot research topic since it can derive new knowledge and conclusions from existing data [7]. Reasoning over KGs aims (a) to identify erroneous connections between entities and (b) to create new connections (knowledge) among entities, complementing and enriching the KGs and, finally, supporting the development of advanced applications. A reasoner is a software component that implements rule-based reasoning over ontology languages such as RDFS and OWL [33]. For example, in [34], the Pellet reasoner, based on OWL-DL, is presented to support incremental reasoning in dynamic KGs.
However, in dynamic environments, such as IoT sensor networks, large volumes of data are produced at a very high rate, forming data streams. In such a scenario, traditional reasoning methods are inadequate. Thus, stream reasoning mechanisms were developed [35]. A stream reasoner, using time windows, receives continuous queries as inputs, and new knowledge is inferred on the fly. In such stream reasoning techniques, data items may be time-stamped either at the time of occurrence or within a predefined period [36]. An extended version of SPARQL, namely, C-SPARQL (Continuous SPARQL), is used for stream reasoning [37], as illustrated below.
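A C-SPARQL engine registers a continuous query of the form REGISTER QUERY ... AS SELECT ... FROM STREAM <stream-iri> [RANGE 30s STEP 10s] WHERE { ... }. Since such an engine is not assumed to be available here, the following minimal Python sketch only emulates the essential idea: a sliding time window over a triple stream that is repeatedly queried with plain SPARQL via rdflib. All IRIs and the temperature threshold are illustrative assumptions.

    import time
    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/sar#")
    WINDOW_SECONDS = 30
    stream = []  # (timestamp, triple) pairs standing in for the RDF stream

    def on_reading(sensor, temp_c):
        stream.append((time.time(), (sensor, EX.hasTemperature, Literal(temp_c))))

    def evaluate_window():
        # Keep only triples inside the sliding window, then run the query.
        cutoff = time.time() - WINDOW_SECONDS
        g = Graph()
        for ts, triple in stream:
            if ts >= cutoff:
                g.add(triple)
        query = """
            PREFIX ex: <http://example.org/sar#>
            SELECT ?sensor ?t
            WHERE { ?sensor ex:hasTemperature ?t . FILTER(?t > 50) }
        """
        return [(str(row.sensor), float(row.t)) for row in g.query(query)]

    on_reading(EX["drone-sensor-1"], 57.2)
    print(evaluate_window())  # -> high-temperature events within the window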
In most cases, SAR missions take place in dynamic environments, such as a post-disaster area after a catastrophic event (e.g., fire, hurricane, earthquake, etc.), in which large volumes of data are generated continuously from various sources. KGs not only model and represent knowledge of the current life- and time-critical situations in the field of operations, but they also support automated reasoning capabilities. Stream reasoning over a constructed KG provides stakeholders with valuable insights in real time about the status of the field of operations by inferring new (hidden) knowledge, detecting errors and inconsistencies in definitions, and recognizing high-level critical events, eventually supporting SAR missions more effectively.

2.6. Graph Neural Networks

The explosive growth of graph data, such as social networks, KGs, etc., and advancements in the fields of convolutional neural networks (CNNs) and graph representation learning (GRL) [38,39] have triggered the evolution of Graph Neural Networks (GNNs). By combining CNNs with low-dimensional vector representations of graph structure, various GNNs have been developed, such as GCN [40], GAT [41], etc. The performance of GNNs in non-Euclidean space has led to their use in various graph analysis tasks, including node classification, link prediction, and clustering [39]. The combination of GNNs and KGs can solve various tasks such as link prediction, knowledge graph alignment, knowledge graph reasoning, and node classification [42], outperforming, in most cases, traditional ML approaches. Thus, this combination has gained popularity in various applications such as Recommender Systems (RSs) [43,44]. The ability of GNNs to capture relationships within graph data, providing capabilities such as link prediction and node classification, along with the need for event prediction in SAR missions, makes them a suitable tool for the SAR application domain.
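As a minimal illustration of node classification over a mission KG, the following sketch defines a two-layer GCN with PyTorch Geometric (assumed to be available); the graph, feature dimensions, and the three event classes are illustrative assumptions, not part of any of the reviewed systems.

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv  # assumes PyTorch Geometric

    class EventGCN(torch.nn.Module):
        """Two-layer GCN classifying each KG node into an event class."""
        def __init__(self, in_dim=8, hidden=16, num_classes=3):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden)
            self.conv2 = GCNConv(hidden, num_classes)

        def forward(self, x, edge_index):
            h = F.relu(self.conv1(x, edge_index))
            return self.conv2(h, edge_index)  # per-node class logits

    # Toy mission graph: 4 nodes (drone, victim, building, fire) with
    # random 8-dimensional features and a ring of directed edges.
    x = torch.randn(4, 8)
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]], dtype=torch.long)

    model = EventGCN()
    logits = model(x, edge_index)
    print(logits.argmax(dim=1))  # predicted event class per node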

3. Survey Methodology

The research methodology followed in this paper focuses mostly on the collection of information sources related to semantic data integration and reasoning on IoT-enabled collaborative entities in time-critical decision support systems, focusing on systems integrated into SAR missions. Research articles from academia, the related literature, and web resources published within the last 5 years (2019–2023) were examined to limit the survey to the most recent sources. The survey presented in this paper was conducted over a period of 5 months.
The search for articles was conducted on academic web portals such as Google Scholar, ResearchGate, IEEE Xplore, the ACM Digital Library, and Springer Link. The search terms, used in various combinations, were the following:
  • IoT in disaster management;
  • IoT in SAR missions;
  • Decision support in disaster management;
  • Semantic reasoning in SAR missions;
  • Semantic modeling in SAR missions;
  • Real-time reasoning;
  • Edge computing in SAR missions;
  • Ontologies in SAR missions;
  • Ontologies and IoT in SAR missions;
  • UAVs in SAR missions.
Although a significant amount of work in the fields of SAR missions and disaster management has been published, this paper presents only the related works that (a) integrate various types of IoT entities, such as UAVs and Unmanned Ground Vehicles (UGVs) with or without sensors, and/or use Semantic Web technologies to model heterogeneous data under a related ontology and/or (b) have reasoning capabilities for the inference of new knowledge to further enhance the decision-making process about life- and time-critical events in SAR missions. Selection/rejection of the literature was carried out based on the following specific inclusion/exclusion criteria in order to extract the most relevant data. The PRISMA survey methodology flowchart is depicted in Figure 1.
  • Inclusion Criteria
    • IC1. Articles published after 2019.
    • IC2. Articles related to SAR missions.
    • IC3. Articles proposing/implementing a related system/framework.
    • IC4. Articles retrieved from Google Scholar, ResearchGate, IEEE Xplore, ACM digital library, and Springer Link.
    • IC5. Articles related to the aforementioned keywords.
  • Exclusion Criteria
    • EC1. Articles written in a non-English language.
    • EC2. Articles published in non-scientific resources.
    • EC3. Articles with poor analysis.

4. Related Work

4.1. Sensor-Based IoT Applications

In SAR operations, time is crucial and of vital importance. The detection of survivors in debris after a natural or man-made disaster such as an earthquake, hurricane, or large-scale explosion usually takes hours, putting the victims’ lives in jeopardy. On the other hand, the harsh environment left after a disaster makes it dangerous for rescue teams to visit and explore the affected structures. Motivated by the need for timely and accurate detection of survivors in debris, Sharma et al. [12] propose a system that combines ML techniques and robots as first responders for the exploration of a calamity site. The robots, equipped with cameras and microphones, explore the area/building to detect any sign of survivors. The robots stream the recorded data to a cloud server, on which ML techniques such as YOLO (You Only Look Once) [45] and convolutional neural networks (CNNs) are used to detect victims from video and audio sources, respectively. Once a survivor is detected, the survivor’s location and air quality measurements are transmitted to the cloud server, informing the rescue team about possible risks when approaching the victim’s location. Experimental results on the efficiency of the models show that by combining audio and image detection techniques, the overall accuracy of the architecture approaches 95.83%.
In many cases, the affected site may be prohibitive for human involvement. Thus, the robots must be able to act autonomously, gaining insight into their surrounding environment to reconstruct a map and localizing both themselves and other related robots in the field of operations. Chatziparaschis et al. [46] propose a synchronous system consisting of aerial and ground robots that collaborate to accomplish a SAR operation. The purpose of the UAV is to create a 2.5D map of the field of operations using LiDAR and a multi-stereo camera. A Simultaneous Localization And Mapping (SLAM) algorithm [47] is used to estimate the pose of the UAV on the generated map. Once the ground robot is detected, its pose is calculated. The ground robot, in turn, plans its path using the map provided by the UAV, the R* algorithm, and the search-based FootStep planner [48], searching the affected area for victims. For human detection, the YOLO technique is used. Once a human is detected, the rescue team is notified. The experiments conducted in an urban environment reveal the potential of collaboration between robots in SAR operations after disasters, such as earthquakes, that can considerably alter the affected site and for which no up-to-date maps are available.
SAR missions in collapsed buildings suffer from inadequate and untimely data [49,50], preventing first responders from acting immediately and effectively. To address this challenge, Lakshmi et al. [51] propose an IoT network architecture based on fog computing and UAVs. The proposed system supports rescue teams by providing them with integrated real-time sensor data along with the status of the affected infrastructure. The system consists of sensor nodes, a fog computation node (Raspberry Pi), and a cloud node. A network of sensors deployed in a smart building generates data before and after the disaster occurs. The generated data are analyzed by the fog node, providing useful insights into the state of the building, while the cloud node is used to further analyze the generated data. Because a UAV-based LoRa mesh architecture is used [52], the system is resilient enough to maintain communication among the sensors, the ground station, and the rescue teams even if the communication infrastructure between the entities involved is damaged. Synthetically generated sensor data illustrating the condition of the building before and after a disaster demonstrate effective management and response after a disaster in a collapsed building, providing uninterrupted real-time data.
Nazarova et al. [53] propose a pipeline using multi-agent systems such as UAVs and Unmanned Ground Vehicles (UGVs) to facilitate SAR operations in hazardous environments after earthquake disasters. Firstly, based on historical statistical data, the sequence of rescue processes using multi-modal robotic systems is determined. To ensure communication between the agents involved, an aerial UAV-based and a ground communication system are deployed. The system analyzes the characteristics of the search process based on probability theory, obtaining the relationship between the probability of target detection, the conditional probability, and other parameters. To effectively execute the necessary tasks, the proposed system allocates them to agents using evolutionary algorithms such as the Particle Swarm Optimization (PSO) algorithm [54] and the Genetic Algorithm (GA) [55], while the problem of movement planning is solved using the A* algorithm.
The search for trapped and possibly injured or unconscious persons in debris is a challenging task, especially when low visibility and smoke reduce the performance of vision-based systems. Thus, researchers are concerned with finding alternative ways to detect victims in the shortest possible time to reduce the death toll. Sciancalepore et al. [23], leveraging the high penetration rate of smart devices such as smartphones and wearables in daily life, propose a drone-based SAR system, namely, SARDO, that uses victims’ smartphones to detect their location in debris. Fully autonomous and all-in-one, the proposed system scans the affected area for victim localization using a novel pseudo-trilateration technique, combining Time of Flight (ToF) measurements of user uplink signals over a time interval. The system applies machine learning techniques to predict the future position of the victim, while the drone’s trajectory is updated based on the victim’s predicted position. The experiments conducted in a rural area using commercial off-the-shelf components demonstrated fast and accurate victim localization, proving the feasibility of the proposed solution.
Even though drone-based systems provide great flexibility and efficiency in large-scale SAR missions, they suffer from limitations regarding remote control, energy consumption, unit malfunctions, lack of real-time interactions, and security issues. Motivated by the aforementioned challenges, Nguyen et al. [56] propose an advanced Internet of Drones (IoD) system to support large-scale SAR missions, combining edge computing [57] and edge AI [58] along with blockchain technologies [59]. The proposed system consists of a swarm of (small and big) drones with different roles, equipped with heterogeneous sensors (e.g., thermal cameras, GPS, etc.) and embedded boards, to search a wide area. A powerful edge server is used for storage and heavy computations, bridging the swarm of drones with external entities (e.g., hospitals) while being, at the same time, a node of the blockchain network. The blockchain network consists of cloud servers and services ensuring data security. Finally, several computation offloading tasks are presented to achieve energy efficiency in the swarm of drones. Experimental results in a controlled environment show that the proposed system can improve the quality of service (QoS) in SAR missions.
Girma et al. [60] are motivated by the lack of a robust collaborative Unmanned Vehicle (UV)-based system that is able to provide effective and uninterrupted wireless communication among the involved resource-constrained devices in network-denied disaster environments. They propose a framework to tackle these communication challenges, consisting of UVs, such as UAVs and UGVs, and a cloud-based control station for visualization, storage, and control purposes. The lightweight Message Queuing Telemetry Transport (MQTT) protocol is used, aiming to enable easy collaboration between the involved entities and first responders via the Internet in disaster areas, thus facilitating SAR missions. The UVs are equipped with sensors and actuators to collect and share the sensed data and execute the given commands, respectively. An antenna tracker is used to extend the communication network from a nearby functional base station, while a clustering algorithm is used to maximize the network coverage provided by the UAVs. The experiments conducted under various scenarios demonstrate the effectiveness of the proposed framework. Table 1 compares the existing works regarding sensor-based IoT applications.

4.2. Semantic Modeling and KGs

The evolution of KGs enables artificial intelligence (AI) applications to have access to open, meaningful, and machine-understandable knowledge. A KG for disaster management is presented by Vassiliades et al. [61], which covers specific aspects of situation awareness (SA), facilitating the decision-making process in crucial disaster management incidents. The presented work, namely, the XR4DRAMA KG, is part of the XR4DRAMA project and is used to represent information related to disaster management, integrating biometric sensor data, textual and visual messages, spatiotemporal data, and response plans, thus helping first responders to effectively tackle challenging and hazardous situations. Additionally, a Point of Interest (POI) management mechanism is built, which uses reasoning techniques on the constructed KG to detect/update specific places (POIs) of high interest in the affected area, enabling the first responders to act in a timely manner. The evaluation results, using Competency Questions (CQs) [62], demonstrate both the consistency and completeness of the proposed KG, while the POI management mechanism achieves high precision, recall, and F1 scores in the conducted experiments.
During disasters, a massive amount of heterogeneous data is generated from various sources such as sensors and third-party applications and services. The efficient management, integration, and exploitation of these data can help decision makers gain useful insights and make informed decisions during missions. Motivated by the aforementioned scenario, Masa et al. [63] propose an ontology-based framework for forest fire emergencies. The proposed system, namely, ONTO-SAFE, uses a lightweight ontology to achieve a high-level formalism and integrates heterogeneous data from multiple sources such as weather forecasts, sensors, social media data, earth observations, etc. The constructed KG is used along with a semantic reasoning engine to perform high-level event and alert recognition for supporting decision makers with alerts and recommendations. The performance of the proposed reasoning module meets the needs of the users, and the experimental results imply that the whole framework can be optimized for large-scale implementation.
After a disaster occurs, first responders, among others, must address problems related to inadequate information, inconsistencies, and heterogeneous data sources. The fact that many existing studies in the field of disaster management have proven inadequate in practical use led Angelidis et al. [64] to propose an ontology, namely, the search and rescue model, aiming to define and represent all the entities necessary for supporting first responders in disaster management missions. The proposed ontology is based on other state-of-the-art ontologies such as POLARISCO [28], IMPRESS [65], FOAF [66], and Geonames [67], extending them to provide semantic correlation of situation awareness (SA) in real time, integrating data from various sensors, drones, and robot monitoring systems. The proposed ontology is intended for use in various SAR projects.
During a disaster response, various emergency responders (ERs) such as firefighters, police, etc., must cooperate effectively to respond to emergency challenges on time. However, discrepancies in the terminologies and technical vocabularies of the involved ERs lead to semantic ambiguity and a loss of valuable time. To ensure semantic interoperability between the involved ERs during disaster response operations, Elmhadhbi et al. [28] propose a modular ontology, namely, the POLARISCO ontology. The proposed ontology constitutes the core of the POLARISCO project, which aims to ensure reliable and timely information sharing between all ERs during large-scale crises. It is based on the Basic Formal Ontology (BFO) [68] and the Common Core Ontology (CCO) [69] and comprises seven ontological modules, such as the firefighter module, public authorities module, healthcare module, police module, etc., enabling their reuse as separate modules. The ontology was validated by emergency experts, and its evaluation using feedback and data from an earthquake simulation exercise demonstrated its ability to respond to the needs of ERs.
Situation awareness during crisis management, especially after a natural or man-made disaster, is highly related to harnessing and exploiting heterogeneous data coming from various sources such as sensors, images, etc. Fully exploiting such heterogeneous Big Data constitutes a challenging task. Moreover, the lack of a comprehensive and abstract modeling solution for the crisis management domain led Gaur et al. [70] to propose an ontology called EMPATHI (Emergency Managing and PlAnning about Hazard crIses). The proposed ontology aims (a) to effectively model the core concepts related to emergency managing and planning during crisis situations, such as the hazard, the involved actors, the place, etc., and (b) to automatically recognize disaster concepts by exploiting data from social media conversations and semantically annotating text, such as posts on Twitter after a disaster. To this end, a word2vec model is trained, and cosine similarity is used to match words from tweets to related ontology concepts. The experimental results using data from Twitter illustrate the effectiveness of the proposed approach. A comparison of the related works is presented in Table 2.

4.3. Decision Support and Reasoning

In disaster management scenarios, time is a valuable asset that determines the survival of trapped victims. In most cases, decisions in the field of operations are mainly based on the experience of commanders. However, differences in rescue approaches/strategies and equipment, depending on the present state, create a complex scenario. To overcome the limitations of experience-driven rescue decisions, Jiao et al. [71] propose a novel rescue decision algorithm based on knowledge graph reasoning. The proposed approach is based on an earthquake knowledge graph constructed from historic earthquake rescues. A Visual Perception module, trained using a constructed dataset of earthquake scene images, is used to detect the state and the materials of the collapsed building, while the Graph Mapping module produces a particular vector embedding for each entity. Finally, the Decision Reasoning module, using the aforementioned predictions and vector embeddings, infers the best rescue approach. Extensive experiments and analysis on the image dataset and the knowledge graph demonstrate the effectiveness of the proposed approach.
In crisis scenarios, decision makers come face to face with a huge amount of information generated by heterogeneous data sources. In such situations, the probability of erroneous or inaccurate decisions that put victims’ lives in jeopardy increases. Jain et al. [72] follow a knowledge-driven approach and propose a rule-based reasoning system to support decision makers in making accurate and timely decisions. An ontology constitutes the backbone of the proposed system and is used to model the emergency concepts. Moreover, an inference rule-based knowledge mechanism with the Pellet reasoner [73] is used to infer new knowledge, generating related recommendations. SWRL rules, constructed based on the experience of experts, further improve the expressiveness of the system. The proposed approach is demonstrated in two sample scenarios involving earthquake disasters and viral infectious diseases.
Mehla et al. [74] propose an ontology-supported hybrid reasoning model (OS-HBR) to support decision makers in making timely and effective critical decisions regarding the required resources in emergency scenarios. The proposed approach combines Case-Based Reasoning (CBR), for recalling and reusing related knowledge derived from similar past events, and Rule-Based Reasoning (RBR), to provide explanations about the extracted decisions/conclusions [75]. An ontology is used to describe and model the knowledge concerning emergencies, such as actions, resources, rescue teams, etc. [76]. The OS-HBR consists of three major modules regarding data storage, CBR, and RBR components, providing inductive and deductive reasoning capabilities according to the situation and input parameters, while a new algorithm is developed to further support the recommendations. The results of experiments conducted in earthquake scenarios demonstrate the effectiveness of the proposed system regarding the estimation of the necessary resources.
During the last decade, dozens of AI methods/techniques have been developed and incorporated into almost every domain. Among them, the technology that seems to have made a breakthrough is deep learning [77], which demonstrates impressive results. However, a deep learning model works as a black box and does not offer explainability for its decisions. In contrast, in critical SAR missions, each decision must be clear and explainable. Sun et al. [78] propose an ontology-based system for smart decision-making by a rescue robot. The system uses a modular ontology to model the entity (robot), the environment, and the tasks. The robot reasons over the ontology using SWRL rules to obtain the appropriate task based on the current conditions in the field of operations. Each task is decomposed into atomic actions, which are executed sequentially. The robot can obtain the position of victims using Bayesian reasoning, updating the ontology accordingly. The experimental results using a TurtleBot3 verify the efficiency of the proposed system.
During disasters, a massive amount of heterogeneous data is generated from various sources, causing interoperability issues among the involved actors. The lack of a high-level formalism and of mechanisms for sharing these data has a negative impact on the decision-making process. For this purpose, Daher et al. [79] propose an ontology-based framework aiming (a) to alleviate semantic interoperability issues between the involved parties and (b) to assist decision makers by proposing evacuation priorities for various POIs in a flooded area. A knowledge graph is constructed based on the proposed ontology [80], integrating static and dynamic data from various sources such as institutional databases, sensors, etc. The constructed graph is shared between the involved entities, solving any interoperability issues. Regarding reasoning about evacuation priorities, the proposed solution evaluates two reasoning approaches using SPARQL (https://www.w3.org/TR/rdf-sparql-query/ (accessed on 24 January 2024)) queries and SHACL (Shapes Constraint Language) (https://www.w3.org/TR/shacl-af/#rules (accessed on 24 January 2024)) rules, with the latter approach outperforming the former by inferring the related evacuation priorities in a shorter time in a series of experiments/scenarios. Table 3 presents the comparison of the related papers regarding semantic reasoning in SAR missions.

5. Discussing Open Issues and Challenges

Based on this state-of-the-art review, it appears that a significant amount of related work on the individual topics that comprise this research domain has been conducted. However, none of the related works (a) fully exploit IoT capabilities, semantic information, and inferred knowledge to recognize high-level events effectively and in a timely manner, such as the detection of a victim in an affected area, the severity of the victim’s health state, and the danger of the victim’s location, in order to recommend actions/decisions to be made by decision makers, or (b) semantically integrate real-time and time-critical data with historic SAR-related data to predict the evolution of an event’s state in the following critical seconds or minutes and recommend alternative responses to decision makers, further enhancing awareness and efficiency in responding to the event state and thus ensuring the safety of both victims and personnel.
Particularly, regarding the semantic modeling in the work of Vassiliades et al. [61], even though a mechanism for the creation and update of POIs with high interest in an affected area exists, a severity score for these POIs, indicating the magnitude of the destruction, and the sequence of tasks to be performed in each of them have not yet been implemented. Moreover, regarding IoT entities, the ontology only incorporates biosensors and excludes other entities such as UAVs, ground robots, etc. In the work by Masa et al. [63], the proposed framework uses static data. However, to enhance the accuracy and timeliness of decisions during crisis management, the integration of real-time data and advanced analytic techniques is needed. Moreover, the integration of ML models that learn from historical data to make predictions and the optimization of reasoning response time for data at scale would further enhance the effectiveness of the proposed framework. In the work by Elmhadhbi et al. [28], the proposed ontology addresses interoperability issues by providing a high-level formalism of the vocabulary shared between the involved ERs, covering all types of disasters. However, it does not consider more specific events in the field of operations/missions, such as the hazard level, the position of the detected victim(s), air quality, the presence of fire, etc. It also does not consider IoT entity representation, and an extension of the proposed ontology to the field of smart cities is required. In the work by Gaur et al. [70], the proposed ontology models the core concepts of crisis management successfully and provides a way to semantically annotate text from posts on social media such as Twitter/X, automatically recognizing disaster concepts and events. However, the proposed approach fails to conceptualize IoT entities, which are a core concept in disaster management. Finally, although other related works [64,71,72,74,78] provide ontological approaches for the integration of heterogeneous data toward enabling smart decisions in SAR missions, to the best of our knowledge, the related ontologies are not open-access (they are not available for reuse).
Regarding the reasoning capabilities of the related work, refs. [71,72] do not provide any information about the victims’ position, fitness, and hazard level in the field of operations. Mehla et al., in their work [74], do not incorporate IoT entities, and the related knowledge graph is not open-access, while the combination of ML and SW technologies to more efficiently predict future demands of resources before a disaster occurs is still an open issue. In the work by Sun et al. [78], the incorporation of multiple heterogeneous robots to assist with complex tasks, the application of cloud-based knowledge to reduce the dependency of robots on specific hardware, and the development of advanced task planning for uncertain environments are identified as open issues for further work. Finally, in the work by Daher et al. [79], the transformation of decision makers’ natural language queries into rules for inferring new knowledge to further assist them in the decision-making process is required. Moreover, reasoning about the management of the available resources in the field of operations and a learning approach capable of learning from past disaster data constitute open issues for future work.
Last but not least, concerning the related work on sensor-based IoT applications, the work by Lakshmi et al. [51] does not propose a method to inform decision makers about the position of the detected victim. The works [23,46] provide the location of the victim, but they do not provide any information to first responders about the state of the victim’s location and the risk level. In the work by Sharma et al. [12], optimal methods for exploring disaster-affected sites using multiple robots, the usage of a heat-vision sensor to improve the accuracy of the human-detection model, the detection of the victim’s location using voice frequency, and the search for the shortest risk-free path for the rescue teams to approach the victim’s location are some of the most important open issues and challenges to be implemented (future work) for the effective and timely detection of victims in disaster-affected sites. Moreover, the proposed approach is limited by its dependence on a stable internet connection; if one is not available, the robot can stream neither the video, nor the location of the detected survivors, nor the sensor data to the centralized cloud server. In the work by Nguyen et al. [56], even though the proposed system detects one or more victims in a wide area, it does not provide any information about the state of the victims’ location, while blockchain technology reduces system performance, which is critical given the time constraints that SAR missions entail. Moreover, the use of edge servers, placed in boats or helicopters, and the lack of other types of robots (e.g., ground robots) make the proposed system inadequate for disasters in which flexibility is of vital importance, such as earthquakes. In the work by Girma et al. [60], the whole system is based on an antenna tracker that extends the network coverage from a nearby functional infrastructure. However, in the case of a massive disaster after a mega-earthquake, it is likely that there is no functional base station in the nearby area, making the proposed framework non-functional. Finally, for performance maximization in works [46,51], the implementation of a fully autonomous UAV able to decide which path to follow in an uncertain and unknown environment and the implementation of ML models to analyze the data are required, respectively.
The review conducted and the proposed approach that follows in this paper are motivated by the lack of an integrated decision support system for the life- and time-critical situations that occur in SAR missions and disasters. In Table 4, an evaluation of the related work based on the most important issues and challenges is presented. These issues/challenges are used as requirements (columns Rx) for the proposed framework (Figure 2), focusing on semantic data integration and real-time automated reasoning in life- and time-critical decision support systems. These requirements are enumerated as follows:
R1. Integration of multiple heterogeneous collaborative IoT entities (e.g., UAVs, ground robots, wearables, etc.) equipped with sensors (temperature, etc.).
R2. Semantic integration of heterogeneous stream/dynamic (sensor) data using suitable ontologies.
R3. Analysis of streamed data for recognizing real-time low-level events (e.g., fire at specific lat/long/alt coordinates, image/video analysis for the detection of trapped/injured victims, etc.).
R4. Dynamic construction and use of KGs for the representation of the current state in the field of missions.
R5. Recognition of high-level events with automated reasoning over the constructed KG and KG-based recommendations about actions/decisions to be made.
R6. Integration of actions needed to be performed for a recognized event.
R7. Transformation of the decision makers’ natural language queries to machine-understandable queries, to further assist in the decision-making process.
R8. Integration of ML models for making predictions about the evolution of the state in the next few life-critical seconds/minutes, using historical data.
The dynamic environments in which SAR missions take place demand the development and implementation of a robust and independent environmental-status framework able to support decision makers/commanders in the harsh field of operations. Toward this direction, the use of a network of edge devices with artificial intelligence (edge intelligence) [81] is proposed. Although the term edge intelligence already exists, the proposed framework uses this paradigm to bring the processing of the generated data closer to their sources, providing meaningful and timely insights to stakeholders. Response time is, consequently, another key challenge of the proposed framework. Thus, our goal is to overcome the aforementioned open issues and challenges of the related work by implementing the corresponding capabilities on edge computing infrastructure.

6. Proposed Framework

Based on the requirements presented in Table 4, we propose a specific framework that orchestrates specific components and functionality in a seamless manner. The overall abstract design of the proposed framework, namely, DS4SAR, is depicted in Figure 2. The following list outlines the proposed specific capabilities and functionalities:
  • Integration of multiple heterogeneous collaborative IoT entities (UAVs, ground robots, weather stations, wearables, etc.) equipped with sensors (temperature, humidity, air quality, etc.) able to sense and/or move within the disaster-affected site, collecting valuable data in real time.
  • Semantic integration of heterogeneous stream/dynamic (sensor) data with static data (e.g., mission plans) using suitable ontologies, in one or more ground base units/controllers on edge devices, operating as an interoperability middleware, e.g., Raspberry Pi, making the proposed framework independent from the environmental status (e.g., destruction of critical infrastructures such as telecommunication base stations).
  • Analysis of streamed data for recognizing real-time low-level events (e.g., fire at specific lat/long/alt coordinates, image/video analysis for the detection of trapped/injured victims, etc.) using ML models and techniques such as YOLO.
  • Dynamic construction and use of KGs for the representation of the current state in the field of operations.
  • Recognition of high-level events with automated reasoning over the constructed KG, and KG-based recommendations about actions/decisions to be made, such as aborting the mission due to a high risk of additional human losses, etc.
  • Integration of actions needed to be performed for a recognized event using SWRL, SPIN, or SHACL rules.
  • Translation of the decision makers’ natural language queries into machine-understandable queries (in SPARQL or Cypher), for inferring new knowledge and further assisting in the decision-making process.
  • Integration of ML models, such as GNN models, for making predictions about the evolution of the state in the next few life-critical seconds/minutes using historical data.
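As the sketch referenced in the second item above, the snippet below annotates one raw temperature reading with the W3C SOSA ontology using rdflib, so that readings from heterogeneous sensors share one machine-understandable vocabulary. The mission namespace and sensor identifier are illustrative assumptions.

```python
# A minimal sketch of semantic annotation on the edge middleware, assuming the
# rdflib library; the ex: mission namespace and the sensor ID are hypothetical.
from datetime import datetime, timezone

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")  # W3C SOSA ontology
EX = Namespace("http://example.org/sar/")       # hypothetical mission namespace

def annotate_reading(graph: Graph, sensor_id: str, value: float) -> None:
    """Represent one raw temperature reading as a SOSA observation."""
    now = datetime.now(timezone.utc)
    obs = EX[f"obs-{sensor_id}-{now.timestamp():.0f}"]
    graph.add((obs, RDF.type, SOSA.Observation))
    graph.add((obs, SOSA.madeBySensor, EX[sensor_id]))
    graph.add((obs, SOSA.observedProperty, EX.temperature))
    graph.add((obs, SOSA.hasSimpleResult, Literal(value, datatype=XSD.double)))
    graph.add((obs, SOSA.resultTime, Literal(now.isoformat(), datatype=XSD.dateTime)))

g = Graph()
g.bind("sosa", SOSA)
g.bind("ex", EX)
annotate_reading(g, "uav1-thermometer", 74.3)  # a drone-mounted sensor reading
print(g.serialize(format="turtle"))
```

Once all streams share such a vocabulary, readings from drones, weather stations, and wearables can be queried and reasoned with uniformly, regardless of their native formats.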
In Figure 2, the architectural design of the proposed framework is depicted. The framework consists of two layers, each of which serves a specific purpose. The first layer concerns the IoT entities that “sense” the field of operations and provide awareness of the environment/context (e.g., a building/area affected by a mega-earthquake). IoT entities are divided into three categories based on the way they interact with the environment: sensing entities, e.g., weather stations recording weather changes at the affected site; mobile applications acting as data sources, e.g., health monitoring apps running on wearables of missing people/victims; and posts/messages on social media channels (e.g., tweets related to the disaster) from users in the nearby area. Moreover, there are IoT entities that not only sense and record the environment but are also able to act (e.g., searching for victims or carrying necessary medical equipment), such as a swarm of drones and ground robots (e.g., Spot). Finally, a leader drone that commands the swarm, with sensing, processing, and actuating capabilities, can also be present. The leader drone is equipped with sensors to gather real-time context-aware data (video, audio, weather, environment) but, in contrast to the aforementioned agents, it can also process and analyze these data to recognize low-level events (e.g., fire in a room with highly explosive materials) and timely inform the swarm and the other involved agents to act (automatically sending specific action sequences to them). The streaming data and the recognized low-level events from the IoT entities are transmitted to a ground unit for further processing.
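As an indication of how the leader drone’s on-board recognition could be realized, the sketch below runs one video frame through a YOLO detector and emits low-level event records. The ultralytics API usage is standard, but the SAR-tuned weights file, class names, confidence threshold, and stream URL are hypothetical.

```python
# A minimal sketch of on-board low-level event recognition, assuming the
# ultralytics YOLO API; weights, class names, threshold, and the RTSP stream
# URL are hypothetical placeholders.
import cv2
from ultralytics import YOLO

model = YOLO("sar_yolov8n.pt")  # hypothetical SAR-tuned weights
CRITICAL_CLASSES = {"person", "fire", "gas_vessel"}

def detect_low_level_events(frame, lat, lon, alt):
    """Return one event record per critical detection in a single frame."""
    events = []
    for box in model(frame, verbose=False)[0].boxes:
        label = model.names[int(box.cls)]
        if label in CRITICAL_CLASSES and float(box.conf) > 0.5:
            events.append({
                "event": f"{label}_detected",
                "confidence": round(float(box.conf), 2),
                "position": (lat, lon, alt),
            })
    return events

cap = cv2.VideoCapture("rtsp://leader-drone/stream")  # hypothetical source
ok, frame = cap.read()
if ok:
    for event in detect_low_level_events(frame, 39.10, 26.55, 42.0):
        print(event)  # in the full pipeline, forwarded to the edge controller
```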
The second layer consists of one or more edge devices/controllers, e.g., a Raspberry Pi operating as an interoperability middleware. The received (stream/dynamic) data are processed to recognize high-level events supporting life- and time-critical decisions, such as “signs of life recognized, send a rescue team to the position (lat/long/alt coordinates)” or “abort the mission due to the presence of uncontrolled fire and high levels of flammable gas”. Due to the nature of the received data, a preprocessing step is mandatory. An image and video analysis module is used to detect entities of interest (EoI) in the field of operations, such as gas vessels or trapped/injured victims. Based on the detected EoI and the other sensor data (temperature, air quality, etc.), a module for low-level event recognition is utilized, and such events are recognized (e.g., high temperature and low humidity at specific lat/long/alt coordinates). Due to the need for representing, unifying, and reasoning with heterogeneous data, an ontological model is utilized. The ontology-based approach for knowledge representation and reasoning not only alleviates the problem of interoperability between different sensor data and platforms/systems but also facilitates advanced reasoning capabilities. A dedicated module for the semantic annotation of the streaming data is also required. These dynamic, semantically annotated data are then automatically integrated with static data (e.g., mission plans, evacuation plans, building information models (BIMs)) and data about the EoI (people and objects). The result of this process is an integrated KG that represents all the information related to the current situation.
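The following sketch illustrates this integration step on the edge controller: static triples (mission plan, BIM-derived data) are loaded once, and each incoming low-level event is merged into the same graph. The Turtle file names, the event schema, and the namespace are illustrative assumptions.

```python
# A minimal sketch of the KG integration step, assuming rdflib; file names,
# event schema, and the ex: namespace are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/sar/")

kg = Graph()
kg.parse("mission_plan.ttl", format="turtle")    # static data, loaded once
kg.parse("building_model.ttl", format="turtle")  # e.g., BIM-derived triples

def integrate_low_level_event(event: dict) -> None:
    """Merge a low-level event received from the IoT layer into the mission KG."""
    node = EX[f"event-{event['id']}"]
    kg.add((node, RDF.type, EX[event["type"]]))  # e.g., ex:FireDetected
    kg.add((node, EX.hasLatitude, Literal(event["lat"], datatype=XSD.double)))
    kg.add((node, EX.hasLongitude, Literal(event["lon"], datatype=XSD.double)))
    kg.add((node, EX.hasConfidence, Literal(event["confidence"], datatype=XSD.double)))

integrate_low_level_event(
    {"id": "0042", "type": "FireDetected", "lat": 39.10, "lon": 26.55, "confidence": 0.91}
)
print(f"{len(kg)} triples now represent the current situation")
```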
High-level events are recognized via automated reasoning over the constructed KG. Based on the inferred facts, a decision support system informs both EoI and decision makers/commanders to act accordingly, ensuring the protection of both personnel and equipment. To further enhance the efficiency of the proposed framework, a pre-trained GNN model is proposed. The model takes the constructed KG as input and computes valuable insights (predictions) about the evolution of both the event state and the SAR mission, thus providing decision makers with recommendations for alternative approaches to the predicted scenario. The outcome of the model is also used as input for an analytics module that provides the commanders with useful information about the current situation in the field of operations. Due to the lack of time and extra computational resources during a mission, the training of the GNN model takes place on a third-party device independent of the proposed system (e.g., a laptop), using historical data, and the trained model is loaded onto the edge devices before the mission starts. Last but not least, the commanders can ask for additional information about the current state during disaster management missions. Thus, the decision makers’ natural language queries are transformed into machine-understandable queries (such as SPARQL or Cypher) using Large Language Models (LLMs), to infer new knowledge and further assist in the decision-making process.
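As a minimal illustration of the reasoning step, the sketch below encodes one high-level rule — uncontrolled fire and high flammable-gas readings in the same zone imply an abort recommendation — as a SPARQL ASK query over the mission KG. The vocabulary, file name, and threshold are hypothetical; in the framework itself, such rules would be authored in SWRL, SPIN, or SHACL.

```python
# A minimal sketch of high-level event recognition over the mission KG,
# assuming rdflib; the ex: vocabulary and the gas threshold are hypothetical.
from rdflib import Graph

kg = Graph()
kg.parse("mission_kg.ttl", format="turtle")  # the integrated KG built earlier

ABORT_RULE = """
PREFIX ex: <http://example.org/sar/>
ASK {
  ?fire a ex:FireDetected ; ex:inZone ?zone .
  ?gas  a ex:GasReading   ; ex:inZone ?zone ; ex:hasLevel ?level .
  FILTER (?level > 0.8)  # hypothetical flammable-gas threshold
}
"""

result = kg.query(ABORT_RULE)
if result.askAnswer:
    print("High-level event: abort the mission in the affected zone.")
```

Equivalent rules can be executed declaratively by a DL reasoner such as Pellet [34], trading expressiveness for reasoning cost on the edge device.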
One of the most important challenges during disaster management and SAR missions is the destruction of telecommunication infrastructures (e.g., base stations), which degrades the exchange of information between the involved entities. As mentioned earlier, in such scenarios, time is of vital importance. Thus, a decision support system that is robust and independent of the environmental status is mandatory. A key feature of the proposed architecture is moving computation from the cloud to one or more edge devices (e.g., a Raspberry Pi). Technological advances and improvements in edge computing [82] have led to the widespread adoption of such devices, which now provide higher bandwidth and lower latencies than in the past. Edge computing brings computation and data storage closer to the data sources. This paradigm does not depend on a stable Internet connection (e.g., Wi-Fi), making the whole system more resilient than cloud computing in the unstable/harsh environments in which SAR missions take place. Moreover, edge computing improves performance [83] and enables advanced applications with lower response times than cloud-based applications [84,85].

7. Conclusion and Future Work

Natural disasters such as earthquakes, hurricanes, floods, etc., involve life- and time-critical situations that can lead to severe consequences, including threats to citizens’ lives and the destruction of infrastructures. Moreover, in most cases, the post-disaster environment poses risks for first responders, significantly increasing the difficulty of their work. In such scenarios, time is crucial. The success of an SAR mission depends on the proper management of the available resources and on timely decisions in the field of operations. Thus, decision makers must have a clear and complete view of the current state to address emerging challenges efficiently and in a timely manner, ensuring the safety of both personnel and victims. The combination of IoT and Semantic Web technologies can provide integrated solutions that supply decision makers with meaningful insights about the current state and support them in making life- and time-critical decisions.
In this paper, a review of related technologies and approaches was conducted, and open issues and challenges were identified, focusing mainly on semantic data integration and reasoning with SAR-related knowledge in time-critical decision support systems. On this basis, a novel approach was proposed that goes beyond the state-of-the-art in efficiently recognizing time-critical high-level events, supporting commanders and first responders with meaningful and life-critical insights about the current and predicted states of the environment in which they operate. Our future work includes (a) the implementation of the proposed framework by developing appropriate methods and tools to meet the requirements determined in this work and (b) the evaluation of the proposed solution in a real-case scenario, like the one presented here, integrating stakeholders’ feedback on the efficiency of the framework.

Author Contributions

Conceptualization, K.I.K. and G.A.V.; methodology, K.I.K. and G.A.V.; validation, A.S., K.I.K. and G.A.V.; formal analysis, A.S. and K.I.K.; investigation, A.S. and K.I.K.; resources, A.S. and K.I.K.; data curation, A.S. and K.I.K.; writing—original draft preparation, A.S.; writing—review and editing, K.I.K.; visualization, A.S.; supervision, K.I.K.; project administration, K.I.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bangalkar, Y.V.; Kharad, S.M.; Tech, P.M. An overview on search and rescue robots during earthquake and natural calamities. IJISET-Int. J. Innov. Sci. Eng. Technol. 2015, 2, 2037–2040. [Google Scholar]
  2. Mehmood, S.; Ahmed, S.; Kristensen, A.S.; Ahsan, D. Multi Criteria Decision Analysis (MCDA) of Unmanned Aerial Vehicles (UAVs) as a Part of Standard Response to Emergencies. In Proceedings of the 4th International Conference on Green Computing and Engineering Technologies, Esbjerg, Denmark, 17–19 August 2018; p. 31. [Google Scholar]
  3. Merino, L.; Caballero, F.; Martínez-de Dios, J.R.; Ollero, A. Cooperative fire detection using unmanned aerial vehicles. Proc.-IEEE Int. Conf. Robot. Autom. 2005, 2005, 1884–1889. [Google Scholar] [CrossRef]
  4. Margara, A.; Urbani, J.; Van Harmelen, F.; Bal, H. Streaming the Web: Reasoning over dynamic data. J. Web Semant. 2014, 25, 24–44. [Google Scholar] [CrossRef]
  5. Shin, E.; Yoo, S.; Ju, Y.; Shin, D. Knowledge graph embedding and reasoning for real-time analytics support of chemical diagnosis from exposure symptoms. Process Saf. Environ. Prot. 2022, 157, 92–105. [Google Scholar] [CrossRef]
  6. Gun, Z.; Chen, J. Novel Knowledge Graph- and Knowledge Reasoning-Based Classification Prototype for OBIA Using High Resolution Remote Sensing Imagery. Remote Sens. 2023, 15, 321. [Google Scholar] [CrossRef]
  7. Chen, X.; Jia, S.; Xiang, Y. A review: Knowledge reasoning over knowledge graph. Expert Syst. Appl. 2020, 141, 112948. [Google Scholar] [CrossRef]
  8. Ahmadh, R.K.; Kariapper, R. Awareness of Internet of Things among Students of South Eastern University of Sri Lanka. J. Crit. Rev. 2020, 7, 4673–4678. [Google Scholar]
  9. Nižetić, S.; Šolić, P.; López-de-Ipiña González-de-Artaza, D.; Patrono, L. Internet of Things (IoT): Opportunities, issues and challenges towards a smart and sustainable future. J. Clean. Prod. 2020, 274, 122877. [Google Scholar] [CrossRef]
  10. Xhafa, F.; Kilic, B.; Krause, P. Evaluation of IoT stream processing at edge computing layer for semantic data enrichment. Future Gener. Comput. Syst. 2020, 105, 730–736. [Google Scholar] [CrossRef]
  11. Rahmaniar, W.; Santoso, A.W. Sensor integration for real-time data acquisition in aerial surveillance. Aust. J. Electr. Electron. Eng. 2022, 19, 117–128. [Google Scholar] [CrossRef]
  12. Sharma, K.; Doriya, R.; Pandey, S.K.; Kumar, A.; Sinha, G.R.; Dadheech, P. Real-Time Survivor Detection System in SaR Missions Using Robots. Drones 2022, 6, 219. [Google Scholar] [CrossRef]
  13. Militano, L.; Arteaga, A.; Toffetti, G.; Mitton, N. The Cloud-to-Edge-to-IoT Continuum as an Enabler for Search and Rescue Operations. Futur. Internet 2023, 15, 55. [Google Scholar] [CrossRef]
  14. Kalatzis, N.; Routis, G.; Marinellis, Y.; Avgeris, M.; Roussaki, I.; Papavassiliou, S.; Anagnostou, M. Semantic interoperability for IoT platforms in support of decision making: An experiment on early wildfire detection. Sensors 2019, 19, 528. [Google Scholar] [CrossRef] [PubMed]
  15. Perez-Grau, F.J.; Ragel, R.; Caballero, F.; Viguria, A.; Ollero, A. Semi-autonomous teleoperation of UAVs in search and rescue scenarios. In Proceedings of the 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 13–16 June 2017; pp. 1066–1074. [Google Scholar] [CrossRef]
  16. Tahir, A.; Böling, J.; Haghbayan, M.H.; Toivonen, H.T.; Plosila, J. Swarms of Unmanned Aerial Vehicles—A Survey. J. Ind. Inf. Integr. 2019, 16, 100106. [Google Scholar] [CrossRef]
  17. Types of Drones—Explore the Different Types of UAV’s. 2022. Available online: http://www.circuitstoday.com/types-of-drones (accessed on 16 January 2024).
  18. Falanga, D.; Kleber, K.; Mintchev, S.; Floreano, D.; Scaramuzza, D. The Foldable Drone: A Morphing Quadrotor that can Squeeze and Fly. IEEE Robot. Autom. Lett. 2018, 4, 209–216. [Google Scholar] [CrossRef]
  19. Shakhatreh, H.; Sawalmeh, A.H.; Al-Fuqaha, A.; Dou, Z.; Almaita, E.; Khalil, I.; Othman, N.S.; Khreishah, A.; Guizani, M. Unmanned Aerial Vehicles (UAVs): A Survey on Civil Applications and Key Research Challenges. IEEE Access 2019, 7, 48572–48634. [Google Scholar] [CrossRef]
  20. Hossein Motlagh, N.; Taleb, T.; Arouk, O. Low-Altitude Unmanned Aerial Vehicles-Based Internet of Things Services: Comprehensive Survey and Future Perspectives. IEEE Internet Things J. 2016, 3, 899–922. [Google Scholar] [CrossRef]
  21. Hayat, S.; Yanmaz, E.; Muzaffar, R. Survey on Unmanned Aerial Vehicle Networks for Civil Applications: A Communications Viewpoint. IEEE Commun. Surv. Tutor. 2016, 18, 2624–2661. [Google Scholar] [CrossRef]
  22. Estrada, M.A.R.; Ndoma, A. The uses of unmanned aerial vehicles –UAV’s- (or drones) in social logistic: Natural disasters response and humanitarian relief aid. Procedia Comput. Sci. 2019, 149, 375–383. [Google Scholar] [CrossRef]
  23. Albanese, A.; Sciancalepore, V.; Costa-Perez, X. SARDO: An Automated Search-and-Rescue Drone-Based Solution for Victims Localization. IEEE Trans. Mob. Comput. 2022, 21, 3312–3325. [Google Scholar] [CrossRef]
  24. Jo, D.; Kwon, Y.; Jo, D.; Kwon, Y. Development of Rescue Material Transport UAV (Unmanned Aerial Vehicle). World J. Eng. Technol. 2017, 5, 720–729. [Google Scholar] [CrossRef]
  25. Rahman, H.; Hussain, M.I. A comprehensive survey on semantic interoperability for Internet of Things: State-of-the-art and research challenges. Trans. Emerg. Telecommun. Technol. 2020, 31, e3902. [Google Scholar] [CrossRef]
  26. Cimmino, A.; Fernández-Izquierdo, A.; Poveda-Villalón, M.; García-Castro, R. Ontologies for IoT Semantic Interoperability. In IoT Platforms, Use Cases, Privacy, and Business Models; Springer: Cham, Switzerland, 2020; Volume 1, pp. 99–123. [Google Scholar] [CrossRef]
  27. A Joint Roadmap for Semantic Technologies and the Internet of Things. 2023. Available online: https://www.researchgate.net/publication/228667796_A_joint_roadmap_for_semantic_technologies_and_the_Internet_of_Things (accessed on 4 December 2023).
  28. Elmhadhbi, L.; Karray, M.H.; Archimède, B. A modular ontology for semantically enhanced interoperability in operational disaster response. In Proceedings of the 16th International Conference on Information Systems for Crisis Response and Management-ISCRAM, Valencia, Spain, 19–22 May 2019; pp. 1021–1029. [Google Scholar]
  29. Noy, N.; Gao, Y.; Jain, A.; Narayanan, A.; Patterson, A.; Taylor, J. Industry-scale knowledge graphs. Commun. ACM 2019, 62, 36–43. [Google Scholar] [CrossRef]
  30. SWRL: A Semantic Web Rule Language Combining OWL and RuleML. 2023. Available online: https://www.w3.org/submissions/SWRL/ (accessed on 5 December 2023).
  31. Hogan, A.; Blomqvist, E.; Cochez, M.; D’Amato, C.; Melo, G.D.; Gutierrez, C.; Kirrane, S.; Gayo, J.E.L.; Navigli, R.; Neumaier, S.; et al. Knowledge graphs. ACM Comput. Surv. 2021, 54, 1–37. [Google Scholar] [CrossRef]
  32. Hao, X.; Ji, Z.; Li, X.; Yin, L.; Liu, L.; Sun, M.; Liu, Q.; Yang, R. Construction and application of a knowledge graph. Remote Sens. 2021, 13, 2511. [Google Scholar] [CrossRef]
  33. Chen, G.; Jiang, T.; Wang, M.; Tang, X.; Ji, W. Modeling and reasoning of IoT architecture in semantic ontology dimension. Comput. Commun. 2020, 153, 580–594. [Google Scholar] [CrossRef]
  34. Sirin, E.; Parsia, B.; Grau, B.C.; Kalyanpur, A.; Katz, Y. Pellet: A practical OWL-DL reasoner. J. Web Semant. 2007, 5, 51–53. [Google Scholar] [CrossRef]
  35. Su, X.; Gilman, E.; Wetz, P.; Riekki, J.; Zuo, Y.; Leppänen, T. Stream reasoning for the internet of things: Challenges and gap analysis. In Proceedings of the 6th International Conference on Web Intelligence, Mining and Semantics, Nîmes, France, 13–15 June 2016. [Google Scholar] [CrossRef]
  36. Kuhn, T.; Dell’aglio, D.; Della Valle, E.; Van Harmelen, F.; Bernstein, A. Stream reasoning: A survey and outlook. Data Sci. 2017, 1, 59–83. [Google Scholar] [CrossRef]
  37. Barbieri, D.F.; Braga, D.; Ceri, S.; Della Valle, E.; Grossniklaus, M. C-SPARQL: SPARQL for continuous querying. In Proceedings of the 18th International Conference on World Wide Web (WWW ’09), Madrid, Spain, 20–24 April 2009; pp. 1061–1062. [Google Scholar] [CrossRef]
  38. Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; Yu, P.S. A Comprehensive Survey on Graph Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 4–24. [Google Scholar] [CrossRef]
  39. Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; Sun, M. Graph neural networks: A review of methods and applications. AI Open 2020, 1, 57–81. [Google Scholar] [CrossRef]
  40. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
  41. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph attention networks. arXiv 2017, arXiv:1710.10903. [Google Scholar]
  42. Ye, Z.; Kumar, Y.J.; Sing, G.O.; Song, F.; Wang, J. A Comprehensive Survey of Graph Neural Networks for Knowledge Graphs. IEEE Access 2022, 10, 75729–75741. [Google Scholar] [CrossRef]
  43. Liu, Y.; Yang, S.; Xu, Y.; Miao, C.; Wu, M.; Zhang, J. Contextualized Graph Attention Network for Recommendation with Item Knowledge Graph. IEEE Trans. Knowl. Data Eng. 2023, 35, 181–195. [Google Scholar] [CrossRef]
  44. Wang, Y.; Liu, Z.; Fan, Z.; Sun, L.; Yu, P.S. DSKReG: Differentiable Sampling on Knowledge Graph for Recommendation with Relational GNN. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Virtual Event, 1–5 November 2021; pp. 3513–3517. [Google Scholar] [CrossRef]
  45. Shinde, S.; Kothari, A.; Gupta, V. YOLO based Human Action Recognition and Localization. Procedia Comput. Sci. 2018, 133, 831–838. [Google Scholar] [CrossRef]
  46. Chatziparaschis, D.; Lagoudakis, M.G.; Partsinevelos, P. Aerial and ground robot collaboration for autonomous mapping in search and rescue missions. Drones 2020, 4, 79. [Google Scholar] [CrossRef]
  47. Kohlbrecher, S.; Von Stryk, O.; Meyer, J.; Klingauf, U. A flexible and scalable SLAM system with full 3D motion estimation. In Proceedings of the 9th IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Kyoto, Japan, 1–5 November 2011; pp. 155–160. [Google Scholar] [CrossRef]
  48. Hornung, A.; Dornbush, A.; Likhachev, M.; Bennewitz, M. Anytime search-based footstep planning with suboptimality bounds. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Osaka, Japan, 29 November–1 December 2012; pp. 674–679. [Google Scholar] [CrossRef]
  49. Vinodini Ramesh, M.; Pullarkatt, D.; Geethu, T.H.; Venkat Rangan, P. Wireless Sensor Networks for Early Warning of Landslides: Experiences from a Decade Long Deployment. In Advancing Culture of Living with Landslides; Springer: Cham, Switzerland, 2017; pp. 41–50. [Google Scholar] [CrossRef]
  50. Menon, D.M.; Sai Shibu, N.B.; Rao, S.N. Comparative Analysis of Communication Technologies for an Aerial IoT over Collapsed Structures. In Proceedings of the 2020 International Conference on Wireless Communications Signal Processing and Networking (WiSPNET), Chennai, India, 4–6 August 2020; pp. 32–36. [Google Scholar] [CrossRef]
  51. Lakshmi, P.; Rejith, G.; Toby, T.; Sai Shibu, N.B.; Rao, S.N. A Resilient IoT System Architecture for Disaster Management in Collapsed Buildings. In Proceedings of the 2022 International Conference on Wireless Communications Signal Processing and Networking (WiSPNET), Chennai, India, 24–26 March 2022; pp. 282–287. [Google Scholar] [CrossRef]
  52. Lee, H.C.; Ke, K.H. Monitoring of Large-Area IoT Sensors Using a LoRa Wireless Mesh Network System: Design and Evaluation. IEEE Trans. Instrum. Meas. 2018, 67, 2177–2187. [Google Scholar] [CrossRef]
  53. Nazarova, A.V.; Zhai, M. The Application of Multi-agent Robotic Systems for Earthquake Rescue. Stud. Syst. Decis. Control 2020, 272, 133–146. [Google Scholar] [CrossRef]
  54. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2017, 22, 387–408. [Google Scholar] [CrossRef]
  55. Genetic Algorithms on JSTOR. 2023. Available online: https://www.jstor.org/stable/24939139 (accessed on 7 November 2023).
  56. Nguyen, T.; Katila, R.; Gia, T.N. An advanced Internet-of-Drones System with Blockchain for improving quality of service of Search and Rescue: A feasibility study. Future Gener. Comput. Syst. 2023, 140, 36–52. [Google Scholar] [CrossRef]
  57. Katila, R.; Gia, T.N.; Westerlund, T. Analysis of mobility support approaches for edge-based IoT systems using high data rate Bluetooth Low Energy 5. Comput. Netw. 2022, 209, 108925. [Google Scholar] [CrossRef]
  58. Nguyen Gia, T.; Nawaz, A.; Peña Querata, J.; Tenhunen, H.; Westerlund, T. Artificial Intelligence at the Edge in the Blockchain of Things. In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering; Springer: Berlin/Heidelberg, Germany, 2020; Volume 320, pp. 267–280. [Google Scholar] [CrossRef]
  59. Nawaz, A.; Gia, T.N.; Pena Queralta, J.; Westerlund, T. Edge AI and Blockchain for Privacy-Critical and Data-Sensitive Applications. In Proceedings of the 12th International Conference on Mobile Computing and Ubiquitous Network, ICMU 2019, Kathmandu, Nepal, 4–6 November 2019. [Google Scholar] [CrossRef]
  60. Girma, A.; Bahadori, N.; Sarkar, M.; Tadewos, T.G.; Behnia, M.R.; Mahmoud, M.N.; Karimoddini, A.; Homaifar, A. IoT-enabled autonomous system collaboration for disaster-area management. IEEE/CAA J. Autom. Sin. 2020, 7, 1249–1262. [Google Scholar] [CrossRef]
  61. Vassiliades, A.; Symeonidis, S.; Diplaris, S.; Tzanetis, G.; Vrochidis, S.; Bassiliades, N.; Kompatsiaris, I. XR4DRAMA Knowledge Graph: A Knowledge Graph for Disaster Management. In Proceedings of the 2023 IEEE 17th International Conference on Semantic Computing (ICSC), Laguna Hills, CA, USA, 1–3 February 2023; pp. 262–265. [Google Scholar] [CrossRef]
  62. Suárez-Figueroa, M.C.; Gómez-Pérez, A.; Villazón-Terrazas, B. How to write and use the ontology requirements specification document. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2009; Volume 5871, pp. 966–982. [Google Scholar] [CrossRef]
  63. Masa, P.; Kintzios, S.; Vasileiou, Z.; Meditskos, G.; Vrochidis, S.; Kompatsiaris, I. A Semantic Framework for Decision Making in Forest Fire Emergencies. Appl. Sci. 2023, 13, 9065. [Google Scholar] [CrossRef]
  64. Angelidis, I.; Politi, E.; Vafeiadis, G.; Vergeti, D.; Ntalaperas, D.; Papageorgopoulos, N. A lightweight Ontology for real time semantic correlation of situation awareness data generated for first responders. In Proceedings of the 2021 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 15–17 December 2021; pp. 1830–1835. [Google Scholar] [CrossRef]
  65. IMproving Preparedness and Response of HEalth Services in Major criseS|IMPRESS|Project|News & Multimedia|FP7|CORDIS|European Commission. 2023. Available online: https://cordis.europa.eu/project/id/608078/reporting (accessed on 15 November 2023).
  66. Amith, M.; Fujimoto, K.; Mauldin, R.; Tao, C. Friend of a Friend with Benefits ontology (FOAF+): Extending a social network ontology for public health. BMC Med. Inform. Decis. Mak. 2020, 20, 269. [Google Scholar] [CrossRef] [PubMed]
  67. Ahlers, D. Linkage quality analysis of geonames in the semantic web. In Proceedings of the 11th Workshop on Geographic Information Retrieval, Heidelberg, Germany, 30 November–1 December 2017. [Google Scholar] [CrossRef]
  68. Arp, R.; Smith, B.; Spear, A.D. Building Ontologies with Basic Formal Ontology; MIT Press: Cambridge, MA, USA, 2015; ISBN 9780262527811. [Google Scholar]
  69. Rudnicki, R. An Overview of the Common Core Ontologies; CUBRC: Buffalo, NY, USA, 2019. [Google Scholar]
  70. Gaur, M.; Shekarpour, S.; Gyrard, A.; Sheth, A. Empathi: An Ontology for Emergency Managing and Planning about Hazard Crisis. In Proceedings of the 2019 IEEE 13th International Conference on Semantic Computing (ICSC), Newport Beach, CA, USA, 30 January–1 February 2019; pp. 396–403. [Google Scholar] [CrossRef]
  71. Jiao, Y.; You, S. Rescue decision via Earthquake Disaster Knowledge Graph reasoning. Multimed. Syst. 2023, 29, 605–614. [Google Scholar] [CrossRef]
  72. Jain, S.; Mehla, S.; Wagner, J. Ontology-supported rule-based reasoning for emergency management. In Web Semantics: Cutting Edge and Future Directions in Healthcare; Academic Press: Cambridge, MA, USA, 2021; pp. 117–128. [Google Scholar] [CrossRef]
  73. Parsia, B.; Sirin, E. Pellet: An OWL DL Reasoner. 2023. Available online: http://www.mindswap.org/2003/pellet/ (accessed on 26 November 2023).
  74. Mehla, S.; Jain, S. An ontology supported hybrid approach for recommendation in emergency situations. Ann. Telecommun. 2020, 75, 421–435. [Google Scholar] [CrossRef]
  75. Mehla, S.; Jain, S. Rule languages for the semantic web. Adv. Intell. Syst. Comput. 2019, 755, 825–834. [Google Scholar] [CrossRef]
  76. Mehla, S.; Jain, S. Development and evaluation of knowledge treasure for emergency situation awareness. Int. J. Comput. Appl. 2021, 43, 483–493. [Google Scholar] [CrossRef]
  77. Deep Learning. 2023. Available online: https://mitpress.mit.edu/9780262035613/deep-learning/ (accessed on 23 November 2023).
  78. Sun, X.; Zhang, Y.; Chen, J. High-level smart decision making of a Robot based on ontology in a search and Rescue Scenario. Future Internet 2019, 11, 230. [Google Scholar] [CrossRef]
  79. Bu Daher, J.; Stolf, P.; Hernandez, N.; Huygue, T. Enhancing Interoperability and Inferring Evacuation Priorities in Flood Disaster Response. In IFIP Advances in Information and Communication Technology; Springer: Berlin/Heidelberg, Germany, 2023; Volume 672, pp. 39–54. [Google Scholar] [CrossRef]
  80. Daher, J.B.; Huygue, T.; Stolf, P.; Hernandez, N.J.; Hernandez, N. An Ontology and a reasoning approach for Evacuation in Flood Disaster Response. J. Inf. Knowl. Manag. 2022, 23–24. [Google Scholar] [CrossRef]
  81. Deng, S.; Zhao, H.; Fang, W.; Yin, J.; Dustdar, S.; Zomaya, A.Y. Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence. IEEE Internet Things J. 2020, 7, 7457–7469. [Google Scholar] [CrossRef]
  82. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge Computing: Vision and Challenges. IEEE Internet Things J. 2016, 3, 637–646. [Google Scholar] [CrossRef]
  83. Groshev, M.; Baldoni, G.; Cominardi, L.; de la Oliva, A.; Gazda, R. Edge robotics: Are we ready? an experimental evaluation of current vision and future directions. Digit. Commun. Netw. 2023, 9, 166–174. [Google Scholar] [CrossRef]
  84. Huang, P.; Zeng, L.; Chen, X.; Huang, L.; Zhou, Z.; Yu, S. Edge Robotics: Edge-Computing-Accelerated Multirobot Simultaneous Localization and Mapping. IEEE Internet Things J. 2022, 9, 14087–14102. [Google Scholar] [CrossRef]
  85. McEnroe, P.; Wang, S.; Liyanage, M. A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges. IEEE Internet Things J. 2022, 9, 15435–15459. [Google Scholar] [CrossRef]
Figure 1. PRISMA research methodology.
Figure 2. High-level architecture of the proposed framework.
Table 1. Related work integrating IoT entities in SAR missions and disaster management scenarios. Columns: Referenced Paper; Internet of Things (Collaborative IoT Entities, Sensors); Provided Information (Survivor Localization, Risk Level); Computing (Cloud, Edge, IoT Entity); Algorithms/Models. The works compared are [12] (YOLO, CNN), [46] (SLAM), [51], [53] (PSO, GA, A*), [23] (CNN), [56] (YOLO), and [60] (K-means).
Table 2. Related work on semantic modeling in SAR missions and disaster management scenarios. Columns: Reference Paper; Domain; Format; IoT Entities; Sensors; Modular Ontology; Open Source. The works compared are [61] (flood, fire; Turtle *), [63] (fire; OWL *), [64] (SA; **), [28] (emergency management; OWL), and [70] (emergency management; OWL). * Ontology was provided by the author upon our request, ** not provided.
Table 3. Comparison of works using KG reasoning to support decision makers. Columns: Reference Paper; Domain; Reasoning Method (Rule-Based, Hybrid); Rules (SWRL, SHACL); Ontology (Open Source). The works compared are [71] (earthquake), [72] (emergency management), [74] (emergency management), [78] (SAR), and [79] (flood).
Table 4. Comparative table of the reviewed related works in terms of requirements. Columns: Referenced Paper; R1–R8. The works compared are [12] *, [46], [51], [53], [23] * ***, [56], [60], [61] **, [63] ******, [64], [28], [70], [71] ****, [72] ****, [74], [78] *, and [79]. * Uses only one IoT entity, ** uses only sensors, *** predicts the victim’s future location, **** no real-time data streams.

Back to TopTop