1. Introduction
Internet of Things (IoT) technologies [1] have enabled a variety of sensors and devices inside buildings, such as lights, HVAC (heating, ventilating, and air conditioning) units, alarm systems, surveillance cameras, power meters, and occupancy sensors, to be monitored and controlled in real time. Furthermore, artificial intelligence (AI) provides opportunities for innovative applications, for instance, supervisory automation, occupant comfort optimization, energy efficiency improvement, indoor health management, and security management, thus empowering buildings to be smart.
The most appealing benefit of smart building technologies is the revolution in building management systems (BMSs) [2], where data collected from various sensors are processed and analyzed to enable energy optimization, automation, and so on. In terms of revenue, researchers estimated that connected devices in the global BIoT market generated more than $1.2 billion in 2018, and that this figure will grow at a compound annual growth rate (CAGR) of 21 percent to almost $2.7 billion in 2022, while the number of building automation devices will grow at a CAGR of 44 percent to reach 19.4 million in 2022.
However, studies show that dynamic automation solutions are still insufficient [2]. Deploying automation in smart buildings requires a large amount of manual effort and building-specific domain expertise, so this vision is far from realization. It remains a challenge to model the context, including users, sensors, actuators (so-called smart devices), spaces, etc., in a way that is effective for knowledge computation. The various data collected from sensors need to be analyzed by algorithms, transformed into information, and mined to extract knowledge so that machines can better understand humans [3]. So far, most existing studies focus on human activity recognition in a small space with a limited number of devices or sensors. These machine learning-based approaches usually treat the building as a black box: they ignore the building's physical structure and do not capture, in a holistic view, the global relations among the deployed sensors and spaces or the joint observation of sensor values and timestamps.
We argue that in most smart building applications there are tight relationships among the users, the sensors, and the physical structure of the space. In particular, regarding human motion traces, there are strong spatial dependencies among the sensor observations. Therefore, a holistic, conditional probabilistic approach that considers human activity contexts and human-machine interactions (e.g., elevator motion, door opening, lights turning on, etc.) is called for. As shown in Figure 1, when a user enters a building and heads to his/her seat, he/she activates multiple sensors/devices along the path.
Camera-based image processing approaches can achieve relatively high accuracy for human trajectory tracking. However, in contrast to outdoor environments or other open/public spaces (roads, streets, etc.), privacy preservation is generally required indoors (e.g., in offices, meeting rooms, residential spaces, etc.). Therefore, non-invasive sensing technologies (sensor data from devices/appliances, etc.) are more appropriate than cameras. In this study, we focus on such non-invasive sensor nodes.
However, deploying machine learning (ML) approaches for detection requires a large amount of labeled data if high accuracy is to be achieved. For human-motion-related events in particular, the labeled data that can be collected are often scarce or incomplete. For example, a room occupancy event may happen only a few times per day, or rarely at all if the room's functions are restricted or the space is not open to the public; collecting such occupancy labels at scale is therefore difficult and time-consuming. Moreover, deciding which sensor nodes should be used and how to combine them for ML computation is still an ad hoc process. Together, these issues hinder the development of machine learning methods that extract event knowledge automatically, for purposes such as room occupancy detection and human motion tracking.
In this paper, we present a practical approach to knowledge inference on events in an IoT-enabled smart building environment. The proposal leverages the Building Topology Ontology (BOT) to construct spatial graphs over sensors and spaces, which in turn enable conditional reasoning. In particular, because the collected data are small, we utilize the Markov Chain Monte Carlo (MCMC) sampling method to approximate the time-interval values between two correlated events. The proposal has been put into use in a real smart building environment, where several inference scenarios were conducted. Moreover, inference on users' waiting time revealed the effectiveness of automatic elevator control in pursuit of zero waiting time. The primary contributions and novelties of this work can be summarized as follows:
Unified API development for the IoT sensor network, enabling event inference based on sensor-data collaboration in the smart building;
An ontology-based graph model that constructs the spatial relations among sensors and spaces, enabling automatic event extraction based on conditional reasoning;
An inference engine in which two types of models, finite state machines (FSMs) and Bayesian networks (BNs), are proposed for event mapping; further, a Markov Chain Monte Carlo (MCMC) model is utilized to improve the accuracy of sampling the small dataset of time-interval values.
A usage scenario of automatic elevator control, conducted for evaluation. The numerical results demonstrate the potential and effectiveness of automatic control applications based on knowledge inference in the smart building.
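The deterministic part of the event mapping above can be illustrated with a minimal sketch. The state names and transition rules below are illustrative assumptions, not the exact implementation: a small FSM turns raw light-state readings into occupancy events.

```python
class LightFSM:
    """Maps raw light-state readings to room-occupancy events."""

    def __init__(self):
        self.state = "off"  # assume the light starts off (room vacant)

    def step(self, reading):
        """Feed one sensor reading ('on'/'off'); return an event or None."""
        if self.state == "off" and reading == "on":
            self.state = "on"
            return "room_occupied"   # light_turn_on -> occupancy begins
        if self.state == "on" and reading == "off":
            self.state = "off"
            return "room_vacated"    # light_turn_off -> occupancy ends
        return None                  # no state change, no event

fsm = LightFSM()
events = [e for e in map(fsm.step, ["on", "on", "off", "on"]) if e]
print(events)  # ['room_occupied', 'room_vacated', 'room_occupied']
```

Repeated identical readings produce no events, so only genuine state transitions are reported downstream.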
5. Experiment and Evaluation
The experiments were conducted in a real smart building, the “Daiwa ubiquitous computing research building” at the University of Tokyo.
Figure 10 shows the IoT-enabled environment of our smart building. The building has 5 floors (B2F, B1F, 1F, 2F, and 3F), 43 space entities (rooms, halls, corridors, etc.), and 1 elevator. In total, 846 spatial relation triples have been stored in the BOT graph.
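The spatial relation triples can be pictured as follows. This is a minimal sketch with a handful of hand-written triples; the entity names are illustrative assumptions, while the real graph holds 846 such triples following BOT predicates.

```python
# BOT-style (subject, predicate, object) triples describing the building topology.
triples = [
    ("Building", "bot:hasStorey", "1F"),
    ("Building", "bot:hasStorey", "2F"),
    ("1F", "bot:hasSpace", "Room101"),
    ("1F", "bot:hasSpace", "ElevatorHall_1F"),
    ("ElevatorHall_1F", "bot:adjacentZone", "Room101"),
]

def spaces_on(storey):
    """Return all space entities contained in a given storey."""
    return [o for s, p, o in triples
            if s == storey and p == "bot:hasSpace"]

print(spaces_on("1F"))  # ['Room101', 'ElevatorHall_1F']
```

Queries such as `spaces_on` are what allow the inference engine to relate a sensor's readings to the spaces around it.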
In this experiment, we used the data collected from 114 lights and 1 elevator for knowledge inference on room occupancy and elevator motion events.
Figure 11 visualizes the light status in several rooms of the building over 78 days (from 13 September 2020 to 30 November 2020). The sizes of the sensor observations are listed in Table 3. It is worth noting that, during the COVID-19 pandemic, far fewer occupancy-related or event-conjunction-related sensor observations were collected than in normal periods.
5.1. Trace on Event Conjunction
Based on the MCMC sampling process, the time interval between two corresponding events was approximated.
Figure 12 shows the approximated time intervals between the elevator-arriving event and the light-on event for several representative spaces; Table 4 lists the detailed results as well. Further, based on the approximated time intervals, event conjunctions such as ‘light_turn_off -> elevator_arriving’ could be inferred. As a result, the counts of inferred room occupancy events and event conjunctions, together with their conditional probabilities, have been summarized (see the details in Table 5). Only rooms with a total of more than 80 events are included in the list.
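The approximation step can be sketched with a simple Metropolis-Hastings sampler. The prior, the Gaussian likelihood, the noise level, and the observed intervals below are all illustrative assumptions standing in for the paper's actual model; the point is only to show how MCMC recovers a time-interval estimate from a handful of observations.

```python
import random, math

random.seed(0)
observed = [12.0, 15.5, 11.2, 14.1, 13.3]  # few observed intervals (s)
sigma = 2.0                                 # assumed observation noise

def log_post(mu):
    """Log posterior: flat prior on [0, 60] s, Gaussian likelihood."""
    if not 0.0 <= mu <= 60.0:
        return -math.inf
    return -sum((x - mu) ** 2 for x in observed) / (2 * sigma ** 2)

mu, samples = 10.0, []
for step in range(20000):
    prop = mu + random.gauss(0, 1.0)        # symmetric random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop                           # accept the proposal
    if step > 5000:                         # discard burn-in
        samples.append(mu)

estimate = sum(samples) / len(samples)
print(round(estimate, 1))  # close to the sample mean of ~13.2 s
```

With a flat prior, the posterior mean concentrates near the sample mean, but the chain also yields the spread of plausible intervals, which is what makes small samples usable.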
5.2. Assumption for Automatic Control on Elevator for Zero-Waiting Time
To evaluate the usability of the proposed framework, an automatic elevator control scenario was assumed. In general, when a user wants to use the elevator, he/she must first reach the elevator hall and press the up or down button to call the elevator (at the timestamp shown in Figure 13). The elevator then receives the command and arrives at the floor (at the later timestamp shown in Figure 13) to pick up the user. The automatic control scenario is an assumption based on knowledge inference on event conjunctions. Suppose:
Room occupancy is monitored in real time based on the sensor observations;
The agent detects a user leaving the room and further predicts that the user intends to use the elevator;
The agent then triggers the elevator in advance to move to the user’s departure floor.
Under this assumption, when the user arrives at the elevator hall, the elevator is already waiting to pick him/her up; thus, he/she can board immediately and no waiting time is incurred.
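The assumed pre-dispatch rule can be sketched as follows. The conditional probability table and the threshold are illustrative assumptions; in practice such probabilities would come from event-conjunction counts like those in Table 5.

```python
# P(elevator_use | room_vacated) per room, assumed learned from
# event-conjunction statistics (illustrative values).
p_elevator_given_leave = {"Room201": 0.72, "Room203": 0.35}
THRESHOLD = 0.5

def on_room_vacated(room, floor, dispatch):
    """Pre-dispatch the elevator if elevator use is sufficiently likely."""
    if p_elevator_given_leave.get(room, 0.0) >= THRESHOLD:
        dispatch(floor)  # elevator starts moving before any button press
        return True
    return False

calls = []
on_room_vacated("Room201", "2F", calls.append)  # likely elevator user
on_room_vacated("Room203", "2F", calls.append)  # unlikely; no dispatch
print(calls)  # ['2F']
```

Thresholding the learned conditional probability keeps spurious dispatches (and wasted elevator trips) in check.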
Therefore, the effectiveness of automatic elevator control was evaluated by extracting users’ historical waiting times. The numbers of inferred event conjunctions of room occupancy -> elevator arriving are listed in Table 6. Regarding elevator usage, the counts of non-waiting and waiting events were calculated as well, along with the respective percentages of the different waiting times.
Figure 14 shows the statistics of users’ waiting times calculated from the collected sensor data. Among a total of 471 elevator-using events, 267 involved waiting, giving a waiting probability of 56.7%. In addition, the total waiting time was 3085 s and the average waiting time per user was 11.55 s (SD = 5.30 s).
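The summary statistics above follow directly from the reported event counts:

```python
total_events = 471      # elevator-using events
waiting_events = 267    # events where the user had to wait
total_wait_s = 3085.0   # total accumulated waiting time (s)

p_wait = waiting_events / total_events
avg_wait = total_wait_s / waiting_events

print(f"P(wait) = {p_wait:.1%}")        # 56.7%
print(f"avg wait = {avg_wait:.2f} s")   # 11.55 s
```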
These numerical results demonstrate the great potential of automatic elevator control for pursuing zero waiting time. The quantitative results also show the effectiveness of knowledge inference on event conjunctions in the smart building for further improving the transport efficiency and productivity of indoor users.
6. Discussion
In the experiments, room occupancy was determined by a binary mapping from light state changes: while the light is on, the room is considered occupied; otherwise, it is considered unoccupied. In real usage, however, more complicated situations must be taken into consideration. Since people sometimes leave a room without turning off the light, multi-modal sensor fusion should be considered when inference accuracy matters. For example, the states of the light and a smart lock could be combined to improve the accuracy of room occupancy detection. The BOT-based graph provides opportunities for modeling such additional sensor observations in a structured, hierarchical graph, and adding other types of sensors to the existing ontology graph poses few challenges. We therefore consider that our approach can be adapted to other sensor resources in the smart building where available, and that the methodology is practical for other smart buildings.
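The fusion idea can be made concrete with a toy rule. The rule itself is an illustrative assumption: a door locked from outside overrides a light that was left on.

```python
def infer_occupancy(light_on, door_locked):
    """Fused occupancy: occupied only if the light is on AND the door
    is not locked from outside (a locked door overrides a forgotten light)."""
    return light_on and not door_locked

print(infer_occupancy(True, False))  # True: light on, door unlocked
print(infer_occupancy(True, True))   # False: light left on, room locked
```

Even this two-sensor rule removes the most common false positive of the light-only mapping; richer fusion (e.g., a BN over several sensors) generalizes the same idea.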
On the other hand, various sensors are already installed in the smart building. Beyond this diversity of sensors, more benefits of our proposal could be quantified once a wider range of automatic control applications is implemented, for example, tracing human motion in the building to automatically control appliances, thereby reducing energy consumption and improving occupants’ comfort and health. These application scenarios all rely on knowledge inference in the smart building, and each part of this proposal (the unified API of the sensor network, the knowledge graph of the physical environment, and the inference engine) is considered indispensable. In this experiment, we formed a graph of the physical relations only; semantic schema (e.g., users’ identities, preferences, relations, and space affiliations) has not been modeled in the graph. With these attributes added, however, the inference engine would be capable of analyzing user-related semantics.
7. Conclusions
In this paper, we presented a practical approach to event inference for automatic control in an IoT-enabled smart building environment. The proposal consists of unified API development, a knowledge base, and an inference engine. The event inference models based on sensor observations were separated into deterministic and probabilistic ones; accordingly, two types of models, finite state machines (FSMs) and Bayesian networks (BNs), were used to capture state transitions and sensor data fusion. As opposed to earlier, straightforward machine learning-based methods, our proposal focuses on the conditional conjunction and transition of two correlated events, for which a graph model of the physical environment is considered necessary.
To tackle the problem that the time-interval observations were too few to derive accurate results, an MCMC sampling process was utilized to approximate the time intervals. Specifically, linear regression on the local distances between two space entities in the ontology graph was leveraged to optimize the sampling process. The proposal was implemented in a real smart building environment, and 78 days of light and elevator state data were collected for evaluation. Event conjunctions on the lights and elevator were utilized to further infer room occupancy and indoor users’ trajectories.
To show the usability of the proposal, we extracted knowledge of users’ waiting time for the elevator. The FSM mapping of elevator use showed that the probability of a waiting event was 56.7%, with a total waiting time of 3085 s and an average waiting time of 11.55 s during the evaluation. These numerical results demonstrate the potential of automatic control for zero waiting on the elevator based on knowledge inference on event conjunctions in the smart building.