Article

Integrated Node Infrastructure for Future Smart City Sensing and Response

1 National Engineering Research Center of Geographic Information System, Wuhan 430078, China
2 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430000, China
3 School of Geography and Information Engineering, China University of Geosciences, Wuhan 430078, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(14), 3699; https://doi.org/10.3390/rs15143699
Submission received: 21 June 2023 / Revised: 19 July 2023 / Accepted: 20 July 2023 / Published: 24 July 2023
(This article belongs to the Special Issue GeoAI and EO Big Data Driven Advances in Earth Environmental Science)

Abstract

Emerging smart cities and digital twins are currently built from heterogeneous, cutting-edge, low-power remote sensing systems limited by diverse and inefficient communication and information technologies. Future smart cities delivering time-critical services and responses must transition towards massive numbers of sensors and more efficient integrated systems that rapidly communicate and intelligently self-adapt for collaborative operation. Here, we propose a critical future integrated communication element named the City Sensing Base Station (CSBS), inspired by the base stations that address similar concerns for cell phones. A CSBS is designed to handle massive volumes of heterogeneous observation data from devices that currently must be adapted through middleware or registered individually. It also provides predictive and interpolation modelling for the control of sensors and response units such as emergency services and drones. A prototype CSBS demonstrated that it could unify readily available heterogeneous sensing devices, including surveillance video, unmanned aerial vehicles, and ground sensor webs. Collaborative observation capability was also realized by integrating different object detection sources using advanced computer-vision technologies. Experiments on a traffic accident and a water pipeline emergency showed that sensing and intelligent analyses were greatly improved. The CSBS also significantly reduced redundant Internet connections while maintaining high efficiency. This innovation integrates high-density, high-diversity, and high-precision sensing in a distributed way for the future digital twin of cities.

1. Introduction

Currently, the Internet of Things (IoT) [1,2], spatial–temporal big data [3], and pervasive sensing [4] are essential elements of geospatial observation and smart cities [5]. Spatio-temporal observation systems play vital roles in realizing city sensing, and scientists have strived to improve the quality of city sensing systems [6]. As shown in Figure 1, the earliest experiment-based research relied entirely on manual measurement [7,8]. In the 1980s, geospatial observation operated on automatic but isolated systems built on sensors and offline computers [9]. Today, Geospatial Sensor Web (GSW)-based and Web Geographic Information System (GIS) services are widely used [10,11,12,13,14,15]. Over the past two decades, several spatial–temporal information systems for cities across the globe have been built, such as Live Singapore and Urban Sensing by the Massachusetts Institute of Technology (MIT), Microsoft's SensorMap [12], Smart RIO [13], and the Intelligent Operations Center by the International Business Machines Corporation (IBM). The early architecture of these city spatial–temporal observation and information systems was based on Web-GIS services that provided city sensing services via a limited quantity of sensing resources [14,15,16]. With the development of Internet services, enterprises such as Alibaba have built spatio-temporal information platforms with large numbers of sensors to provide classified services for the public, government, and policymakers [17,18]. The support of big data and cloud services currently constitutes the third generation of spatio-temporal information systems in smart cities, as shown in Figure 1. A typical current smart city has millions of sensors, and they will become even more abundant and diverse in the future. Therefore, spatio-temporal information systems face higher requirements to improve the diversity of perception and enhance observation accuracy. The key points are as follows: low-latency data access [19], real-time processing [20], cooperative observation, and instant services. Low-latency data access provides a highly efficient capability that enables ubiquitous sensing resources to access the sensor web as quickly as a cell phone accesses a cellular base station. Real-time processing emphasizes that observation data are processed at the terminal or edge side (CSBS) of the smart city, which mitigates the loss of real-time performance caused by centralized processing on the service side in legacy sensing networks. Cooperative observation and instant services allow multiple sensors (sensing platforms) to observe collaboratively and actively provide multi-variable, multi-faceted sensing information.
Firstly, the different interface protocols cannot be unified, accessed, and managed in the legacy system. Consider in situ site access as an illustration: sensors using a Low-Power Wide-Area network (LpWA) upload to the Sensor Observation Service (SOS) directly [21,22,23], whereas sensors using a Low-Power Personal Area Network (LoPAN) require additional network infrastructure, such as radio transceivers, gateways, and Remote Terminal Units (RTUs), to adapt the protocol for data acquisition and upload [24]. As can be seen, sensor webs currently rely on a "point-to-point" architecture in which observed data are uploaded to an SOS server. This legacy architecture undoubtedly increases the complexity of the network and introduces unnecessary redundancy into the system. It cannot successfully handle diversified data structures, high concurrency, or multiple observation platforms, and it is therefore not conducive to low-latency data access for multiple sensors.
Secondly, the data sources of spatio-temporal information systems currently include, but are not limited to, in situ sites, mobile vehicles [25], high-resolution video surveillance, unmanned aerial vehicles (UAVs) [26,27], automatic surveying robots [28], and satellite remote sensing. Due to compatibility issues arising from various IoT protocols, physical interfaces, and radio frequencies, these observation platforms cannot smoothly join the spatio-temporal information system or automatically operate observation tasks [29,30,31]. This leads to shortcomings in accessing diverse observation platforms and in collaborative sensing capability, and it is therefore not conducive to cooperative observation with multiple sensors.
Thirdly, owing to the above issues, the sensing devices of the sensor web have not achieved plug-and-play and web-ready operation. Internet instability and latency pose great challenges to establishing a stable network for analyzing observation results on the server side [29,30]. This is not conducive to real-time processing and instant services for multiple sensors.
Our research team proposes a new concept named the "City Sensing Base Station (CSBS)" for the next generation of spatio-temporal observation and information systems to solve the above issues. Figure 2 compares the essential differences between the legacy architecture and the CSBS. The CSBS is developed as a combined software and hardware innovation that addresses the problems of legacy geospatial observation. Through architectural optimization, it enables "heterogeneous connection", "automatic control", and "application service" to integrate multi-platform observation.
In this context, we aim to achieve the following three objectives:
  • By integrating access to IoT interface protocols, the CSBS solves the problem that heterogeneous observation resources cannot be integrated at the block scale in legacy city observation systems;
  • By integrating access to heterogeneous observation platforms, CSBS provides observation platforms with an automatic observation capability. It enhances the real-time perception ability and perception diversity;
  • By increasing perception diversity, CSBS helps to improve sensing accuracy for a specific event by obtaining manifold datasets.

2. Methods

The CSBS architecture, a cyber–physical infrastructure for pervasive city sensing, is illustrated in Figure 2. The CSBS enables plug-and-play of observation platforms with multiple IoT protocols and reduces access latency at the block scale through non-Internet connectivity. This low-latency data interaction helps the CSBS actively control multiple observation platforms and build a collaborative observation system. The CSBS provides a subscribed observation information service to the server side and reduces the number of Internet requests needed to transmit observations from multiple platforms. To implement the CSBS architecture, three key components, i.e., "Connection", "Control", and "Service", should be emphasized. "Connection" realizes the "plug and play" of heterogeneous observation platforms by first constructing fused access across multiple physical-layer and data-layer protocols. In addition, the CSBS "Controls" the various observation platforms to establish collaborative observation models of heterogeneous sensing resources, enabling accurate observation of city events and objectives. The ultimate goal of the CSBS is "Service", which delivers adequate observation data to end users and service centers. The three key components are elaborated below.

2.1. Connection: Enable Low-Latency Data Access

"Connection" is the first step in realizing the heterogeneous perception of the CSBS, and the radio frequency (RF) access control module is a critical component of the CSBS. Unlike a cellular base station, which provides voice and data services via Universal Mobile Telecommunications System (UMTS, 3G), Long Term Evolution (LTE, 4G), and 5G New Radio (5G NR) systems, this module realizes hybrid access, identification, and parsing of multi-band LoPAN, LpWA, and the high-speed RF transmission system, together with data protocol parsing. The goal is to enable low-latency data access.

2.1.1. LoPAN and LpWA System

At present, LoPAN and LpWA are the priority choices for high-density deployment of in situ observation sites of the GSW [31]. The primary reason is that in situ observation sites have modest radio bandwidth requirements and can tolerate second-level latency. Because they are deployed at high density, these sites are sensitive to power consumption and cost. The CSBS can access LoPAN protocols such as ZigBee™, Bluetooth Low Energy (BLE), and LoRa™, which are widely used in the GSW. LoPAN does not depend on an Internet Protocol (IP) connection, so the sensing device establishes an RF connection with the CSBS directly. The following table summarizes these protocols' coverage, capability, and application scenarios.
LpWA is also a low-power transmission method, but it requires network communication with service centers via IP, for example the Narrowband Internet of Things (NB-IoT), which mainly operates in the 900 MHz/1800 MHz bands based on LTE. NB-IoT capability is indispensable because the CSBS operates as an integrated system [32,33]. The CSBS establishes Internet connections with NB-IoT observation devices in practical applications [32,34,35]. This kind of communication covers blind areas of the observation block that are beyond the reach of LoPAN or its indoor relay stations.
For the fused access of ZigBee™, LoRa™, Bluetooth Classic, BLE, and RFID, we designed a reliable circuit module and selected an STM32 microcontroller based on the Advanced RISC Machines (ARM) architecture. This Microcontroller Unit (MCU) combines control, bus-address definition, and data-protocol parsing for Modbus, Controller Area Network (CAN) bus, 1-Wire, and Profibus packets. The PCB shown in Figure 3a embeds the System-on-Chip (SoC) and power amplifier modules for ZigBee™, LoRa™, BLE, and 2.4 GHz RFID, serving as the equivalent of basebands. Furthermore, data interaction with the MCU is achieved through both General-Purpose Input/Output (GPIO) and Universal Asynchronous Receiver/Transmitter (UART) interfaces.
Sensor manufacturers define the wireless channel (e.g., ZigBee™, BLE) or frequency (LoRa™) of a sensor before it is shipped. If the receiver channel and the transmitter channel do not correspond, the two ends cannot communicate correctly. To achieve "plug and play" for these wireless sensors, we provide a wireless frequency-hopping scanning and access method. First, when the CSBS is in the sensor registration state, the RF module continuously changes its receiver channel within a short period to capture the data sent from the sensor to the CSBS. Secondly, after the CSBS captures the data, it checks whether the data conform to Modbus, Profibus, or other standard IoT data protocols; if so, the channel is recorded in the SoC. Finally, the RF module of the CSBS continues channel scanning and data identification until the sensor registration process finishes. This approach overcomes the drawback of traditional sensing networks, in which one type of RTU transceiver can only accept a single physical protocol, a single communication channel, and a single data transmission protocol.
To further describe the "channel scanning and data identification" used in this method, Figure 3b illustrates its workflow. The SoC performs the channel polling and scanning functions, polling the basebands of different channels in sequence at high speed. When a message is being received on a baseband, the system waits for the current reception to complete and then continues polling. This method achieves unified access to multi-channel sensor data with a limited number of antennas.
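As an illustration of this registration-time scan, the sketch below shows the polling loop in Python-style pseudocode. The `radio` driver object with its `set_channel`/`read_frame` methods, the channel list, the dwell time, and the scan duration are hypothetical stand-ins for the STM32/SoC firmware interfaces; only the CRC check follows the published Modbus-RTU specification.

```python
import time

# Hypothetical 2.4 GHz IEEE 802.15.4 channel list; the same loop applies to the
# LoRa(TM) frequency grid handled by the sub-GHz baseband.
ZIGBEE_CHANNELS = range(11, 27)

def looks_like_modbus(frame: bytes) -> bool:
    """Rough plausibility check: a Modbus-RTU frame ends with a CRC16 (0xA001 polynomial)."""
    if len(frame) < 4:
        return False
    crc = 0xFFFF
    for b in frame[:-2]:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return frame[-2:] == crc.to_bytes(2, "little")

def register_sensors(radio, dwell_s: float = 0.05, scan_s: float = 30.0) -> dict:
    """Poll every channel in turn while the CSBS is in the sensor-registration state.

    `radio` stands in for the SoC/baseband driver and is assumed to expose
    set_channel(ch) and read_frame(timeout) methods. A channel on which a
    standards-conformant frame is heard is recorded so it keeps being serviced
    after registration finishes.
    """
    registered = {}
    deadline = time.time() + scan_s
    while time.time() < deadline:
        for ch in ZIGBEE_CHANNELS:
            radio.set_channel(ch)                      # hop to the next receiver channel
            frame = radio.read_frame(timeout=dwell_s)  # wait briefly for sensor traffic
            if frame and looks_like_modbus(frame):
                registered[ch] = frame                 # remember channel + sample frame
    return registered
```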

2.1.2. High-Speed RF Transmission System

In a city, it is impossible to make a significant breakthrough in observation accuracy with an in situ sensor network alone, so a fusion observation method across heterogeneous platforms must be considered. Remote sensing systems using UAVs and ground-based measurement vehicles are the most popular non-contact observation methods, and ultra-high-pixel camera equipment also plays a role in enhancing sensing capabilities. These observation platforms share the following characteristics: mobility, large real-time data volumes, a stable power supply, and on-demand triggering for specific tasks. A high-throughput communication link between the CSBS and the observation platforms ensures that the heterogeneous platforms can be controlled by the CSBS and can return observation data for fusion processing in real time. This link relies on 2.4 GHz/5.8 GHz dual-band mesh technology with Orthogonal Frequency Division Multiplexing (OFDM). The RF circuit is located in the control box below the antenna, and the control box connects to the CSBS through single-mode optical fiber to ensure stable data transmission.

2.1.3. Data Protocols Parsing

Based on the above methods, the CSBS obtains a series of raw data, i.e., payloads carried over different radio protocols that cannot yet be read directly. On the one hand, data encapsulated with Modbus-RTU or Profibus as the data-bus protocol and transmitted via LoPAN or LpWA are parsed according to the coding rules defined in those protocols, as shown in Figure 3, and the parsed data are stored in a MySQL database. On the other hand, data from Bluetooth and RFID sensors are transmitted transparently in JavaScript Object Notation (JSON).
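The following minimal sketch illustrates the two parsing paths described above: a Modbus-RTU "read holding registers" response and a transparent JSON payload. The frame layout, example bytes, and field names are illustrative assumptions rather than captured CSBS traffic, and CRC validation and MySQL storage are omitted for brevity.

```python
import json
import struct

def parse_modbus_rtu(frame: bytes) -> dict:
    """Decode a Modbus-RTU 'read holding registers' response into register values.

    Assumed layout: slave address (1 B), function code (1 B), byte count (1 B),
    N big-endian 16-bit registers, CRC16 (2 B). CRC validation is omitted here.
    """
    addr, func, count = frame[0], frame[1], frame[2]
    registers = struct.unpack(f">{count // 2}H", frame[3:3 + count])
    return {"slave": addr, "function": func, "registers": list(registers)}

def parse_json_payload(payload: bytes) -> dict:
    """BLE/RFID nodes transmit transparent JSON; decode it directly."""
    return json.loads(payload.decode("utf-8"))

# Illustrative frames only (hypothetical readings, placeholder CRC bytes).
modbus_frame = bytes([0x01, 0x03, 0x04, 0x01, 0x2C, 0x00, 0x55, 0x00, 0x00])
print(parse_modbus_rtu(modbus_frame))    # {'slave': 1, 'function': 3, 'registers': [300, 85]}
print(parse_json_payload(b'{"site": "ble_12", "rssi": -85, "temp_c": 21.4}'))
```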
This innovation resolves the issue of single and scattered observation data sources in the legacy geospatial sensor web. It achieves unified access and management of heterogeneous sensing sites, a problem largely overlooked in current research.

2.2. Control: Enable Cooperative Observation

"Control" refers to the increment of the CSBS relative to a cellular base station. Cellular telecom base stations only perform radio frequency access and relaying, whereas the CSBS achieves dynamic scheduling of heterogeneous observation platforms through edge data processing. As the physical infrastructure of blocks in the geospatial sensor web, the CSBS is scalable, extending from points to surfaces and into three-dimensional space rather than serving in situ devices alone. The collaborative observation process in the CSBS is fully automated: distributed computing allows real-time analysis of "point" data from in situ stations and controls observation platforms, such as UAVs, for "surface" observations when necessary. In addition, distributed computing processes observation data on the terminal side, which reduces the data-processing pressure on the edge and service sides and improves the efficiency of cooperative observation.

2.2.1. Intelligent Observation of In Situ Sites

The advantages of the in situ observation system are low power consumption and high temporal resolution. However, its unitary observation results and low spatial resolution cannot meet the needs of accurate observation. Its advantages can instead be used as trigger conditions for linked observation. Generally, whether monitoring air environmental variables, soil humidity, water quality, or similar quantities, the observation system reaches a steady-state response after it has operated for a sufficient time. Even under slight variations, such as transient changes like a slow temperature rise or precipitation, the observation system returns to a steady state within a short time. A forced response, in contrast, is random in nature; it can destabilize the system over a short period and has a significant impact on the output signal. The CSBS therefore performs a real-time correlation analysis on these in situ sensing sites to identify the anomalous input sources that bring about the forced response.
Correlation analysis includes autocorrelation analysis for each site and intercorrelation analysis for all sites of the same type in the observation area.
The autocorrelation analysis of a single site can be used to determine whether there is an anomaly at that site, while the correlation analysis of sites of the same type in the area makes it possible to identify whether the anomaly is global across the observation system or confined to a single point. Based on the site location mapping, the coordinates of the anomaly are returned. Equations (1)–(3) form the basic processing algorithm.
$$R(\tau) = \frac{E\left[(X_t - \mu)(X_{t+\tau} - \mu)\right]}{\sigma^{2}} \qquad (1)$$
where X is a generalized (wide-sense) stationary process whose mean μ and standard deviation σ do not vary with time, so the autocorrelation function can be expressed as a function of the time delay τ alone.
$$\mathrm{cov}(X, Y) = \frac{\sum_{i=1}^{n}\left(X_i - \bar{X}\right)\left(Y_i - \bar{Y}\right)}{n - 1} \qquad (2)$$
where $X_i$ and $Y_i$ represent the i-th elements of variables X and Y (two sensors here), and $\bar{X}$ and $\bar{Y}$ denote the mean values of X and Y.
$$r(X, Y) = \frac{\mathrm{Cov}(X, Y)}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}} \qquad (3)$$
where the numerator is the covariance of the two variables given in (2), a statistic that describes the linear relationship between two variables. The larger the covariance of two variables, the more similar the trends of their values over a range of data points (in other words, the curves of the two variables are closer to each other).
In a steady-state environment, sudden changes in the time-domain signal of one of the sensors can be used to generate warnings or trigger synergistic observations by other platforms. At the same time, real-time correlation analysis does not require strong computing power. This primary perceptual analysis method is not tied to a specific scenario and is highly general.
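A minimal sketch of this correlation-based trigger is given below, assuming sliding windows of readings per site and illustrative thresholds (the paper does not specify threshold values).

```python
import numpy as np

def autocorr(x, lag: int = 1) -> float:
    """Sample autocorrelation R(tau) of a weakly stationary series at a given lag (Equation (1))."""
    x = np.asarray(x, dtype=float)
    mu, var = x.mean(), x.var()
    if var == 0:
        return 1.0
    return float(np.mean((x[:-lag] - mu) * (x[lag:] - mu)) / var)

def cross_corr(x, y) -> float:
    """Pearson correlation r(X, Y) between two sites (Equations (2) and (3))."""
    return float(np.corrcoef(x, y)[0, 1])

def flag_anomalies(windows: dict, auto_thresh: float = 0.7, cross_thresh: float = 0.5) -> dict:
    """Classify each site over a sliding window of readings.

    `windows` maps site id -> recent readings (equal-length arrays). A site whose
    autocorrelation drops below `auto_thresh` while it decorrelates from its
    neighbours is treated as a single-point anomaly that can trigger collaborative
    observation; if all sites change together, the anomaly is global instead.
    """
    flags = {}
    for site, series in windows.items():
        neighbours = [s for k, s in windows.items() if k != site]
        mean_r = np.mean([cross_corr(series, n) for n in neighbours]) if neighbours else 1.0
        flags[site] = autocorr(series) < auto_thresh and mean_r < cross_thresh
    return flags
```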

2.2.2. Collaborative Observation and Dynamic Scheduling

An in situ station can provide geospatial observations ceaselessly, but a single observation source cannot produce a complete geospatial mapping, especially when abnormal observation signals occur in the in situ observation area. The CSBS makes in situ site enablement possible and builds a Collaborative Observation Model (COM) with multiple heterogeneous perceptions, such as visible imagery, multispectral imagery, and thermal infrared. The application of the COM comprises three parts. First, spatial–temporal information is determined from the in situ sites, confirming the extent of the abnormal observation area; the method of location determination is presented in Table 1:
In the second step, the CSBS generates commands and path plans for unmanned vehicles and UAVs. The abnormal area is then observed by airborne remote sensing cameras, with edge computing performed in real time, and the observed data are transmitted to the CSBS through the image/data transmission system. Finally, the CSBS prototype supports image analysis and processing, including the Normalized Difference Vegetation Index (NDVI) and thermal infrared. These algorithms assist the CSBS in improving spatial perception accuracy for variables such as soil moisture [34], cyanobacterial blooms [35], indoor fire, and more.

2.3. Service: Enable Real-Time Processing and Instant Services

"Service" embodies the CSBS observation results at the block scale and releases them promptly to ensure the freshness and reliability of the observations. Figure 4 illustrates the positioning of the CSBS architecture in the city sensing system. The information terminals of the CSBS include "end users" and "spatial information service centers". End users are the people who need an intuitive, visual data product, e.g., the residents living in the observed block, the community grid managers, and the administration. The CSBS synchronizes raw files and data products to the geospatial service center. It preserves the core functions of the geospatial sensor web while providing the spatial information service center with a vast data source for later decision making, analysis, and research under high-stability network conditions.

2.3.1. End-User Services

The CSBS provides observation results to end-users both in normal and emergency scenarios.
Typically, WLAN and cellular connections allow users to access historical information and real-time observations. Administrators have permission to control, manage, and maintain the sensing devices and sensor webs. The Human–Computer Interaction (HCI) system is a web-based software system running on the CSBS, designed to register (provide access for) IoT in situ sites and various observation platforms using different wireless transmission protocols, and to perform sensor observation services, alerts, and analysis. The HCI can also perform preliminary processing, presentation, analysis, and storage of the observation data acquired by the CSBS.
When an emergency is activated, observation results and messages are proactively pushed to the user. Beyond the HCI, a service converts observed values into text and uses Text-to-Speech (TTS) technology to transform critical data and results into voice. The textual message is carried according to the Radio Data System (RDS) specification and broadcast via the FM transmitter module. BLE Beacon technology with RFID pushes observation results and provides positioning. A CSBS with low power consumption, a long-lasting power system, and multi-protocol hybrid access enables effective user-to-user and base-station-to-base-station communication when cellular or fixed broadband networks are interrupted by natural disasters such as earthquakes and floods.

2.3.2. GSW Service Center

A GSW service center is a spatial information infrastructure and an indispensable core of a smart city [36]. The service center may include massive storage, cloud computing, and other server clusters. It empowers information service and sharing, lays the foundation for spatio-temporal information applications and the geospatial digital twin, and is the hub of spatio-temporal information perception services.
The CSBS allows sensing platforms with different IoT interface protocols to achieve plug-and-play. Observation platforms establish physical-layer communication with the CSBS, which does not involve Internet forwarding or IP requests, reducing latency and power consumption and improving stability at the source. Therefore, the CSBS acts as a connecting link between upstream sensing and downstream services in the data interaction architecture for GSW centers.
Spatio-temporal information observation at the city scale can be divided into blocks, and the CSBS integrates and synchronizes the block-scale observation data to the service center. Compared with the legacy architecture, in which:
  • the service center can only acquire data while the data sources are ignored;
  • due to the non-uniformity of protocols, newly added sensors cannot achieve plug-and-play and must be upgraded and transformed at the front-end hardware or debugged and upgraded on the service side;
  • the legacy GSW cannot actively control the physical form of the perception network;
the service of the CSBS maximizes the use of each Internet connection and IP packet request to ensure stable transmission of observation data.

3. Experiment and Results

To verify whether the CSBS-based architecture achieves the designed goals, i.e., heterogeneous connection, automatic control, and application service for fusion observation, we conducted a field experiment in the East Lake High-Tech Development Zone of Wuhan, China.

3.1. Radio Access Capabilities

The theoretical range of the CSBS under different wireless protocols was calculated to verify the coverage capability of the CSBS prototype. The transmission distance can be estimated from the transmitting power, the receiving sensitivity, and the operating frequency alone, according to Equations (4) and (5). Propagation in free space is ideal, meaning the energy is not absorbed, reflected, or scattered by obstacles. The ideal communication range is estimated as follows [33].
$$L_{fs} = 32.44 + 20\lg d + 20\lg f \qquad (4)$$
where:
  • $L_{fs}$ = the transmission loss, in dB;
  • d = the communication distance, in km;
  • f = the communication frequency, in MHz.
Here, lg(X) denotes the base-10 logarithm.
$$L_{fs} = P_{Tx} + P_{Rx} \qquad (5)$$
where:
  • $P_{Tx}$ = the transmit power, in dBm;
  • $P_{Rx}$ = the receive sensitivity, in dBm.
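A short numerical sketch of Equations (4) and (5): solving the free-space loss formula for d gives the ideal range from the link budget. The transmit power, sensitivity, and frequency below are illustrative values, not the CSBS baseband parameters listed in Table 2.

```python
import math

def max_free_space_range_km(p_tx_dbm: float, sensitivity_dbm: float, freq_mhz: float) -> float:
    """Ideal free-space range from Equations (4) and (5).

    The maximum tolerable path loss equals the transmit power minus the (negative)
    receive sensitivity; solving 32.44 + 20*lg(d_km) + 20*lg(f_MHz) = L_fs for d
    gives the distance.
    """
    link_budget_db = p_tx_dbm - sensitivity_dbm
    return 10 ** ((link_budget_db - 32.44 - 20 * math.log10(freq_mhz)) / 20)

# Illustrative 2.4 GHz link: +20 dBm output (with PA) and -97 dBm receive sensitivity.
print(f"{max_free_space_range_km(20, -97, 2450):.1f} km")   # roughly 7 km in free space
```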
In real scenarios, the wireless communication range is also affected by signal interference in the field. We therefore performed a real-scenario experiment within the estimated and theoretical communication ranges of the wireless protocols. Because the goal was to extend the communication distance where possible, every radio was fitted with a Power Amplifier (PA) module. Table 2 presents the parameters of the RF basebands equipped in the CSBS.
The theoretically calculated distances helped us complete the reliability tests of the CSBS communication capability as follows:
  • Non-Internet access communication protocols: Table 3 presents the experimental method for the observation platforms using LoPAN and the 5.8 GHz Direct-Sequence Spread-Spectrum (DSSS) system. Since BLE and RFID are short-range radios, a communication-distance verification experiment was not conducted for them. The experiment used a Tektronix RSA607A real-time spectrum analyzer with SignalVu software for signal characterization measurements and analysis;
  • Internet access communication protocols: The CSBS supports NB-IoT as an auxiliary communication channel via a public telecom cellular network. These observation stations were deployed in the corresponding coverage area of the CSBS and "push" their observation data to the nearest CSBS. The intended purpose is an access pressure test from NB-IoT stations to the CSBS, conducted according to China National Accreditation Service for Conformity Assessment (CNAS) procedures.
Figure 5 shows the experimental results and the transmission quality under different protocols. The deployed locations follow the farthest linear distance (within the CSBS coverage area) and cover a building and the field outside. Signal coverage is often affected by obstacles (e.g., walls); outdoor scenarios have fewer obstacles and therefore allow longer communication distances than indoor environments with the same radio frequency infrastructure, so the CSBS is equipped with different radio frequency infrastructure for indoor environments than for outdoor ones to ensure coverage. Accordingly, the experiment discusses the transmission quality of the CSBS separately for the outdoor and building scenes. A short-range BLE + RFID hybrid sensing network was applied inside a building: the RFID assists the indoor repeater system in identifying and connecting the sites in its surroundings, and BLE carries the sensing data. Figure 5c shows three sites connected via BLE with different sensing elements, used for transmission rate testing and RSSI measurement. The sensing data can still be transmitted at an average RSSI of −85 dBm (60–80 m away from the indoor repeater). Because RFID at 2.4 GHz does not affect the BLE module, RFID can assist in quickly registering and deploying BLE indoor sites. In the indoor environment, the CSBS supports a 60–80 m connection with a 100–200 kbps transmission rate and a 30 m connection with a rate above 200 kbps. The BLE coverage the CSBS provides is sufficient for most indoor sensors.
The outdoor field test included three different sensing platforms: LoPAN (ZigBee™ and LoRa™), Sub-6 GHz spread-spectrum transmission, and NB-IoT. To enhance the reliability of the experiment, three groups of in situ sites and RSA devices were time-calibrated via the BeiDou timing system before the field test. During the experiment, the RSA device was used to capture the signal power while data were transmitted from the in situ sensors to the CSBS. As shown in Figure 5b,d, the three sets of experimental results were averaged for a reliable signal power analysis.
ZigBee™ is based on the IEEE 802.15.4 specification and supports medium-distance sensed-data transmission at the 150–250 kbps level. Figure 5b shows that the three sets of ZigBee™ stations are roughly 150–200 m away from the CSBS. The quality of the signal captured by the RSA is above −55 dBm, and the average peak signal of the three stations is −54.37 dBm after each station establishes transmission with the CSBS three times.
The average quality of the signal captured by the RSA was above −68 dBm when the station using the LoRa™ protocol was roughly 900 m away from the CSBS. Although these results were obtained with power amplifiers installed, they are sufficient to show that the CSBS prototype developed in the laboratory has the data access capability for ZigBee™/LoRa™ fused groups of sites at a one-square-kilometer block scale.
To achieve fusion observation, the access capability of the CSBS for UAVs and ground surveying robots (GSRs) is also shown in Figure 5a. Within 200 m, the CSBS provides data/image transmission rates of 200–300 Mbps. When the UAV flies more than 200 m away from the CSBS but remains within visual range, the signal power and transmission rate can be maintained above 150 Mbps, which is sufficient for effective transmission of 720P video streaming. On the ground, the transmission rate and signal between the GSR and the CSBS fall quickly due to interference from buildings; when the signal is completely blocked by buildings, the connection between the CSBS and the GSR automatically switches to an LTE mobile network.
NB-IoT works with LTE/NR cellular networks to supplement the IoT interface protocols. For the NB-IoT access test, we used an ElasticSearch environment for stress testing: a client PC simulated a huge number of NB-IoT sites sending data to the CSBS simultaneously. Figure 5e shows that the CSBS maintains access even at approximately 1600 data uploads per second. Therefore, the CSBS also has stable access capability for NB-IoT sites.

3.2. The Generalized Automatic Fusion Observation Process

The automatic fusion capability of the CSBS was also tested when coordinating heterogeneous sensing platforms, which is critical for improving the accuracy of spatio-temporal information sensing. We conducted two experiments, i.e., emergency sensing tasks driven by underground water pipe damage and by a traffic accident, respectively. The main difference between the two cases is that the in situ observation data for soil moisture are one-dimensional (point measurements over time), whereas the in situ observation data for the traffic accident are two-dimensional (images). This experimental setting is necessary to verify the generality of the CSBS.

3.2.1. Case 1: Underground Water Pipe Damage

At present, the city pipeline system has long been instrumented for monitoring the wet underground environment, and some of the pipelines are under high pressure year-round. Sudden water pipe damage, or the infiltration and diffusion caused by minor ruptures, often wastes resources and causes economic loss. Even when sensors are deployed around the pipeline, the observation results cannot ensure high confidence. Building on the proposed CSBS, a collaborative sensing mechanism can be established by integrating all-in-one soil sensors that monitor temperature, humidity, conductivity, and pH in the subsurface, a high-precision thermal infrared camera, and an aerial camera with a multispectral/thermal infrared imaging system. This multi-platform fusion experiment is illustrated below in two steps.
  • STEP I: Real-time observation at fixed sites
As shown in Figure 6, the in situ sites have fixed geographical locations with continuous sensing capability, which can be used as triggers for the heterogeneous collaborative observation platforms. Real-time raw data from LoRa™ and Modbus sites were transmitted to the CSBS. A correlation analysis was then conducted to determine whether the same site produced values with decreasing correlation over successive periods, i.e., a potential pipeline damage case. The in situ soil moisture records and real-time autocorrelation shown in Figure 4 present the observation data of five soil moisture in situ sites connected via LoRa™ and Modbus, with a temporal resolution of 3 min. Station_501 was the sensor closest to the water leak area during the experiment, and its soil moisture gradually increased from the fourth sampling time (12 min). Stations_502–505 were farther from the leak site, and their soil moisture increased slightly after the 10th sampling time (about 30 min). Meanwhile, the real-time autocorrelation coefficient of Station_501 showed a rapidly decreasing trend, from 0.94 to 0.62, within 30 min. The program in the CSBS therefore reported instability at Station_501. The CSBS then analyzed the five surrounding soil sensor sites to determine whether the same changes occurred in the vicinity of the site during successive periods.
  • STEP II: Perceptual synergy and control
CSBS relies on the uninterrupted characteristics of fixed sites to trigger the mobilized platforms to enhance the richness of sensory data. Therefore, CSBS initiates the process of perceptual synergy and control after receiving unstable signals from fixed sites.
The CSBS generates a path plan for the UAVs and the survey robots according to the position of the abnormal in situ station. In the field test scenario, the survey robot uses the MAVLink protocol for path planning, remote control, and real-time status feedback. MAVLink applies equally to open-source drones, and the CSBS is compatible with drones that work with MAVLink. Because DJI drones are almost the default choice for aerial observation, we used a DJI Phantom Enterprise and a DJI Matrice 300 Real-Time Kinematic (RTK) for the perceptual synergy experiments. DJI provides a Software Development Kit (SDK) as a secondary development interface, which has been integrated as a module of the CSBS software.
Once the UAV has obtained the control command and destination coordinates, it must complete its flight path planning. The CSBS calculates the centroid of the longitude–latitude coordinates of multiple anomalous in situ sites; if there is only a single anomalous site, the CSBS computes the minimum distance between two points. Upon the arrival of the UAV in the observation area, the aerial survey trajectory is centered on the observation center point and extended 150 m in the four directions of 90°, 180°, 270°, and 360°, forming a square observation area of 150 m × 150 m. The aerial survey system allows both "inclined photography around the perimeter" and "vertical photography"; vertical photography has a heading overlap rate of 80% together with a corresponding side overlap rate. The measurement method depends on the user's choice and the type of sensor. When the CSBS determines the destination, the survey robot requests a bike path from the AutoNavi Application Programming Interface (API) and automatically navigates to the area for observation and scanning with a Light Detection and Ranging (LIDAR) mapping system and machine vision. The survey robot and the UAV initially process the observation results at the edge side.
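As a sketch of this observation-area planning step, the code below computes the centroid of the flagged sites and the corners of the square survey area using a standard flat-earth metres-per-degree approximation; the site coordinates are hypothetical, and the prototype's own conversion constants and waypoint format are not reproduced here.

```python
import math

M_PER_DEG_LAT = 111_320.0  # standard flat-earth approximation, not the prototype's constant

def survey_square(anomaly_sites, half_side_m: float = 150.0):
    """Centre the aerial-survey square on the flagged in situ sites.

    `anomaly_sites` is a list of (lat, lon) tuples returned by the correlation
    analysis. The centroid of their coordinates becomes the observation centre,
    and the square extends `half_side_m` in each cardinal direction, mirroring
    the 150 m extension described above.
    """
    lat_c = sum(p[0] for p in anomaly_sites) / len(anomaly_sites)
    lon_c = sum(p[1] for p in anomaly_sites) / len(anomaly_sites)
    d_lat = half_side_m / M_PER_DEG_LAT
    d_lon = half_side_m / (M_PER_DEG_LAT * math.cos(math.radians(lat_c)))
    corners = [
        (lat_c + d_lat, lon_c - d_lon),  # NW
        (lat_c + d_lat, lon_c + d_lon),  # NE
        (lat_c - d_lat, lon_c + d_lon),  # SE
        (lat_c - d_lat, lon_c - d_lon),  # SW
    ]
    return (lat_c, lon_c), corners

centre, corners = survey_square([(30.4582, 114.4021), (30.4590, 114.4035)])  # hypothetical sites
```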
The UAV with the multispectral system provided the NDVI of the observed area, which quantifies vegetation by measuring the difference between near-infrared light (strongly reflected by vegetation) and red light (absorbed by vegetation). The NDVI value ranges from −1 to 1. Generally speaking, negative NDVI values indicate moist or water-covered surfaces, while values near 0 may indicate urbanized areas. This characteristic allows accurate inundation identification in this scenario. The NDVI results can be returned to the CSBS directly through onboard real-time calculation.
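A minimal NDVI computation over the multispectral bands, as used above to delineate moist or inundated ground, is sketched below; the band values are hypothetical reflectances rather than data from the experiment.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel.

    Inputs are near-infrared and red reflectance bands from the multispectral
    camera; negative values indicate water/moist surfaces and values near 0
    indicate built-up ground, which is how the inundated zone is delineated.
    """
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    denom = nir + red
    safe = np.where(denom == 0, 1.0, denom)          # avoid division by zero
    return np.where(denom == 0, 0.0, (nir - red) / safe)

# Hypothetical 2x2 reflectance tile: the wet-soil pixels come out negative.
print(ndvi(np.array([[0.12, 0.45], [0.05, 0.40]]), np.array([[0.20, 0.10], [0.15, 0.08]])))
```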
A UAV or robot with thermal infrared photography can identify the area of water infiltration through temperature differences, improving the accuracy and confidence of perception. The temperature difference can also assist personnel in pinpointing the leakage area.
Reviewing experiment case 1 and Figure 6, they present the triggering and fusion process of multiple observation platforms based on the CSBS when one-dimensional data act as the trigger. Case 1 shows that the in situ anomaly-recognition programs used here are general purpose. At the same time, the CSBS sensor observation capability mapping supports the integration of mature procedures such as real-time NDVI and real-time thermal infrared temperature tracking, which are ready for use. Finally, we conducted the test on the prototype devices whose capabilities we aimed to verify.

3.2.2. Case 2: Traffic Accident Scenarios

Experiment case 2 presents the sensing results obtained by the CSBS in a traffic accident scenario involving smoke and injuries. Simple traffic accidents can be handled quickly between drivers through a web-based mobile app. In complex traffic accidents, however, such as collisions, pedestrian injuries, and spills of transported hazardous materials, road surveillance cameras do not provide a complete mapping of the accident scene, so observations cannot be guaranteed with high confidence. The CSBS enabled automatic "air" and "ground" observation at the test zone, including environmental parameter measurements provided by a ground-based measurement robot, and 3D modeling and all-around visible photography provided by a drone with thermal infrared assistance. The CSBS can thus provide automated, fast, and complete modeling of complex traffic accident cases.
The left side of Figure 7 shows the observation capability model provided by the CSBS for the incident. The observation capability model comprises "In-situ Observation", "Drone Observation", and "Robots Observation". The "In-situ Observation" includes environmental-variable monitoring sensors located on streetlights in the area, which are processed in the same way as in case 1 by calculating correlation coefficients for changes in SOx and NOx levels in real time. Before the traffic accident, the air quality in the area was excellent, with no significant concentrations of CO, SOx, or NOx. Upon the traffic accident, the smoke from combustion caused a considerable increase in SOx and NOx: the autocorrelation of SOx dropped to 0.78 in this period (suggesting a significant change in SOx concentration), and the autocorrelation of NOx decreased to 0.22 (suggesting a substantial shift in NOx concentration). Meanwhile, the fixed traffic camera video stream connected to the CSBS can be used for vehicle detection. The CSBS applies the inter-frame difference method to estimate vehicle speed over a period; if the speed is 0 (the frames are almost still for 60 s), an accident may have occurred, and the camera coordinates are fed back to the CSBS to drive the "Drone Observation" and "Robots Observation". The data processing workflow of the whole traffic accident observation is explained in the right part of Figure 7. Finally, after acquiring this rich perceptual information, the CSBS generated a 3D model of the accident scene and reproduced all the sensing datasets for related stakeholders.
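The stopped-traffic check described above can be sketched as a simple inter-frame difference over a 60 s window; the OpenCV-based code below uses an illustrative motion threshold rather than the prototype's actual criterion.

```python
import cv2
import numpy as np

def traffic_is_static(video_source, window_s: int = 60, motion_thresh: float = 0.002) -> bool:
    """Flag a possible accident when the camera scene stays almost still.

    Consecutive grey frames are differenced and the fraction of changed pixels is
    averaged over `window_s` seconds; a mean below `motion_thresh` (illustrative
    threshold) means the vehicles in view are essentially stationary.
    """
    cap = cv2.VideoCapture(video_source)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    prev, ratios = None, []
    for _ in range(int(window_s * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            diff = cv2.absdiff(grey, prev)
            _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            ratios.append(np.count_nonzero(mask) / mask.size)
        prev = grey
    cap.release()
    return bool(ratios) and float(np.mean(ratios)) < motion_thresh
```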
Referring to case 1, the UAV control program in the CSBS receives the instruction and the destination coordinates and invokes the UAV. The control program calculates the route by converting 0.001141° = 1 m for longitude and 0.000899° = 1 m for latitude. For the elevation, the rule tan α = h/s (where α is the pitch angle, taking values of 60°, 45°, and 30°; h is the flight altitude; and s is the ground distance from the accident center point) is used to calculate the four vertices of the square under multiple pitch angles. Accordingly, the CSBS plans three square routes in counterclockwise order, centered on the longitude–latitude coordinates of the accident point, with pitch angles of 30°, 45°, and 60°. The drone images vertically downward at the accident center point, performs the waypoint task according to the route, and returns home after completing the task. While observing, the UAV transmits the video stream of the accident scene to the CSBS through the Sub-6 GHz communication system; the CSBS processes the video images and performs vehicle detection with the YOLO v5 algorithm, surrounding each vehicle with a bounding rectangle. The intersection ratio of two rectangles is then calculated to determine whether there is a collision between the two vehicles. If the intersection ratio is greater than 0, a collision may have occurred, and the overlapping part of the two rectangular boxes is marked as the possible collision area of the two vehicles.
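The collision test on the detector output can be sketched as an intersection-over-union computation that also returns the overlap rectangle marked as the possible collision area; the box coordinates below are hypothetical detections, not YOLO v5 output from the experiment.

```python
def box_overlap(box_a, box_b):
    """Intersection over union of two axis-aligned vehicle boxes (x1, y1, x2, y2).

    Returns the IoU together with the overlap rectangle, which is marked as the
    possible collision area when the intersection is greater than 0.
    """
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    if inter == 0:
        return 0.0, None
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / (area_a + area_b - inter)
    return iou, (x1, y1, x2, y2)

# Hypothetical detector output for two vehicles; IoU > 0 flags a possible collision.
iou, collision_area = box_overlap((120, 300, 360, 520), (330, 310, 580, 540))
```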
During the flight process, the thermal infrared system on the UAV will identify the fire point (high-temperature point) in the accident area and the presence of radiating objects (e.g., pedestrians, etc.) obscured by smoke. Refer to “Real-Time airborne thermal infrared and visible observation” in Figure 8.
While the drone is working, the measurement robot obtains a riding route to the accident site coordinates from the AutoNavi API. As in case 1, the measurement robot arrives at the accident scene and models the ground accident scene using photographic equipment and LIDAR distance measurement. The measurement robot is equipped with acoustic, vibration, optoelectronic, gas, and liquid multi-factor sensors, and this sensory information is matched with the spatio-temporal information at the acquisition time (see the real-time mobile environmental observation in Figure 8).
Analysis of the experimental process and results (Figure 8) shows that, based on the CSBS architecture, traffic camera-based accident alerts return the coordinates of the accident location. The UAV flies to the accident area for an aerial survey and records the vehicle collision information. At the same time, the GSR drives to the target area, and its onboard atmospheric environment sensors acquire the concentrations of dangerous gases in the air and the temperature of the scene to provide data support for subsequent decision making. These results can be automatically provided as a three-dimensional visualization to effectively support subsequent road rescue work.

4. Discussion

Table 4 compares the qualitative differences between the legacy sensor web and a sensor web based on the CSBS architecture. The traditional observation architecture is a WWW-based service in which sensors realize data transmission by building several RTUs for multi-radio access and protocol conversion and connecting to the city sensing service center through those RTUs. This adds unnecessary redundancy to the network, increases latency, and raises deployment cost. At the same time, the city sensing service center receives all the sensing data from the entire city, which increases its processing pressure. The CSBS provides plug-and-play for sensing sites at both the physical layer and the data transmission layer: physical-layer protocols such as ZigBee™, LoRa™, and BLE and data communication protocols such as Modbus, MAVLink, and the Real-time Transport Protocol (RTP) can be accessed and parsed by the CSBS. Therefore, at the block scale, most sensing resources can be accessed by the CSBS in an all-in-one manner. The CSBS uses 4G, 5G NR, and wired broadband to provide block-scale observation data to the service center on demand. Compared with the traditional architecture, the information-on-demand approach of the CSBS helps reduce the network pressure on service centers. Under the traditional observation system, sensing devices cannot control each other autonomously. Because the CSBS establishes a block-scale sensing network in which data latency, communication rate, and service capability can be guaranteed, a bidirectional data transmission flow can be built to realize the cooperative scheduling of sensing resources.
For example, the legacy architecture cannot offer fused access across adaptive multi-radio and IoT protocols. This results in the inability to achieve plug-and-play of smart sensors in the field and the lack of mutual access between observation resources and of automatic collaborative observation. Researchers in the information sciences often overlook these critical issues, yet they adversely affect high-precision spatio-temporal information perception in the future.
The results presented here demonstrate that the CSBS architecture provides a service capability for fused access of observation platforms using different protocols. The CSBS thus acts like a cellular base station that allows plug-and-play for observation platforms. By supporting a series of "communication", "location", and "sensing" services, it enables high-density deployment for block-scale observation in smart cities. A critical consideration is that the future geospatial sensor web, with a large number of observation platforms, will generate an enormous amount of data per second. In this study, we showed that with the CSBS architecture, the transmission of observation data at the block scale is hardly affected by the quality of the Internet. At the same time, the CSBS can process and filter these data at the block scale because it realizes fused access to observation data and the fusion of multiple platforms at that scale. Non-essential observation data are then deleted, improving the observation accuracy while reducing the number of Internet packet requests. If the connection between the CSBS and the server is interrupted unexpectedly, users at the block scale can still access the observation results through WLAN and Radio Data System (RDS) services, and the observation data are synchronized to the service centers once the connection is re-established. In addition to fused access and management of heterogeneous observation resources, the CSBS supports an automated observation process that combines heterogeneous observation resources. This process merges the capabilities of multiple heterogeneous observation platforms, fully exploits the advantages of each observation resource, and avoids the waste of sensing resources caused by isolated islands of information.
It should also be noted that the research presented above is a forward-looking innovation in information science. Smart cities and the digital earth favor small-scale integrated observations, which have higher accuracy and immediacy requirements than satellite inversion techniques. The difference relates to the method used: a block scale with the CSBS architecture at its core has advantages not available in earlier geoscientific studies, especially the automated fusion observation provided by the CSBS. The CSBS builds a plug-and-play environment by establishing unified access at the physical-layer protocol level. By default, it runs a time-domain signal analysis program to provide pervasive automated observation. Although the CSBS is still a prototype at this stage, it provides an open capability framework to enable the fusion and extension of enhanced heterogeneous observation capabilities in the future.
In summary, the CSBS architecture and its prototype devices have adaptive, plug-and-play access for multiple protocols and observation platforms. The CSBS can enhance accurate sensing of spatio-temporal information by automatically controlling sensing units at all locations. Currently, the CSBS can coordinate fusion observation under pervasive conditions; only rarely, in some scenarios, does it still need to rely on open capability platforms to complete the first-time configuration. This is why the CSBS supports open capabilities.

5. Conclusions and Outlook

Reviewing past work, researchers have never stopped improving the sensor web, which is designed to continuously improve the accuracy of geospatial information perception. As early as 2006, Nature published a cover article named "2020 Vision", which pointed out that the sensor web would achieve "large-scale" and "real-time" access to real-world data; this was forward-looking information science. In the research between 2006 and 2014, NASA (National Aeronautics and Space Administration) pioneered the concept of Sensor Web Enablement (SWE). The group at Wuhan University led by Nengcheng Chen proposed the web-based software Geosensor, developed under the SWE framework [37]. Geosensor enables unified presentation and analysis of multiple heterogeneous observation platforms. In 2016, Nengcheng Chen and Chuli Hu proposed a study on "Geospatial Sensor Integration Management", which mainly integrates different remote sensing satellites for Earth observation to achieve multiple sensing of surface environmental elements [38,39,40,41,42]. That research aims to improve the reliability and accuracy of macro-environmental observation.
However, the following three problems remain stumbling blocks for the future development of smart cities: (1) the system only collects observation data while the data sources are ignored; (2) new sensors need to be debugged and upgraded on middleware (such as a Remote Terminal Unit, RTU) or registered on the observation server; and (3) the system cannot control the status (such as the spatio-temporal resolution) of the sensor webs.
In this context, we focus on the three issues above and propose the architecture and prototype devices of the City Sensing Base Station (CSBS). The CSBS architecture includes characteristics such as plug-and-play, collaborative observation, supported web services, and open capability for multi-protocol heterogeneous observation platforms. The CSBS enables fused access for multi-protocol heterogeneous observation platforms at the block scale and provides a solution with high immediacy and accuracy for the geospatial sensor web and future cities.
Overall, the proposed CSBS will be an essential infrastructure for future smart cities, enabling high-density, high-diversity, and high-precision sensor sensing in a distributed way. In our following research, we will explore the potential capabilities of CSBS [40].

Author Contributions

D.C.: Architecture, methodology, writing—original draft; X.Z.: methodology, writing—review and editing; W.Z.: automatic process realization and visualization; X.Y.: software—image processing algorithms. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key R&D Program of China (no. 2018YFB2100500).

Data Availability Statement

This research paper focuses on proposing a solution for urban sensing, and the research does not rely on data support.

Acknowledgments

We appreciate the help of Wuhan Geostar Information Technology Co., Wuhan GAEAWAY SPACE TIME Co., Ltd., and Hubei Huazhong Testing & Assessment Co., all in Wuhan, P.R. China, in completing the evaluation of the whole set of prototype devices according to the standards of the International Laboratory Accreditation Cooperation Mutual Recognition Arrangement (ILAC-MRA), the China National Accreditation Service for Conformity Assessment (CNAS), and the China Inspection Body and Laboratory Mandatory Approval (CMA), verifying that the CSBS functions reliably.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Do Nascimento, N.M.; De Lucena, C.J.P. FIoT: An agent-based framework for self-adaptive and self-organizing applications based on the Internet of Things. Inf. Sci. 2017, 378, 161–176. [Google Scholar] [CrossRef]
  2. Casadei, R.; Fortino, G.; Pianini, D.; Russo, W.; Savaglio, C.; Viroli, M. A development approach for collective opportunistic Edge-of-Things services. Inf. Sci. 2019, 498, 154–169. [Google Scholar] [CrossRef]
  3. Huang, H.; Lu, Z.H.; Peng, R.; Feng, Z.W.; Xuan, X.H.; Hung, P.C.K.; Huang, S.C. Efficiently querying large process model repositories in smart city cloud workflow systems based on quantitative ordering relations. Inf. Sci. 2019, 495, 100–115. [Google Scholar] [CrossRef]
  4. Amaxilatis, D.; Mylonas, G.; Diez, L.; Theodoridis, E.; Gutierrez, V.; Munoz, L. Managing Pervasive Sensing Campaigns via an Experimentation-as-a-Service Platform for Smart Cities. Sensors 2018, 18, 2125. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Zanella, A.; Bui, N.; Castellani, A.; Vangelista, L.; Zorzi, M. Internet of Things for Smart Cities. IEEE Internet Things J. 2014, 1, 22–32. [Google Scholar] [CrossRef]
  6. Dobre, C.; Xhafa, F. Intelligent services for Big Data science. Future Gener. Comput. Syst. 2014, 37, 267–281. [Google Scholar] [CrossRef]
  7. Garrett, R.G. The determination of sampling and analytical errors in exploration geochemistry. Econ. Geol. 1969, 64, 68–569. [Google Scholar] [CrossRef]
  8. McBratney, A.B.; Webster, R.; Burgess, T.M. The design of optimal sampling schemes for local estimation and mapping of regionalized variables—I: Theory and method. Comput. Geosci. 1981, 7, 331–334. [Google Scholar] [CrossRef]
  9. Hart, J.K.; Martinez, K. Environmental Sensor Networks: A revolution in Earth System Science? Earth Sci. Rev. 2006, 78, 177–191. [Google Scholar] [CrossRef] [Green Version]
  10. Plageras, A.P.; Psannis, K.E.; Stergiou, C.; Wang, H.X.; Gupta, B.B. Efficient IoT-based sensor BIG Data collection–processing and analysis in smart buildings. Future Gener. Comput. Syst. 2018, 82, 349–357. [Google Scholar] [CrossRef]
  11. Visner, M.; Shirowzhan, S.; Pettit, C. Spatial Analysis, Interactive Visualisation and GIS-Based Dashboard for Monitoring Spatio-Temporal Changes of Hotspots of Bushfires over 100 Years in New South Wales, Australia. Buildings 2021, 11, 37. [Google Scholar] [CrossRef]
  12. Nath, S.; Liu, J.; Zhao, F. SensorMap for Wide-Area Sensor Webs. Computer 2007, 40, 90–93. [Google Scholar] [CrossRef]
  13. Gaffney, C.; Robertson, C. Smarter than Smart: Rio de Janeiro’s Flawed Emergence as a Smart City. J. Urban Technol. 2018, 25, 47–64. [Google Scholar] [CrossRef]
  14. Di, L.; Zhao, P.; Yang, W.; Yu, G.; Yue, P. Intelligent geospatial web services. In Proceedings of the 2005 IEEE International Geoscience and Remote Sensing Symposium, Seoul, Republic of Korea, 25–29 July 2005; pp. 1229–1232. [Google Scholar]
  15. Di, L.P.; Moe, K.L.; Yu, G.N. Metadata requirements analysis for the emerging sensor web. Int. J. Digit. Earth 2009, 2 (Suppl. 1), 3–17. [Google Scholar] [CrossRef]
  16. Mandl, D.; Cappelaere, P.; Frye, S.; Sohlberg, R.; Ong, L.; Chien, S.; Tran, D.; Davies, A.; Sullivan, D.V.; Falke, S.; et al. Sensor Web 2.0: Connecting Earth’s Sensors via the Internet. In Proceedings of the NASA Earth Science Technology Office (ESTO) Conference, College Park, MD, USA, 24–26 June 2008; Volume 1. [Google Scholar]
  17. Chen, N.; Yang, X.; Wang, X. Design and Implementation of Geospatial Sensor Web Information Public Service Platform. J. Geo-Inf. Sci. 2013, 15, 887–894. [Google Scholar] [CrossRef]
  18. Caprotti, F.; Liu, D. Platform urbanism and the Chinese smart city: The co-production and territorialisation of Hangzhou City Brain. GeoJournal 2020, 87, 1559–1573. [Google Scholar] [CrossRef] [PubMed]
  19. Judge, M.A.; Manzoor, A.; Khattak, H.A.; Din, I.U.; Almogren, A.; Adnan, M. Secure Transmission Lines Monitoring and Efficient Electricity Management in Ultra-Reliable Low Latency Industrial Internet of Things. Comput. Stand. Interfaces 2021, 77, 103500. [Google Scholar] [CrossRef]
  20. Nawaratne, R.; Kahawala, S.; Nguyen, S.; De Silva, D. A Generative Latent Space Approach for Real-Time Road Surveillance in Smart Cities. IEEE Trans. Ind. Inform. 2021, 17, 4872–4881. [Google Scholar] [CrossRef]
  21. Baggag, A.; Abbar, S.; Sharma, A.; Zanouda, T.; Al-Homaid, A.; Mohan, A.; Srivastava, J. Learning Spatiotemporal Latent Factors of Traffic via Regularized Tensor Factorization: Imputing Missing Values and Forecasting. IEEE Trans. Knowl. Data Eng. 2021, 33, 2573–2587. [Google Scholar] [CrossRef]
  22. Broring, A.; Echterhoff, J.; Jirka, S.; Simonis, I.; Everding, T.; Stasch, C.; Liang, S.; Lemmens, R. New Generation Sensor Web Enablement. Sensors 2011, 11, 2652. [Google Scholar] [CrossRef] [Green Version]
  23. Broring, A.; Maue, P.; Janowicz, K.; Nust, D.; Malewski, C. Semantically-Enabled Sensor Plug & Play for the Sensor Web. Sensors 2011, 11, 7568–7605. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Razzaque, M.A.; Milojevic-Jevric, M.; Palade, A.; Clarke, S. Middleware for Internet of Things: A Survey. IEEE Internet Things J. 2016, 3, 70–95. [Google Scholar] [CrossRef] [Green Version]
  25. Poteko, J.; Eder, D.; Noack, P.O. Identifying operation modes of agricultural vehicles based on GNSS measurements. Comput. Electron. Agric. 2021, 185, 106105. [Google Scholar] [CrossRef]
  26. Chen, W.H.; Liu, B.C.; Huang, H.W.; Guo, S.; Meng, Z.B. When UAV Swarm Meets Edge-Cloud Computing: The QoS Perspective. IEEE Netw. 2019, 33, 36–43. [Google Scholar] [CrossRef]
  27. Wang, W.J.; Peng, Y.P.; Cao, G.Z.; Guo, X.Q.; Kwok, N. Low-Illumination Image Enhancement for Night-Time UAV Pedestrian Detection. IEEE Trans. Ind. Inform. 2021, 17, 5208–5217. [Google Scholar] [CrossRef]
  28. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166. [Google Scholar] [CrossRef] [Green Version]
  29. Chen, N.; Wang, X.; Yang, X. A direct registry service method for sensors and algorithms based on the process model. Comput. Geosci. 2013, 56, 45–55. [Google Scholar] [CrossRef]
  30. Chen, N.; Wang, K.; Xiao, C.; Gong, J. A heterogeneous sensor web node meta-model for the management of a flood monitoring system. Environ. Model. Softw. 2014, 54, 222–237. [Google Scholar] [CrossRef]
  31. Chi, Q.P.; Yan, H.R.; Zhang, C.; Pang, Z.B.; Xu, L.D. A Reconfigurable Smart Sensor Interface for Industrial WSN in IoT Environment. IEEE Trans. Ind. Inform. 2014, 10, 1417–1425. [Google Scholar] [CrossRef]
  32. Sinha, R.S.; Wei, Y.; Hwang, S.-H. A survey on LPWA technology: LoRa and NB-IoT. ICT Express 2017, 3, 14–21. [Google Scholar] [CrossRef]
  33. Chen, D.; Zhang, X.; Chen, N.; Yang, J.; Gong, J. Geospatial Sensor Web Adaptor for Integrating Diverse Internet of Things Protocols within Smart City. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, V-4-2020, 115–121. [Google Scholar] [CrossRef]
  34. Chen, D.; Chen, N.C.; Zhang, X.; Ma, H.L.; Chen, Z.Q. Next-Generation Soil Moisture Sensor Web: High-Density in Situ Observation over NB-IoT. IEEE Internet Things J. 2021, 8, 13367–13383. [Google Scholar] [CrossRef]
  35. Wang, S.Q.; Zhang, X.; Chen, N.C.; Wang, W.J. Classifying diurnal changes of cyanobacterial blooms in Lake Taihu to identify hot patterns, seasons and hotspots based on hourly GOCI observations. J. Environ. Manag. 2022, 310, 114782. [Google Scholar] [CrossRef] [PubMed]
  36. Chen, N.; Di, L.; Yu, G.; Min, M. A flexible geospatial sensor observation service for diverse sensor data based on Web service. ISPRS J. Photogramm. Remote Sens. 2009, 64, 234–242. [Google Scholar] [CrossRef]
  37. Chen, N.C.; Gong, J.Y.; Chen, Z.Q. A High Precision OGC Web Map Service Retrieval Based on Capability Aware Spatial Search Engine. In Advances in Computation and Intelligence, Proceedings of the Second International Symposium, ISICA 2007, Wuhan, China, 21–23 September 2007; Kang, L., Liu, Y., Zeng, S., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4683. [Google Scholar] [CrossRef]
  38. Chen, D.; Zhang, X.; Chen, N.C. Smart City Awareness Base Station: A Prospective Integrated Sensing Infrastructure for Future Cities. Geomat. Inf. Sci. Wuhan Univ. 2022, 47, 159–180. [Google Scholar] [CrossRef]
  39. Hu, C.L.; Guan, Q.F.; Chen, N.C.; Li, J.; Zhong, X.; Han, Y.F. An Observation Capability Metadata Model for EO Sensor Discovery in Sensor Web Enablement Environments. Remote Sens. 2014, 6, 10546–10570. [Google Scholar] [CrossRef] [Green Version]
  40. Azaza, M.; Tanougast, C.; Fabrizio, E.; Mami, A. Smart greenhouse fuzzy logic based control system enhanced with wireless data monitoring. ISA Trans. 2016, 61, 297–307. [Google Scholar] [CrossRef] [PubMed]
  41. Yang, C.; Luo, J.; Hu, C.; Tian, L.; Li, J.; Wang, K. An Observation Task Chain Representation Model for Disaster Process-Oriented Remote Sensing Satellite Sensor Planning: A Flood Water Monitoring Application. Remote Sens. 2018, 10, 375. [Google Scholar] [CrossRef] [Green Version]
  42. Chen, Z.; Chen, N. Provenance Information Representation and Tracking for Remote Sensing Observations in a Sensor Web Enabled Environment. Remote Sens. 2015, 7, 7646–7670. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Development process of ground observation. There are three main stages: manual processing, based on manual field measurements recorded in written form; automated observation, a process of automatic measurement and automatic recording through computer systems and process instruments; and City Intelligent Service, a widely used service that records and stores all observation data through the Internet and the Internet of Things and provides a visualized intelligent information service system.
Figure 2. The schematic of the legacy sensor web and the sensor web with CSBS. The schematic is divided into two parts: the upper part describes the legacy sensor web, and the lower part describes the architecture with CSBS. The schematic shows that the legacy infrastructure suffers from delays due to layers of network services: data transmission, network forwarding, and data protocol parsing. One-way information transmission also prevents reverse control flow from the service center to the sensors.
Figure 3. (a) Methods and processes for accessing heterogeneous observation platforms of CSBS. At the bottom left is the CSBS prototype device; the core of CSBS is the Access Processing Board, which is designed with a four-layer architecture. The first layer is the control circuit layer, including the SoC (System-on-Chip), the RAM (Random Access Memory) for running and storing the operating system and data, the ROM (Read-Only Memory), and the power management module. The second layer is the cellular data access layer, including the baseband for 5G + GNSS and NB-IoT. The third layer is the multi-protocol wireless access layer, including ZigBee, LoRa, BLE, RFID, and high-throughput communication modules, with RF shielding between modules. The fourth layer is the extension layer, which can add physical-layer communication methods not currently available in CSBS to improve sensory access scalability. On the right side is the schematic diagram of multi-protocol observation platform fusion access, which enables hybrid access of sensing platforms at a certain block scale in the city. At the bottom right is a case study of data protocol access resolution, including CSBS resolution schematics for common Modbus and transparent transport. The CSBS observation system realizes the management, access, and fusion control of observation resources and displays them at the block scale. (b) Multi-channel polling access model for LoPAN protocols, illustrating the workflow of the channel scanning and data identification procedure in the UAM.
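To make the polling workflow in Figure 3b concrete, the following Python sketch cycles through the license-free channels of each LoPAN protocol and tags every received frame with its source for the identification step. This is an illustrative assumption only: the radio driver call radios[proto].listen(...) and the channel plan are hypothetical, not the CSBS firmware interface.

```python
# Minimal sketch of a multi-channel polling loop for LoPAN access, assuming a
# generic radio driver with listen(channel, timeout_s) (hypothetical API).
import itertools
import time

CHANNEL_PLAN = {
    "zigbee": list(range(11, 27)),    # 2.4 GHz channels 11-26
    "lora":   [470.3, 471.1, 471.9],  # example sub-GHz centre frequencies (MHz)
    "ble":    [37, 38, 39],           # BLE advertising channels
}

def poll_channels(radios, dwell_s=0.05):
    """Round-robin over every protocol/channel pair; yield identified frames."""
    pairs = [(proto, ch) for proto, chans in CHANNEL_PLAN.items() for ch in chans]
    for proto, ch in itertools.cycle(pairs):
        frame = radios[proto].listen(channel=ch, timeout_s=dwell_s)
        if frame:  # identification step: tag the frame with its source
            yield {"protocol": proto, "channel": ch, "payload": frame,
                   "received_at": time.time()}
```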
Figure 4. Positioning of CSBS in the city sensing system. The figure contains three parts: In-Block Info Services, CSBSs in Different Blocks, and City Info Services. With the support of CSBS, the observation information of sensing devices located in the same block can be processed directly by the CSBS. The CSBS also provides web service- and RDS-based text information services to end-users in the block. There are different blocks in a city, and the CSBS in each block processes the observation data effectively. The GSW service center subscribes to these data from the CSBSs, performs further processing and display, and provides the management of the CSBSs.
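The block-scale publish/subscribe relationship in Figure 4 can be illustrated with a minimal in-process broker. This is a schematic sketch only; the Broker class, topic names, and payloads are invented for illustration and are not the GSW service center interface.

```python
# Illustrative sketch: a CSBS publishes block-scoped observations on topics;
# the GSW service centre subscribes only to the aggregated topics it needs.
from collections import defaultdict
from typing import Callable

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]):
        self.subscribers[topic].append(callback)

    def publish(self, topic: str, message: dict):
        for cb in self.subscribers[topic]:
            cb(message)

broker = Broker()

# The GSW service centre subscribes to per-block summaries...
broker.subscribe("block/042/summary", lambda m: print("city centre received:", m))
# ...while an in-block web service can consume raw readings locally.
broker.subscribe("block/042/raw", lambda m: None)

# The CSBS in block 042 processes readings locally, then publishes upward.
broker.publish("block/042/raw", {"sensor": "pm25-17", "value": 38})
broker.publish("block/042/summary", {"pm25_mean": 36.5, "n_sensors": 12})
```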
Figure 5. Transmission quality as measured by the decay of transmission bandwidth with distance in the experiment zone.
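The qualitative behaviour in Figure 5 can be reproduced with a simple log-distance path-loss model. The exponent, reference loss, noise floor, and Shannon-capacity mapping below are illustrative assumptions and are not the measured parameters behind the figure.

```python
# Log-distance path-loss sketch: why achievable bandwidth decays with distance.
import math

def received_power_dbm(tx_dbm, d_m, d0_m=1.0, pl0_db=40.0, n=2.7):
    """Received power under the log-distance model (assumed parameters)."""
    return tx_dbm - (pl0_db + 10 * n * math.log10(d_m / d0_m))

def capacity_mbps(rx_dbm, bandwidth_hz=2e6, noise_dbm=-96.0):
    """Shannon capacity for the resulting SNR, as a rough upper bound."""
    snr = 10 ** ((rx_dbm - noise_dbm) / 10)
    return bandwidth_hz * math.log2(1 + snr) / 1e6

for d in (10, 100, 400, 800):
    rx = received_power_dbm(tx_dbm=20, d_m=d)
    print(f"{d:>4} m: Rx = {rx:6.1f} dBm, capacity <= {capacity_mbps(rx):6.1f} Mbps")
```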
Figure 6. The process of a collaborative analysis by multiple observation platforms for an underground water pipe rupture. (The * in this figure denotes multiplication, and the date format is Year-Month-Day.)
Figure 7. Model of collaborating heterogeneous sensing platforms in CSBS under traffic accident scenarios. In this figure, the left side shows the co-observation model with the in situ observation station, the UAV, and the robot; the flowchart on the right represents a schematic diagram of the co-observation procedure based on the co-observation model.
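The co-observation procedure on the right of Figure 7 can be summarized as a trigger-and-dispatch loop. In the sketch below, the platform methods (fly_to, navigate_to, start_capture) and the confidence threshold are hypothetical placeholders, not the actual UAV and robot control interfaces.

```python
# Hedged sketch of the collaborative-observation trigger: when a fixed sensor or
# camera flags an anomaly, the CSBS tasks mobile platforms to observe the site.
from dataclasses import dataclass

@dataclass
class AnomalyEvent:
    source: str        # e.g. "traffic-camera-07" or "air-quality-12"
    kind: str          # e.g. "collision", "gas-spike"
    lat: float
    lon: float
    confidence: float

class CollaborationController:
    def __init__(self, uav, robot, threshold=0.8):
        self.uav, self.robot, self.threshold = uav, robot, threshold

    def on_anomaly(self, event: AnomalyEvent):
        if event.confidence < self.threshold:
            return "ignored"                          # below trigger threshold
        self.uav.fly_to(event.lat, event.lon)          # aerial overview imagery
        self.uav.start_capture(mode="oblique")
        self.robot.navigate_to(event.lat, event.lon)   # close-range inspection
        self.robot.start_capture(mode="close-up")
        return "co-observation dispatched"
```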
Figure 8. Case 2: CSBS-based multi-scale automatic observation and reconstruction of traffic accident scenes in cities. The left side shows the observation process of a traffic anomaly event, in which a traffic atmosphere sensing site and a traffic road camera carry out anomaly identification and trigger a collaborative observation procedure in which drones and robots actively observe the accident site. The right side shows an instant 3D retrospective of the event published through the WebGIS service. (In this figure, the date format is Year-Month-Day, and m³ denotes cubic meters. The Chinese characters in the software window in the lower right corner indicate "This is the detailed information of the sensing result".)
Table 1. Method of location determination.
| Observation Platforms | Position Service | Positioning Level (m) |
|---|---|---|
| ZigBee and LoRa | Manual input via GNSS receiver | 0.1 |
| BLE | Angle-of-Arrival | 0.5 |
| RFID | RSSI calculated from RFID station | Reference only |
| BLE + RFID station | Manual input via GNSS receiver | 0.1 |
| Camera (CCTV) | Manual input via GNSS receiver | 0.1 |
| UAV and robots | GNSS + RTK + UWB (indoor) | 0.1 |
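For the BLE row of Table 1, a sub-metre fix is obtained by intersecting Angle-of-Arrival bearings from fixed anchors. The following sketch shows only the geometry, with made-up anchor coordinates; it is not the positioning code used in the prototype.

```python
# Angle-of-Arrival (AoA) fix: intersect two bearing rays from known anchors.
import math

def aoa_fix(a1, theta1_deg, a2, theta2_deg):
    """Intersect two bearings (degrees, counter-clockwise from +x) into one point."""
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    d1 = (math.cos(t1), math.sin(t1))
    d2 = (math.cos(t2), math.sin(t2))
    # Solve a1 + s*d1 = a2 + t*d2 for s (2x2 linear system, Cramer's rule).
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; no unique fix")
    s = ((a2[0] - a1[0]) * (-d2[1]) - (a2[1] - a1[1]) * (-d2[0])) / det
    return (a1[0] + s * d1[0], a1[1] + s * d1[1])

# Two BLE anchors 10 m apart, both reporting a bearing to the same tag.
print(aoa_fix((0.0, 0.0), 45.0, (10.0, 0.0), 135.0))   # -> (5.0, 5.0)
```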
Table 2. Parameters of radio SoC in CSBS and reference ranges.
| Baseband | SoC Module | Signal Power (dBm) | Calculated Range |
|---|---|---|---|
| BLE | ZLG52810 | Tx/Rx: 20/−96 @1 Mbps | 100 m–500 m (+PA) |
| ZigBee | ZLG51682P2 | Tx/Rx: 20/−100 | <800 m (+PA) |
| LoRa | AI-Thinker Ra02 | Tx/Rx: 10/−105 | <1600 m (+PA) |
| RFID Micro Web | F2411 2.45G RFID | Tx/Rx: 20/−95 | / |
| NB-IoT | Quectel BC95 | Tx/Rx: 25/−130 | |
| 5G and GNSS | RM500Q-GL | / | |
| Sub-6GHz | Airfast RX5G372 | Freq. 3.3 GHz–5 GHz, 34 dB | Ground: 200 m; Air: 2500 m |

+PA means with an added power amplifier.
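The ranges in the last column of Table 2 come from vendor datasheets and field tests. As a rough sanity check, a free-space link budget with an assumed 25 dB environment margin gives values of the same order; the margin and frequencies in the sketch below are assumptions, not measured parameters.

```python
# Illustrative link-budget check for the order of magnitude of the Table 2 ranges.
import math

def max_range_m(tx_dbm, rx_sens_dbm, freq_mhz, margin_db=25.0):
    """Distance at which free-space path loss plus a margin uses up the link budget."""
    budget_db = tx_dbm - rx_sens_dbm - margin_db
    # FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    d_km = 10 ** ((budget_db - 20 * math.log10(freq_mhz) - 32.44) / 20)
    return d_km * 1000

print(f"LoRa   (10 dBm / -105 dBm @ 470 MHz):  ~{max_range_m(10, -105, 470):.0f} m")
print(f"ZigBee (20 dBm / -100 dBm @ 2440 MHz): ~{max_range_m(20, -100, 2440):.0f} m")
```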
Table 3. Experimental method of LoPAN and 5.8 GHz.
| Baseband | Measurement Method via Sensors * | Measurement Method via RSA ** |
|---|---|---|
| BLE | Transmission rate vs. distance | / |
| ZigBee | RSSI returned by SoC | Characterization measurements of signal power |
| LoRa | RSSI returned by SoC | Characterization measurements of signal power |
| RFID Microweb | Transmission rate vs. distance | / |
| Sub-6GHz | Transmission rate vs. distance | / |

* Via sensors: connectivity analysis using sensors and the CSBS; ** via RSA: signal analysis using a Real-time Spectrum Analyser (RSA). For antenna sensitivity, the RSA was measured at the same position as the sensor. RSSI: Received Signal Strength Indicator.
Table 4. Review of legacy sensor web and CSBS architecture.
| Type | Technical Index | Legacy Sensor Web | Sensor Web with CSBS |
|---|---|---|---|
| Radio Access | Radio frequency | Single, depends on the user requirement | Covers all license-free frequencies |
| Radio Access | Channels or spectrum factor | Single, depends on the sensor's parameters | Fits all |
| Radio Access | Physical protocols | Single | More than 5 protocols |
| Data Carrying | Data type | Depends on sensors | Structured and non-structured data |
| Protocol Parsing | Method | Remote terminal unit | Software support |
| Protocol Parsing | Scalability | Impossible | Configure a new protocol if necessary |
| Data Flow | Uplink | Support | Support |
| Data Flow | Downlink instruction | Impossible | Support |
| Active Observation Capability | Single platform | Impossible | Support |
| Active Observation Capability | Heterogeneous platforms | | Supports a framework of active collaboration |
| Overall rating | – | Information isolated | All-in-One |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
