Article

A Multidimensional Data Collection and Edge Computing Analysis Method

Key Laboratory of Optoelectronics Technology, Ministry of Education, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2024, 14(1), 211; https://doi.org/10.3390/app14010211
Submission received: 23 November 2023 / Revised: 12 December 2023 / Accepted: 16 December 2023 / Published: 26 December 2023

Abstract

With the development of IoT technology, data of many dimensions are generated in the environment where we live. The study of these data is critical to our understanding of the relationships between people, and between people and cities. The core components of IoT technology are sensors and control circuits. However, fusing data from various sensors and processing them in real time is often difficult, owing to factors such as coverage, lighting conditions, and the accuracy of object detection. Therefore, we first propose a wireless transmission hardware architecture for data acquisition based mainly on vision sensors, and incorporate additional sensors for data calibration to improve detection accuracy. The collected data are fed to the edge computing platform for fast processing. The edge platform is designed with a lightweight target detection model and a data analysis model. Through this multidimensional data collection and analysis, a generalised functional model of public space utilization can be fitted, which enables the calculation of utilization rates for any parameter in public space. The technology provides a technical reference for multi-dimensional data collection and analysis.

1. Introduction

The Central Urban Work Conference emphasized that the primary objective of urban development is to cultivate a high-quality living environment, aiming to transform cities into harmonious havens where people coexist seamlessly with their surroundings [1]. Urban public spaces constitute a vital aspect of any city, significantly influencing its developmental trajectory. The effective utilization of high-quality urban public spaces not only defines the essence of a municipality but also serves as a barometer for assessing the advancement of urban construction [2]. Urban public spaces stand as integral facets of a community’s social identity, facilitating resident engagement in leisure pursuits and fostering a profound sense of belonging within the locale. Its inclusive, amiable, and ecologically attuned characteristics are instrumental in fostering a welcoming and diverse environment for all. These spaces hold significant importance in shaping the city’s image, preserving its historical essence, embracing cultural nuances, and amplifying the city’s allure [3]. Within the realm of urban public space studies, the analysis of human behavioral patterns unravels the diverse needs of distinct societal groups for urban spaces. By establishing a bottom-up behavioral model, this approach authentically captures the intrinsic value and potential enhancements of today’s urban public spaces.
However, the majority of pattern analyses within urban public spaces hinge on the foundation laid by the evolution of the Internet of Things (IoT) and the utilization of big data [4]. This methodology necessitates remote collection of diverse information, subsequently channeled to the cloud for comprehensive data processing and in-depth analysis [5]. The analytical approach employed in this mode introduces a discernible time delay in real-time monitoring of public spaces and processing of data, imposing elevated equipment requirements. Consequently, its implementation in public space research proves challenging. In instances of emergencies, prompt processing becomes unattainable.
As an emerging computing paradigm succeeding cloud computing, edge computing decentralizes computations to the network’s periphery, in close proximity to users and data sources. It furnishes data caching and processing capabilities, boasting attributes of low latency, heightened security, and spatial awareness [6]. Edge computing excels in facilitating real-time data processing within public spaces. However, its front-end necessitates a meticulously designed pre-processing module, integrating sensors for seamless execution of data collection and fusion.
Wireless sensor networks (WSNs) have evolved to cater to expansive deployments within urban landscapes and diverse environments alike [7]. By employing wireless sensor networks, the need for physical cabling can be eliminated, enabling sensor nodes to be strategically positioned in every corner of urban spaces [8]. Wireless sensor networks are therefore well suited to urban public space research. At the same time, this architecture can support intelligent instrument monitoring [9], security monitoring [10], medical security [11], and other municipal applications.
Currently, within the post-utilization evaluation phase of urban public space planning, the pertinent quantitative evaluation data encompass questionnaire surveys, behavioral observations, on-site interviews, image records, literature reviews, and various other sources [12]. Because these rely on manual sorting, efficiency is limited; data acquisition is considerably affected by external factors of time, space, and personnel, and it is hard to cover the entire space, time span, and population concerned [13]. The effective integration of the Internet of Things [14,15] and edge computing [16,17,18] can resolve this data collection bottleneck in urban public space research. Future research therefore urgently needs a low-cost, adaptive monitoring technique that reduces the overhead of collecting data in public spaces.
The edge computing platform hosts a tailored lightweight target detection model, efficiently scrutinizing public space parameters. This deep learning approach effectively mitigates the data redundancy that results from multifaceted sensor data collection in public space studies. By concentrating on image sensors and using a select few other sensors for data calibration, the combination of deep learning and sensor fusion significantly bolsters the accuracy of parameter extraction. The heightened precision in parameter extraction stems from the inherent framework of deep learning algorithms, which progressively supersede traditional manual feature labeling through autonomous feature learning by convolutional neural networks (CNNs). The region-based CNN (R-CNN) is one of the simplest two-stage algorithmic models; its core idea is to select predefined regions in turn and use a CNN for feature extraction and classification [19]. Subsequently, the SPPNet network emerged to improve on the existing R-CNN model by employing convolutional kernels of different sizes to perform feature extraction with different weights [20]. When different convolutional kernels are used to extract features to the pooling layer, the feature values must be classified; however, iterating over the pooling layer is time-consuming, so a method that labels feature values of interest at the pooling layer is used to classify them better. This is the Fast R-CNN network [21,22], whose core feature is to reduce the generation time of candidate boxes. This network has achieved more than 80% accuracy in various neural network competitions.
Although two-stage algorithm models have made great progress in terms of speed, feature extraction, and so on, they remain computationally intensive, and in the early stages their accuracy was still not as good as desired. To improve on these models, many researchers have proposed one-stage algorithm models. Among the models with better accuracy are YOLO [23], SSD [24], RefineDet [25], and EfficientDet [26]. Among these, the iterative update speed of YOLO is significantly higher than that of the other algorithmic models [27]. For edge computing hardware platforms, it may be necessary to compress and prune the YOLO model [28]; effective porting to edge computing platforms is achieved through various compressions of the model, so that the edge computing platform can run the algorithm model in real time.
In this study, we introduce an enhanced target detection model to address the challenge of insufficient accuracy in detecting targets within the intricate settings of public spaces. Leveraging the recent lightweight target detection model YOLOv4-tiny as our foundation, we incorporate the U-Net convolutional neural network to conduct precision-enhancing training on the acquired dataset. Subsequently, this model is fine-tuned for public space scenarios, building upon the YOLOv4-tiny architecture. Our research entails the deployment of sensors and edge computing networks within a residential area in Beijing’s Fengtai district, enabling the comprehensive collection of data to delve into public space usage patterns. Real-time data processing is seamlessly executed on the platform at the edge of the data terminal, thereby alleviating the necessity of uploading data to the cloud. Finally, employing data spanning from March 2022 to April 2022, encompassing a month, we conduct an in-depth analysis of public space utilization using this system’s model.
Our contributions in this paper can be summarized as follows.
  • A nimble wireless communication hardware platform tailored for public spaces is developed, featuring a diverse array of sensors. These sensors encompass image acquisition, temperature measurement, and location detection capabilities. Additionally, an edge computing gateway is integrated to facilitate the execution of algorithmic applications;
  • An enhanced target detection model is meticulously designed, accompanied by the introduction of a sensor calibration mechanism. This integration significantly bolsters the accuracy of target detection, ensuring more effective and precise results;
  • The calculation of image pixel-level spatial utilization and the dynamic monitoring of image 2D mapping are seamlessly executed. This two-dimensional intuitive calculation provides comprehensive insight into spatial accessibility, privacy considerations, and the connectivity of the objects under scrutiny.
The rest of this paper is organized as follows. In Section 2, we introduce the designed image acquisition module, the wireless module, and the selection of the edge computing platform. The overall structure of the system and the improved target detection model are presented in detail in Section 3. The calculation of public space utilization under the improved target detection model is given in Section 4. Lastly, we discuss and conclude our work in Sections 5 and 6.

2. System Hardware Design

The hardware front-end of this system is developed through secondary development based on the STM32 single-chip microcomputer. An industrial-grade layout reliability test is carried out on the hardware, and circuit board production using the PCBA (Printed Circuit Board Assembly) process is completed. The system hardware is deployed in the public space of a residential area in Beijing to study space utilization. As shown in Figure 1a, hardware devices are deployed at each node in the plan.

2.1. Image Acquisition Module

Using a single camera to capture photos is a mature technology: a single-chip microcontroller is selected, and the camera attached to it is controlled to complete image collection. The initial research task was to complete image acquisition and recognition on a lightweight platform. We investigated the STM32H747I-DISCO embedded system developed by ST [29] and the Columbus (STM32F407) embedded system developed by 01Studio [30], and finally selected the OPENMV H7 embedded system developed by Star Pupil [31,32]. Our original design idea was to complete both image acquisition and detection on OPENMV. Although deep learning can be run on the OPENMV platform, its image detection accuracy does not meet our requirements, so this module is limited to image acquisition and simple correction, implemented through secondary development of the original circuit. We also hope to collect continuous video so that the continuity of data in the time domain can better support the study of urban public space.

2.2. Sensor Acquisition Module

Figure 1b illustrates the hardware structure of the LoRa-based wireless communication module, which includes GPS and temperature sensors. The hardware structure consists of a signal acquisition side and a data preprocessing side. Considering the complexity of the deployment environment, the hardware is designed for portability and low power consumption, which makes the acquisition and study of spatial data more accessible. Data acquired by the GPS and temperature sensors cooperate with image acquisition to improve target detection accuracy. Note that video is collected at 10-min intervals, while the other sensors collect every 5 min. The sensor data specifications and sampling rates are shown in Table 1. Among the collected quantities is an age factor, which is extracted by our target detection: residents are split into children and adults, with "1" denoting children and "2" denoting adults. Finer age identification among adults is not researched in this system.
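As a concrete illustration of how a node's readings could be packaged for the preprocessing side, the sketch below is a minimal Python example; the field names and the scheduling helper are assumptions introduced for illustration, while the 5-min and 10-min intervals follow Table 1.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical record layout for one LoRa node; field names are illustrative.
@dataclass
class NodeSample:
    node_id: str               # unique identity used to separate nodes at the edge
    timestamp: datetime
    latitude: float            # GPS sensor (Lt)
    longitude: float
    temperature_c: float       # temperature sensor (Tt), -10..80 degC
    image_path: Optional[str]  # present only on the 10-min image/video cycles

SENSOR_PERIOD_S = 5 * 60       # GPS and temperature interval (Table 1)
IMAGE_PERIOD_S = 10 * 60       # video/image acquisition interval (Table 1)

def is_image_cycle(elapsed_s: int) -> bool:
    """Every second sensor cycle also triggers an image/video capture."""
    return elapsed_s % IMAGE_PERIOD_S == 0
```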

2.3. Edge Computing Module

We wish to conduct data analysis and research on the retrieved data. If the data were transmitted to the cloud for analysis, the analysis results could not be fed back to the IoT front end and acted upon in time. Edge computing is adopted to resolve this problem. The hardware structure of the data processing end is illustrated in Figure 1c. The low power consumption, proximity to the data source, and low latency of edge computing make it a good platform for real-time data processing. As an intermediate link between the sensor end and the cloud, the edge platform transmits processing results to the cloud for storage and feeds the processed results back to the sensor end in time.
There are many embedded platforms for edge computing, such as the Raspberry Pi [33], the Intel Movidius Neural Compute Stick (NCS) [34], and Nvidia's Jetson Nano and Jetson AGX Xavier [35]. We evaluated each edge computing platform against the algorithm requirements and data processing volume, and finally selected the Jetson Nano as the edge computing platform on which the algorithm runs. NVIDIA announced the Jetson Nano hardware architecture at the 2019 NVIDIA GPU Technology Conference; it provides the power of modern AI with full software programmability on a single hardware platform. With a quad-core 64-bit ARM CPU and a 128-core integrated NVIDIA GPU, the Jetson Nano delivers 472 GFLOPS of computing performance with a 5 W/10 W low-power mode and a 5 V DC input. The designed algorithm model is ported to this edge computing platform to complete effective monitoring.

3. Overall System Architecture and Improved Target Detection Model

In this section, we focus on the construction of the data processing modules and of the urban public space utilization functions. The overall structure of data processing and function construction is shown in Figure 2. The system is divided into four parts: the data collection module, the target detection and sensing data calibration module, the utilization calculation module, and the data visualization module. The data collection module collects image data and other sensor data; this part was explained in detail in Section 2. The target detection and sensing data calibration module first filters erroneous values from the collected data and then performs target detection on the image data, using the improved target detection model proposed in this system. At the same time, the data from the temperature sensor and the GPS sensor are separated: the GPS data serve as a calibration reference that provides target position information to improve the accuracy of the improved target detection model, and the temperature data are used to compare images acquired under different temperature conditions when studying the common space. The two sensors are thus effectively integrated into the target detection of the images.
The results of the detection are used to extract the parameters used to study the public space. The utilization rate calculation module is mainly used to fit an effective utilization function for the public space from three parameters: the change of temperature, the change of number of people, and the change of age obtained from the space. These functions are used to illustrate the research value of the public space. The data visualization module mainly transforms the three-dimensional form of the public space into two-dimensional for effective display, and through the two-dimensional display we can also better study the distribution of various targets in the space and the value of space utilization.
Through the overall structure of the system, we characterize the acquired data as a perception vector Ft = [Lt, Tt, Ht, Ct, Dt, Nt, At, Wt], of which [At, Wt, Nt] are the key factors in monitoring urban public space utilization. The four parts of the system are described in detail below.
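A minimal sketch of the perception vector is given below, assuming plain Python containers; the field meanings follow Table 1, and the type given to Wt is only an assumption because the text does not define that quantity further.

```python
from dataclasses import dataclass
from typing import Tuple

# Illustrative container for Ft = [Lt, Tt, Ht, Ct, Dt, Nt, At, Wt].
@dataclass
class PerceptionSample:
    Lt: Tuple[float, float]  # location (latitude, longitude)
    Tt: float                # temperature in degC
    Ht: float                # target height in cm
    Ct: Tuple[int, int]      # pixel coordinates within a 1920 x 1080 frame
    Dt: float                # distance between the image sensor and the person
    Nt: int                  # number of people
    At: int                  # age class: 1 = child, 2 = adult
    Wt: float                # additional monitored quantity (not specified in the text)

def key_factors(s: PerceptionSample):
    # The text identifies (At, Wt, Nt) as the key factors for utilization monitoring.
    return s.At, s.Wt, s.Nt
```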

3.1. Data Collection

Firstly, raw data are retrieved from the image sensors and the other sensors, and data from different nodes are differentiated based on their unique identities. In the process of video collection, due to the limited capacity of the collection terminal, video and still pictures are collected alternately. The acquisition process separates image-based data from non-image-based data to facilitate subsequent multidimensional data fusion analysis.

3.2. Object Detection and Sensing Data Correction

The improved target detection model proposed in this paper is divided into three main parts. First, to address image clarity, a U-Net convolutional neural network is introduced to enhance the clarity of the images. Our own higher-definition dataset is then used to train a lightweight YOLOv4-tiny target detection model, yielding an algorithmic model for observing public space. Lastly, with this improved target detection model, temperature and GPS sensors are introduced so that the combination of algorithms and sensors can detect the age, location, height, coordinates, number of people, distance between the image sensor and people, and temperature parameters of people in public spaces. The extraction of these parameters is used to effectively study the use of public space.

3.2.1. U-Net-Based Image Super-Resolution Pre-Processing Model Construction

In image monitoring of public space, we encounter acquisition time points with poor clarity, such as at night or on cloudy days. The blurred images acquired at these times affect subsequent target detection and the calculation of spatial utilization. Accordingly, in this system, we first perform sharpness processing on the acquired image dataset. The structure of the image super-resolution processing using U-Net is shown in Figure 3a.
In the training phase of the model, the method uses single-angle and multi-angle images as training pairs, feeding single-angle low-resolution images into the U-Net convolutional neural network. Taking multi-angle high-resolution images as reference images, an optimized end-to-end mapping is obtained through iteration of the network weights. In recent years, U-Net has been used predominantly in the field of medical imaging, owing to its strong segmentation ability. We apply the U-Net network to the image reconstruction task; the network consists of an encoding path and a decoding path. The encoding path extracts abstract features by shrinking the input image layer by layer, while the decoding path expands the feature map layer by layer through deconvolution operations on the abstract features to produce the final reconstructed image. The encoding path consists of five convolutional blocks (each composed of two 3 × 3 convolutional layers), combined with max pooling layers with a stride of 2, which expand the number of feature channels from 1 to 1024 while the feature size is reduced from 608 × 608 to 38 × 38. The decoding path also contains five convolution blocks; the number of feature channels is reduced back to 1 through the deconvolution layers between the convolution blocks, and the size of the feature map is expanded from 38 × 38 to 608 × 608. The copy-and-splice layers establish mirror connections between feature maps of the same resolution in the encoding and decoding paths to compensate for the loss of contextual information caused by the deep network. At the back end of the network, a 1 × 1 convolutional layer is used to enhance the network's ability to convey complex features. In addition, the network adopts batch normalization (BN) to accelerate the convergence of the U-Net model and alleviate gradient dispersion.
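A compact PyTorch sketch of this encoder-decoder layout is shown below, assuming standard building blocks: five convolution blocks of two 3 × 3 convolutions with batch normalization, 2 × 2 max pooling, deconvolution upsampling, mirror skip connections, and a 1 × 1 convolution at the back end. ReLU stands in for the Maxout units here for brevity, and the exact widths and training details are simplifications rather than the authors' released network.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # Two 3x3 convolutions with batch normalization, as described in the text.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class UNetSR(nn.Module):
    """Sketch of the U-Net-style super-resolution network: the encoding path expands
    1 -> 1024 channels (608x608 down to 38x38); the decoding path mirrors it back."""
    def __init__(self):
        super().__init__()
        chs = [1, 64, 128, 256, 512, 1024]
        self.enc = nn.ModuleList([conv_block(chs[i], chs[i + 1]) for i in range(5)])
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ModuleList(
            [nn.ConvTranspose2d(chs[i + 1], chs[i], 2, stride=2) for i in range(4, 0, -1)])
        self.dec = nn.ModuleList(
            [conv_block(2 * chs[i], chs[i]) for i in range(4, 0, -1)])
        self.head = nn.Conv2d(64, 1, 1)  # 1x1 convolution at the back end

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.enc):
            x = block(x)
            if i < 4:                    # keep feature maps for the mirror connections
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([skip, up(x)], dim=1))
        return self.head(x)

# Usage sketch: y = UNetSR()(torch.randn(1, 1, 608, 608))  # y has shape (1, 1, 608, 608)
```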
The network adopts the Maxout unit as its activation function; its principle is shown in Figure 3b. Maxout can effectively construct the nonlinear mapping relationship, enabling the U-Net network to fit any convex function, while effectively avoiding dead units.
The Maxout unit can be defined as:
Z_j = X^{T} W_j + b_j
h(x) = \max_{j \in [1, k]} Z_j
Among them, Z_j represents the activation value of the j-th (j ∈ [1, k]) hidden-layer node, and W_j and b_j represent the weight and bias of that node, respectively. h(x) represents the final output of the Maxout activation function. The loss function combines the mean square error with structural similarity information, so that both detailed features and structural features are learned comprehensively. The loss function is defined as:
L_{loss} = \sum_{i=1}^{M} \left[ \alpha \times MSE(i) + \beta \times (1 - SSIM(i)) \right]
MSE_i = \sum_{p=1}^{N} \left\| Y_y(p) - Y_x(p) \right\|^2
SSIM_i = \frac{2\mu_x(p)\mu_y(p) + c_1}{\mu_x^2(p) + \mu_y^2(p) + c_1} \times \frac{2\sigma_{xy}(p) + c_2}{\sigma_x^2(p) + \sigma_y^2(p) + c_2}
Among them, M represents the total number of training image pairs, and MSE_i and SSIM_i represent the mean square error and structural similarity between the i-th reconstructed image and its reference image, respectively. N represents the total number of pixels in each image. Y_y(p) and Y_x(p) represent the gray values of the reconstructed image and the reference image at the p-th pixel, respectively. The means \mu_x(p), \mu_y(p), variances \sigma_x(p), \sigma_y(p), and covariance \sigma_{xy}(p) in the SSIM term can be obtained with a Gaussian filter of standard deviation \sigma_G(p):
\mu_x(p) = G_{\sigma_G} * P_x
\sigma_x^2(p) = G_{\sigma_G} * P_x^2 - \mu_x^2(p)
\sigma_{xy}(p) = G_{\sigma_G} * (P_x \times P_y) - \mu_x(p) \times \mu_y(p)
Among them, p represents the center pixel of the image block P_x, and * stands for the convolution operation. The calculation of \mu_y(p) and \sigma_y^2(p) is analogous to that of \mu_x(p) and \sigma_x^2(p). An image acquired at night and the result of the subsequent U-Net processing are shown in Figure 4a,b; the clarity of the image is visibly improved.
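For concreteness, the Maxout activation and the composite MSE plus (1 − SSIM) loss described above can be sketched as follows; the Gaussian window size (11) and the weights alpha and beta are assumptions, since the paper does not report their values.

```python
import torch
import torch.nn.functional as F

def maxout(x, w, b):
    # Maxout unit: Z_j = x^T W_j + b_j, h(x) = max_j Z_j.
    # w has shape (k, in_features, out_features); b has shape (k, out_features).
    z = torch.einsum('ni,kio->nko', x, w) + b
    return z.max(dim=1).values

def gaussian_window(size=11, sigma=1.5):
    # 2D Gaussian used to compute the local statistics of the SSIM term.
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = (g / g.sum()).unsqueeze(0)
    return (g.t() @ g).view(1, 1, size, size)

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Mean SSIM between single-channel images x and y of shape (N, 1, H, W)."""
    w = gaussian_window().to(x.device)
    pad = w.shape[-1] // 2
    mu_x, mu_y = F.conv2d(x, w, padding=pad), F.conv2d(y, w, padding=pad)
    sig_x = F.conv2d(x * x, w, padding=pad) - mu_x ** 2
    sig_y = F.conv2d(y * y, w, padding=pad) - mu_y ** 2
    sig_xy = F.conv2d(x * y, w, padding=pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sig_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sig_x + sig_y + c2)
    return (num / den).mean()

def sr_loss(recon, ref, alpha=0.5, beta=0.5):
    # L_loss = alpha * MSE + beta * (1 - SSIM); alpha and beta are illustrative weights.
    return alpha * F.mse_loss(recon, ref) + beta * (1.0 - ssim(recon, ref))
```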

3.2.2. Improved Target Detection Model

The improved target detection model is based on the original YOLOv4-tiny target detection. First, the U-Net neural network model is introduced in the preprocessing of the dataset to enhance the clarity of the initial image by super-resolution processing, which was introduced in the previous section. Second, the loss function of the YOLOv4-tiny model is modified in the training process. Lastly, in order to extract the public spatial target parameters, the target coordinates, position and height measurement mechanisms are initiated in the model, combined with the introduction of GPS sensors to effectively locate the camera position, and the target parameters are computed through the images captured by the camera. The improved structure is shown in Figure 5.
Vision systems need to ensure that images captured by cameras can be analyzed in real time, so the YOLO model is well suited. Considering the requirement of lightweight models in edge computing platforms, we choose YOLOv4-Tiny released on 25 June 2020 [34]. Compared with YOLOv3-Tiny [36], it is a huge improvement. Although YOLOv4-tiny is able to recognize the input images in real time, the accuracy is still not high enough, because the installation position of the cameras in public space causes different people and objects to be at different distances from the cameras, so the size and location of the detected targets are different. Accordingly, we have improved the model to adapt to the need for accurate real-time detection of images.
The YOLOv4-tiny model achieves model optimization by regressing the minimal loss function by means of parameter iteration, where the loss function in turn incorporates classification loss, confidence loss, and complete intersection over union (CIOU) loss. Classification loss is used to extract the classification information of objects in the image. Confidence loss is used to determine whether the target is included. And CIOU loss is used to determine the overlap area, center distance, and aspect ratio of recognized objects. The model structure and loss function of YOLOv4-tiny are provided in Figure 6.
The classification covers crowds, fitness equipment, bicycles, and vehicles. During classification training, people and fitness equipment are close in size, which makes them hard to distinguish. Thus, we insert the binary cross entropy (BCE) loss into the classification loss function, where the label is denoted by P (0 or 1) and P_x denotes the predicted probability; the probabilities of the same type in the same image are averaged, as shown in the following equation:
BCE(P_x) = \begin{cases} -\log P_x, & \text{if } P = 1 \\ -\log(1 - P_x), & \text{otherwise} \end{cases}
In order to increase the classification weight of fitness equipment, we introduce a weight factor a by which the classification loss function is effectively modulated. The improved classification loss function is given in the following equation.
BCE(P_x) = \begin{cases} -(1 - P_x)^{a} \log P_x, & \text{if } P = 1 \\ -\log(1 - P_x), & \text{otherwise} \end{cases}
The improved classification loss function is:
Loss_{class} = -\sum_{i=0}^{S \times S} I_{ij}^{obj} \sum_{c \in classes} \left[ (\hat{P}_i(t))^{a} P_i(t) \log \hat{P}_i(t) + (1 - \hat{P}_i(t)) \log(1 - \hat{P}_i(t)) \right]
We do not modify the confidence loss function. We enhance the CIOU loss function primarily considering the problem that people and fitness equipment are not well distinguished, and adjust the weight factor k to the central distance of the target detection. This middle distance weight can effectively enhance accuracy. The improved CIOU loss function is:
L_{CIOU} = 1 - IOU + \rho^2(b, b^{gt}) / c^2 + k \times \nu^2 / \left[ (1 - IOU) + \nu \right]
\nu = (4 / \pi^2) \left[ \arctan(w^{gt} / h^{gt}) - \arctan(w / h) \right]^2
where \rho^2(b, b^{gt}) is the squared distance between the predicted centroid position and the true centroid position, and w and h are the width and height, respectively. Since this term alone cannot improve detection accuracy, we added the k factor to the centroid position parameter \rho^2(b, b^{gt}) to increase the weight of the centroid position; this center-distance weight can effectively enhance accuracy. The three improved losses are superimposed to form the final loss function, which we name the common space loss function, so our improved model can be called the CS-YOLOv4-tiny model. The CS-YOLOv4-tiny model is illustrated in Figure 7.
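A sketch of the two modified loss terms is given below, with k = 1.5 and a = 0.3 taken from Table 2. It is an illustration rather than the authors' training code: the reduction over grid cells is simplified, and the k weight is placed on the normalized center-distance term as the prose describes (the displayed equation attaches it to the aspect-ratio term).

```python
import math
import torch

def weighted_bce(p, target, a=0.3):
    """Modified classification term: -(1 - p)^a * log p for positive labels,
    -log(1 - p) otherwise; the exponent a raises the weight of hard positives."""
    p = p.clamp(1e-7, 1 - 1e-7)
    pos = -((1 - p) ** a) * torch.log(p)
    neg = -torch.log(1 - p)
    return torch.where(target == 1, pos, neg).mean()

def improved_ciou_loss(pred, gt, k=1.5):
    """CIOU-style loss with an extra weight k on the normalized center distance.
    Boxes are (cx, cy, w, h) tensors of shape (..., 4)."""
    p1, p2 = pred[..., :2] - pred[..., 2:] / 2, pred[..., :2] + pred[..., 2:] / 2
    g1, g2 = gt[..., :2] - gt[..., 2:] / 2, gt[..., :2] + gt[..., 2:] / 2
    inter = (torch.min(p2, g2) - torch.max(p1, g1)).clamp(min=0).prod(-1)
    union = pred[..., 2:].prod(-1) + gt[..., 2:].prod(-1) - inter
    iou = inter / union.clamp(min=1e-7)
    rho2 = ((pred[..., :2] - gt[..., :2]) ** 2).sum(-1)                           # squared center distance
    c2 = ((torch.max(p2, g2) - torch.min(p1, g1)) ** 2).sum(-1).clamp(min=1e-7)   # enclosing-box diagonal squared
    v = (4 / math.pi ** 2) * (torch.atan(gt[..., 2] / gt[..., 3])
                              - torch.atan(pred[..., 2] / pred[..., 3])) ** 2
    return (1 - iou + k * rho2 / c2 + v ** 2 / ((1 - iou) + v)).mean()
```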
The improved target detection model in this paper can be divided into a model for detecting people alone and a model for detecting all objects and people. We evaluate our proposed improved target detection method using mAP and FPS.
In our process of detecting and classifying targets, there may be multiple labels for people and other objects in a single image, i.e., each image may contain targets of different categories, so the evaluation cannot use the common single-label classification criteria. The target detection task therefore uses a method similar to information retrieval, mAP, i.e., the mean of the average precision over all categories. Average precision is defined as the area enclosed by the precision-recall curve and the coordinate axes.
Precision and recall are computed as Precision = TP/(TP + FP) and Recall = TP/(TP + FN). A true positive (TP) means that the model predicts a positive sample and the prediction is correct; a false positive (FP) means that the model predicts a positive sample but the detection is mistaken; and a false negative (FN) means that the model predicts a negative sample but the detection is mistaken.
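A minimal sketch of these metrics is shown below; in practice mAP is computed per class over confidence-ranked detections, so this shows only the summary step.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def average_precision(precisions, recalls):
    """Area under the precision-recall curve for one class (step-wise summation);
    mAP is the mean of this value over all classes."""
    pairs = sorted(zip(recalls, precisions))
    ap, prev_r = 0.0, 0.0
    for r, p in pairs:
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```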
FPS is currently a common indicator for evaluating model efficiency; in two-dimensional image detection, it is defined as the number of images that can be processed per second. We trained on the samples with different a and k parameters and observed the resulting training accuracy (mAP). The results are shown in Table 2, from which we determine the optimal values k = 1.5 and a = 0.3.
Likewise, we compared several current target detection models, namely Faster R-CNN, YOLOv4, SSD, and RefineDet, with our CS-YOLOv4-tiny, as shown in Figure 8. Our improved model shows a clear improvement in accuracy (mAP). Figure 9 shows detection results on actually acquired images. The training results across these scenarios show that the requirements of our parameter acquisition are satisfied.

3.3. Public Space Parameter Acquisition and Data Visualization

After the accurate detection of targets obtained by machine vision in the previous section, we can recognize all the people in the public space. Owing to the limited computing power of the edge computing platform, we extract screenshots from the video file at fixed time intervals to form a file stream for identification and positioning. The size of the same target varies in the image because of the different distances between the target and the camera. We use GPS sensors to calibrate the position of the camera during its actual installation, providing a reference for the subsequent parameter extraction. The data acquired by the GPS sensor are mapped to the position coordinates of the corresponding target in the image, and the distance between the target and the camera and the height of the target are then computed from the coordinate position. Figure 10a,b illustrate the 3D and 2D plan views of the whole parameter calculation process, and the specific calculation is derived from Figure 10c,d.
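A minimal OpenCV sketch of the frame-extraction step is given below; the sampling interval is an assumed value, chosen only to illustrate how a reduced file stream could be produced before detection.

```python
import cv2

def extract_frames(video_path, every_n_seconds=60):
    """Sample still frames from a recorded video at a fixed interval so the edge
    platform only runs detection on a reduced file stream."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0     # fall back if FPS metadata is missing
    step = max(int(fps * every_n_seconds), 1)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```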
The recognition function frames each recognized target, from which the pixel height of the target and the pixel coordinates of the upper-left corner of the bounding box can be retrieved. The principle of using a reference object to determine distance is as follows: suppose the height of the selected reference object at the same angle in the actual scene is H (cm), its distance from the camera in the actual scene is D, and its pixel height in the image is h. We assume a pedestrian height of Hr = 170 cm, and the pedestrian's pixel height Ht can be retrieved from the program. By similar ratios, the distance between the pedestrian and the camera is obtained as (14):
d = (L \times H) / (H_r \times h) \times D
In the actual scene, the actual height of the camera can be retrieved, together with the distance between the target and the point directly below the camera, as shown in Figure 10a. The principle of determining the target position is as follows: the coordinates of the upper-left corner of the recognition frame are retrieved in the recognition program and compared, as pixel coordinates, with the pixel width of the entire image; scaling by the width of the actual scene then gives the horizontal distance of the target from the center line below the camera, and the Pythagorean theorem is applied to the actual distance and this center-line offset to calculate the actual position. In this project, the parameters of one scene are selected for data calibration and verification: the actual height of the reference object is 200 cm, its pixel height is 337, its actual distance is 12 m, the scene width is 15 m, and the scene length is 30 m. The calculation principle is the same for the other scenes in the community.
After identification, the pixel height of the reference object in the video is about 337 pixels, its actual height is about 2 m, and its actual distance from the camera is about 12 m. Pedestrian distance is estimated assuming a height of 170 cm, where h is the pedestrian's pixel height, retrieved from the picture. From the pixel height of the pedestrian recognition frame and the center coordinates of the frame, the actual distance of the pedestrian is calculated; the symbol ju represents the actual distance of the target from the camera. The remaining parameters are then computed using the Pythagorean theorem, trigonometric functions, and the same proportions, as in (15)–(17):
hen = \left[ x + (w/2) - (width/2) \right] / (width/2) \times 7.5 + 7.5
zongp = ju^2 - hen^2 - 16
zongh = \sqrt{zongp}
Among them, hen is the actual horizontal distance between the target and the center line, x is the abscissa of the lower-left corner of the target frame, w is the width of the target frame, width is the pixel width of the frame, and 7.5 m is half of the actual scene width. In (15), the abscissa of the lower-left corner plus half the target-frame width gives the abscissa of the center of the target frame; subtracting half the frame width gives the (negative) pixel distance from the box to the center line; the ratio of this to half the frame width, multiplied by 7.5, gives the (negative) distance of the target from the actual center; and finally half the actual width, 7.5, is added to obtain the horizontal distance from the target to the far left of the scene. In (16), zongp is the square of the target's ordinate in the schematic diagram, and 16 is the square of the actual camera height, obtained by the Pythagorean theorem. In (17), zongh, the target's ordinate in the schematic diagram, is the square root of zongp. The program counts the total number of on-site pedestrians by looping over all detections.
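Collecting the expressions above into one runnable helper gives the sketch below; the variable names keep the paper's hen, zongp, and zongh, the constants 7.5 m (half of the scene width) and 16 (square of the 4 m camera height) come from the calibrated scene, and ju, the straight-line target-to-camera distance, is assumed to be supplied by the distance estimate described earlier.

```python
import math

def target_position(x, w, width, ju, half_width=7.5, cam_height_sq=16.0):
    """Recover a target's in-scene position from its bounding box.
    x, w          : abscissa of the box's lower-left corner and box width, in pixels
    width         : width of the whole frame, in pixels
    ju            : estimated target-to-camera distance, in metres
    half_width    : half of the actual scene width (7.5 m in the calibrated scene)
    cam_height_sq : square of the camera mounting height (16 = 4 m squared)
    """
    # horizontal offset of the box centre from the image centre, scaled to metres,
    # then shifted so that 0 corresponds to the left edge of the scene
    hen = (x + w / 2 - width / 2) / (width / 2) * half_width + half_width
    # Pythagorean theorem: remove the horizontal offset and the camera height
    zongp = math.pow(ju, 2) - math.pow(hen, 2) - cam_height_sq
    zongh = math.sqrt(max(zongp, 0.0))
    return hen, zongh
```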
After the on-site calculation by the algorithm, the parameters are extracted, and the public space scene can be rendered as a two-dimensional display plane. The two-dimensional plane is generated by the cellular automaton algorithm researched by our team, and the simulation result is shown in Figure 10c. The two-dimensional plane is dynamic, and we intercept one frame of it here. In Figure 10c, green represents the fitness equipment, black represents pedestrians, and the interaction between green and black represents a pedestrian using the fitness equipment. At the same time, the cell size can be modified dynamically according to the specific scenario, providing technical support for subsequent studies of more factors. Figure 10d shows how the data were obtained. The parameters of two of these nodes are selected and organized into tables in the appendix, as shown in Table 3 and Table 4, which list the parameters of the common space at different time periods. Tcc stands for target center coordinates, Tph for target pixel height, Ead for estimated actual distance, Atc for actual target coordinates, and Ald for actual location.
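As an illustration of the 2D cellular display, the sketch below drops detected positions onto a grid whose cell size can be changed per scenario; the scene dimensions follow the calibrated 15 m × 30 m scene, and the numeric codes are arbitrary stand-ins for the green (equipment) and black (pedestrian) cells of Figure 10c.

```python
import numpy as np

def build_grid(pedestrians, equipment, scene_w=15.0, scene_l=30.0, cell=0.5):
    """Map actual positions (in metres) onto a 2D cellular grid.
    Codes: 0 = empty, 1 = fitness equipment (green in Figure 10c), 2 = pedestrian (black)."""
    rows, cols = int(scene_l / cell), int(scene_w / cell)
    grid = np.zeros((rows, cols), dtype=np.uint8)

    def place(points, code):
        for x, y in points:
            r = min(int(y / cell), rows - 1)   # clamp to the grid bounds
            c = min(int(x / cell), cols - 1)
            grid[r, c] = code

    place(equipment, 1)
    place(pedestrians, 2)
    return grid
```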

4. Public Space Utilization Calculation

There are many ways to study the use of space, mainly including the accessibility, connectivity, and privacy of the space. In this paper, we study space utilization from the perspective of accessibility. This mainly involves the number of people in the space, the age distribution in the space, and the change of temperature in the space; these three factors can also be combined to study the utilization of the space. The three parameters are collected at each node of the common space. Due to space limitations, we graphically display the parameters collected by two of the nodes, as shown in Figure 11.
The temperature, number of people, and age at the two nodes are fitted nonlinearly. Note that age is divided into two categories, adults and children, where "1" denotes children and "2" denotes adults.
The temperature fitting function for the two nodes is
y_{T1} = -90.55x^3 + 96.96x^2 - 16.90x + 29.20
y_{T2} = -93.10x^3 + 100.14x^2 - 17.94x + 29.60
The number fitting function for the two nodes is
y_{N1} = 2.55x^4 - 10.32x^3 + 11.43x - 1.97
y_{N2} = 3.82x^4 - 11.34x^3 + 11.21x - 1.91
The adult fitting function for the two nodes is
y_{ad1} = 18.93x^4 - 24.78x^3 + 8.56x + 0.69
y_{ad2} = 18.63x^4 - 23.85x^3 + 8.37x + 0.58
The child fit function for the two nodes is
y_{ch1} = 2.41x^4 + 0.44x^3 + 3.28x - 0.16
y_{ch2} = 2.11x^4 - 0.49x^3 + 3.48x - 0.05
The nonlinear fitting functions of all the nodes are averaged, and finally universal common space occupancy functions for temperature, number of people, and age (adults and children) can be derived, as shown below.
y_T = -92x^3 + 98x^2 - 17x + 29
y_N = 3x^4 - 10x^3 + 11x - 2
y_{ad} = 18x^4 - 24x^3 + 8x + 0.60
y_{ch} = 2x^4 + 0.5x^3 + 3x - 0.10
The functions fitted for temperature, number of people, adults, and children prove to be universal across the nodes in the public space and can be applied to other studies on the effective use of public space; it is instructive to observe the characteristics of the public space from the functions and their visualization. Using the formulas for each parameter, we can derive the general formula for the efficient use of public space as:
y_x = A x^n + B x^{n-1} + \cdots + N x^0
The effective calculation of the utilization rate of various public spaces can be realized through our formula for effective utilization of public spaces, and the effective study of the relationships between people and people, people and space, and psychology and space in public spaces can be completed.
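A sketch of the fitting and averaging procedure is given below, assuming that each node's 24-hour series is normalized to x in [0, 1] (the paper does not state the normalization) and using numpy.polyfit with degree 3 for temperature and degree 4 for the people and age series.

```python
import numpy as np

def fit_node(x, y, degree):
    """Fit one node's 24-hour series with a polynomial of the given degree."""
    return np.polyfit(x, y, degree)

def general_function(per_node_series, degree):
    """Average the per-node polynomial coefficients to obtain the universal
    utilization function y(x) = A*x^n + B*x^(n-1) + ... + N."""
    coeffs = [fit_node(x, y, degree) for x, y in per_node_series]
    return np.mean(coeffs, axis=0)

# Illustrative usage: x is time of day normalized to [0, 1], sampled every 5 minutes.
# x = np.linspace(0, 1, 288)
# temp_general = general_function([(x, temp_node1), (x, temp_node2)], degree=3)
```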

5. Discussion

For the study of public space, our early-stage research relied largely on questionnaires and similar instruments, which is labor intensive. Many teams researching public space have therefore introduced cloud computing for data collection. This is an effective collection method, but invalid entries in the collected data cannot be processed effectively and must be screened at a later stage of data processing, which in turn increases the workload. Introducing edge computing fulfils the requirement of real-time processing on the data side, but its cost also needs to be considered. In addition, the study of public space may require monitoring the environment over time and across several dimensions, all of which calls for a lightweight edge computing platform, especially when multiple nodes must collect and process data. Therefore, this system designs a low-cost front-end hardware acquisition circuit, through which data are acquired and fed into the edge computing platform for real-time processing. Meanwhile, the data acquired by each node can be sent wirelessly to the nearest edge platform for processing. In this way, both data acquisition and data processing are well designed, but there is still room for improvement in this system, such as optimisation of the power consumption of the whole system, the design of the future network, and wireless power supply.

6. Conclusions

In this paper, we propose a hardware system for public space data acquisition and analysis. The system realizes multi-dimensional data acquisition and pre-processing, can run various lightweight algorithmic models for data processing, and includes an improved algorithmic model to address the detection of blurred data during data processing. Finally, an effective mapping of 3D data to 2D data is achieved. The system fulfils its intended purpose.

Author Contributions

Conceptualization, Y.J. and W.W.; methodology, Y.J. and W.W.; software, Y.J.; formal analysis, B.Z.; investigation, B.Z.; resources, W.W.; writing—original draft preparation, Y.J.; writing—review and editing, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Technology-National Key R and D Program, grant number 2020YFC1807-903.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hjort, M.; Martin, W.M.; Stewart, T.; Troelsen, J. Design of Urban Public Spaces: Intent vs. Reality. Int. J. Environ. Res. Public Health 2018, 15, 816. [Google Scholar] [CrossRef] [PubMed]
  2. National Commissioners on High Quality Health Systems. National commissioners on high quality healths. Lancet Glob Health 2019, 7, e179–e180. [Google Scholar] [CrossRef] [PubMed]
  3. Mitchell, K. The culture of urban space. Urban Geogr. 2000, 21, 443–449. [Google Scholar] [CrossRef]
  4. Salas-Olmedo, M.H.; Quezada, C.R. The use of public spaces in a medium-sized city: From Twitter data to mobility patterns. J. Maps 2017, 13, 40–45. [Google Scholar] [CrossRef]
  5. Luo, J. Research on Vitality Measurement of Village Public Space Based on Big Data and Multidimensional Module. Iop Conf. Ser. Earth Environ. Sci. 2020, 558, 042003. [Google Scholar] [CrossRef]
  6. Satyanarayanan, M. Edge Computing. Computer 2017, 50, 36–38. [Google Scholar] [CrossRef]
  7. Mitton, N. QoS in Wireless Sensor Networks. Sensors 2018, 18, 3983. [Google Scholar] [CrossRef]
  8. Diety, G.; Hamouda, S. Energy Optimisation in Wireless Sensor Network. Engineering 2017, 8, 880–889. [Google Scholar] [CrossRef]
  9. Liu, X. Analysis on the influencing mechanism of informational policy instrument on adopting energy consumption monitoring technology in public buildings. Energy Effic. 2020, 13, 1485–1503. [Google Scholar] [CrossRef]
  10. Fuentes-García, M.; Camacho, J.; Maciá-Fernández, G. Present and Future of Network Security Monitoring. IEEE Access 2021, 9, 112744–112760. [Google Scholar] [CrossRef]
  11. Magdy, M. Security of medical images for telemedicine: A systematic review. Multimed. Tools Appl. 2022, 81, 25101–25145. [Google Scholar] [CrossRef] [PubMed]
  12. Cao, J.; Kang, J. The influence of companion factors on soundscape evaluations in urban public spaces. Sustain. Cities Soc. 2021, 69, 102860. [Google Scholar] [CrossRef]
  13. Xu, H.; Xue, B. Key indicators for the resilience of complex urban public spaces. J. Build. Eng. 2017, 12, 306–313. [Google Scholar] [CrossRef]
  14. Euchner, J. The Internet of Things. Res.-Technol. Manag. 2018, 61, 10–11. [Google Scholar] [CrossRef]
  15. Reddi, V.J.; Kim, H. On the Internet of Things. IEEE Micro 2016, 36, 5–7. [Google Scholar] [CrossRef]
  16. Chang, W.J.; Chen, L.B.; Sie, C.Y.; Yang, C.H. An Artificial Intelligence Edge Computing-Based Assistive System for Visually Impaired Pedestrian Safety at Zebra Crossings. IEEE Trans. Consum. Electron. 2021, 67, 3–11. [Google Scholar] [CrossRef]
  17. Hu, B.; Li, J. An Edge Computing Framework for Powertrain Control System Optimization of Intelligent and Connected Vehicles Based on Curiosity-Driven Deep Reinforcement Learning. IEEE Trans. Ind. Electron. 2021, 68, 7652–7661. [Google Scholar] [CrossRef]
  18. Chang, Y.C.; Lai, Y.H. Campus Edge Computing Network Based on IoT Street Lighting Nodes. IEEE Syst. J. 2020, 14, 164–171. [Google Scholar] [CrossRef]
  19. Gao, C.; Li, P.; Zhang, Y.; Liu, J.; Wang, L. People counting based on head detection combining Adaboost and CNN in crowded surveillance environment. Neurocomputing 2016, 208, 108–116. [Google Scholar] [CrossRef]
  20. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef]
  21. Li, J.; Liang, X.; Shen, S.; Xu, T.; Feng, J.; Yan, S. Scale-Aware Fast R-CNN for Pedestrian Detection. IEEE Trans. Multimed. 2018, 20, 985–996. [Google Scholar] [CrossRef]
  22. Ding, X. Local keypoint-based Faster R-CNN. Appl. Intell. 2020, 50, 3007–3022. [Google Scholar] [CrossRef]
  23. Hsu, W.Y.; Lin, W.Y. Ratio-and-Scale-Aware YOLO for Pedestrian Detection. IEEE Trans. Image Process. 2021, 30, 934–947. [Google Scholar] [CrossRef] [PubMed]
  24. Wang, Z.; Boukhobza, J.; Shao, Z. Preserving SSD lifetime in deep learning applications with delta snapshots. J. Parallel Distrib. Comput. 2019, 133, 63–76. [Google Scholar] [CrossRef]
  25. Zhang, S.; Wen, L.; Lei, Z.; Li, S.Z. RefineDet++: Single-Shot Refinement Neural Network for Object Detection. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 674–687. [Google Scholar] [CrossRef]
  26. Gautam, A. Neural style transfer combined with EfficientDet for thermal surveillance. Vis. Comput. 2022, 38, 4111–4127. [Google Scholar] [CrossRef]
  27. Li, C. A YOLOv4 Model with FPN for Service Plates Detection. J. Electr. Eng. Technol. 2022, 17, 2469–2479. [Google Scholar] [CrossRef]
  28. Yao, Y.; Han, L.; Du, C.; Xu, X.; Jiang, X. Traffic sign detection algorithm based on improved YOLOv4-Tiny. Signal Process. Image Commun. 2022, 107, 116783. [Google Scholar] [CrossRef]
  29. Garofalo, A. PULP-NN: Accelerating quantized neural networks on parallel ultra-low-power RISC-V processors. Philos. Trans. A Math. Phys. Eng. Sci. 2019, 378, 20190155. [Google Scholar] [CrossRef]
  30. Chu, T.D.; Chen, C.K. Design and Implementation of Model Predictive Control for a Gyroscopic Inverted Pendulum. Appl. Sci. 2017, 7, 1272. [Google Scholar] [CrossRef]
  31. Sütő, J. Embedded System-Based Sticky Paper Trap with Deep Learning-Based Insect-Counting Algorithm. Electronics 2021, 10, 1754. [Google Scholar] [CrossRef]
  32. Yao, C. Floating Garbage Collector Based on OpenMV. J. Phys. Conf. Ser. 2021, 1952, 032058. [Google Scholar] [CrossRef]
  33. Watanabe, W.; Maruyama, R.; Arimoto, H.; Tamada, Y. Low-cost multi-modal microscope using Raspberry Pi. Optik 2020, 212, 164713. [Google Scholar] [CrossRef]
  34. Gamanayake, C.; Jayasinghe, L.; Ng, B.K.K.; Yuen, C. Cluster Pruning: An Efficient Filter Pruning Method for Edge AI Vision Applications. IEEE J. Sel. Top. Signal Process. 2020, 14, 802–816. [Google Scholar] [CrossRef]
  35. Tabani, H.; Mazzocchetti, F.; Benedicte, P.; Abella, J.; Cazorla, F.J. Performance Analysis and Optimization Opportunities for NVIDIA Automotive GPUs. J. Parallel Distrib. Comput. 2021, 152, 21–32. [Google Scholar] [CrossRef]
  36. Ge, P.; Guo, L.; He, D.; Huang, L. Light-weighted vehicle detection network based on improved YOLOv3-tiny. Int. J. Distrib. Sens. Netw. 2022, 18, 15501329221080665. [Google Scholar] [CrossRef]
Figure 1. Hardware deployment and hardware structure diagram: (a) Hardware deployment floor plan; (b) Sensor acquisition modules; (c) Edge computing module.
Figure 2. The overall structure of the system and the construction of public space utilization.
Figure 3. Super-resolution image network structure: (a) U-Net composite network structure; (b) Maxout activation function.
Figure 4. Comparison of slices before and after treatment: (a) Night images; (b) Results of U-Net image processing.
Figure 5. Improved object detection model.
Figure 6. Model structure and loss function of YOLOv4-tiny.
Figure 7. The CS-YOLOv4-tiny model.
Figure 8. Comparison chart of accuracy of different models.
Figure 9. Improved target detection model for practical detection: (a) Various target detection charts; (b) Pedestrian and fitness equipment detection chart; (c) Nighttime pedestrian detection map.
Figure 10. Parameter calculation and display: (a) Parameter calculation 3D map; (b) Parameter calculation 2D map; (c) 2D display platform structure diagram; (d) Parameter acquisition.
Figure 11. Three-parameter curve for two-node acquisition. (a) Temperature data collection for node 1 over a 24 h period; (b) Headcount data collection for node 1 over a 24-h period; (c) Number of adults and children captured in a 24-h period for node 1; (d) Temperature data collection for node 2 over a 24 h period; (e) Headcount data collection for node 2 over a 24-h period; (f) Number of adults and children captured in a 24-h period for node 2.
Table 1. Sensor data specification and sampling rate.

Sensor                  | Data Range                   | Sampling Rate
Age (At)                | 1–2                          | Image recognition
Location (Lt)           | Latitude and longitude       | every 5 min
Temperature (Tt)        | −10∼80 °C                    | every 5 min
Height (Ht)             | 80∼200 cm                    | every 10 min
Coordinate (Ct)         | 0 × 0 ∼ 1920 × 1080 pixels   | every 10 min
Distance (Dt)           | None                         | every 10 min
Number of People (Nt)   | None                         | every 10 min
Table 2. The effect of k and a on accuracy (mAP); the maximum value is obtained at k = 1.5, a = 0.3.

k \ a  | None   | 0.9    | 0.6    | 0.5    | 0.4    | 0.3    | 0.2
None   | 90.94% | 90.19% | 91.10% | 91.32% | 91.56% | 91.71% | 91.11%
1.1    | 91.25% | 90.95% | 91.58% | 91.67% | 91.87% | 91.92% | 91.51%
1.2    | 90.27% | 90.15% | 90.77% | 91.06% | 91.22% | 91.36% | 90.82%
1.3    | 91.19% | 91.08% | 91.37% | 91.72% | 92.02% | 92.21% | 91.31%
1.4    | 91.54% | 91.42% | 91.88% | 92.13% | 92.25% | 92.33% | 91.51%
1.5    | 92.27% | 92.21% | 92.52% | 92.79% | 92.85% | 92.92% | 92.83%
1.6    | 90.56% | 90.05% | 91.02% | 91.29% | 91.33% | 91.89% | 91.01%
1.7    | 91.71% | 91.56% | 91.87% | 91.95% | 92.17% | 92.36% | 91.85%
1.8    | 91.63% | 91.45% | 91.75% | 91.81% | 92.11% | 92.23% | 91.77%
1.9    | 89.46% | 89.37% | 90.19% | 90.55% | 90.82% | 91.07% | 90.84%
Table 3. Public space node 1 parameter acquisition. For each of the three time periods (00:00–07:59, 08:00–15:59, 16:00–23:59), the columns give Tcc, Tph, Ead, Atc, and Ald in order.

No. | Tcc | Tph | Ead | Atc | Ald | Tcc | Tph | Ead | Atc | Ald | Tcc | Tph | Ead | Atc | Ald
1 | (412, 512) | 166 | 71.98 | (96.56, 135.91) | 28.02 | (166, 325) | 112 | 90.86 | (68.09, 88.12) | 9.17 | (167, 300) | 192 | 62.90 | (86.20, 70.15) | 38.42
2 | (94, 366) | 110 | 91.56 | (26.36, 75.51) | 8.75 | (382, 257) | 160 | 74.09 | (98.15, 67.21) | 25.93 | (311, 75) | 306 | 23.06 | (82.10, 32.11) | 77.01
3 | (477, 413) | 162 | 73.39 | (90.21, 113.57) | 27.70 | (112, 65) | 110 | 91.56 | (45.62, 33.00) | 8.75 | (332, 280) | 158 | 74.78 | (95.00, 66.45) | 25.23
4 | (399, 315) | 106 | 92.96 | (121.35, 88.56) | 7.47 | (364, 266) | 164 | 72.69 | (88.62, 82.20) | 28.41 | (158, 246) | 110 | 91.56 | (56.58, 70.15) | 8.75
5 | (301, 316) | 160 | 74.08 | (60.34, 97.58) | 26.65 | (344, 300) | 162 | 73.39 | (87.00, 85.29) | 26.91 | (115, 348) | 162 | 73.39 | (42.99, 82.00) | 27.70
6 | (202, 322) | 188 | 64.30 | (41.20, 90.02) | 36.86 | (388, 398) | 106 | 92.96 | (69.91, 85.33) | 7.47 | (188, 320) | 164 | 72.69 | (55.08, 70.27) | 27.62
7 | (287, 189) | 186 | 65.00 | (56.65, 78.21) | 36.35 | (120, 366) | 184 | 65.70 | (40.28, 96.44) | 35.43 | (350, 456) | 106 | 92.96 | (88.22, 98.10) | 7.47
8 | (266, 287) | 170 | 70.59 | (87.66, 65.48) | 30.84 | (80, 325) | 142 | 80.38 | (50.60, 88.82) | 19.97 | (256, 224) | 164 | 72.69 | (49.11, 69.20) | 28.06
9 | (350, 325) | 106 | 92.96 | (102.56, 66.21) | 8.55 | (305, 188) | 182 | 66.40 | (80.53, 67.82) | 34.92 | (393, 256) | 180 | 67.07 | (91.28, 42.55) | 34.01
10 | (266, 288) | 190 | 63.60 | (72.86, 87.36) | 37.14 | (211, 245) | 172 | 69.89 | (72.69, 77.00) | 31.55 | (405, 488) | 140 | 81.08 | (86.10, 71.55) | 19.26
11 | (334, 142) | 212 | 55.91 | (75.36, 63.02) | 44.10 | (133, 384) | 106 | 92.96 | (55.15, 71.62) | 8.54 | (215, 447) | 180 | 67.10 | (56.88, 90.11) | 34.21
12 | (233, 61) | 190 | 63.60 | (102.68, 36.48) | 36.40 | (389, 267) | 192 | 62.90 | (80.66, 54.12) | 37.85 | (161, 335) | 172 | 69.89 | (54.88, 71.92) | 31.55
13 | (366, 22) | 276 | 33.55 | (87.65, 12.33) | 66.85 | (302, 220) | 206 | 58.01 | (82.41, 35.12) | 42.00 | (121, 382) | 108 | 92.26 | (45.11, 81.46) | 9.25
14 | (452, 66) | 176 | 68.50 | (132.80, 45.71) | 32.72 | (155, 108) | 188 | 64.30 | (67.01, 55.10) | 35.70 | (378, 108) | 194 | 62.20 | (83.66, 45.00) | 38.56
15 | (388, 50) | 310 | 21.67 | (92.36, 16.22) | 78.41 | (299, 53) | 288 | 29.35 | (88.17, 15.55) | 71.10 | (208, 71) | 200 | 60.11 | (89.10, 40.58) | 39.90
Table 4. Public space node 2 parameter acquisition. For each of the three time periods (00:00–07:59, 08:00–15:59, 16:00–23:59), the columns give Tcc, Tph, Ead, Atc, and Ald in order.

No. | Tcc | Tph | Ead | Atc | Ald | Tcc | Tph | Ead | Atc | Ald | Tcc | Tph | Ead | Atc | Ald
1 | (208, 284) | 184 | 65.70 | (80.05, 70.51) | 34.30 | (289, 295) | 202 | 59.41 | (78.88, 68.07) | 40.60 | (207, 225) | 174 | 69.19 | (79.08, 91.22) | 32.26
2 | (279, 122) | 282 | 31.45 | (84.58, 38.81) | 68.97 | (300, 364) | 186 | 65.00 | (68.05, 88.45) | 35.00 | (312, 125) | 204 | 58.71 | (85.38, 59.82) | 42.10
3 | (450, 244) | 184 | 65.70 | (94.52, 53.07) | 35.58 | (372, 140) | 274 | 34.25 | (80.07, 45.92) | 66.14 | (360, 207) | 200 | 60.11 | (95.38, 63.88) | 39.90
4 | (318, 51) | 306 | 23.06 | (81.22, 37.44) | 77.01 | (507, 358) | 192 | 62.90 | (108.46, 83.58) | 38.43 | (384, 308) | 176 | 68.49 | (109.77, 79.22) | 31.51
5 | (133, 245) | 122 | 87.37 | (48.08, 69.01) | 12.67 | (410, 51) | 308 | 22.36 | (86.30, 22.11) | 77.71 | (349, 244) | 192 | 62.90 | (88.49, 70.67) | 38.44
6 | (112, 275) | 110 | 91.56 | (44.65, 72.00) | 8.75 | (286, 438) | 120 | 88.06 | (45.00, 102.46) | 11.97 | (155, 208) | 161 | 75.18 | (55.90, 62.81) | 26.85
7 | (259, 366) | 164 | 72.69 | (72.71, 89.66) | 28.40 | (246, 280) | 176 | 68.49 | (50.48, 72.68) | 31.55 | (105, 356) | 107 | 93.25 | (42.80, 89.00) | 7.51
8 | (237, 450) | 166 | 71.99 | (85.90, 88.40) | 28.32 | (156, 370) | 110 | 91.56 | (48.69, 89.99) | 8.75 | (255, 260) | 183 | 64.66 | (67.55, 76.08) | 34.43
9 | (265, 502) | 106 | 92.96 | (95.07, 99.33) | 7.47 | (116, 285) | 164 | 72.69 | (59.87, 84.22) | 28.40 | (240, 275) | 173 | 70.89 | (65.11, 77.68) | 31.75
10 | (220, 344) | 166 | 71.99 | (66.75, 72.19) | 28.77 | (130, 275) | 166 | 70.99 | (64.80, 81.37) | 28.32 | (321, 180) | 207 | 58.11 | (84.89, 65.13) | 42.03
11 | (256, 224) | 182 | 66.40 | (80.46, 60.18) | 34.72 | (349, 122) | 106 | 92.96 | (79.56, 52.90) | 7.47 | (308, 108) | 289 | 29.75 | (83.15, 40.83) | 71.59
12 | (361, 381) | 138 | 81.77 | (87.16, 82.99) | 18.56 | (320, 285) | 166 | 71.99 | (64.85, 86.88) | 28.77 | (420, 366) | 159 | 75.08 | (95.15, 90.10) | 25.53
13 | (362, 352) | 180 | 67.10 | (90.15, 96.02) | 34.21 | (403, 255) | 184 | 65.70 | (105.77, 84.68) | 35.43 | (404, 351) | 165 | 73.08 | (89.89, 87.55) | 28.36
14 | (377, 379) | 174 | 69.19 | (98.45, 91.56) | 32.26 | (242, 423) | 138 | 81.77 | (86.80, 109.87) | 18.56 | (380, 301) | 181 | 67.49 | (84.25, 75.10) | 34.51
15 | (321, 335) | 200 | 60.11 | (79.99, 65.28) | 40.69 | (328, 256) | 180 | 67.10 | (81.00, 69.58) | 34.21 | (395, 211) | 195 | 63.20 | (90.58, 54.21) | 38.96