Article

Analysis of Community Outdoor Public Spaces Based on Computer Vision Behavior Detection Algorithm

School of Architecture and Art, North China University of Technology, Beijing 100144, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(19), 10922; https://doi.org/10.3390/app131910922
Submission received: 9 August 2023 / Revised: 20 September 2023 / Accepted: 28 September 2023 / Published: 2 October 2023

Abstract

Community outdoor public spaces are indispensable to urban residents' daily lives. Analyzing community outdoor public spaces from a behavioral perspective is crucial and is an effective way to support human-centered development in urban areas. Traditional behavioral analysis often relies on manually collected behavioral data, which is time-consuming and labor-intensive and lacks data breadth. Sensors have greatly increased the breadth of behavioral data, but their accuracy is still insufficient, especially for the fine-grained differentiation of populations and behaviors. Computer vision is more efficient at distinguishing populations and recognizing behaviors. However, most existing computer vision applications face challenges: behavior recognition is often limited to pedestrian trajectory recognition, and few studies recognize the diverse behaviors of crowds. In view of these gaps, this paper proposes a more efficient approach that employs computer vision tools to examine different populations and different behaviors and to obtain important statistical measures of spatial behavior, taking the Bajiao Cultural Square in Beijing as a test bed. The population and behavior recognition model incorporates several improvements. Firstly, it leverages an attention mechanism, which emulates the human selective cognitive mechanism by accentuating pertinent information while disregarding extraneous data: the ResNet backbone network is refined by integrating channel attention, which amplifies critical feature channels and suppresses irrelevant ones, thereby enhancing the efficacy of population and behavior recognition. Secondly, the dataset required by the model is constructed from public datasets combined with self-made data, improving the robustness of the detection model in specific scenarios. The model can distinguish five types of people and six kinds of behaviors with an identification accuracy of 83%, achieving fine-grained behavior detection for different populations. To a certain extent, this alleviates the traditional difficulty of collecting large-scale behavioral data at a fine granularity. Applying the population and behavior recognition model in conjunction with spatial typology analysis, we conclude that different crowds have different behavioral preferences, that different crowds use space inconsistently, that behavior and spatial function are sometimes inconsistent, and that behavior is concentrated over time. This provides more comprehensive and reliable decision support for fine-grained planning and design.

1. Introduction

Urban community outdoor public spaces, as the most direct and easily accessible places for urban residents’ outdoor activities, are closely related to the lives of urban residents. With people’s pursuit of quality of life, the quality of urban outdoor public spaces is receiving more and more attention. In the mutual relationship between people and urban spaces, people and their activities are at the core of urban spaces. The process of people’s activities and living places interweaving with each other, that is, the diversity of urban life, makes a city vibrant [1]. Therefore, in order to optimize urban space supply and improve the happiness of citizens’ lives, it is important to take citizens as the main body, start from citizens’ spatial behavior, understand the behavioral models of different citizens, and explore the interactive relationships between their behavior and urban spaces.
In the 1960s, influenced by humanistic thought, behaviorism was gradually introduced into urban research [2]. Behaviorism links the daily behavior of urban individuals with urban space, recognizes the impact of urban space on people, studies the characteristics of residents’ daily activities and the use of space, and provides a human-based perspective and methodology for urban planning. The American scholar Chapin proposed the conceptual framework of the urban activity system, emphasizing the relationship between human activities and the constraints of time and space. He clearly pointed out that time and space have an impact on behavioral patterns and that residents’ activities can interact with their urban spatial structures [3]. As cities shift from extensive to high-quality development, urban research based on individual behavior and its planning have become a hot topic of concern in related disciplines in China. Some scholars examine the changes in the macro urban spatial structure from the micro perspective of behavioral space, such as the interpretation of urban space based on commuting behavior [4,5,6,7,8]. Some scholars focus on the correlation between behavior and space and explore spatial optimization strategies [9,10,11]. For example, Wang (2017) introduced interaction relationship theory in ecological psychology to study the interaction between the behavior of the elderly and the environment [12].
However, due to the limitations of the methods used to collect behavioral data, relevant research has been limited to single activities and travel behaviors, and urban behavioral analysis has mostly consisted of small-scale, manual comparative analysis. The development of artificial intelligence has provided new conditions for behavioral research, such as using computer vision to detect pedestrian trajectories and examine the interaction between pedestrians and space [13,14,15,16,17], which, to some extent, addresses the limitations of manual methods. However, the current use of computer vision for behavioral research is still in its infancy. The information extracted from video data is limited to the temporal changes in individuals' spatial positions and cannot represent the specific activities of individuals in space. The behavioral activities of urban residents in urban spaces are rich and complex. Fine-grained collection of urban residents' behavior is the basic premise for understanding, analyzing, diagnosing, and optimizing the interactions between people and space. Most related studies have focused on single activities, ignoring the richness of users and the diversity of daily activities. These deficiencies in accurate behavior recognition also hinder more detailed study of the relationship between behavior and space.
Therefore, this study uses computer vision to propose a fine-grained method for recognizing crowd behavior and establishes a research framework for the interaction between behavior and space. This framework is conducive to a more fine-grained analysis of the diversity of pedestrian behavior. Specifically, our framework integrates the high-level features of crowds (such as gender, age) as a supplement, thereby dividing different crowds. In addition, the time dimension was added to extract information on different behaviors of different crowds at different times and in different spaces. The main contribution of this paper is that the proposed behavior detection framework can automatically extract more fine-grained behavioral data, supporting more detailed research on behavior and space. In addition, the framework proposed in this paper has a certain generalizability and can be applied to large-scale scenarios, such as using large-scale urban image data collected by urban surveillance cameras to achieve behavioral analysis at the urban scale.
The rest of this paper is arranged as follows. Section 2 reviews current attempts to use computer vision technology in behavioral research and research on the correlation mechanism between behavior and space. Section 3 establishes the basic framework of this paper, integrating pedestrian detection, pedestrian attribute recognition, and behavior recognition into a behavior detection and recognition framework, and then further formulates the relationship between behavior and space. Section 4 builds and trains the behavior detection algorithm. Section 5 shows the application of our framework in an actual community outdoor public space, focusing on the relationships between crowd attributes and space, crowd attributes and behavior, and behavior and space. Section 6 summarizes the correlation mechanism between behavior and space, extracts the main contributions of this paper, and finally discusses potential future applications of this research.

2. Literature Review

2.1. Behavioral Research Based on Computer Vision

The combination of computer vision and behavioral imaging has changed a situation in which behavioral data were difficult to obtain and difficult to use efficiently, providing new possibilities for urban research. Scholars have introduced computer vision into urban research; the basic method is to use cameras and drones to collect images of urban residents' activities, use computer vision to detect and obtain basic behavioral data, and then carry out research on urban-related issues. The types of basic data currently obtained with computer vision, and the corresponding research directions, are as follows.
Pedestrian count statistics are obtained to explore the use of space. For example, Wei (2005) used a head detection algorithm to count the number of users entering a space and the number of users sitting at a fountain, etc., to evaluate the quality of architectural design from the perspective of use [18]. Hou et al. (2020) detected the specific positions of people in pedestrian images, mapped them to real-world positions, and explained their use of space [19].
Pedestrian spatial distribution data are obtained to explore spatial characteristics. Hu et al. (2017) and Ding (2018) used optical flow to extract pedestrian trajectories on urban pedestrian streets [16,17]; Wu and Hu (2022) collected pedestrian video data from campuses and shopping malls and used multi-target tracking technology to extract differences in pedestrian speed and spatial distribution [14,15]; and Liu et al. [13] (2022) and Niu [20] (2022) used multi-target tracking in videos to extract crowd movement trajectories and characterize spatial vitality.
Pedestrian attribute data are obtained to explore spatial characteristics from the perspective of different populations. For example, Wong et al. [21] (2021) further subdivided pedestrians by age and gender, such as setting up three categories of young, adult, and elderly populations. Through four stages of pedestrian detection, feature extraction, identity tracking, and motion analysis, pedestrian attribute recognition and tracking are achieved, and refined pedestrian trajectory information is obtained.
Pedestrian motion data are obtained to explore their influencing factors. For example, Liang [22] (2020) and others used computer vision technology to record pedestrian walking speed and explored how the weather and climate affect walking speed and to what extent. Jiao [23] (2023) counted street pedestrian data and found that the walking speed in different sub-areas of the street was significantly different and proposed that walking speed is related to the category, brand, and decoration of shops.
Overall, the use of computer vision for urban behavioral analysis is in its infancy. Behavioral data mostly consist of basic statistical parameters such as population count, pedestrian trajectory, and walking speed. They are limited to describing the spatiotemporal characteristics of single activities and mobile behaviors, simplifying the user’s stay in space as their use of space, and ignoring the diversity of activities and the exploration of interactions with space.

2.2. Research on the Relationship between Behavior and Space

In the wave of reflection on the oversimplification of spatial issues in traditional urban planning, scholars have begun to examine the role of people, emphasizing individual micro behaviors, the interaction between urban space and human spatial behavior, and presenting different behaviors and environments in a differentiated manner. In the study of the interaction between behavior and space, the following analysis models have been formed.
Spatial–behavioral interaction models at different spatial scales: The study of human behavior includes micro-level research at the individual level and macro-level research at the aggregate level. The micro level covers community-scale and street-scale spaces such as streets, squares, green spaces, parks, and shopping malls. Xuan [24] (2022) studied the correlation between street quality and pedestrian walking speed and flow. The macro level covers the urban and intra-urban scales, specifically exploring different behavioral patterns between cities and between different regions within cities and understanding the spatial characteristics of residents' behavior [4,5,6,20].
The spatial–behavioral interaction model from the perspective of different groups: By comparing different spaces and different groups, the specificity of the interaction pattern between group behavior and space is explored. Specifically, it explores behavioral differences from multiple perspectives such as gender, age, income, and mobility and explains the interaction with space. For example, Zhou [25] proposed that there is a clear differentiation in the clustering behavior of different income groups. Ekawati [26] (2015) analyzed the impact of the physical environment of urban streets on children’s behavior. Dong et al. [9] (2022) studied the factors influencing the risky behavior of the elderly in residential spaces. Finally, Chen et al. [11] (2022) observed the use preferences of the elderly in different park spaces.
This study explores the differences in behavior and space of different groups of people by using computer vision to detect people’s daily behavior in community square spaces on a microscopic scale.

3. Methodology

Based on the computer vision behavior detection algorithm, this study proposes a framework to intelligently recognize pedestrian attributes and behavioral information and to jointly analyze the correlation mechanism between behavior and space using manually collected spatial data. This study mainly includes the following three steps: (1) collecting original data on citizens' behavior in community outdoor public spaces, together with spatial data, through cameras; (2) using the improved behavior detection algorithm of this paper for behavior detection (the behavior detection algorithm consists of two parts: pedestrian detection, and pedestrian attribute and behavior recognition), analyzing the detection results for behavioral characteristics, and conducting spatial characteristic analysis on the spatial data; and (3) combining the results of the behavioral and spatial characteristic analyses and using data statistics, data analysis, data visualization, and other methods to explore the mechanism of correlation between behavior and space, uncovering inconsistencies between behavioral needs and the spatial status quo and providing a basis for optimizing community outdoor public spaces (Figure 1).

3.1. Behavior Detection Algorithms

The behavior detection proposed in this paper consists of two parts: pedestrian detection and behavior recognition. Pedestrian detection was undertaken by the YOLO model, whose main task was to extract the coordinates of the pedestrian category in an image: the coordinates of the upper left corner and the width and height of the target box (x, y, w, h). The original image was cropped according to these coordinates to obtain a box diagram for each pedestrian. The SE-ResNet network was then used to recognize pedestrian attributes and behaviors for all pedestrian box diagrams, finally producing a pedestrian attribute and behavior label vector (Figure 2). In addition, since pedestrian attribute recognition and behavior recognition rely on different datasets, they could not be completed through a single feature extraction network and were therefore performed independently.
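To make the two-stage structure concrete, the sketch below shows how such a pipeline can be wired together. The `detector`, `attribute_model`, and `behavior_model` objects, the crop size, and the normalization values are placeholders or common defaults, not the authors' implementation.

```python
import torch
from torchvision import transforms

# Standard ImageNet-style preprocessing; the crop size and statistics below are assumptions.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((256, 128)),   # typical pedestrian-crop aspect ratio (assumption)
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def run_behavior_detection(image, detector, attribute_model, behavior_model, device="cpu"):
    """Two-stage pipeline sketch: detect pedestrians, then classify each crop independently.

    `image` is an H x W x 3 uint8 array; `detector` yields (x, y, w, h) boxes;
    the two SE-ResNet classifiers are assumed to be already trained and loaded.
    """
    results = []
    for (x, y, w, h) in detector(image):                  # Stage 1: pedestrian boxes
        crop = image[y:y + h, x:x + w]                    # box diagram for one pedestrian
        tensor = preprocess(crop).unsqueeze(0).to(device)
        with torch.no_grad():                             # Stage 2: separate attribute / behavior nets
            attr_logits = attribute_model(tensor)
            behav_logits = behavior_model(tensor)
        results.append({"box": (x, y, w, h),
                        "attributes": attr_logits.squeeze(0).cpu(),
                        "behavior": int(behav_logits.argmax(dim=1))})
    return results
```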

3.1.1. Pedestrian Detection

Pedestrian detection is an application of object detection, a core task in computer vision. The essence of object detection is to find the target object in a given image and determine the category and position of the target. Pedestrian detection finds pedestrians as targets in a given image and can therefore be regarded as a specific application of object detection.
The pedestrian detection in this paper used the existing YOLOv7x model [27]. You Only Look Once (YOLO) is an object detection algorithm introduced by Redmon et al. in 2015 [28]. The main idea behind YOLO is to apply a backbone network to the entire image, simultaneously predicting multiple target bounding box positions and their corresponding categories. YOLO transforms the object detection task into a regression task instead of using traditional sliding window or region proposal methods. This end-to-end model structure makes the inference speed much faster than that of other object detection algorithms; the YOLOv5s model in particular can achieve an impressive inference speed of about 140 FPS. Today, the YOLO family has multiple versions, including YOLOv1 [28], YOLO9000 [29], YOLOv3 [30], YOLOv4 [31], YOLOv5, YOLOv6 [32], YOLOv7 [33], and YOLOv8, and each version is divided into n, s, m, l, and x series according to model complexity, where the n model has the smallest space complexity and is the fastest, and the x model has the largest space complexity and the highest accuracy. Maturely trained parameter models have been released and can be downloaded directly from the official YOLO repositories [27,34].
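As an illustration of how pretrained YOLO weights can be called in practice, the snippet below uses the ultralytics YOLOv5 hub interface as a convenient stand-in for the YOLOv7x weights actually used in this paper; the image path and confidence threshold are arbitrary, and the person class follows the COCO convention (class 0).

```python
import torch

# Load a pretrained YOLOv5 model via torch.hub (stand-in for the paper's YOLOv7x weights).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("square_frame.jpg")      # accepts a path, URL, numpy array, or PIL image
detections = results.xyxy[0]             # tensor rows: [x1, y1, x2, y2, confidence, class]

pedestrian_boxes = []
for x1, y1, x2, y2, conf, cls in detections.tolist():
    if int(cls) == 0 and conf > 0.5:     # COCO class 0 is "person"; threshold is illustrative
        # Convert to the (x, y, w, h) format used by the cropping step.
        pedestrian_boxes.append((int(x1), int(y1), int(x2 - x1), int(y2 - y1)))
```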

3.1.2. Pedestrian Attribute and Behavior Recognition

The pedestrian attribute recognition and behavior recognition tasks were implemented using the SE-ResNet model [35], which is a combination of the Squeeze and Excitation (SE) [36] module and the ResNet [37] architecture. The main idea is to use the channel attention mechanism of SE to weigh the feature channels in the ResNet network (Figure 3), enhancing important feature channels and suppressing unimportant ones, thereby improving the network’s generalization ability, performance, and computational efficiency. Figure 3 shows the SE-Residual Block, which consists of a convolutional layer and channel attention, and the SE-ResNet is formed by stacking several SE-Residual Blocks.
ResNet, as a classic neural network architecture, was first proposed by He et al. [37] in 2015 to solve the problem of vanishing gradients caused by continuously stacking network layers. ResNet introduces residual skip connections, allowing the network to directly reuse information extracted by earlier layers in later layers. With its simple structure and ease of use, ResNet has become a fundamental architecture for feature extraction. The SE network is an attention mechanism for convolutional neural networks proposed by Hu et al. in 2018 [36]. The design inspiration of the SE module comes from the attention mechanism in the human visual system. By generating feature weights corresponding to the input feature tensor to adjust the representation of each channel, the network can focus more specifically on valuable feature channels and achieve improved results.
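For reference, a Squeeze-and-Excitation block of the kind stacked in SE-ResNet can be written in a few lines of PyTorch. This is a generic sketch of the published SE design (squeeze by global average pooling, excitation through a bottleneck MLP, sigmoid gating), not the authors' exact code; the reduction ratio of 16 is the default from the original SE paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention (after Hu et al., 2018)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average per channel
        self.excite = nn.Sequential(                    # excitation: bottleneck MLP + sigmoid gate
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)                  # (B, C) channel descriptors
        w = self.excite(w).view(b, c, 1, 1)             # (B, C, 1, 1) channel weights in [0, 1]
        return x * w                                    # amplify useful channels, suppress the rest
```

In an SE-Residual Block (Figure 3), this reweighting is applied to the output of the convolutional branch before it is added to the skip connection.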

3.2. Analysis of the Mechanism of Pedestrian–Space Association

By utilizing the behavior detection algorithm proposed in the previous section to analyze pedestrian behavior within a specific space, we could obtain data on population attributes and behavior. Combined with the analysis of spatial characteristics, three major coupling relationships could be derived: the coupling relationship between population attributes and behavior, the coupling relationship between population attributes and space, and the coupling relationship between behavior and space. By counting population attributes against behaviors, the behavioral preferences of different populations could be obtained; by counting population attributes in different spaces, the main users of different spaces could be found; and by counting the relationship between space and behavior, inconsistencies between space and behavior could be explored (Figure 4).
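Once the detector has produced per-pedestrian records of attributes, behaviors, space, and time, the three coupling relationships reduce to simple cross-tabulations. The sketch below assumes the records have been collected into a pandas DataFrame; the column names and sample rows are illustrative only.

```python
import pandas as pd

# One row per detected pedestrian; column names and rows are illustrative placeholders.
records = pd.DataFrame({
    "age":      ["child", "geriatric", "adult", "geriatric"],
    "behavior": ["running", "chess", "walking", "sitting"],
    "space":    ["square", "leisure", "square", "leisure"],
    "time":     pd.to_datetime(["09:10", "10:30", "17:05", "15:20"], format="%H:%M"),
})

# Coupling 1: population attributes vs. behavior -> behavioral preferences of each group.
attr_behavior = pd.crosstab(records["age"], records["behavior"])

# Coupling 2: population attributes vs. space -> main users of each space.
attr_space = pd.crosstab(records["age"], records["space"])

# Coupling 3: behavior vs. space -> possible inconsistencies between behavior and spatial function.
behavior_space = pd.crosstab(records["behavior"], records["space"])
```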

4. Behavior Detection Model Construction

4.1. Site Selection and Behavioral Data Collection

The research site selected for this study was the Bajiao Cultural Square on Bajiao Street, Shijingshan District, Beijing. The surrounding buildings and facilities were built long ago, and there are many old residential areas, so the need to optimize outdoor activity space is particularly urgent; this is why the research was carried out there (Figure 5).
In the construction of the behavior detection model, this paper used a combination of public datasets and self-made datasets to build the dataset required by the behavior detection model, and the image data collected in the research area could also be used as a source of self-made data.

4.2. Dataset and Data Preprocessing

Public datasets, which are often provided by research institutions and companies, have characteristics such as a large data volume, rich variety, and wide coverage. However, actual engineering practice often involves specific application scenarios with special recognition needs, emphasizing the specificity rather than the universality of the algorithmic model and placing higher requirements on fine-grained detection. For example, object recognition for autonomous driving requires high recognition accuracy from the driver's perspective and across multiple environments, whereas intelligent security in buildings often requires the recognition of a single spatial scene from an overhead perspective. A customized dataset is therefore required so that the model can cope with most scenarios while achieving higher detection capabilities for personalized targets and special application scenarios. Furthermore, the large number of categories in public data ultimately increases the algorithmic model's complexity. The self-made data define only the required recognition categories, which allows the necessary pruning and optimization of the algorithmic model.
(1)
Public datasets
Common public datasets for pedestrian attributes include PETA [38], RAP [39], PA-100k [40], etc. Among them, the PETA dataset was compiled by Yubin Deng et al. from the Chinese University of Hong Kong in 2014. The dataset contains a total of 8705 pedestrians and 19,000 images with resolutions ranging from 17 × 39 to 169 × 365 pixels. Each pedestrian is labeled with 61 binary and 4 multi-category attributes. The PETA dataset is a collection of smaller pedestrian re-identification datasets annotated with attributes.
Following the actual needs of this study, the PETA dataset was cropped using a Python script according to Table 1 to extract age and gender attributes and generate label files.
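A script of this kind might look like the following sketch. The PETA annotation layout (one line per image listing attribute tags) and the attribute names used here are assumptions about the public release of the dataset, and the mapping onto the three age classes of Table 1 is our own illustrative simplification.

```python
# Sketch: extract gender and age from a PETA-style annotation file and write a compact label file.
# The annotation format ("<image_id> <attr> <attr> ...") and the attribute names below are
# assumptions about the public PETA release; adjust them to the actual files.

AGE_MAP = {                      # collapse PETA age tags into the three classes used in this study
    "personalLess15": "child",
    "personalLess30": "adult",
    "personalLess45": "adult",
    "personalLess60": "adult",
    "personalLarger60": "geriatric",
}

def convert_peta_labels(label_txt: str, out_txt: str) -> None:
    with open(label_txt) as fin, open(out_txt, "w") as fout:
        for line in fin:
            tokens = line.split()
            image_id, attrs = tokens[0], set(tokens[1:])
            gender = 1 if "personalMale" in attrs else 0      # 1 = male, 0 = female (assumed coding)
            ages = [AGE_MAP[a] for a in attrs if a in AGE_MAP]
            if not ages:
                continue                                      # skip images without an age tag
            fout.write(f"{image_id} {gender} {ages[0]}\n")

convert_peta_labels("PETA/Label.txt", "peta_age_gender_labels.txt")   # hypothetical paths
```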
(2)
Custom dataset
The collected video data were extracted frame by frame, and 3000 images with large differences in scenes, crowds, and behaviors were selected through manual screening as the dataset required for training. According to the categories set in Table 1 and Table 2, the training set was manually labeled using LabelImg v1.8.1 software [41] to obtain label files for each image (Figure 6).
Different attribute values were extracted from the label files using a Python script, and the target label files were finally generated (Figure 7 and Table 1, Table 2 and Table 3), constructing a combination vector of pedestrian image attributes and behavior labels. It is worth mentioning that since gender belongs to a binary classification problem and could be represented by 0 and 1, it could be represented by a single label.
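The extraction step can be sketched as follows, assuming LabelImg's Pascal VOC XML output and a hypothetical class-name convention of the form gender_age_behavior; the paper does not state its exact labeling scheme, so the names and the label-vector layout below are illustrative only.

```python
# Sketch: turn LabelImg (Pascal VOC) XML annotations into the combined label vector
# [gender, age one-hot (3), behavior one-hot (6)]. The "gender_age_behavior" class-name
# convention is an assumption, not the authors' scheme.
import xml.etree.ElementTree as ET

GENDERS = {"male": 1, "female": 0}
AGES = ["child", "adult", "geriatric"]
BEHAVIORS = ["exercise", "running", "sitting", "walking", "standing", "chess"]

def one_hot(index: int, size: int) -> list[int]:
    v = [0] * size
    v[index] = 1
    return v

def parse_annotation(xml_path: str):
    """Yield (crop box, label vector) for every pedestrian object in one annotated image."""
    root = ET.parse(xml_path).getroot()
    for obj in root.iter("object"):
        gender, age, behavior = obj.findtext("name").split("_")
        box = obj.find("bndbox")
        xyxy = tuple(int(box.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        label = [GENDERS[gender]] + one_hot(AGES.index(age), 3) + one_hot(BEHAVIORS.index(behavior), 6)
        yield xyxy, label
```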

4.3. Hyper-Parameters and Evaluation Metric

This article built an SE-ResNet50 model based on the PyTorch deep learning framework, with a hardware configuration of NVIDIA RTX A5000 24 GB GPUs. The network was optimized using the Adam algorithm with an initial learning rate (lr) of 0.001, and the batch size was set to 128 (too small a batch size causes greater fluctuation during training and slower convergence). The model was trained for 50 epochs, and Table 4 summarizes the values of the main hyper-parameters used in the training stage. The PETA + self-made dataset used for network training was randomly divided into training and test sets at a ratio of 6:4, with a total of 10,200 images in the training set and 6800 images in the test set. The training set was used for model training, while the test set was used to validate model performance.
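A minimal training setup matching the hyper-parameters in Table 4 might look like the sketch below, which uses the timm library's SE-ResNet50 implementation and a torchvision ImageFolder directory as stand-ins for the authors' own model and dataset code; only the behavior branch (six classes) is shown, and the directory path is hypothetical.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms
import timm

# The directory layout (one sub-folder per behavior class) and path are assumptions.
tfm = transforms.Compose([transforms.Resize((256, 128)), transforms.ToTensor()])
dataset = datasets.ImageFolder("peta_plus_custom_crops/", transform=tfm)

n_train = int(0.6 * len(dataset))                    # 6:4 split (10,200 train / 6,800 test in the paper)
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)   # Table 4: batch size 128

device = "cuda" if torch.cuda.is_available() else "cpu"
model = timm.create_model("seresnet50", pretrained=False, num_classes=6).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)                 # Table 4: Adam, lr = 0.001

for epoch in range(50):                                              # Table 4: 50 epochs
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```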
Gender division is a binary classification problem, while age and behavior division are multi-class classification problems. For binary classification problems, the accuracy, precision, recall, and F1 score can be used as evaluation indicators of model performance. For multi-class problems, each category is usually treated as a separate binary (one-vs-rest) classification, the indicators are calculated for each category separately, and the per-category results are then summarized using certain criteria (e.g., averaging).
Accuracy: the proportion of correctly predicted samples to the total number of samples, calculated as follows in Equation (1).
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{1}$$
Precision: the proportion of samples correctly predicted as positive to the number of samples predicted as positive. The calculation formula is as follows in Equation (2):
$$\mathrm{Precision} = \frac{TP}{TP + FP} \tag{2}$$
Recall: the proportion of samples correctly predicted as positive to the actual number of positive samples, also known as sensitivity or the true positive rate. The calculation formula is as follows in Equation (3):
$$\mathrm{Recall} = \frac{TP}{TP + FN} \tag{3}$$
F1 score: an indicator that comprehensively considers accuracy and recall and is the harmonic mean of precision and recall. The higher the F1 score, the better the performance of the model. The calculation formula is as follows in Equation (4):
$$F1 = 2 \times \frac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \tag{4}$$
where true positive (TP) represents the number of positive labels correctly predicted; true negative (TN) represents the number of negative labels correctly predicted; false positive (FP) represents the number of negative samples incorrectly predicted; and false negative (FN) represents the number of positive samples incorrectly predicted.
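For completeness, Equations (1)–(4) can be computed directly from these confusion counts; for the multi-class age and behavior tasks the metrics are computed one-vs-rest per class and then averaged (macro averaging is shown below as one common choice, since the paper does not state which averaging rule was used).

```python
def binary_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Equations (1)-(4) from the confusion counts of one binary or one-vs-rest class."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

def macro_average(per_class_counts: list[tuple[int, int, int, int]]) -> dict:
    """Treat each class one-vs-rest, then average the four metrics (macro averaging)."""
    per_class = [binary_metrics(*counts) for counts in per_class_counts]
    return {k: sum(m[k] for m in per_class) / len(per_class) for k in per_class[0]}
```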

4.4. Model Construction and Model Training

In the pedestrian attribute recognition model, pedestrian gender is a binary classification problem and age is a multi-class classification problem, so attribute recognition is a multi-task classification problem. The model was therefore modified in its final fully connected (FC) layer (Figure 8).
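Our reading of this modification is two parallel heads on a shared SE-ResNet trunk: a one-logit gender head trained with a binary loss and a three-logit age head trained with a multi-class loss. The sketch below illustrates that structure; the feature dimension, head layout, and equal loss weighting are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class AttributeHead(nn.Module):
    """Shared backbone with two task-specific FC heads (gender: binary, age: 3 classes)."""
    def __init__(self, backbone: nn.Module, feat_dim: int = 2048):
        super().__init__()
        self.backbone = backbone                 # SE-ResNet50 trunk with its own FC layer removed
        self.gender_head = nn.Linear(feat_dim, 1)
        self.age_head = nn.Linear(feat_dim, 3)

    def forward(self, x):
        feats = self.backbone(x)                 # (B, feat_dim) pooled features
        return self.gender_head(feats), self.age_head(feats)

def multitask_loss(gender_logit, age_logits, gender_target, age_target):
    # Binary cross-entropy for the 0/1 gender label plus cross-entropy for the age class.
    bce = nn.functional.binary_cross_entropy_with_logits(gender_logit.squeeze(1), gender_target.float())
    ce = nn.functional.cross_entropy(age_logits, age_target)
    return bce + ce                              # equal weighting of the two tasks is an assumption
```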
The experimental results can be seen in Figure 9 and Table 5.
As shown in Figure 9a–c,g,h, within the first 10 epochs the errors of the training and test sets for both pedestrian attributes and behaviors decreased quickly and the accuracy rose quickly. At around 30 epochs, the training-set error was still decreasing, while the test-set error converged to its lowest value and the accuracy gradually converged to its maximum. Beyond 30 epochs, the training-set error continued to decrease and its accuracy continued to increase, but the test-set error increased and its accuracy grew only slowly, indicating that the model had begun to overfit. Therefore, near-optimal model parameters are obtained at around 30 epochs.
According to Table 5, the recognition accuracy of the model was above 83%; the model achieved good results and could be used for behavior detection in subsequent real-world scenarios.

5. Analysis of Pedestrian Movement Patterns

Two video collection points, A and B, were set up in the Bajiao Cultural Square (Figure 10 and Figure 11). Each point was set to record one frame per minute, and 9 hours of continuous collection were carried out from 9:00 to 18:00, with a total of 1080 valid pictures collected. By detecting and recognizing the collected pictures one by one through the model trained in the previous section, we obtained a pedestrian attribute statistics chart (Figure 12) and behavior statistics chart (Figure 13) that changed over time.
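The paper does not give the aggregation code, but turning the per-minute detections into the time series of Figures 12 and 13 amounts to grouping per-pedestrian records by collection point, minute, age, and behavior; the sketch below uses illustrative column names and placeholder rows.

```python
import pandas as pd

# One row per detected pedestrian; the three rows below are illustrative placeholders only.
detections = pd.DataFrame({
    "point":    ["A", "A", "B"],
    "time":     pd.to_datetime(["09:01", "09:01", "09:02"], format="%H:%M"),
    "age":      ["geriatric", "child", "geriatric"],
    "behavior": ["exercise", "running", "sitting"],
})

# Figure 12-style series: per-minute head counts by age group at each collection point.
attr_series = (detections
               .groupby(["point", pd.Grouper(key="time", freq="1min"), "age"])
               .size()
               .unstack(fill_value=0))

# Figure 13-style series: per-minute behavior counts for each age group at point A.
behav_series = (detections[detections["point"] == "A"]
                .groupby([pd.Grouper(key="time", freq="1min"), "age", "behavior"])
                .size())
```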

5.1. Spatial Feature Analysis

In order to analyze the behavior of crowds in different spaces, according to Figure 10, the Bajiao Cultural Square was decomposed into three types of spaces: square space, leisure space, and fitness space. These three types of spaces could further be decomposed into different spatial elements (Table 6).

5.2. Analysis of Population Attributes and Spatial Relationships

The statistical results show that there were significant differences in the use of different spaces. Point A represents a square space and Point B represents a leisure and fitness space.
Point A can be roughly divided into several stages through Figure 12a.
The first stage: from 9:00 to 10:00, the total number of people was gradually increasing, with a total number of around 10 people, among which the proportion of elderly people was relatively large, and the proportion of adults and children was relatively small. After 10:00, the total number of people began to fall. In this stage, due to the morning exercise habits of some elderly people, the main users of the space were elderly people.
The second stage: around 10:30, the total number of people rose again and reached the peak of the morning, with a maximum of over 20 people. There was a slight decline in the middle period, but it remained above 10 people overall, still exceeding the total number of people in the first stage. It started to gradually decline around 11:30 and no longer rose around 12:00. In this stage, the number of children increased, while adults also continued to increase. On the contrary, the number of elderly people decreased. At this time, children were the main users of the space, and adults appeared at the same time to take care of the children.
The third stage: from 12:00 to 14:30, the total number of people was essentially zero, with only brief stays of 1–3 people. In this stage, almost no one used the space, and it mainly served pedestrians passing through.
The fourth stage: from 14:30, the total number of people continued to climb and gradually rose to more than 10 people, exceeding the first stage, and started to decline before 17:00. At this time, the proportion of elderly people was relatively high, and the proportion of adults and children was relatively low. The main users were elderly people.
The fifth stage: from 17:00, the total number of people rose sharply, reaching the maximum value of the day, with a peak of over 40 people. The number of children rose sharply, and the number of adults also increased significantly. Around 17:40, the number of people started to drop sharply, but the total still exceeded that of the morning. In this stage, although the numbers of adults and elderly people increased, their role was often to take care of children, and children became the main users of the space.
Point B can be roughly divided into several stages through Figure 12b:
The first stage: from 9:00 to 12:00, at this stage, the number of people started to rise gradually from around 10 people to more than 20 people. The proportion of elderly people was the largest, and the number of adults and children remained flat. Elderly people were the main users.
The second stage: from 12:30 to 13:30, in this stage, the number of people remained zero, which is similar to the third stage at point A. The difference is that the time interval here was relatively short.
The third stage: from 13:30 to 16:50, the number of people initially grew steadily, then slowed, and began to rise further at around 15:30. At around 16:50, the number of people dropped sharply and then rose again, which was mostly caused by adults and elderly people going to pick up children from school.
The fourth stage: from 16:50 to 18:00, the number of people in this stage rose again, but the total number of people grew relatively less. The main activity area for newly added children after school was area A, and the number of elderly people and adults did not change much.
Statistical analysis of areas A and B shows no significant difference in spatial usage by gender, but significant differences by age and time: different age groups used the different spaces in markedly different ways. In area A, the elderly were the main users only from 9:00 to 10:00, and children were the main users at other times. Although the number of adults increased in the statistical chart, on-site observation showed that adults and some elderly people appeared in this space mainly to take care of children and did not "really" use the space. The total number of people in the afternoon far exceeded that in the morning, and the number of children rose sharply at 17:00 due to school dismissal. In area B, the total number of people was lower than in area A, because space B was a leisure and fitness space with a relatively low capacity compared with area A. In terms of user groups, the overall proportion of elderly people remained high, and around 17:00, when children were dismissed from school, the number of people in this area did not increase significantly. This shows that area B had relatively limited appeal to children and that its user group was relatively stable; elderly people were the main users throughout the entire period.

5.3. Analysis of Mechanisms Linking Population Attributes and Behaviors

According to Figure 13, comparing the behavioral statistics of different age groups at points A and B shows that different ages had different behavioral preferences. As people age, their activity types shift from intense activities to more soothing ones. Children are keen on running, often interwoven with walking, and this was particularly prominent at point A (Figure 10a, square space). Adults rarely engaged in intense activities such as running; they mostly stood, walked, and sat. Their activity types were relatively simple, which is related to their responsibility to take care of children: adults followed their children's activities. The activities of the elderly were relatively rich; apart from running, many participated in activities such as exercising, walking, sitting, and playing chess and cards. Among these, chess and cards had a strong temporal character, concentrated from 10:00 to 11:40 and from 15:00 to 17:00. Compared with other activities, the number of people playing chess changed at a relatively stable rate, and the activity lasted a relatively long time, up to 2 hours. This shows that the elderly had a high demand for chess and cards.

5.4. Analysis of the Mechanisms Linking Behavior and Space

The Bajiao Cultural Square can be analyzed in terms of the characteristics of its spatial environment and its spatial composition elements. The correspondence between behavior and space is organized in Table 7.
Based on the preferences of children for running and walking and the preferences of the elderly for exercising, walking, sitting, and playing chess and cards mentioned in the previous section, combined with the characteristics of square space, leisure space, and fitness space, the following analysis was performed.
From the perspective of children, the openness of the square space satisfied their running behavior, while the shaded and small-scale characteristics of the leisure space constrained their running activities. When the number of children increased significantly at 17:00 after school was dismissed, the number of people in the square increased significantly, whereas there was no comparable increase in children's activities in the leisure area. This shows that leisure and fitness spaces have relatively low appeal for children's activities. The appearance of running behavior in leisure spaces can be regarded as an inconsistency between behavior and space.
Since adults were not the main users of the space, they are not analyzed further here.
From the perspective of the elderly, they had relatively few activities in square spaces and more in the leisure and fitness spaces. Their main activities were exercising, walking, and sitting. These three types of activities are consistent with space. However, there is some inconsistency between the preference of the elderly people for playing chess and cards and the non-independent, non-enclosed, and noisy characteristics of leisure spaces. Through actual observation, many elderly people brought their own chess and card equipment to put on seats. Many elderly people were also standing around watching. The spatial environment and spatial elements did not meet the needs of elderly people for chess and cards.
There is consistency between behavior and space in that specific behaviors have specific spatial requirements, just as exercising requires fitness equipment and sitting requires seats and shaded spaces. Conversely, specific spaces attract specific behaviors: fitness spaces host a large number of exercise activities rather than other activities, and leisure spaces attract a large number of people sitting and resting. When behavior is inconsistent with space, two situations occur. One is that people with specific behavioral needs are "squeezed out", so that their needs are not met. The other is that people change the existing environment to meet specific behavioral needs; for example, many elderly people brought their own chess and card equipment to play chess on the seats. Objectively speaking, however, this will to a certain extent "squeeze out" other people who would otherwise sit there, and competition for space may arise.

5.5. Conclusions of the Analysis and Planning Recommendations

Through comprehensive analysis, children’s spatial preference is for square spaces, and their behavioral preferences are mostly for running and walking. The openness of the square is consistent with children’s preferred behavior, whereas leisure spaces are inconsistent. The elderly prefer fitness and leisure spaces, and their behavioral preferences are mostly for exercising, sitting, and playing chess and cards. Among them, exercising, sitting, and walking are consistent with fitness and leisure spaces, while chess and cards are inconsistent with leisure spaces. Adults have no obvious preference for space and prefer to walk and sit. Through actual observation, adults took care of their children, so they appeared to follow their children and did not use space independently.
The primary activities of children are chasing and playing. The range of their activities is narrow, and their demand for the existing leisure and fitness facilities is low, because most of the equipment is designed for elderly people's activities and is not appealing to children. There is a general shortage of facilities for children. It is suggested that facilities such as swings and seesaws be added to the fitness area to enrich children's activities.
The elderly have a strong demand for chess and cards, but facilities are lacking to a certain extent; many elderly people brought their own chess and card equipment to play in the leisure area. Given their preference for a shaded, enclosed, and quiet environment for chess and card activities, the adjacency of the leisure space to the square space and fitness space, and the long duration of chess and card activities, it is recommended that temporary, movable chess and card equipment be added on the west side of the square. When elderly people want to play chess, they can set it up temporarily and put it away when finished.

6. Conclusions and Prospects

This paper took a community square in Beijing as the practice site and used a behavior detection algorithm of computer vision to effectively extract the population’s attributes, behavior types, and time information, while establishing statistical charts. Through the analysis of the population types and spatial types, it was found that there was no obvious difference in the use of space by gender, while there was some difference in the use of space by age. Therefore, space has different use values for different age groups, and it is proposed that the needs of different age groups should be considered in spatial design. Through the analysis of crowds and behaviors, it was found that different crowds have different behavioral preferences, thus determining their behavioral needs. Through the analysis of behavior types and spatial types, it was found that there is inconsistency between some behaviors and spatial functions, reflecting the places where the current spatial functions are misplaced for behavioral needs, thus determining the focus points for spatial optimization. Through the analysis of behavior and time, it was concluded that different activity types have relatively concentrated time periods. Therefore, time factors are to be considered in meeting behavioral activity needs, and spatial time planning is proposed, that is, the same space should be transformed into different types of spaces at different time periods and different behavioral needs should be met at different time periods.
In summary, our research makes contributions in the following aspects:
  • We propose a method that can extract crowd attributes, behavior types, and time information from video images. Compared with traditional behavior research that mostly relies on the manual collection of behavior, this method realizes automation and efficiency. Compared with the current use of computer vision for behavior research, this method also realizes the extraction of fine-grained behavior. Our integrated framework can effectively extract more fine-grained behavioral information, providing a basis for precise spatial optimization to meet the behavioral needs of different crowds.
  • We introduce an attention mechanism to enhance important feature channels and suppress irrelevant feature channels, improving the efficiency of behavioral recognition. In model training, we integrated public datasets and self-made data, achieving higher robustness for specific scenarios.
  • Using the method proposed in this paper, we took a community outdoor public space as the practice site, effectively extracting crowd attributes, behavior types, and time information and combining the division of spatial types to propose an analysis framework for the correlation mechanism between behavior and space. We drew the following conclusions: different crowds have different behavioral preferences, there is inconsistency in the use of space by different crowds, there is inconsistency between behavioral and spatial function, and behavior is concentrated over time.
The strong advantage of a computer vision behavior detection algorithm in automatically extracting video information provides conditions for fine-grained behavioral research at the macro scale in the future. Fine-grained behavioral research at the urban scale and with a timespan of more than one year will be of great value and of a broad scope.

Author Contributions

Conceptualization, methodology, review, and funding acquisition, L.W.; investigation, software, writing—original draft, W.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the 2023 North China University of Technology Organized Scientific Research Project (Grant No. 110051360023XN278-13) and the Beijing Municipal Science and Technology Commission Project (Grant No. Z181100009218001).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are not publicly available due to privacy reasons.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jacobs, J. The Death and Life of Great American Cities; Vintaqe Books: New York, NY, USA, 1961. [Google Scholar]
  2. Ning, Y.M. Urban Geography in Western Countries. Urban Probl. 1985, 2, 29–34. [Google Scholar]
  3. Chapin, F.S. Human Activity Patterns in the City: Things People Do in Time and in Space; John Wiley and Sons: New York, NY, USA, 1974. [Google Scholar]
  4. Niu, X.Y.; Ding, L.; Song, X.D. Understanding Urban Spatial Structure of Shanghai Central City Based on Mobile Phone Data. Urban Plan. Forum 2014, 24, 61–67. [Google Scholar]
  5. Long, Y.; Sun, L.J.; Tao, S. A Review of Urban Studies Based on Transit Smart Card Data. Urban Plan. Forum 2015, 3, 70–77. [Google Scholar] [CrossRef]
  6. Long, Y.; Zhang, Y.; Cui, C.Y. Identifying Commuting Pattern of Beijing Using Bus Smart Card Data. Acta Geogr. Sin. 2012, 67, 1339–1352. [Google Scholar]
  7. Gao, Q. Big Data-Driven Analysis on Urban Activity Space Dynamics. Ph.D. Thesis, Wuhan University, Wuhan, China, 2019. [Google Scholar]
  8. Chai, Y.; Shen, J. Travel-Activity Based Research Frame of Urban Spatial Structure. Hum. Geogr. 2006, 21, 108–112. [Google Scholar]
  9. Dong, H.X.; Xie, Q.Y. Research on health risks to the elderly in residential spaces from the perspective of behavioral safety: Theoretical methods, risk formation rules, and assessment and prevention. City Plan. Rev. 2022, 46, 77–89. [Google Scholar]
  10. Chen, J.Z.; Zhang, J.J. Research on the Relationship between Community Park Space and Characteristics of Outdoor Activities of the Elderly. Chin. Landsc. Archit. 2022, 38, 86–91. [Google Scholar] [CrossRef]
  11. Chen, X.W.; Liu, W.F.; Yang, C.H. Research on the Influence of Urban Park Built Environment Elements on the Activities of the Elderly. South Archit. 2022, 12, 93–103. [Google Scholar]
  12. Wang, H.Y. A Study on Architectural Space Design of Elderly Institution Based on the Interaction between Space and Behavior. Ph.D. Thesis, Dalian University of Technology, Dalian, China, 2017. [Google Scholar]
  13. Liu, B.; Qing, L.B.; Han, L.M.; Long, Y. Research on Public Space Vitality Representation Based on Space Trajectory Entropy. Landsc. Archit. 2022, 29, 95–101. [Google Scholar] [CrossRef]
  14. Wu, S.J.; Hu, Y.K. A Visualized Research of Human’s Behavior in Public Spaces Based on Deep Learning. Landsc. Archit. 2022, 29, 106–111. [Google Scholar] [CrossRef]
  15. Wu, S.J.; Hu, Y.K. Research on Spatial Hyper-Links in Commercial Complexes Based on Deep Learning. South Archit. 2022, 1, 61–68. [Google Scholar]
  16. Hu, Y.K.; Ding, M.Y.; Wang, Z.Q.; Zhang, K. The Application Potential Research on Computer Vision Technology in Urban Street Spaces. Landsc. Archit. 2017, 10, 50–57. [Google Scholar] [CrossRef]
  17. Ding, M.Y. The Crowd Behavior Prototype Study of Urban Street Pedestrian Space Based on Computer Vision Technology. Master’s Thesis, Tianjin University, Tianjin, China, 2018. [Google Scholar]
  18. Wei, Y.; Forsyth, D.A. Learning the Behavior of Users in a Public Space through Video Tracking. In Proceedings of the 2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION'05)-Volume 1, Breckenridge, CO, USA, 5–7 January 2005; pp. 370–377. [Google Scholar]
  19. Hou, J.; Chen, L.; Zhang, E.; Jia, H.; Long, Y. Quantifying the Usage of Small Public Spaces Using Deep Convolutional Neural Network. PLoS ONE 2020, 15, e0239390. [Google Scholar] [CrossRef]
  20. Liu, J.; Meng, B.; Yang, M.; Peng, X.; Zhan, D.; Zhi, G. Quantifying Spatial Disparities and Influencing Factors of Home, Work, and Activity Space Separation in Beijing. Habitat Int. 2022, 126, 102621. [Google Scholar] [CrossRef]
  21. Wong, P.K.-Y.; Luo, H.; Wang, M.; Leung, P.H.; Cheng, J.C.P. Recognition of Pedestrian Trajectories and Attributes with Computer Vision and Deep Learning Techniques. Adv. Eng. Inform. 2021, 49, 101356. [Google Scholar] [CrossRef]
  22. Liang, S.; Leng, H.; Yuan, Q.; Wang, B.; Yuan, C. How Does Weather and Climate Affect Pedestrian Walking Speed during Cool and Cold Seasons in Severely Cold Areas? Build. Environ. 2020, 175, 106811. [Google Scholar] [CrossRef]
  23. Jiao, D.; Fei, T. Pedestrian Walking Speed Monitoring at Street Scale by an In-Flight Drone. PeerJ Comput. Sci. 2023, 9, e1226. [Google Scholar] [CrossRef] [PubMed]
  24. Xuan, W.; Zhao, L. Research on Correlation between Spatial Quality of Urban Streets and Pedestrian Walking Characteristics in China Based on Street View Big Data. J. Urban Plan. Dev. 2022, 148, 05022035. [Google Scholar] [CrossRef]
  25. Zhou, S.; Deng, L.; Kwan, M.-P.; Yan, R. Social and Spatial Differentiation of High and Low Income Groups’ out-of-Home Activities in Guangzhou, China. Cities 2015, 45, 81–90. [Google Scholar] [CrossRef]
  26. Ekawati, S.A. Children–Friendly Streets as Urban Playgrounds. Procedia Soc. Behav. Sci. 2015, 179, 94–108. [Google Scholar] [CrossRef]
  27. Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7. Available online: https://github.com/WongKinYiu/yolov7 (accessed on 23 April 2023).
  28. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788. [Google Scholar]
  29. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar]
  30. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv Preprint 2018, arXiv:1804.02767. [Google Scholar]
  31. Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv Preprint 2020, arXiv:2004.10934. [Google Scholar]
  32. Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. arXiv Preprint 2022, arXiv:2209.02976. [Google Scholar]
  33. Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar]
  34. Jocher, G. YOLOv5. Available online: https://github.com/ultralytics/yolov5 (accessed on 21 April 2023).
  35. Zhuo, L.; Yuan, S.; Li, J.F. Pedestrian Multi-Attribute Collaborative Recognition Method Based on ResNet50 and Channel Attention Mechanism. Meas. Control Technol. 2022, 41, 1–8, 15. [Google Scholar] [CrossRef]
  36. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
  37. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  38. Deng, Y.; Luo, P.; Loy, C.C.; Tang, X. Pedestrian Attribute Recognition At Far Distance. In Proceedings of the 22nd ACM International Conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; ACM: Orlando, FL, USA, 2014; pp. 789–792. [Google Scholar]
  39. Li, D.; Zhang, Z.; Chen, X.; Ling, H.; Huang, K. A Richly Annotated Dataset for Pedestrian Attribute Recognition. arXiv Preprint 2016, arXiv:1603.07054. [Google Scholar]
  40. Wang, X.; Zheng, S.; Yang, R.; Luo, B.; Tang, J. Pedestrian Attribute Recognition: A Survey. Pattern Recognit. 2021, 121, 108220. [Google Scholar] [CrossRef]
  41. Github. LabelImg. Available online: https://github.com/HumanSignal/labelImg/releases (accessed on 21 April 2023).
Figure 1. A technical road map.
Figure 2. Behavior detection algorithm framework diagram.
Figure 3. SE-Residual Block.
Figure 4. Crowd attributes, behaviors, and spatial relationships map.
Figure 5. Location map.
Figure 6. Annotation with LabelImg v1.8.1 software.
Figure 7. Pedestrian multi-attribute and behavior labels.
Figure 8. Schematic diagram of the fully connected layer of SE-ResNet for pedestrian attribute recognition.
Figure 9. Training performance of the SE-ResNet50 model. (a) Error value of the pedestrian attribute model. (b) Accuracy of gender attributes. (c) Accuracy of age attributes. (d) Precision of gender attributes. (e) Recall of gender attributes. (f) F1 value of gender attributes. (g) Error value of behavior model. (h) Accuracy of behavior model. (i) Precision of behavior model. (j) Recall of behavior model. (k) F1 value of behavior model.
Figure 10. Bajiao Cultural Square floor plan.
Figure 11. Live video capture view. (a) Real-life image of point A and (b) real-life image of point B.
Figure 12. Statistical chart of pedestrian attributes. (a) Statistical chart of pedestrian attributes at point A and (b) statistical chart of pedestrian attributes at point B.
Figure 13. Statistical map of pedestrian behavior. (a) Statistical graph of children's behavior at point A, (b) statistical graph of adults' behavior at point A, (c) statistical graph of elderly people's behavior at point A, (d) statistical graph of children's behavior at point B, (e) statistical graph of adults' behavior at point B, and (f) statistical graph of elderly people's behavior at point B.
Table 1. Population attribute labels of the dataset.
| Attribute Order | Attribute | Attribute Category |
|---|---|---|
| 1 | Gender | Male |
| 1 | Gender | Female |
| 2 | Age | Child |
| 3 | Age | Adult |
| 4 | Age | Geriatric |
Table 2. Pedestrian attribute classification table.
| Attribute | Category | Legend |
|---|---|---|
| Gender | Female | (example image) |
| Gender | Male | (example image) |
| Age | Geriatric | (example image) |
| Age | Adult | (example image) |
| Age | Child | (example image) |
Images were processed for privacy.
Table 3. Behavior type classification table.
| Behavior Category | Behavior Code | Legend |
|---|---|---|
| Exercise | 1 | (example image) |
| Running | 2 | (example image) |
| Sitting | 3 | (example image) |
| Walking | 4 | (example image) |
| Standing | 5 | (example image) |
| Chess or Card Games | 6 | (example image) |
Images were processed for privacy.
Table 4. Hyper-parameters for model training.
| Hyper-Parameter | Setup |
|---|---|
| Batch size | 128 |
| Learning rate | 0.001 |
| Epochs | 50 |
Table 5. Quantitative comparisons against various methods on the PETA+ custom dataset.
| Dataset | PETA+ Custom Dataset | | | | Custom Dataset | | | |
| Metric | Accu. | Prec. | Recall | F1 | Accu. | Prec. | Recall | F1 |
|---|---|---|---|---|---|---|---|---|
| SE-ResNet50 | 83/96.3 | 84 | 84 | 84 | 85 | 86 | 83 | 86 |
Source: Created by the authors.
Table 6. Spatial characteristic analysis.
| Space Type | Space Environment | Space Elements |
|---|---|---|
| Square space | Large scale, open space, unobstructed | Hard paving, stage, street lights |
| Leisure space | Moderate scale, shaded, with a certain degree of enclosure at the top | Seats, tree crowns, tree pools |
| Fitness space | Narrow and long space | Fitness equipment, shade, seats |
Table 7. Analysis of behavioral and spatial characteristics.
| Behavioral Activity | Required Place: Spatial Environment | Required Place: Spatial Elements | Actual Place: Spatial Environment | Actual Place: Spatial Elements |
|---|---|---|---|---|
| Exercise | Open | Soft paving, fitness equipment | Narrow | Hard paving, fitness equipment |
| Running | Open space | Large area paving | Open space | Large area paving |
| Sitting | Small scale, shaded | Tree canopy, seats | Small scale, shaded | Tree canopy, seats |
| Walking | Shaded | Seats, walkways | Shaded | Seats |
| Chess and cards | Independent, enclosed, quiet | Table and chair facilities, tree canopy, shade | Non-independent, not enclosed, noisy | Seats |
