Article

Predicting and Optimizing Restorativeness in Campus Pedestrian Spaces based on Vision Using Machine Learning and Deep Learning

1 School of Architecture and Design, Harbin Institute of Technology, Harbin 150001, China
2 Key Laboratory of Cold Region Urban and Rural Human Settlement Environment Science and Technology, Ministry of Industry and Information Technology, Harbin 150001, China
* Authors to whom correspondence should be addressed.
Land 2024, 13(8), 1308; https://doi.org/10.3390/land13081308
Submission received: 12 July 2024 / Revised: 15 August 2024 / Accepted: 16 August 2024 / Published: 18 August 2024
(This article belongs to the Section Land Planning and Landscape Architecture)

Abstract

Making campus pedestrian spaces restorative is vital for enhancing college students' mental well-being. This study proposes an objective and comprehensive reference for optimizing restorative campus pedestrian spaces that are conducive to students' mental health. Eye-tracking technology was employed to examine gaze behaviors in these landscapes, while a Semantic Difference (SD) questionnaire identified key environmental factors influencing the restorative state. Additionally, this study validated the use of virtual reality (VR) technology for this research domain. Building height difference (HDB), tree height (HT), shrub area (AS), ground hue (HG), and ground texture (TG) correlated significantly with the restorative state (ΔS). VR simulations with various environmental parameters were used to elucidate the impact of these five factors on ΔS. Subsequently, machine learning models were developed and assessed, and a genetic algorithm was applied to refine the optimal restorative design range of campus pedestrian spaces. The results are intended to help improve students' attention restoration and to provide methods and references for creating more restorative campus environments that support students' mental health and academic performance.

1. Introduction

Given their high cognitive demands, educational environments are challenging, especially for university students, who require prolonged focus for learning. This makes students more vulnerable to psychological issues, including stress, anxiety, mental fatigue, and depression [1,2]. Research indicates that restorative environments help individuals regain depleted attention, alleviate psychological stress, and trigger positive physiological and psychological responses [3,4]. As pedestrian spaces are indispensable parts of campuses, their vitality and attractiveness significantly impact students' health. Optimizing environmental features and visual conditions can significantly reduce stress levels, whereas exposure to tense visual environments may decrease efficiency and focus [5]. Numerous studies have revealed the crucial role of visual environmental perception in restoring students' attention, with visual perception quality correlating positively with restorative benefits [6,7,8], emphasizing the importance of restorative design in enhancing students' attention restoration.
Studies have employed various methods to explore the impact of spatial factors on visual environmental perception, including field visits [9], scene photos [10], questionnaire surveys [11], street view images [12], and virtual reality (VR) technology [13]. Advances in computer vision have made it possible to treat visual environmental perception as a quantifiable regression problem by analyzing feedback from millions of users on places. Virtual environments, with rich perceptual stimuli and vivid experiences, provide a high level of "presence", effectively eliciting participants' emotional responses [14]. Omar integrated physiological indicators to compare the relationship between physical and virtual walking [15]. Eye trackers enabled data collection on the number and duration of focal points and areas of interest (AOI), which was then used to filter architectural elements [16]. Combining VR and eye trackers enables efficient and accurate visual data collection from built environments [17]. Pei revealed visual attention patterns by analyzing physiological signals and eye-tracking data collected while roaming in a virtual environment, which were used to evaluate architectural design proposals [18]. Immersive Virtual Environments (IVEs) provide scenarios with high degrees of presence and immersion, and Ma's study on lighting design proved the similarity in participants' responses between IVE simulations and real environments [19]. Subjective data, obtained primarily through interviews and questionnaire surveys, often fail to accurately reflect the connection between visual perception and spatial factors, and studying how spatial feature parameters shape the visual environment in real physical settings is subject to significant limitations. VR, however, provides accurate data for visual environmental research.
In 1989, Stephen Kaplan and Rachel Kaplan introduced the Attention Restoration Theory (ART), focusing on the restorative and enhancing effects of environments on human attention [20]. Subsequently, Hartig developed the Perceived Restorativeness Scale (PRS) in 1997 to assess environmental impacts on attention restoration [21]. Perceived space quality is crucial in predicting space restorativeness [22]. Ulrich highlighted the decisive role of the relative quantity of natural and human-made attributes in environmental perception and attention restoration [23]. Silvia revealed that natural factors (e.g., vegetation and water bodies) and constructed factors (e.g., community buildings, squares, and sculptures) significantly promote restoration [24]. They found that, especially in urban settings, such landscapes were more conducive to restoration than some natural landscapes [24]. Jenny demonstrated how urban walking facilitates recovery in adults with poor mental health [25]. Catherine identified urban pedestrian environment features impacting pedestrian psychology and showed that environmental characteristics had different health impacts on different groups [26]. Some researchers have compared the differences in restorative experiences during outdoor walking activities across seasons, thoroughly discussing restorative effects [27]. Lu identified key spatial perception factors within campus environments crucial for bolstering students' attention restoration [28]. Yasser devised a comprehensive campus ART evaluation tool that synergizes restorative landscape design concepts, visual landscape preferences, and strategic campus planning [29]. Although existing research indicates a strong positive correlation between visual perception and psychological restoration, and urban pedestrian spaces have been studied in depth, research on campus spaces classifies spatial features too broadly, ignoring the contributions of specific physical dimensions (e.g., fences and paths) and the impact of walking activities on environmental perception. Moreover, studies have generally used small samples and have not fully examined how different spatial features in campus environments affect students' stress recovery.
In recent years, advancements in technology have opened new avenues for exploring space perception and its effects on student well-being. Among these advancements, machine learning (ML) and deep learning (DL) have emerged as powerful tools for analyzing complex data and uncovering patterns that are not immediately apparent through traditional statistical methods [30,31,32]. Machine learning techniques are widely applied in built environment studies, including building energy consumption [33,34], thermal comfort [35,36], wind environment [37], and environmental pollution [38]. DL, particularly suited for handling high-dimensional and large-scale datasets, is used in image and video analysis for object recognition, image segmentation, and behavior detection, aiding in monitoring environmental changes and user interactions [39,40]. For instance, analyzing video data can assess the impact of changes in natural environments on users’ psychological states [41]. Convolutional Neural Networks (CNN) are adept at automatically extracting and learning features from images, making them ideal for vision-oriented spatial studies [42,43]. The application of ML and DL in user behavior analysis helps understand activity patterns and preferences in different environments [44]. This information can optimize environmental design and enhance the effectiveness of restorative environments [45]. Despite the widespread application of ML and DL in architectural and environmental studies, research on spatial quality prediction based on visual perception remains limited. Most studies have focused on urban or indoor environments, often neglecting campus-specific scenarios. Additionally, horizontal comparisons between the predictive accuracies of different ML and DL models are scarce, limiting the understanding of model performance and the further application and development of these technologies.
The main purpose of this study was to comprehensively understand the factors in the campus pedestrian environment that influence students' attention restoration and to propose optimization schemes. This study aimed to achieve four scientific objectives. 1. Key Spatial Factors for Attention Restoration: To systematically identify and extract the restorative spatial factors that significantly impact students' attention restoration during campus walks, using eye-tracker experiments and questionnaires to establish a foundational understanding of these factors. 2. Mechanistic Analysis of Spatial Factors: To investigate the underlying mechanisms by which the identified spatial factors influence the attention-restoration process in college students, employing restorative orthogonal experiments to uncover causal relationships and interaction effects. 3. Predictive Modeling of Spatial Restorativeness: To develop a spatial restorativeness prediction model, we compared the performance of four machine learning and deep learning models, namely Decision Tree (DT), k-Nearest Neighbors (KNN), Artificial Neural Network (ANN), and Convolutional Neural Network (CNN), and selected the model with the best training performance as the final predictive model. 4. Optimization of Spatial Design Parameters: To explore the optimization mechanisms of spatial design parameters for campus pedestrian spaces using a Genetic Algorithm (GA), we aimed to uncover the theoretical underpinnings and empirical validation of design features that enhance attention restoration.

2. Materials and Methods

2.1. Study Design and Restoration States Assessment

2.1.1. Study Design

This study investigates the influence of campus spatial factors on college students' attention restoration during walking and determines the optimal parameter ranges for campus pedestrian space restorativeness. The methodology consists of three parts, as shown in Figure 1. First, an SD questionnaire was developed to gather subjective data from college students in pedestrian spaces, followed by a correlation analysis to identify key restorative perception factors. We then conducted eye-tracking experiments in both real-world and VR scenarios to validate the findings of the SD questionnaire. In the second phase, orthogonal experiments systematically varied these factors to determine their impacts on attention restoration and the underlying mechanisms. Finally, we used data from these experiments to build predictive restorativeness models and employed a genetic algorithm to optimize the design parameters for restorative campus pedestrian spaces.
We recruited student participants through a combination of online and offline channels, releasing recruitment information via social media and campus bulletin boards. The questionnaire was disseminated to a total of 537 students (267 females) across 10 universities. Additionally, 200 university students aged 18–35 (mean age, 22.4 years), including 106 females, participated in the offline experiments, which encompassed both the eye-tracking and orthogonal experiments. These 200 participants could engage in one or two experiments, with at least 36 h between sessions to mitigate fatigue and practice effects. The specific number of participants for each part of the experiment is given in the corresponding sections. Participation was voluntary, and all participants provided informed consent prior to participation. The Medical Ethics Committee of Harbin Institute of Technology (ethics number: HIT-2024003) approved the experimental protocol.

2.1.2. Restoration State Assessment Method

The restorative state was evaluated using the Restorative State Scale (RSS), composed of the nine statements shown in Table 1 [46] and based on the Perceived Restorativeness Scale (PRS) [47]. The PRS is widely used in studies of restorative environments, but because it can only be used to assess environmental settings (e.g., scene photographs), it is not suitable for studies that focus on changes in restorative states over time [46]. Instead, the RSS focuses on capturing the overall experience of nature and measuring different levels and functions of the restorative experience [21,48]. In the present study, restorative state measurements were recorded during the SD questionnaire, at the orthogonal experiment baseline (S1), after the stressor (S2), and after the simulated walk (S3). Because restorative states were measured repeatedly across these phases for each participant, the concise 9-item RSS is more applicable than other, more voluminous scales; its nature and volume suit our aim of examining the interrelation and temporal dynamics of restoration levels with the environment. Response options of the RSS ranged from 1 to 5, as shown in Table 2, aligning with 5 adjectives from "Do not feel at all" to "Feel very strongly". At the end of each experimental stage, participants assessed their stress state using the 5-point scale, and the average score of the nine items was considered the current restorative state.

2.2. Selection of Restorative Factors in Campus Pedestrian Spaces

2.2.1. SD Questionnaire

An SD questionnaire was employed to screen for spatial factors (space components and their attributes and characteristics) in the campus pedestrian space that significantly impact students’ restorative states. This questionnaire was designed based on the basic components of campus pedestrian spaces. It consisted of 23 items, including 9 questions from the Restorative State Scale (RSS) used to assess the respondents’ restorative states. Participants answered the questionnaire based on the pedestrian space they were in or the pedestrian space they most frequently visited.
To characterize the attributes of the spatial factors, 10 sets of adjectives were developed based on general visual perception and cognition, as detailed in Table 2. The remaining 16 questions combined these spatial factors and adjective groups to describe the restorative attributes of the campus pedestrian spaces. The questionnaire used a 5-point scale, with 1 representing very poor and 5 representing very good. The responses to the 9 RSS questions were averaged to quantify the participants' restorative states, providing a clear measure of how different spatial factors influence restoration. The SD questionnaire is available in the Supplementary Materials. The questionnaire was distributed to 537 students (267 females) across 10 universities through field and online surveys, with 29% collected in the field. A total of 446 valid responses were received, yielding an 83.05% validity rate. The collected data include students' ratings of spatial perception attributes and their corresponding restorative states. Correlation analysis was then conducted to identify spatial factors strongly associated with restorative states.
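As an illustration of this screening step, the following is a minimal sketch in Python, assuming the 446 valid responses are stored in a hypothetical CSV file (sd_questionnaire.csv) with one column per SD item and nine RSS item columns; all column and file names are placeholders rather than the authors' actual workflow, and the |R| > 0.20 threshold follows Section 3.1.1.

```python
# Hypothetical sketch of the factor screening, not the authors' original pipeline.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("sd_questionnaire.csv")          # 446 valid responses (assumed file)
rss_items = [f"RSS_{i}" for i in range(1, 10)]    # nine restorative-state items
df["S_mean"] = df[rss_items].mean(axis=1)         # restorative state = mean of 9 items

# The five factors retained in the paper; the other SD items would be screened the same way.
factors = ["HDB", "HT", "AS", "TG", "HG"]
selected = {}
for f in factors:
    rho, p = spearmanr(df[f], df["S_mean"])
    if abs(rho) > 0.20 and p < 0.05:              # screening criterion used in Section 3.1.1
        selected[f] = round(rho, 2)

print("Factors correlated with the restorative state:", selected)
```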

2.2.2. Eye-Tracking Experiment

To validate the subjective data from the SD questionnaire, we conducted eye-tracking experiments in both real-world and VR scenes to obtain objective gaze characteristics for verification. Additionally, the feasibility of VR technology was confirmed by comparing eye-movement data between real-world and VR scenes, supporting subsequent experiments.

Eye-Tracking Experiment in Real Scene

To investigate the visual environmental factors of campus pedestrian spaces that concern college students, we selected five pedestrian spaces from Harbin Institute of Technology, Harbin Engineering University, and Harbin Normal University as research sites, as shown in Table 3. These sites were chosen based on the type of institution and the construction period of the buildings. Images of these spaces were captured during both the vegetation and frost periods to account for seasonal variations. However, the analysis focuses on the overall visual environmental factors affecting pedestrian spaces rather than seasonal differences.
In this study, eye-tracking data were recorded and analyzed using Tobii Pro Glasses 2 and Tobii Pro Lab (Tobii, Stockholm, Sweden), as shown in Figure 2b [49]. Eye movements were recorded in pixels using the top-left screen corner as the coordinate origin [50]. Raw data were manually mapped to scene snapshots in Tobii Pro Lab [51], with snapshots categorized into AOIs: ground (20%), buildings (30%), trees (15%), shrubs (10%), sky (15%), facilities (5%), and pedestrians (5%). Visual attention was assessed using total fixation (TFD) and glance (TGD) durations, percentages of fixation (PFD) and glance (PGD) durations, number of fixation points (NF), and number of visits (NV) [52]. For this experiment, 32 students participated, with 6–7 students recruited from each site. Participants were briefed on the tasks before the eye-tracking experiments, which were conducted outdoors between 9 a.m. and 5 p.m. The procedure and participants are shown in Figure 2a,c. After being equipped with calibrated eye-tracking devices, participants walked freely in designated campus pedestrian spaces for 2 min while their eye movements were recorded. To ensure data quality, the accuracy of the eye-tracker was evaluated and confirmed to be sufficiently precise, as detailed in Appendix A.
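For readers wishing to reproduce this kind of AOI aggregation, the sketch below shows one possible way to compute TFD, PFD, NF, and NV from a fixation-level export; the file name and the columns aoi, duration_ms, and visit_id are assumptions about the export format, not the authors' actual processing chain.

```python
# Illustrative AOI-metric aggregation from a fixation-level table (assumed format).
import pandas as pd

fix = pd.read_csv("fixations.csv")                       # one row per mapped fixation
total = fix["duration_ms"].sum()

metrics = (
    fix.groupby("aoi")
       .agg(TFD=("duration_ms", "sum"),                  # total fixation duration
            NF=("duration_ms", "size"),                  # number of fixation points
            NV=("visit_id", "nunique"))                  # number of visits to the AOI
       .assign(PFD=lambda d: 100 * d["TFD"] / total)     # percentage of fixation duration
       .sort_values("TFD", ascending=False)
)
print(metrics)
```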

Eye-Tracking Experiment in VR

To validate the VR scenarios, virtual environments were developed based on five pedestrian spaces, where eye-tracking experiments were conducted. Ensuring that participants’ gaze behavior in VR matched real-world contexts was crucial. Participants navigated these virtual landscapes by physically walking in a secure indoor space. The VR and real-world eye-tracking experiments were aligned, as shown in Figure 3a. For consistency, VR scenarios were based on both the vegetation and frost periods of the real pedestrian spaces, as shown in Figure 3c, maintaining visual proportions of elements like buildings, ground, sky, trees, shrubs, and amenities. For this part, 35 students participated, with 7 students recruited from each site. This study was the first to use HTC VIVE Pro Eye, a VR headset with integrated eye-tracking [53]. The Software Development Kit (SDK, version 1.7.1.1081) was integrated into Unity [54], with unfiltered data accessed via Tobii Pro SDK and filtered data processed using the Vive SRanipal SDK. The eye-tracking method aligns with that of Tobii Pro Glasses 2, mapping gaze data to panoramic images, as shown in Figure 3b.

2.3. Restorative Orthogonal Experiment

To thoroughly investigate the effects of the restorative spatial factors identified in the previous section on college students' restorative states, as well as the underlying mechanisms, we employed an orthogonal experimental design. The orthogonal experiment is an efficient statistical technique that analyzes the impact of multiple factors through a reduced number of trials by systematically arranging the experimental conditions. According to the generated orthogonal experimental scheme, the standard model was modified to generate dozens of pedestrian spaces, in which participants then performed virtual walking and attention-restoration tasks.

2.3.1. VR Scenes Setting

To conduct subsequent orthogonal experiments, we first needed to design the necessary VR scenes and scene parameters. Using Rhino and the Unity 3D platform, we created a standard model of campus pedestrian spaces situated between driveways and campus buildings, reflecting typical campus environments with wide applicability.
Based on the restorative spatial factors identified in the previous section and the current conditions of Chinese campuses, we determined the scene parameters for each factor, as detailed in Table 4.
Shrub coverage in the visual field was calculated using Photoshop by measuring the pixel area occupied by shrubs, dividing it by the total pixel count, and multiplying by 100 to obtain the percentage of the view occupied by shrubs. Ground texture was manipulated by adjusting the size of paving tiles, incorporating six tile sizes based on common Chinese pedestrian walkway and plaza designs. This approach ensured a realistic and visually engaging simulation of pedestrian spaces, reflecting the complexity and diversity of urban landscapes. Ultimately, through the generated orthogonal experimental scheme, we established 49 experimental scenarios, as shown in Figure 4.
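The shrub-coverage measurement described above was performed manually in Photoshop; as a point of comparison, an equivalent calculation could be scripted as below, assuming a binary mask image (shrub_mask.png, a hypothetical file) in which shrub pixels are white.

```python
# Scripted counterpart to the manual pixel measurement (assumed mask format).
import numpy as np
from PIL import Image

mask = np.array(Image.open("shrub_mask.png").convert("L"))  # grayscale shrub mask
shrub_pixels = np.count_nonzero(mask > 127)                 # pixels classified as shrub
coverage_pct = 100.0 * shrub_pixels / mask.size             # AS, % of the visual field
print(f"Shrub area AS = {coverage_pct:.1f}%")
```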

2.3.2. Experimental Procedure

In the experiment, as shown in Figure 5a, participants were briefed on the procedure and provided consent. A 2 min relaxation session followed to calm participants before commencing the experiment. An initial assessment of the participants' restorative state was conducted using the RSS (S1). Participants were then subjected to stress-inducing stimuli created in E-Studio, combining visual and mathematical challenges to deplete participants' directed attention and reduce their restorative state. The visual challenge was a sequence of 15 images depicting violence, disasters, and extreme emotional states, chosen randomly from a pool of 50 images, as shown in Figure 5b. Each of the 15 images was displayed for 2 s, and 3 of these images were shown more than once; participants were to notify the experimenter immediately upon recognizing repeats. This task kept participants focused on the content of the pictures and thus more effectively evoked stress by eliciting feelings of discomfort and anxiety. The mathematical challenge involved summing the number presented in a circle's center with its predecessor and selecting the correct sum from options around the circle's edge, all within a stringent 3 s timeframe (reduced to 2 s after 10 rounds). Correct answers scored points, whereas incorrect answers or timeouts were met with a noise deterrent. After stressor exposure, the participants' restorative states were measured again (S2). Thereafter, participants embarked on a 2 min virtual walk through a simulated campus pedestrian space wearing an HTC VIVE Pro Eye headset. After the virtual stroll, the RSS was administered again to evaluate the restorative state (S3). The experiment, designed to explore how environmental stimuli affect psychological restoration, lasted approximately 24 min.
A total of 180 university students participated in this experiment, with ages ranging from 18 to 35 years and an average age of 22.3 years. Among them, 106 were female. To prevent the subjects from becoming immune to the stress procedure, each subject could choose to participate in up to two orthogonal VR experimental scenarios. Additionally, to avoid VR-induced dizziness, fatigue, and practice effects, each session was limited to a maximum of 30 min, with a minimum interval of 24 h between sessions. In this part of the study, 227 experimental sessions were conducted, of which 200 yielded valid data, resulting in an effective rate of 88.1%.

2.3.3. Analysis of Influence Mechanism

To investigate the impact mechanisms of restorative spatial factors in campus pedestrian spaces on the restorative state (ΔS) of college students, we employed multivariate non-linear fitting. Using Origin, we extended the fitting beyond linear modeling by incorporating polynomial terms to capture potential non-linear interactions between variables. Additionally, a comparative analysis was conducted between linear and non-linear regression. The goodness-of-fit of each model was evaluated using R2, whose value ranges from 0 to 1, with values closer to 1 indicating stronger explanatory power.
Multiple statistical analyses were employed to comprehensively analyze the data and address various research questions. Linear regression was used to identify overall trends and relationships between environmental factors and attention restoration. Additionally, non-linear models, such as polynomial regression, were applied to capture complex interactions and non-linear effects that could not be detected through linear models alone.
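The paper performed this fitting in Origin; the following sketch reproduces the linear-versus-quadratic comparison with scikit-learn on synthetic placeholder data, so the numbers it prints are illustrative only.

```python
# Linear vs. quadratic fit of ΔS against a single spatial factor (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 28, size=(200, 1))                            # e.g., HDB values (placeholder)
y = -0.01 * (X[:, 0] - 14) ** 2 + 3 + rng.normal(0, 0.4, 200)    # synthetic ΔS with curvature

lin = LinearRegression().fit(X, y)
r2_linear = r2_score(y, lin.predict(X))

X_quad = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
quad = LinearRegression().fit(X_quad, y)
r2_quadratic = r2_score(y, quad.predict(X_quad))

print(f"linear R^2 = {r2_linear:.3f}, quadratic R^2 = {r2_quadratic:.3f}")
```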

2.4. Training of Predictive Models

To develop the spatial restorativeness prediction model, this study evaluated and compared the capabilities of Decision Tree, KNN, ANN, and CNN models in learning and extracting data features, in order to select the surrogate model with the highest prediction accuracy. The training phase used classification models, with the RSS as the evaluation criterion for the restorative state on a 5-point rating scale; accordingly, ΔS (S3 − S2) was divided into five classes for training. The training dataset comprised 200 sets of the selected environmental factors and their corresponding restorative capabilities, ΔS. The values of the environmental impact factors were used as the input dataset when training the surrogate model, with the ΔS scores as the output.

2.4.1. Decision Tree (DT)

Decision Trees, consisting of nodes and directed edges, represent conditional probability distributions defined on feature and decision spaces and are used for both classification and regression [55]. Node impurity is measured by the entropy E(t), where p(i|t) is the proportion of samples of class i in node t and c is the number of classes:

$$E(t) = -\sum_{i=1}^{c} p(i \mid t)\, \log_2 p(i \mid t)$$
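As a minimal illustration (the paper trained its models in the MATLAB Classification Learner, see Section 3.3.1), an entropy-based Decision Tree could be fitted in Python as follows; the data here are random placeholders shaped like the study's dataset (200 samples, 5 factors, five ΔS classes), and the depth limit is an arbitrary choice.

```python
# Entropy-based Decision Tree on placeholder data shaped like the study's dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((200, 5))                 # stand-in for HDB, HT, AS, TG, HG values
y = rng.integers(1, 6, 200)              # stand-in for five-level ΔS labels

dt = DecisionTreeClassifier(criterion="entropy", max_depth=5, random_state=0)
print("DT cross-validated accuracy:", cross_val_score(dt, X, y, cv=5).mean())
```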

2.4.2. k-Nearest Neighbor (KNN)

KNN is a non-parametric, instance-based learning algorithm used for classification and regression [56]. Inputs consist of the k closest training examples in the feature space. This study used KNN classification, in which an object is assigned to the class most common among its k nearest neighbors. The neighborhood of x covering its k nearest points is denoted N_k(x), and the class y of x is determined within N_k(x) according to the classification decision rule, where x_i is the feature vector of an instance and y_i is its class:

$$y = \arg\max_{c_j} \sum_{x_i \in N_k(x)} I(y_i = c_j), \qquad i = 1, 2, \dots, N; \; j = 1, 2, \dots, K$$

where I is the indicator function: I = 1 when y_i = c_j, and I = 0 otherwise.
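A companion sketch for the KNN classifier, using the same placeholder data shape; the choice k = 5 is illustrative and not reported in the paper.

```python
# KNN classifier on placeholder data; majority vote among the k nearest neighbors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.random((200, 5))                     # stand-in for the five spatial factors
y = rng.integers(1, 6, 200)                  # stand-in for five-level ΔS labels

knn = KNeighborsClassifier(n_neighbors=5)    # k = 5 (illustrative choice)
print("KNN cross-validated accuracy:", cross_val_score(knn, X, y, cv=5).mean())
```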

2.4.3. Artificial Neural Network (ANN)

ANN models contain an input layer, one or more hidden layers, and an output layer [57], where each Multilayer Perceptron (MLP) neuron is connected to all neurons in the next layer, as shown in Figure 6.
An MLP network with input u(k), one hidden layer, and target output h(k) was used for restorative prediction of campus pedestrian spaces with different parameters, as follows:

$$h(k) = \sigma_2\big(w_2 \cdot x(k) + b_2\big)$$

$$x(k) = \sigma_1\big(w_1 \cdot u(k) + b_1\big)$$

where x(k) is the output vector of the hidden layer; w_1 is the connection weight matrix from the input layer to the hidden layer; w_2 is the connection weight matrix from the hidden layer to the output layer; and b_1 and b_2 are the bias vectors of the hidden and output layers, respectively [58]. The following transfer function was used between the hidden and output layers:

$$\sigma(P) = \frac{1 - e^{-2P}}{1 + e^{-2P}}, \qquad P = \sum_i w_i \cdot x_i$$
During dataset building, the five spatial factors (HDB, HT, AS, TG, and HG) served as the input and ΔS (S3 − S2) as the output, and the 200 sets of experimental data were gradually fed into the ANN model. During ANN training, the data were divided into training, validation, and test datasets: 70% were used for training, 15% for validation, and the remaining 15% for testing.
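The transfer function above is mathematically equivalent to tanh, so an analogous setup can be sketched with scikit-learn as below; the two hidden layers of 10 neurons mirror the configuration reported in Section 3.3.1 (which uses ReLU rather than tanh, and either could be swapped in), the 70/15/15 split follows the text, and the data are random placeholders.

```python
# MLP sketch on placeholder data with a 70/15/15 split (tanh activation, equivalent to
# the transfer function above; Section 3.3.1 reports ReLU, which could be used instead).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.random((200, 5))                  # stand-in for HDB, HT, AS, TG, HG
y = rng.integers(1, 6, 200)               # stand-in for five-level ΔS labels

X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, train_size=0.70, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(10, 10), activation="tanh",
                    max_iter=2000, random_state=0).fit(X_train, y_train)
print("validation accuracy:", mlp.score(X_val, y_val))
print("test accuracy:", mlp.score(X_test, y_test))
```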

2.4.4. Convolutional Neural Network (CNN)

The image dataset consisted of 200 scene screenshots, with the location, angle, and size of each screenshot maintained consistently. After preprocessing, the input image size for the CNN was set to a 224 × 224 pixel RGB three-channel image (3, 224, and 224), with the corresponding 200 pedestrian space restorative capabilities ΔS as the CNN’s output. The dataset was divided into 70% training, 15% validation, and 15% testing. The CNN architecture used in this study included an input layer, convolutional, pooling, and fully connected layers, and an output layer, as shown in Figure 7.
The input images were processed by the convolutional layer, as detailed in Equation (8), underwent activation through the ReLU function in Equation (9), and were then down-sampled by max pooling, as shown in Equation (10). Throughout this phase, the configuration maintained a 3 × 3 filter size and strides of 1 and 3 for the convolutional and pooling layers, respectively. Following multiple iterations of convolution, activation, and pooling, the resultant feature maps (32, 7, 7) were flattened. The architecture then integrated a fully connected layer (fc1) with a dimensionality of 512, followed by a dropout layer with a 0.09 dropout probability to reduce overfitting, with the ReLU activation function introducing non-linearity. The data then advanced through another fully connected layer (fc2) with a dimensionality of 256, again with ReLU activation, before the final predictive output was produced by a fully connected layer (fc3) with a dimensionality of 1.
$$(I * K)(i, j) = \sum_{m}\sum_{n} I(i + m, j + n) \cdot K(m, n) \qquad (8)$$

$$f(x) = \max(0, x) \qquad (9)$$

$$f(R) = \max_{x \in R} x \qquad (10)$$

where (i, j) denotes the position within the output feature map, m and n index the dimensions of the convolutional kernel, and R is the pooling region.
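The description above can be sketched as follows in PyTorch (the paper does not state which framework it used); the kernel sizes, pooling stride, final (32, 7, 7) feature maps, the fc1/fc2/fc3 dimensionalities, and the 0.09 dropout follow the text, while the intermediate channel counts (16 and 32) are assumptions.

```python
# Hedged PyTorch sketch of the CNN described in the text.
import torch
import torch.nn as nn

class RestorativenessCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3), nn.ReLU(), nn.MaxPool2d(3, stride=3),   # 224 -> 74
            nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(3, stride=3),  # 74 -> 24
            nn.Conv2d(32, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(3, stride=3),  # 24 -> 7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 512), nn.Dropout(0.09), nn.ReLU(),   # fc1 + dropout
            nn.Linear(512, 256), nn.ReLU(),                            # fc2
            nn.Linear(256, 1),                                         # fc3: predicted ΔS
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = RestorativenessCNN()
dummy = torch.randn(4, 3, 224, 224)        # a batch of four scene screenshots
print(model(dummy).shape)                   # torch.Size([4, 1])
```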

2.5. Settings of Restorative Optimization

The Genetic Algorithm (GA) is a sophisticated optimization technique that mimics evolution observed in nature [59]. Inspired by natural selection and genetic inheritance in biological evolution, it forms part of the broader family of Evolutionary Algorithms [60]. Within the GA framework, the fitness level of each individual in a population is determined using a specific fitness (objective) function. This selection favors higher-quality solutions by making the selection probability directly proportional to the individual’s fitness level. To prevent the algorithm from becoming stuck in local optima, a mechanism occasionally selects fewer fit solutions, thereby broadening the algorithm’s capacity for global exploration.
GA optimization was performed using an integrated MATLAB environment. The predicted restorative state (ΔS) was the GA fitness index, pinpointing the most conducive environmental parameter range for optimizing campus pedestrian spaces. Following existing guidelines, GA parameters were defined as having a population size, crossover rate, and migration rate of 200, 0.8, and 0.2, respectively. With inputs of 200 distinct HDB, HT, AS, TG, and HG sets, 200 respective ΔS values were evaluated to ascertain the peak value in each generation, until a satisfactory convergence level or the maximum generation limit was reached.
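The optimization was run in MATLAB's GA environment; the following is a minimal, self-contained Python sketch of the same idea, using the population size of 200 and crossover rate of 0.8 from the text. The variable bounds and the predict_delta_s surrogate are placeholders: in practice the bounds would come from Table 4 and the fitness from the trained prediction model.

```python
# Minimal GA sketch for maximizing predicted ΔS over the five scene parameters.
import numpy as np

rng = np.random.default_rng(0)
BOUNDS = np.array([[0.0, 30.0],    # HDB (m)  -- placeholder ranges, not Table 4 values
                   [3.0, 25.0],    # HT (m)
                   [0.0, 20.0],    # AS (%)
                   [0.05, 1.0],    # TG (m^2)
                   [0.0, 360.0]])  # HG (degrees)

def predict_delta_s(pop):
    """Placeholder surrogate: replace with the trained ANN/CNN restorativeness predictor."""
    centre = BOUNDS.mean(axis=1)
    span = BOUNDS[:, 1] - BOUNDS[:, 0]
    return -np.sum(((pop - centre) / span) ** 2, axis=1)   # toy fitness, peak at box centre

def genetic_algorithm(pop_size=200, crossover_rate=0.8, mutation_rate=0.05, generations=100):
    lo, hi = BOUNDS[:, 0], BOUNDS[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(BOUNDS)))
    for _ in range(generations):
        fitness = predict_delta_s(pop)
        # tournament selection: each slot gets the fitter of two random individuals
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # uniform crossover applied with the given crossover rate
        mates = parents[rng.permutation(pop_size)]
        children = np.where(rng.random(pop.shape) < 0.5, parents, mates)
        keep = rng.random(pop_size) >= crossover_rate
        children[keep] = parents[keep]
        # mutation: resample a gene uniformly within its bounds
        mutate = rng.random(pop.shape) < mutation_rate
        fresh = rng.uniform(np.broadcast_to(lo, pop.shape), np.broadcast_to(hi, pop.shape))
        children[mutate] = fresh[mutate]
        pop = children
    return pop[np.argmax(predict_delta_s(pop))]

best = genetic_algorithm()
print("best parameter set (HDB, HT, AS, TG, HG):", best.round(2))
```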

3. Results

3.1. Selection of Influencing Factors in Campus Pedestrian Spaces

3.1.1. Restorative Factors Correlation Analysis

The mean of the nine questions from each questionnaire was determined to assess the restorative state (S̄); Spearman's rank correlation was utilized to examine relationships between the 12 factors within campus pedestrian spaces and the restorative state, as shown in Figure 8. The R value was used to determine correlation strength. Among the studied environmental factors, significant correlations were observed for |R_HDB| = 0.31 > 0.20, |R_HT| = 0.33 > 0.20, |R_AS| = 0.27 > 0.20, |R_TG| = 0.32 > 0.20, and |R_HG| = 0.28 > 0.20. Consequently, the building height difference (HDB), tree height (HT), shrub area (AS), ground hue (HG), and ground texture (TG) exhibited strong correlations with the restorative state, suggesting that optimizing the design parameters associated with these factors in campus pedestrian spaces may enhance environmental restoration.

3.1.2. Eye-Tracking Experiment Analysis

We analyzed data from 32 eye-tracking experiments of real scenes and hotspot distribution maps, as shown in Figure 9. Metrics such as total fixation duration (TFD), total glance duration (TGD), percentages of fixation (PFD) and glance (PGD) durations, number of fixation points (NF), and number of visits (NV) were evaluated. Ground surfaces attracted the greatest attention, followed by buildings, trees, shrubs, the sky, pedestrians, and facilities, with significantly greater attention given to ground, buildings, and vegetation compared to the sky, pedestrians, and facilities. These findings align with the spatial component results from the questionnaire, confirming the validity of the screening process.
Hotspot distribution maps of the VR eye-tracking experiment from the 35 sets of valid experimental data are shown in Figure 10, which also illustrates the NF and NV results. A visual attention hierarchy existed across environmental factors, with buildings garnering the highest attention, followed by ground surfaces, trees, shrubs, pedestrians, the sky, and facilities. The TFD, TGD, PFD, and PGD showed similar trends, with buildings being the most salient, followed by ground surfaces, trees, shrubs, pedestrians, the sky, and facilities, albeit with varying degrees of significance.
These results validated that the five restorative spatial factors identified through the SD questionnaire—HDB, HT, AS, TG, and HG—are the primary elements in pedestrian spaces that attract students’ attention and influence their restorative state. The eye movement characteristics in the VR experiment are similar to those in the real world, as seen by examining the gaze characteristics in Figure 9 and Figure 10, indicating that the VR experiment is sufficiently credible. However, there was a significant difference in the level of attention to the ground between the two eye-tracking experiments. This may be because, in the real-world scenario, participants needed to focus on the ground to identify potential safety hazards. In contrast, the VR experiment provided a spacious and secure environment, reducing the need for participants to pay as much attention to the ground.

3.2. Restorative Factors Influence Mechanism

The correlations between environmental factors such as HDB, HT, AS, TG, and HG and the restorative state (ΔS) were examined, as shown in Figure 11. The coefficient of determination (R2) for the quadratic fitting function between HDB and ΔS (R2 = 0.283) surpassed that of the linear fitting function (R2 = 0.147). Notably, HDB and ΔS correlated negatively (i.e., as HDB increased, ΔS decreased). However, beyond a critical point (ΔS = 3), ΔS increased with increasing HDB. The HDB range conducive to restoration (ΔS = 5) was 3.42–15 m. Similarly, the R2 value for the quadratic fitting function between HT and ΔS (R2 = 0.225) exceeded that of the linear fitting function (R2 = 0.114). HT and ΔS correlated positively, with ΔS increasing and then decreasing with increasing HT. The optimal HT range for achieving a maximum ΔS (ΔS = 3) was 16–20.9 m. Regarding AS and ΔS, the quadratic fitting function (R2 = 0.213) exhibited a higher R2 than the linear fitting function (R2 = 0.117). The relationship between AS and ΔS first increased and then decreased with increasing AS. The peak ΔS value (ΔS = 5) was within the AS range of 0.07–0.12%. For TG and ΔS, the quadratic fitting function (R2 = 0.225) demonstrated superior performance compared to the linear fitting function (R2 = 0.146). TG and ΔS correlated positively, with ΔS decreasing as TG increased. The maximum ΔS value (ΔS = 5) occurred within the TG range of 0.31–0.62 m2. The quadratic fitting function between HG and ΔS (R2 = 0.160) exhibited a higher R2 than the linear fitting function (R2 = 0.026). The relationship between HG and ΔS first increased and then decreased with increasing HG. The optimal HG range for achieving a maximum ΔS (ΔS = 3) was 136–224°.

3.3. Prediction of Restorativeness

3.3.1. Machine Learning

This section presents a comparative analysis of various classification algorithm models. Utilizing the MATLAB Classification Learner toolbox, Decision Tree (DT), k-Nearest Neighbor (KNN), and Artificial Neural Network (ANN) were deployed to train the prediction model. This model takes environmental factors as inputs to predict the restorative state, denoted as ΔS. A crucial tool for assessing the performance of these classification models, the confusion matrix, is shown in Figure 12, which presents the accuracy confusion matrices for each algorithm in predicting ΔS, thereby comprehensively summarizing their average prediction accuracies. The ANN model utilized a two-layer neural network with 10 neurons per layer, employing the ReLU activation function. Remarkably, the accuracy rate achieved by the ANN was 94.5%, exceeding the other classification results. Conversely, the Decision Tree exhibited a relatively lower accuracy rate (78.0%), whereas the KNN performed similarly, albeit inferior, to the ANN, with a 90.5% accuracy rate.

3.3.2. Deep Learning

Environmental factors were utilized as labels to train a predictive model using a Convolutional Neural Network (CNN) for the restorativeness of campus pedestrian spaces. The model’s efficacy was validated using a dedicated test set derived from the study’s dataset. The model convergence during training is depicted in accuracy curves for both the training and validation sets. The model’s accuracy improved consistently over 200 epochs, achieving a peak of 97.2%, as shown in Figure 13a. Concurrently, the error rate stabilized at a low level toward the end of training iterations.
To assess the predictive performance of the model, we examined confusion matrices, as shown in Figure 13b. The confusion matrices delineated the predictive outcomes across different categories along with their respective proportions within the dataset. The high precision and recall rates across classes indicated the model’s robust classification capability. Specifically, the model demonstrated formidable proficiency in forecasting the restorativeness of campus pedestrian spaces, with an overall accuracy of 95.2%.

3.4. Optimization of Restorativeness

To optimize the restorativeness of campus pedestrian spaces, denoted as ΔS, this study constructed 49 VR scenarios based on different environmental parameters, including HDB, HT, AS, TG, and HG. For each scenario, we focused on its impact on the restoration state, ΔS, which served as the output metric for the Genetic Algorithm (GA). The best fitness value rapidly decreased from an initially high value and then stabilized, as shown in Figure 14. This indicated that, after the algorithm found a preliminarily suitable solution, it gradually refined the search to enhance the solution quality. The change in mean fitness was relatively smooth, demonstrating that the algorithm maintained diversity within the population.
Through GA optimization of the datasets, we successfully obtained the values of environmental factors that most significantly impacted the restoration state. We summarized and analyzed the values of these key environmental factors, and the results are presented in Table 5. According to the outcomes of the GA, in the restorative design of campus pedestrian spaces, the recommended ranges for HDB, HT, AS, TG, and HG were 20–28 m, 8.5–15.5 m, 12–15%, 0.20–0.50 m2, and 140–245°, respectively.

4. Discussion

4.1. Restorative Environmental Factors in Campus Pedestrian Spaces

The spatial factors that significantly affect the recovery of students' attention during walks on campus are the building height difference (HDB), tree height (HT), shrub area (AS), ground hue (HG), and ground texture (TG), identified using an eye-tracking experiment and a questionnaire. Specifically, the results of our experiment showed that natural elements such as trees and shrubs attracted significant attention from students on campus, consistent with the strong restorative perception associated with natural environments [46]. At the same time, buildings and grounds also had a very significant effect on the state of restoration, which is further evidence of the restorative potential of urban elements as well as their positive impact on health [24]. This is also consistent with the findings of similar studies on the critical role of these elements in urban and campus environments [61]. Moreover, the strong link between spatial factors and students' state of recovery evidenced by this study supports the Attention Restoration Theory (ART) [62,63].
However, some differences emerged between the eye-tracking results from real and virtual environments. In the virtual setting, participants showed slightly longer fixation durations and fewer saccades than in the real world. This may be due to the VR scenes’ realism and fidelity, which influence how participants perceive and interact with the environment [54]. The controlled and possibly less stimulating VR environment could result in more focused and prolonged fixations, with fewer distractions than the real-world setting [64]. Future studies could use VR to simulate diverse auditory and thermal conditions, offering deeper insights into how various environmental factors impact students’ restorative states [65].

4.2. Restorative Influencing Mechanism

Our second scientific goal was to investigate the underlying mechanisms by which these spatial factors influence the restorative state. The fitting results showed that HDB, HT, AS, TG, and HG all have distinct impacts on attention restoration, which are better described by quadratic relationships than by simple linear ones. These results suggest that moderate variations in appropriate ground hues and textures can significantly enhance visual appeal and comfort, contributing to better attention restoration [6,25]. The results for building height difference indicated that moderate height variations create a visually stimulating yet not overwhelming environment. Similarly, the obtained range of tree heights was most beneficial for attention restoration, likely because such trees provide a sense of enclosure and connection with nature without obstructing views [66,67]. Additionally, a moderate increase in the presence of shrubs within the visual field benefits attention restoration, whereas an excessive number of shrubs may convey a sense of wildness and neglect, leading to unease among students and, consequently, diminishing attention restoration [46,68].
However, reliance on self-reported measures introduces subjectivity and individual variance. To address this, future research could employ wearable devices that measure physiological indicators such as heart rate variability, electrodermal activity, electromyography, body temperature, and breathing rate, facilitating an objective evaluation of stress states [66,69].

4.3. Machine Learning and Deep Learning for Restorativeness Prediction

This study leverages advanced machine learning and deep learning techniques to predict the restorativeness of campus pedestrian spaces. The Convolutional Neural Network (CNN) provided the highest accuracy; this finding was aligned with previous research on the application of machine learning, which highlighted the capability of CNNs to handle high-dimensional data and capture intricate patterns in environmental variables [70]. It also further proves the effectiveness of CNN in predicting environmental influences on psychological outcomes [71].
Future research could focus on refining the deep learning models to improve their interpretability and applicability to environmental studies [61]. Also, we could explore other advanced techniques, like reinforcement learning and generative adversarial networks (GAN), to open new avenues for understanding and predicting restorative environments [72].

4.4. Implications for Restorative Design in University Campuses

By employing CNN and Genetic Algorithms (GA), this study identified restorative parameter thresholds for campus pedestrian spaces that favor attention recovery for college students. The recommended ranges for HDB, HT, AS, TG, and HG provide concrete guidelines for designing restorative campus environments. These findings are consistent with research that highlights the importance of specific natural and architectural features in promoting restorative experiences [25]. Through the design of the built and natural elements present in the construction of a restorative campus, pedestrian spaces can be effective in reducing mental fatigue and enhancing cognitive function [48,73,74]. Our study adds to this body of knowledge by providing specific, quantifiable guidelines that can be directly applied to campus design.
Future research should continue to refine these design strategies and explore their long-term impacts on student well-being. Incorporating additional factors such as seasonal changes, auditory stimuli, and thermal comfort could provide a more comprehensive understanding of restorative environments [19].

5. Conclusions

5.1. Restorative Environmental Factors Selection

We utilized SD questionnaires to identify key restorative spatial factors in campus walkways. Spearman’s rank correlation was employed to analyze the questionnaire data, successfully isolating five spatial factors that showed a strong correlation with university students’ restorative states: building height difference (HDB), tree height (HT), shrub area within the field of view (AS), ground hue (HG), and ground texture (TG).
To verify the reliability of the questionnaire results, we conducted eye-tracking experiments in both real and VR environments. By tracking eye movements, we assessed the extent to which different spatial elements captured attention. The findings revealed that students predominantly focused on the ground and surrounding buildings during their walking activities, with natural elements such as trees and shrubs identified as secondary points of interest. These results not only validated the five restorative spatial factors identified by the SD questionnaire but also confirmed the reliability of the VR experiment.

5.2. Interrelation of Restorativeness and Environmental Factors

This research conducted 49 virtual reality (VR) orthogonal experiments on campus pedestrian spaces to examine how five restorative spatial factors affect the restorative state of college students, and non-linear fitting of the experimental data revealed the underlying influence mechanisms. In the analysis of campus pedestrian spaces, HDB was negatively correlated with the restorative state (ΔS), suggesting that, as the height differential among buildings increases, the restorative potential of these spaces diminishes. Interestingly, beyond the minimum ΔS value, a further increase in HDB led to some improvement in the environment's restorative capacity. Positive correlations were established between ΔS and HT, AS, HG, and TG. Specifically, as HT increased, ΔS first increased and then decreased. This pattern was mirrored in the trends observed with increases in AS and TG, where ΔS first increased and then decreased. Similarly, an increase in HG resulted in an initial upsurge in ΔS, which then tapered off.

5.3. Restorative Prediction of Campus Pedestrian Space

A comparative analysis of various machine learning models—namely, Decision Tree (DT), k-Nearest Neighbors (KNN), and Artificial Neural Networks (ANNs), alongside a deep learning model, a Convolutional Neural Network (CNN)—revealed that the CNN exhibited superior performance. Specifically, the CNN achieved a classification prediction accuracy of 95.2%, surpassing the ANN (94.5%), KNN (90.5%), and Decision Tree (78.0%). This outcome underscores the exceptional efficacy of CNNs in predicting the restorative capabilities of campus pedestrian spaces, highlighting their potential to accurately capture and model the complex interactions between multiple spatial factors and their impact on attention restoration.

5.4. Restorative Optimization of Campus Pedestrian Space

Furthermore, the results from the Genetic Algorithm (GA) optimization for the VR motion visual environment provide valuable insights into the threshold values for key environmental factors. The GA findings recommend specific ranges for the restorative design of campus pedestrian spaces: the HDB should be within 20–28 m, the HT should range from 8.5 to 15.5 m, the AS should be between 12 and 15%, the TG should fall within 0.20–0.50 m2, and the HG should be 140–245°. These recommendations are instrumental in enhancing the restorative quality of campus pedestrian environments, providing a scientifically grounded basis for the design and optimization of spaces that promote student well-being and attention restoration.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/land13081308/s1, SD questionnaire for Visual Perception of Campus Pedestrian Space.

Author Contributions

Conceptualization, K.H. and R.Z.; methodology, K.H. and X.L.; software, T.W.; validation, K.H. and R.Z.; formal analysis, K.H.; investigation, T.W. and X.L.; resources, R.Z.; data curation, K.H.; writing—original draft preparation, K.H.; writing—review and editing, X.L.; visualization, R.Z.; supervision, Y.D.; project administration, Y.D.; funding acquisition, Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (grant number 52378012) and the Key Research and Development Plan of Heilongjiang Province (project number JD2023SJ01).

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

To ensure data integrity in this experiment, extensive evaluation was conducted, focusing on accuracy, precision, and potential data loss. Participants were to fixate on a central red calibration dot (10 cm diameter) after 40, 80, and 120 s of the experiment, maintaining each fixation for 3 s, cumulatively defining a 9 s “fixation time”. The data were processed using a moving median filter in Tobii Pro Lab to rectify blinking-derived anomalies. The mean horizontal (x-coordinate) and vertical (y-coordinate) positions of gaze points for each eye were calculated from the data. During the 9 s fixation duration, 450 frames of video data were analyzed, with the MATLAB “imfindcircles” function employed to ascertain and extract the calibration circle’s coordinates. Gaze accuracy was quantified by computing the average angular distance (θ) between gaze and target points across the 9 s timeframe, using the following formula for θ calculation.
$$\theta(i, j) = \arccos\left(\frac{(x_i - 960)(x_j - 960) + (y_i - 540)(y_j - 540) + VD^2}{\sqrt{(x_i - 960)^2 + (y_i - 540)^2 + VD^2} \cdot \sqrt{(x_j - 960)^2 + (y_j - 540)^2 + VD^2}}\right)$$
where (x_i, y_i) are the horizontal and vertical coordinates of the gaze point and (x_j, y_j) are those of the target point. To ensure precise calculations based on the camera image center, half of the screen width and height were subtracted from the x and y coordinates, respectively. The virtual distance VD (1132.4 pixels), obtained from reported Tobii estimates, is crucial for converting pixel displacements captured by the cameras into actual physical distances. The interrelations among the fixation points, actual target points, and the adjusted coordinates during the cumulative 9 s "fixation period" are illustrated in Figure A1 and Figure A2.
Figure A1. Point coordinates of the gaze point versus the actual point of x.
Figure A2. Point coordinates of the gaze point versus the actual point of y.
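For reference, the angular-distance computation above can be expressed as a short function; the 1920 × 1080 frame centre (960, 540) and the 1132.4-pixel virtual distance follow the appendix, while the example coordinates are arbitrary.

```python
# Gaze-accuracy angle between a gaze point and a target point (Appendix A formula).
import numpy as np

VD = 1132.4                         # virtual distance in pixels (Tobii estimate)
CX, CY = 960.0, 540.0               # image centre of the 1920 x 1080 frame

def angular_distance(gaze, target):
    """Angle in degrees between gaze point (x_i, y_i) and target point (x_j, y_j)."""
    xi, yi = gaze[0] - CX, gaze[1] - CY
    xj, yj = target[0] - CX, target[1] - CY
    num = xi * xj + yi * yj + VD ** 2
    den = np.sqrt(xi**2 + yi**2 + VD**2) * np.sqrt(xj**2 + yj**2 + VD**2)
    return np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))

print(angular_distance((1000, 560), (960, 540)))    # example gaze vs. calibration dot
```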

References

1. Larcombe, W.; Finch, S.; Sore, R.; Murray, C.M.; Kentish, S.; Mulder, R.A.; Lee-Stecum, P.; Baik, C.; Tokatlidis, O.; Williams, D.A. Prevalence and socio-demographic correlates of psychological distress among students at an Australian university. Stud. High. Educ. 2016, 41, 1074–1091.
2. Auerbach, R.P.; Mortier, P.; Bruffaerts, R.; Alonso, J.; Benjet, C.; Cuijpers, P.; Demyttenaere, K.; Ebert, D.D.; Green, J.G.; Hasking, P.; et al. WHO World Mental Health Surveys International College Student Project: Prevalence and distribution of mental disorders. J. Abnorm. Psychol. 2018, 127, 623–638.
3. Guo, W.; Wen, H.; Liu, X. Research on the psychologically restorative effects of campus common spaces from the perspective of health. Front. Public Health 2023, 11, 1131180.
4. San Juan, C.; Subiza-Pérez, M.; Vozmediano, L. Restoration and the City: The Role of Public Urban Squares. Front. Psychol. 2017, 8, 292475.
5. Daykin, N.; Byrne, E.; Soteriou, T.; O'Connor, S. Review: The impact of art, design and environment in mental healthcare: A systematic review of the literature. J. R. Soc. Promot. Health 2008, 128, 85–94.
6. Pazhouhanfar, M.; Mustafa Kamal, M.S. Effect of predictors of visual preference as characteristics of urban natural landscapes in increasing perceived restorative potential. Urban For. Urban Green. 2014, 13, 145–151.
7. Deng, L.; Luo, H.; Ma, J.; Huang, Z.; Sun, L.-X.; Jiang, M.-Y.; Zhu, C.-Y.; Li, X. Effects of integration between visual stimuli and auditory stimuli on restorative potential and aesthetic preference in urban green spaces. Urban For. Urban Green. 2020, 53, 126702.
8. Liu, Q.; Wang, X.; Liu, J.; Zhang, G.; An, C.; Liu, Y.; Fan, X.; Hu, Y.; Zhang, H. The Relationship between the Restorative Perception of the Environment and the Physiological and Psychological Effects of Different Types of Forests on University Students. Int. J. Environ. Res. Public Health 2021, 18, 12224.
9. Abu-Ghazzeh, T.M. Communicating Behavioral Research to Campus Design: Factors Affecting the Perception and Use of Outdoor Spaces at the University of Jordan. Environ. Behav. 1999, 31, 764–804.
10. Ewing, R.; Handy, S. Measuring the Unmeasurable: Urban Design Qualities Related to Walkability. J. Urban Des. 2009, 14, 65–84.
11. Rashid, M.; Wineman, J.; Zimring, C. Space, behavior, and environmental perception in open plan offices: A prospective study. Environ. Plan. B Plan. Des. 2009, 36, 432–449.
12. Porzi, L.; Bulò, S.R.; Lepri, B.; Ricci, E. Predicting and Understanding Urban Perception with Convolutional Neural Networks. In Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia, 26–30 October 2015; pp. 139–148.
13. Wilson, C.J.; Soranzo, A.J.C. The use of virtual reality in psychology: A case study in visual perception. Comput. Math. Methods Med. 2015, 2015, 151702.
14. Berto, R. The Role of Nature in Coping with Psycho-Physiological Stress: A Literature Review on Restorativeness. Behav. Sci. 2014, 4, 394–409.
15. Janeh, O.; Langbehn, E.; Steinicke, F.; Bruder, G.; Gulberti, A.; Poetter-Nerger, M. Biomechanical analysis of (non-)isometric virtual walking of older adults. In Proceedings of the 2017 IEEE Virtual Reality (VR), Los Angeles, CA, USA, 18–22 March 2017; pp. 217–218.
16. Heilmann, F.; Witte, K. Perception and Action under Different Stimulus Presentations: A Review of Eye-Tracking Studies with an Extended View on Possibilities of Virtual Reality. Appl. Sci. 2021, 11, 5546.
17. Zhang, R.-X.; Zhang, L.-M. Panoramic visual perception and identification of architectural cityscape elements in a virtual-reality environment. Future Gener. Comput. Syst. 2021, 118, 107–117.
18. Pei, W.; Guo, X.; Lo, T. Pre-Evaluation method of the experiential architecture based on multidimensional physiological perception. J. Asian Archit. Build. Eng. 2023, 22, 1170–1194.
19. Ma, J.H.; Lee, J.K.; Cha, S.H. Effects of lighting CCT and illuminance on visual perception and task performance in immersive virtual environments. Build. Environ. 2022, 209, 108678.
20. Kaplan, R.; Kaplan, S. The Experience of Nature: A Psychological Perspective; Cambridge University Press: Cambridge, UK, 1989.
21. Hartig, T.; Mang, M.; Evans, G.W. Restorative effects of natural environment experiences. Environ. Behav. 1991, 23, 3–26.
22. Ríos-Rodríguez, M.L.; Rosales, C.; Lorenzo, M.; Muinos, G.; Hernández, B. Influence of Perceived Environmental Quality on the Perceived Restorativeness of Public Spaces. Front. Psychol. 2021, 12, 644763.
23. Ulrich, R.; Simons, R.; Losito, B.; Fiorito, E.; Miles, M.; Zelson, M. Stress Recovery During Exposure to Natural and Urban Environments. J. Environ. Psychol. 1991, 11, 201–230.
24. Collado, S.; Staats, H.; Corraliza, J.A.; Hartig, T. Restorative environments and health. In Handbook of Environmental Psychology and Quality of Life Research; Springer International Publishing: Cham, Switzerland, 2017; pp. 127–148.
25. Roe, J.; Aspinall, P. The restorative benefits of walking in urban and rural settings in adults with good and poor mental health. Health Place 2011, 17, 103–113.
26. Sundling, C.; Jakobsson, M. How Do Urban Walking Environments Impact Pedestrians' Experience and Psychological Health? A Systematic Review. Sustainability 2023, 15, 10817.
27. Johnsen, S.Å.K.; Brown, M.K.; Rydstedt, L.W. Restorative experiences across seasons? Effects of outdoor walking and relaxation exercise during lunch breaks in summer and winter. Landsc. Res. 2022, 47, 664–678.
28. Lu, M.; Fu, J. Attention Restoration Space on a University Campus: Exploring Restorative Campus Design Based on Environmental Preferences of Students. Int. J. Environ. Res. Public Health 2019, 16, 2629.
29. Farghaly, Y.; Aly Hany, N.; Moussa, Y. The Interrelationship Between Restorative Environments and Visual Preferences in University Campus Landscapes. In Proceedings of the International Conference of Contemporary Affairs in Architecture and Urbanism-ICCAUA, Alanya, Turkey, 20–21 May 2021; Volume 4, pp. 124–133.
30. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260.
  31. Boulila, W.; Ghandorh, H.; Khan, M.A.; Ahmed, F.; Ahmad, J. A novel CNN-LSTM-based approach to predict urban expansion. Ecol. Inform. 2021, 64, 101325. [Google Scholar] [CrossRef]
  32. Hong, T.; Wang, Z.; Luo, X.; Zhang, W. State-of-the-art on research and applications of machine learning in the building life cycle. Energy Build. 2020, 212, 109831. [Google Scholar] [CrossRef]
  33. Mohandes, S.R.; Zhang, X.; Mahdiyar, A. A comprehensive review on the application of artificial neural networks in building energy analysis. Neurocomputing 2019, 340, 55–75. [Google Scholar] [CrossRef]
  34. Kumar, R.; Aggarwal, R.K.; Sharma, J.D. Energy analysis of a building using artificial neural network: A review. Energy Build. 2013, 65, 352–358. [Google Scholar] [CrossRef]
  35. Magnier, L.; Haghighat, F. Multiobjective optimization of building design using TRNSYS simulations, genetic algorithm, and Artificial Neural Network. Build. Environ. 2010, 45, 739–746. [Google Scholar] [CrossRef]
  36. Al-Shawwa, M.; Al-Absi, A.; Hassanein, S.A.; Baraka, K.A.; Abu-Naser, S.S. Predicting temperature and humidity in the surrounding environment using artificial neural network. Int. J. Acad. Pedagog. Res. (IJAPR) 2018, 2, 1–6. [Google Scholar]
  37. Bre, F.; Gimenez, J.M.; Fachinotti, V.D. Prediction of wind pressure coefficients on building surfaces using artificial neural networks. Energy Build. 2018, 158, 1429–1441. [Google Scholar] [CrossRef]
  38. Viotti, P.; Liuti, G.; Di Genova, P. Atmospheric urban pollution: Applications of an artificial neural network (ANN) to the city of Perugia. Ecol. Model. 2002, 148, 27–46. [Google Scholar] [CrossRef]
  39. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  40. Guo, Y.; Liu, Y.; Oerlemans, A.; Lao, S.; Wu, S.; Lew, M.S. Deep learning for visual understanding: A review. Neurocomputing 2016, 187, 27–48. [Google Scholar] [CrossRef]
  41. Moen, E.; Bannon, D.; Kudo, T.; Graf, W.; Covert, M.; Van Valen, D. Deep learning for cellular image analysis. Nat. Methods 2019, 16, 1233–1246. [Google Scholar] [CrossRef] [PubMed]
  42. Bhatt, D.; Patel, C.; Talsania, H.; Patel, J.; Vaghela, R.; Pandya, S.; Modi, K.; Ghayvat, H. CNN Variants for Computer Vision: History, Architecture, Application, Challenges and Future Scope. Electronics 2021, 10, 2470. [Google Scholar] [CrossRef]
  43. Qi, Y.; Chodron Drolma, S.; Zhang, X.; Liang, J.; Jiang, H.; Xu, J.; Ni, T. An investigation of the visual features of urban street vitality using a convolutional neural network. Geo-Spat. Inf. Sci. 2020, 23, 341–351. [Google Scholar] [CrossRef]
  44. Mahmud, M.; Kaiser, M.S.; Hussain, A.; Vassanelli, S. Applications of deep learning and reinforcement learning to biological data. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 2063–2079. [Google Scholar] [CrossRef]
  45. Law, S.; Seresinhe, C.I.; Shen, Y.; Gutierrez-Roig, M. Street-Frontage-Net: Urban image classification using deep convolutional neural networks. Int. J. Geogr. Inf. Sci. 2020, 34, 681–707. [Google Scholar] [CrossRef]
  46. Van den Berg, A.E.; Jorgensen, A.; Wilson, E.R. Evaluating restoration in urban green spaces: Does setting type make a difference? Landsc. Urban Plan. 2014, 127, 173–181. [Google Scholar] [CrossRef]
  47. Sonnentag, S.; Venz, L.; Casper, A. Advances in recovery research: What have we learned? What should be done next? J. Occup. Health Psychol. 2017, 22, 365–380. [Google Scholar] [CrossRef] [PubMed]
  48. Ha, J.; Kim, H.J. The restorative effects of campus landscape biodiversity: Assessing visual and auditory perceptions among university students. Urban For. Urban Green. 2021, 64, 127259. [Google Scholar] [CrossRef]
  49. Manualslib. Tobii Pro Glasses 2 User Manual. Available online: https://www.manualslib.com/manual/1269253/Tobii-Pro-Glasses-2.html (accessed on 11 July 2024).
  50. Tommaso, D.D.; Wykowska, A. TobiiGlassesPySuite: An open-source suite for using the Tobii Pro Glasses 2 in eye-tracking studies. In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, Denver, CO, USA, 25–28 June 2019; p. 46. [Google Scholar]
  51. Zou, Z.; Ergan, S. Where Do We Look? An Eye-Tracking Study of Architectural Features in Building Design. In Advances in Informatics and Computing in Civil and Construction Engineering, Proceedings of the 35th CIB W78 2018 Conference: IT in Design, Construction, and Management; Springer International Publishing: Cham, Switzerland, 2019; pp. 439–446. [Google Scholar]
  52. Vortmann, L.-M.; Knychalla, J.; Annerer-Walcher, S.; Benedek, M.; Putze, F. Imaging Time Series of Eye Tracking Data to Classify Attentional States. Front. Neurosci. 2021, 15, 664490. [Google Scholar] [CrossRef]
  53. Chen, Y.-T.; Yeh, P.-H.; Cheng, Y.-C.; Su, W.-W.; Hwang, Y.-S.; Chen, H.S.-L.; Lee, Y.-S.; Shen, S.-C. Application and Validation of LUXIE: A Newly Developed Virtual Reality Perimetry Software. J. Pers. Med. 2022, 12, 1560. [Google Scholar] [CrossRef] [PubMed]
  54. Stein, N.; Niehorster, D.C.; Watson, T.; Steinicke, F.; Rifai, K.; Wahl, S.; Lappe, M. A Comparison of Eye Tracking Latencies Among Several Commercial Head-Mounted Displays. i-Perception 2021, 12. [Google Scholar] [CrossRef] [PubMed]
  55. Song, Y.Y.; Lu, Y. Decision tree methods: Applications for classification and prediction. Shanghai Arch. Psychiatry 2015, 27, 130–135. [Google Scholar] [CrossRef] [PubMed]
  56. Zhang, S.; Li, X.; Zong, M.; Zhu, X.; Cheng, D. Learning k for kNN Classification. ACM Trans. Intell. Syst. Technol. (TIST) 2017, 8, 1–19. [Google Scholar] [CrossRef]
  57. Aries, M.B.C.; Veitch, J.A.; Newsham, G.R. Windows, view, and office characteristics predict physical and psychological discomfort. J. Environ. Psychol. 2010, 30, 533–541. [Google Scholar] [CrossRef]
  58. Ma, G.; Pan, X. Research on a visual comfort model based on individual preference in china through machine learning algorithm. Sustainability 2021, 13, 7602. [Google Scholar] [CrossRef]
  59. Tuhus-Dubrow, D.; Krarti, M. Genetic-algorithm based approach to optimize building envelope design for residential buildings. Build. Environ. 2010, 45, 1574–1581. [Google Scholar] [CrossRef]
  60. Mirjalili, S. Genetic Algorithm. In Evolutionary Algorithms and Neural Networks: Theory and Applications; Mirjalili, S., Ed.; Springer International Publishing: Cham, Switzerland, 2019; pp. 43–55. [Google Scholar]
  61. Ma, H.; Xu, Q.; Zhang, Y. High or low? Exploring the restorative effects of visual levels on campus spaces using machine learning and street view imagery. Urban For. Urban Green. 2023, 88, 128087. [Google Scholar] [CrossRef]
  62. Gulwadi, G.B.; Mishchenko, E.D.; Hallowell, G.; Alves, S.; Kennedy, M. The restorative potential of a university campus: Objective greenness and student perceptions in Turkey and the United States. Landsc. Urban Plan. 2019, 187, 36–46. [Google Scholar] [CrossRef]
  63. Felsten, G. Where to take a study break on the college campus: An attention restoration theory perspective. J. Environ. Psychol. 2009, 29, 160–167. [Google Scholar] [CrossRef]
  64. Lee, J.; Kim, M.; Kim, J. A Study on Immersion and VR Sickness in Walking Interaction for Immersive Virtual Reality Applications. Symmetry 2017, 9, 78. [Google Scholar] [CrossRef]
  65. Zhang, G.; Wu, G.; Yang, J. The restorative effects of short-term exposure to nature in immersive virtual environments (IVEs) as evidenced by participants’ brain activities. J. Environ. Manag. 2023, 326, 116830. [Google Scholar] [CrossRef] [PubMed]
  66. Brown, D.K.; Barton, J.L.; Gladwell, V.F. Viewing Nature Scenes Positively Affects Recovery of Autonomic Function Following Acute-Mental Stress. Environ. Sci. Technol. 2013, 47, 5562–5569. [Google Scholar] [CrossRef] [PubMed]
  67. Jiang, B.; Chang, C.-Y.; Sullivan, W.C. A dose of nature: Tree cover, stress reduction, and gender differences. Landsc. Urban Plan. 2014, 132, 26–36. [Google Scholar] [CrossRef]
  68. Gatersleben, B.; Andrews, M. When walking in nature is not restorative—The role of prospect and refuge. Health Place 2013, 20, 91–101. [Google Scholar] [CrossRef]
  69. Shu, S.; Ma, H. Restorative effects of urban park soundscapes on children’s psychophysiological stress. Appl. Acoust. 2020, 164, 107293. [Google Scholar] [CrossRef]
  70. Farahani, M.; Razavi-Termeh, S.V.; Sadeghi-Niaraki, A.; Choi, S.-M. A Hybridization of Spatial Modeling and Deep Learning for People’s Visual Perception of Urban Landscapes. Sustainability 2023, 15, 10403. [Google Scholar] [CrossRef]
  71. Zou, Z.; Ergan, S. Towards emotionally intelligent buildings: A Convolutional neural network based approach to classify human emotional experience in virtual built environments. Adv. Eng. Inform. 2023, 55, 101868. [Google Scholar] [CrossRef]
  72. He, Q.; Li, Z.; Gao, W.; Chen, H.; Wu, X.; Cheng, X.; Lin, B. Predictive models for daylight performance of general floorplans based on CNN and GAN: A proof-of-concept study. Build. Environ. 2021, 206, 108346. [Google Scholar] [CrossRef]
  73. Tenngart Ivarsson, C.; Hagerhall, C.M. The perceived restorativeness of gardens—Assessing the restorativeness of a mixed built and natural scene type. Urban For. Urban Green. 2008, 7, 107–118. [Google Scholar] [CrossRef]
  74. Kaplan, S. The restorative benefits of nature: Toward an integrative framework. J. Environ. Psychol. 1995, 15, 169–182. [Google Scholar] [CrossRef]
Figure 1. Experiment workflow and optimization.
Figure 2. Illustration of the eye-tracking experiment.
Figure 3. Illustration of VR eye-tracking experiment.
Figure 4. Orthogonal experimental scenes.
Figure 5. Schematic diagram of the orthogonal experiment.
Figure 6. Artificial neural network model.
Figure 7. Architecture of convolutional neural network.
Figure 8. Correlation between space factors and stress state.
Figure 9. Gaze feature analysis in real scene.
Figure 10. Analysis of gaze features in VR scene.
Figure 11. Fitting effects between environmental factors and restorative state.
Figure 12. Accuracy confusion matrix for four machine learning models.
Figure 13. Analysis of CNN training.
Figure 14. Optimization process of the genetic algorithm.
Table 1. Items in the restorative state scale.
Restorative State Scale
1. My mind is not invaded by stressful thoughts.
2. I can take time out from a busy life.
3. I can lose all sense of time.
4. I am thinking about everything and nothing at the same time.
5. I can make space to think about my problems.
6. I can leave all my problems behind me.
7. My mind just wanders in infinity.
8. I can imagine myself as part of the larger cyclical process of living.
9. I feel connected to the natural world.
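To make the scoring of the scale above concrete, the Python sketch below shows one plausible way to aggregate the nine item ratings and to compute the restorative-state change ΔS = S3 − S2 reported later in Table 5. The 1–7 Likert range, the use of the item mean as the scale score, and the reading of S2 and S3 as ratings from two experimental stages are assumptions introduced here for illustration; they are not taken from the table itself.

```python
# Minimal scoring sketch for the nine-item Restorative State Scale (RSS).
# Assumptions (not specified in the table): each item is rated on a
# 1-7 Likert scale, the scale score is the mean of the nine items, and
# S2/S3 denote ratings collected at two stages of the experiment, so that
# the restorative-state change is dS = S3 - S2 (as reported in Table 5).

from statistics import mean

RSS_ITEMS = 9

def rss_score(responses: list[int]) -> float:
    """Average the nine item ratings into a single scale score."""
    if len(responses) != RSS_ITEMS:
        raise ValueError(f"expected {RSS_ITEMS} item ratings, got {len(responses)}")
    if not all(1 <= r <= 7 for r in responses):
        raise ValueError("item ratings are assumed to lie on a 1-7 Likert scale")
    return mean(responses)

def delta_s(stage2: list[int], stage3: list[int]) -> float:
    """Restorative-state change: score after exposure minus score before."""
    return rss_score(stage3) - rss_score(stage2)

if __name__ == "__main__":
    before = [3, 2, 3, 2, 3, 2, 3, 2, 3]   # hypothetical stage-2 ratings
    after = [6, 5, 6, 6, 5, 6, 6, 5, 6]    # hypothetical stage-3 ratings
    print(f"dS = {delta_s(before, after):.2f}")
```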
Table 2. Adjective pairs in SD questionnaire.
Spatial Factors | Factor of Neutrality | Adjective Pairs
Buildings and trees | Height | Short–Tall
Buildings | Height difference | Small–Large
Buildings and ground | Distance | Near–Far
Buildings | Angle | Small–Large
Buildings and ground | Color (hue) | Cold–Warm
Buildings and ground | Color (contrast) | Low–High
Shrubs | Area in vision | Little–Much
Ground | Texture | Sparse–Dense
Sky | Openness | Small–Large
Buildings and ground | Material | Artificial–Natural
Table 3. Pedestrian spaces in investigation.
Case | 1 | 2 | 3 | 4 | 5
Location | HIT 1 | HIT | HEU 2 | HNU 3 | HNU
Type | Square | Street | Square | Path | Sidewalk
Feature | More artificial | More natural | More artificial | More natural | More natural
Picture of vegetation period | (photographs of Cases 1–5)
Picture of frost period | (photographs of Cases 1–5)
1 Harbin Institute of Technology. 2 Harbin Engineering University. 3 Harbin Normal University.
Table 4. Orthogonal experimental parameters.
Environmental Factors | Parameters in Orthogonal Experiment
Building height difference | 0, 12, 24, 36, 48, and 60 m
Tree height | 8, 12, 16, 20, 24, and 28 m
Shrub area | 0%, 0.03%, 0.06%, 0.09%, 0.12%, and 0.15%
Ground texture | 100 × 100, 200 × 100, 300 × 300, 600 × 300, 600 × 600, and 1000 × 1000 mm²
Ground hue | 0°, 72°, 144°, 216°, 288°, and 360°
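To make the size of this design space explicit, the sketch below enumerates the full factorial of the five six-level factors in Table 4 (6^5 = 7776 scenes) and then draws a small reduced subset. The subset routine is only a stand-in for the orthogonal array actually used in the experiment, and the factor names and level encodings are likewise illustrative.

```python
# Sketch of the scene space defined by the factor levels in Table 4.
# The full factorial of five six-level factors has 6**5 = 7776 combinations;
# the study uses an orthogonal subset, which this sketch only approximates
# with a simple random reduced sample (not the authors' actual array).

import itertools
import random

LEVELS = {
    "HDB_m": [0, 12, 24, 36, 48, 60],                 # building height difference
    "HT_m": [8, 12, 16, 20, 24, 28],                  # tree height
    "AS_pct": [0.0, 0.03, 0.06, 0.09, 0.12, 0.15],    # shrub area in view
    "TG_mm": ["100x100", "200x100", "300x300",
              "600x300", "600x600", "1000x1000"],     # ground texture module
    "HG_deg": [0, 72, 144, 216, 288, 360],            # ground hue
}

def full_factorial(levels: dict) -> list[dict]:
    """Enumerate every combination of factor levels."""
    names = list(levels)
    return [dict(zip(names, combo)) for combo in itertools.product(*levels.values())]

def reduced_sample(runs: list[dict], n: int, seed: int = 0) -> list[dict]:
    """Illustrative reduced design: a random subset of the full factorial."""
    random.seed(seed)
    return random.sample(runs, n)

if __name__ == "__main__":
    scenes = full_factorial(LEVELS)
    print(len(scenes), "possible scenes")   # 7776
    for scene in reduced_sample(scenes, 6):
        print(scene)
```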
Table 5. Optimization thresholds of environmental factors.
 | Solution Set 1 | Solution Set 2 | Solution Set 20 | Thresholds
HDB (m) | 27.543 | 20.819 | 26.382 | 20–28
HT (m) | 8.813 | 15.368 | 12.792 | 8.5–15.5
AS (%) | 0.147 | 0.129 | 0.135 | 0.12–0.15
TG (m²) | 0.347 | 0.496 | 0.265 | 0.20–0.50
HG (°) | 236.597 | 245.013 | 146.823 | 140–245
ΔS (S3 − S2) | 4 (4) | 4 (3.9) | 4 (4) | 4 (predict)
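The minimal genetic-algorithm sketch below mirrors the optimization step in spirit: candidate designs are vectors (HDB, HT, AS, TG, HG) constrained to plausible search ranges, and fitness is a predicted ΔS. The search bounds, GA settings, and especially the surrogate fitness function are placeholders introduced for illustration; in the study the fitness would come from the trained prediction model rather than the toy function used here.

```python
# Minimal genetic-algorithm sketch for maximizing predicted dS over the five
# design factors. The surrogate below is a placeholder, not the trained
# network from the study; in practice it would wrap the model's predict call.

import random

BOUNDS = {            # illustrative (low, high) search ranges, not the study's exact bounds
    "HDB": (0.0, 60.0),
    "HT": (8.0, 28.0),
    "AS": (0.0, 0.15),
    "TG": (0.01, 1.0),
    "HG": (0.0, 360.0),
}
KEYS = list(BOUNDS)

def predicted_delta_s(ind):
    """Placeholder surrogate: rewards values near the middle of each range."""
    score = 0.0
    for k in KEYS:
        lo, hi = BOUNDS[k]
        x = (ind[k] - lo) / (hi - lo)
        score += 1.0 - abs(x - 0.5) * 2.0     # 1 at midpoint, 0 at the edges
    return score

def random_individual():
    return {k: random.uniform(*BOUNDS[k]) for k in KEYS}

def crossover(a, b):
    # Uniform crossover: each gene taken from either parent.
    return {k: random.choice((a[k], b[k])) for k in KEYS}

def mutate(ind, rate=0.2):
    # Resample each gene within its bounds with a fixed probability.
    for k in KEYS:
        if random.random() < rate:
            ind[k] = random.uniform(*BOUNDS[k])
    return ind

def evolve(pop_size=60, generations=100):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=predicted_delta_s, reverse=True)
        elite = pop[: pop_size // 4]                       # keep the best quarter
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=predicted_delta_s)

if __name__ == "__main__":
    random.seed(1)
    best = evolve()
    print({k: round(v, 3) for k, v in best.items()}, round(predicted_delta_s(best), 3))
```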
