Article

The Impact of Spatial Dimensions, Location, Luminance, and Gender Differences on Visual Search Efficiency in Three-Dimensional Space

School of Architecture, Tianjin University, Tianjin 300072, China
*
Author to whom correspondence should be addressed.
Buildings 2025, 15(5), 656; https://doi.org/10.3390/buildings15050656
Submission received: 20 December 2024 / Revised: 5 February 2025 / Accepted: 17 February 2025 / Published: 20 February 2025
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)

Abstract
Visual searching is a key cognitive process for acquiring external information, involving the identification of specific stimuli in complex environments. Using Virtual Reality (VR) technology and eye-tracking devices, this study systematically explores how spatial dimensions, location, luminance, and gender differences affect visual search efficiency in three-dimensional space. The experiment assessed visual search efficiency across spatial configurations with three width-to-height ratios (0.5, 1, 2), different icon locations (top, bottom, left, right, center), and different luminance conditions. Experiment A found that spatial dimensions and target location significantly influenced search efficiency: targets on the central plane were found most quickly, followed by those on the right and bottom planes. Experiment B revealed that a luminance difference between targets and distractors increased search speed, but this effect diminished as target depth increased, suggesting that luminance is a key factor in optimizing visual search. Regarding gender differences, both experiments showed that males generally exhibited higher visual search efficiency than females in three-dimensional spaces, with the male advantage becoming more pronounced as recognition difficulty increased.

1. Introduction

Visual searching refers to the process by which an individual seeks a specific target within a complex visual scene. It requires the individual to quickly locate and identify specific stimuli, such as letters, numbers, or shapes, within a complex environment [1,2]. This process involves the allocation of attention, scanning of the visual environment, and recognition of the target stimuli [3]. Visual searching has long been an important research topic in the field of human factors, and with the continuous refinement of its theoretical framework, its applications have gradually expanded into various other domains [4,5].
In visual search research, previous studies have categorized search behaviors into various models, including Feature Integration Theory [6], the Similarity Model [7], and Guided Search [8]. Other research has found that visual search outcomes are influenced by a range of factors, including the features of the target and distractors [9,10], search strategies [3], and environmental conditions [11,12]. The results of these searches are often evaluated based on time (i.e., efficiency) metrics. However, most existing studies are based on two-dimensional planes. In fact, many real-world search tasks occur in three-dimensional space, which has prompted research in this field to gradually extend into three-dimensional space [3].
Three-dimensional space refers to a space defined by three dimensions, length, width, and height, commonly referred to as 3D space [13]. In everyday life, the spaces we perceive, such as rooms and buildings, fall within the scope of three-dimensional space. Compared to two-dimensional planes, the addition of a third dimension introduces additional characteristics to three-dimensional space, such as spatial relationships and perspective effects [14]. These changes have prompted subsequent studies to investigate whether traditional two-dimensional visual search theories can be directly applied to three-dimensional spaces, or how these changes may influence visual search behavior [3].
Recently, research has begun to explore the factors influencing visual search efficiency in spatial contexts, focusing on application scenarios such as fire exit sign recognition [15,16,17,18] and wayfinding guidance [19,20,21,22]. However, the architectural forms and influencing factors selected in such studies are scenario-specific, so their conclusions apply chiefly within those particular spatial settings; whether they generalize to other types of scenarios still requires further research and validation. We therefore aim to move beyond scenario-specific settings and use more fundamental, universal spatial contexts as the research vehicle. By focusing on more universal influencing factors, we seek conclusions with broader applicability and greater potential for application across a wider range of scenarios.
In view of this, the present study will focus on selecting several key factors influencing visual search under fundamental and broadly applicable spatial conditions, aiming to systematically explore their specific impacts on visual search efficiency in three-dimensional space.

2. Literature Review

2.1. Spatial Fundamental Nature and Broad Applicability

In existing three-dimensional visual search studies, the selected architectural spatial forms are often of the “T” type, “F” type [23,24], or more complex spatial types [17,25]. However, from the perspective of architectural space, these spatial forms may not possess strong universal applicability [14]. In the field of spatial geometry, the cube is not only regarded as a fundamental unit for constructing complex shapes but also serves as a typical representation of primitive geometric forms [13]. The square spaces derived from cubes form the foundational prototypes for other types of spaces and are widely present in various existing spatial configurations [13]. The universality of square spaces is reflected not only in their morphological characteristics but also in their adaptability to spatial scales. According to current Chinese standards and regulations [26,27], the minimum width of public spaces is generally required to be no less than 1.5 m, while the minimum height typically ranges between 2.2 m and 2.6 m. In relevant research, previous studies have often investigated spatial characteristics by varying the width of a space while keeping its height and depth constant, with the depth commonly set between 7 m and 15 m [23,24,28,29].
Considering prior research and current regulatory standards, this study selects square spaces as the subject of investigation. The spatial height is fixed at 3 m, while the widths are set to 1.5 m, 3 m, and 6 m, corresponding to width-to-height ratios of 0.5, 1, and 2, respectively. The depth is uniformly set to 12 m.

2.2. Key Influencing Factors of Visual Search

In the field of two-dimensional visual search, factors such as the relationship between the target and distractors, target position, and luminance have been the primary focus of research. Early experiments by Duncan demonstrated that the similarity between the target and distractors is a critical factor influencing visual search efficiency [30]. Building on this foundation, subsequent studies have explored the effects of variables such as position and luminance under similar target–distractor types, including types like “EF”, “OQ”, and “CO”. In terms of position, studies often divide the plane into quadrants and have revealed that the search speed in the lower visual field is superior to that in the upper field, while the right field outperforms the left [2,31]. Furthermore, it has been found that the greater the eccentricity of the target from the visual origin, the slower the search speed [32,33,34]. Regarding luminance, the luminance relationships among the target, distractors, and background also significantly impact the search efficiency. Overall, when the target’s luminance exhibits a strong contrast with the other two, the recognition speed of the target increases substantially [35,36,37,38,39].
Building on this foundation, research on visual search in three-dimensional space incorporates key factors from two-dimensional planes, such as position and luminance, while also introducing spatial characteristics, including spatial relationships and scale effects. Previous studies have investigated the optimal field-of-view angle for human eye searches in three-dimensional space [40] and revealed that individuals tend to focus most on ground-level positions and least on top-level positions within spaces of varying visual field widths [41]. Additionally, a significant body of research has focused on practical applications, such as fire safety recognition and wayfinding guidance.
In the field of fire safety, Zhao Hualiang demonstrated from a theoretical perspective that low-position indicators are easier to identify and recognize than high-position ones [29]. Similarly, Ma Mingming et al. confirmed this conclusion through experimental measurements [17]. In contrast to studies on two-dimensional planes, gender has become a key factor of interest in three-dimensional spatial studies. Research indicates that in evacuation passages, men tend to have a better spatial awareness than women, enabling them to locate directional signs more quickly [25]. Through experiments, Huo Xiaomin found that men outperform women in spatial scene searches and inferred that this difference may stem from gender-specific search strategies. Supporting evidence from other fields also suggests that men perform better than women in spatial ability tests [42]. In wayfinding studies, prior research has shown that factors such as target location [16] and signage content [19,20], as well as spatial environment and individual behavioral characteristics, significantly influence target recognition. Elisângela et al. explored the effects of spatial scale and luminance changes on individual guidance behaviors by altering environmental luminance in T-shaped and F-shaped spaces of varying widths [23]. Similarly, Jan Dijkstra conducted experiments focusing on different spatial widths and luminance levels [28]. Subsequent studies incorporated signage factors to examine their impact on wayfinding efficiency [24]. Moreover, micro-level behaviors, such as walking speed and eye height, have been found to exert varying degrees of influence on signage recognition [21,22].
Based on the aforementioned studies, this research selects broadly applicable influencing factors, namely target position, luminance contrast, spatial configuration, and gender differences, to explore their effects on visual search efficiency in space. Specifically, the "C" symbol will be used as the target and the "O" symbol as the distractor, with the target-to-background luminance ratios set at 3:1, 5:1, and 7:1 and the distractor-to-background luminance ratio fixed at 5:1 (resulting in target-to-distractor luminance ratios less than 1, equal to 1, and greater than 1). In Experiment A, we examine the differences and patterns in search efficiency for targets at various positions under different spatial width-to-height ratios. In Experiment B, we investigate how search efficiency varies under different luminance conditions. Additionally, the role of gender differences in visual search tasks is examined in both Experiments A and B.
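The three luminance groups follow directly from the stated design, since target and distractor share the same background. A minimal arithmetic sketch (all values taken from the design above):

```python
# Luminance ratios from the experimental design: target-to-background
# ratios of 3:1, 5:1, and 7:1, with the distractor-to-background
# ratio fixed at 5:1.
target_bg = [3.0, 5.0, 7.0]
distractor_bg = 5.0

# Dividing out the common background gives the target-to-distractor ratio.
target_distractor = [t / distractor_bg for t in target_bg]
print(target_distractor)  # [0.6, 1.0, 1.4] -> below 1, equal to 1, above 1
```

The three resulting values correspond to the low-, same-, and high-luminance groups used in Experiment B.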

2.3. Visual Search Efficiency Evaluation Standards

In both two-dimensional and three-dimensional visual search studies, the visual search time has consistently been a key metric for evaluating search efficiency. Visual searching is typically divided into the search phase and the response phase; however, in practice, the visual search time is largely equivalent to the time consumed in the search phase [5]. Prior research has commonly defined the visual search time as the duration from entering a scene to detecting the target and used this as a primary indicator of search efficiency. In two-dimensional studies, search time is often measured using either keypress recordings [30,31,43,44] or eye-tracking devices [2,45]. Keypress recordings involve calculating the time interval between two consecutive keystrokes, while eye-tracking devices track pupil position to record the time interval from entering a scene to the first fixation on the target (i.e., the first fixation time). Compared to keypress recordings, eye-tracking provides more accurate data with lower error rates. In three-dimensional space studies, most research has employed eye-tracking devices to record search time during experiments, with the primary evaluation metric being the first fixation time [17,25,46].
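The first-fixation-time metric described above can be sketched as a simple scan over a fixation log; the event format, labels, and timestamps below are hypothetical illustrations, not the ErgoLAB data format:

```python
# Hypothetical fixation log: (timestamp_s, aoi_label) pairs, with t = 0
# at scene onset and aoi_label naming the area of interest fixated.
fixations = [
    (0.00, "background"),
    (0.42, "distractor"),
    (0.91, "distractor"),
    (1.37, "target"),     # first fixation on the target
    (1.80, "target"),
]

def first_fixation_time(fixations, label="target"):
    """Time from scene onset to the first fixation on `label`,
    or None if the label was never fixated."""
    for t, aoi in fixations:
        if aoi == label:
            return t
    return None

print(first_fixation_time(fixations))  # 1.37
```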
For data collection, an eye-tracking device will be used to record experimental data, with the first fixation time serving as the evaluation metric for search efficiency.

2.4. Applications of Virtual Reality (VR) Technology

In scientific research, the acquisition and construction of experimental scenarios often face numerous challenges, such as high costs, site limitations, and safety concerns. To address these issues, previous studies have demonstrated the feasibility of using virtual reality (VR) technology to recreate scenarios and support experiments [47,48]. In the context of spatial visual search studies, VR technology combined with 3D models or panoramic images has been widely used to recreate experimental settings. For instance, Ma Xiaohui et al. employed 3D models connected with VR headsets in their research on spatial evacuation signage recognition, successfully reproducing realistic scenarios and obtaining reliable conclusions [25]. Similarly, this approach has been adopted in related studies [18,49,50,51]. Elisângela et al. used a combination of VR and panoramic images to recreate experimental scenarios for studying spatial guidance, providing participants with a highly immersive experience [23]. From prior research, it can be seen that these two methods yield comparable presentation effects but differ significantly in their application contexts. The VR-with-3D-model approach allows individuals to move freely within virtual spaces, making it suitable for applications such as fire evacuation, where spatial relocation is required. In contrast, the VR-with-360-panorama approach does not support movement within the space and is better suited for scenarios requiring stationary observation, such as wayfinding guidance.
Considering the objectives and requirements of this study, it was ultimately decided to adopt a combination of virtual reality (VR) technology and 360-degree panoramic images to achieve a 1:1 accurate reconstruction of the experimental scenarios.

3. Methods

As discussed in Section 2.4, VR technology combined with image or video material can reliably recreate experimental environments [18,47,48,49,50,51,52]. Based on a comparative analysis, this study utilizes VR technology combined with 360-degree panoramic images to achieve a 1:1 replication of the experimental scenario.

3.1. Preliminary Experiment

Design of Materials
The experiment was conducted at the 1895 Creative Building of Tianjin University, using the HTC VIVE Pro 2.0 professional VR headset (HTC Corporation, located in Taiwan, China) (dual-eye resolution of 4896 × 2448) with an integrated eye tracker. The experimental platform was the ErgoLAB human–machine environment synchronization cloud platform, which was used to receive real-time eye movement data. The depth of the experimental scenario was set to 12 m, with a spatial height of 3 m and spatial widths set at 1.5 m, 3 m, and 6 m. Four C-shaped icons (Landolt rings) with opening orientations of top-left, bottom-left, top-right, and bottom-right were chosen as the search targets, while O-shaped icons were used as distractors. The luminance contrast of the icons to the background was 5:1. The scene's eye height was set at 1596 mm for males and 1480 mm for females [53]. The scene was created using Unreal Engine 5 [48], and 360-degree panoramic images were generated and imported into the VR headset for presentation, with the viewpoint rotating in response to head movements.
The experiment involved 12 university students (6 males and 6 females), aged between 22 and 25 years. All participants were right-handed, with normal or corrected vision, and had no color blindness or color vision deficiencies. None of the participants were informed of the purpose of the experiment prior to participation, nor had they previously participated in similar experiments. Participants were compensated following the completion of the experiment.
Experimental Design
The purpose of Experiment 1 was to determine the minimum size of an icon that can be recognized in the spatial plane. A space with dimensions of 12 m (length) × 1.5 m (width) × 3 m (height) was selected for the experiment. Specific "C"-shaped icons were placed at the furthest points on the top, bottom, left, and right walls of the space. The radius of each icon varied from 20 cm to 40 cm, with a 5 cm interval between each. A total of five different size groups were designed, resulting in 20 unique scenes. Participants wore a VR headset, and the experimental scenes were randomly switched. They were asked to verbally indicate whether they could recognize the orientation of the opening of the icon and to specify the exact direction of the opening. The experimenter recorded their responses.
The purpose of Experiment 2 was to determine the minimum spacing at which icons can be recognized in the spatial plane. A space with dimensions of 12 m (length) × 1.5 m (width) × 3 m (height) was selected for the experiment. Two rows of “C”-shaped icons (sizes determined in Experiment 1) were placed at the furthest and second furthest points on the top, bottom, left, and right walls. The center-to-center distance between the two rows of icons started at 30 cm, with an increment of 20 cm per step, reaching a maximum distance of 190 cm. Participants wore a VR headset, and the experimental scenes were sequentially switched from the smallest distance until participants could accurately identify the orientation of the opening of the two rows of icons. The experimenter recorded their responses.
Results
Participants in the experiment unanimously considered the VR experimental scenes to be similar to real-world environments. Preliminary results indicated that when the icon radius was greater than or equal to 30 cm, and the distance between icons was greater than or equal to 130 cm, participants were able to accurately identify the orientation of the icon’s opening.

3.2. Main Experiment A: The Impact of Spatial Dimensions and Position on Visual Search Efficiency

Design of Materials
The experimental setup and equipment were identical to those used in the preliminary experiment. A priori power analysis was conducted using G*Power to determine the required sample size for each experimental group, which was calculated to be 11 participants. The university student population exhibits a relatively high homogeneity in physiological functions and cognitive abilities, as well as greater adaptability to VR technology, which, to some extent, helps ensure the reliability of the research findings. Based on this, the study recruited 26 university students (13 males and 13 females), aged between 22 and 25 years. All participants were right-handed, with normal or corrected vision, and had no color blindness or color vision deficiencies. None of the participants were informed of the purpose of the experiment prior to their participation, nor had they previously participated in similar experiments. Participants were compensated for their participation following the completion of the experiment.
In the experimental scenes, three types of spaces were selected with the following dimensions: height of 3 m, length of 12 m, and widths of 1.5 m, 3 m, and 6 m. The experiment utilized "C"-shaped icons, oriented in different directions, as the target stimuli, and "O"-shaped icons as the distractor stimuli. The luminance contrast between both the search target and the distractor and the background was set to 5:1.
Based on the preliminary experiment results, the radius of the experimental icons was set to 30 cm, and the distance between adjacent rows (or columns) of icons was set to 130 cm, as shown in Figure 1.
In each type of space, five wall panels (top, bottom, left, right, and center) were selected to place a varying number of icons based on the spatial dimensions. For the 1.5 m wide space, 21 icons were placed on the left and right walls (3 rows × 7 columns), 7 icons on the top and bottom walls (7 rows × 1 column), and 3 icons on the center wall (3 rows × 1 column). For the 3 m wide space, 21 icons were placed on the left and right walls (3 rows × 7 columns), 21 icons on the top and bottom walls (7 rows × 3 columns), and 9 icons on the center wall (3 rows × 3 columns). For the 6 m wide space, 21 icons were placed on the left and right walls (3 rows × 7 columns), 35 icons on the top and bottom walls (7 rows × 5 columns), and 15 icons on the center wall (3 rows × 5 columns).
For each wall in the different spaces, the “C”-shaped icon could appear at any position of the wall (or not appear at all), with all other positions occupied by “O”-shaped distractor icons. No icons were placed on any wall other than the one being tested. Each configuration was considered a separate experimental condition. The total number of experimental conditions for each space type was calculated by adding the total number of icons across all walls and the number of conditions where no target icon appeared. For the 1.5 m wide space, there were 65 experimental conditions; for the 3 m wide space, there were 100 experimental conditions; and for the 6 m wide space, there were 135 experimental conditions.
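The condition totals can be checked arithmetically. The sketch below recomputes the icon counts from the stated grid layouts; since the number of no-target conditions is not stated explicitly, it is derived here as the difference between each reported total and the icon count:

```python
# Icon grids per wall, (rows, cols), as described for each space width (m).
grids = {
    1.5: {"left": (3, 7), "right": (3, 7), "top": (7, 1),
          "bottom": (7, 1), "center": (3, 1)},
    3.0: {"left": (3, 7), "right": (3, 7), "top": (7, 3),
          "bottom": (7, 3), "center": (3, 3)},
    6.0: {"left": (3, 7), "right": (3, 7), "top": (7, 5),
          "bottom": (7, 5), "center": (3, 5)},
}
reported = {1.5: 65, 3.0: 100, 6.0: 135}  # condition totals from the text

icon_totals = {w: sum(r * c for r, c in walls.values())
               for w, walls in grids.items()}
# No-target conditions implied by the reported totals.
no_target = {w: reported[w] - icon_totals[w] for w in grids}
print(icon_totals)  # {1.5: 59, 3.0: 93, 6.0: 127}
print(no_target)    # {1.5: 6, 3.0: 7, 6.0: 8}
```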
Additionally, for each space type, a calibration condition was included: the center of the center wall was marked with a “+” symbol to help participants refocus their gaze on the center of the space.
Experimental Design
During the experiment, participants wore a VR headset, and the experimental conditions were alternated with calibration conditions. In each experimental condition, participants were required to locate the “C”-shaped target icon and report the orientation of its opening. If the participant failed to locate the target, they were instructed to provide a response indicating this. Afterwards, the experiment transitioned to the calibration condition. Once the participant’s gaze was directed back to the center of the experimental space, the experiment would return to the experimental condition, continuing this cycle until all experimental conditions for the specific space type were completed, as shown in Figure 2.
The experiment was divided into three groups based on space type. After each group of experiments, participants filled out a subjective questionnaire. Prior to the formal experiment, participants completed 30 practice trials to familiarize themselves with the procedure. The entire experiment was estimated to take approximately 1 h to complete.

3.3. Main Experiment B: The Impact of Luminance Contrast on Visual Search Efficiency

Design of Materials
The experimental scene selected was a space of 12 m × 3 m × 3 m. The luminance contrast between the distractors and the background was set at 5:1. The luminance contrast between the search target and the background was set at three different conditions: 3:1, 5:1, and 7:1. The experiments were divided into three groups based on the contrast of luminance: the low-luminance group (Cl), the same-luminance group (Cs), and the high-luminance group (Ch). Each group contained 100 experimental conditions. Data for the same luminance group (Cs) were obtained from Experiment A, while the low and high luminance groups required experimental measurements.
All other conditions remained the same as those in main Experiment A.
Experimental Design
All other conditions were the same as those in main Experiment A.

4. Results

4.1. Experiment A: The Impact of Spatial Dimensions and Position on Visual Search Efficiency

Experimental Results
A preliminary analysis indicated that the participants unanimously considered the experimental scenes in VR to be close to real-world environments, and the accuracy rate of target direction identification was 100% for all participants. Subsequently, visual target search times (the time interval between participants entering the scene and first locating the target) were exported using the ErgoLAB platform. During the experiment, one female participant abandoned the study due to dizziness, and one male participant had a significant amount of missing data during the data collection process. Therefore, their data were excluded from the final statistical analysis.
The final dataset included 12 male and 12 female participants, for a total of 24 valid participants. A total of 7179 data points were collected out of the expected 7200. The data were analyzed separately for male and female participants, and the small number of missing data points was filled in using the mean value of the corresponding experimental condition.
Data Analysis
To provide a clearer description of the data, the positions where the icons appeared in the space have been labeled, as shown in Figure 3.
According to the data analysis, the target search speed on the center wall was significantly faster than on the other walls. In this experiment, the target on the center wall was aligned perpendicularly to the observer’s line of sight, effectively creating a two-dimensional target search scenario. Given that prior research has extensively investigated two-dimensional target search tasks, this study does not repeat that analysis but instead focuses on the analysis of the remaining four walls.
We assumed that the average search time at each location follows a normal distribution and conducted a normality test on the sample data using the Shapiro–Wilk test. The results showed a significance index of p > 0.05, so the null hypothesis holds and the average search time data for each location follow a normal distribution. Furthermore, we assumed that gender and location would have no significant effect on search time. A multivariate analysis of variance revealed significance levels for gender and location of p < 0.05, so this null hypothesis does not hold: gender and location each have a significant impact on search time. We then conducted a multiple comparison analysis on the data for males and females at each location. The statistical results are shown in Figure 4.
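The testing sequence described here (a Shapiro–Wilk normality check followed by an analysis of variance) can be sketched with SciPy. The times below are synthetic and purely illustrative, and for brevity the sketch tests only the gender factor with a one-way ANOVA rather than the full multivariate design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical first-fixation times (s) for two gender groups.
male = rng.normal(1.8, 0.3, 60)
female = rng.normal(2.2, 0.3, 60)

# Shapiro-Wilk normality check: p > 0.05 means normality is not rejected.
w_m, p_m = stats.shapiro(male)

# One-way ANOVA on the gender factor (with two groups this is
# equivalent to an independent-samples t-test).
f, p = stats.f_oneway(male, female)
print(f"F = {f:.2f}, p = {p:.4g}")
```

With a real dataset, the location factor and the gender × location interaction would be added, e.g., via a two-way ANOVA, followed by post hoc multiple comparisons.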
As shown in Figure 4a,b, both the male and female participants exhibited the fastest search speeds in the second row from the left and right walls. The search speed in the first and third rows showed minimal differences. Overall, the search speed on the right wall was faster than that on the left. For both the male and female participants, the optimal search points in each row were located at the near-eye positions (1 or 2), and as the target’s depth increased, the search time gradually increased.
Figure 4c,d indicate that males had faster overall search speeds on the left and right walls compared to females. Additionally, as the depth of the target increased, the difficulty of target recognition also increased, and the male advantage in recognition became more pronounced. Notably, on the right wall, the search speed for females in the third row was noticeably faster than for males, whereas this was not the case at the same position on the left wall. It is inferred that this difference may stem from the differing eye heights of males and females: the third-row target location was closer to the female participants' line of sight, and the females showed a more pronounced search speed advantage on the right wall.
Figure 4e,f shows that, compared to the upper walls, both males and females were able to recognize targets on the bottom walls more quickly. The optimal search points for both genders on each wall were located at positions 1 or 2, with the search time increasing as the target depth increased. Males generally exhibited faster search speeds at each position compared to females.
As shown in Figure 5a,b, the patterns observed in the 3 m wide space were similar to those in the 1.5 m wide space. For both male and female participants, the fastest search speeds were observed in the second row from the center of the left and right walls. The overall search speed on the right wall was faster than that on the left. In the 3 m wide space, the optimal search points on the left and right walls were located at positions 2 or 3, shifted inward compared to the 1.5 m wide space. The search time increased gradually as the search area extended outward from the optimal search points.
Figure 5c,d show that the males had faster search speeds in the first and second rows of the left and right walls, while the females had faster search speeds in the third row. This difference is likely due to the fact that females’ line of sight is closer to the third-row position. Additionally, as the target depth and recognition difficulty increased, the search speed advantage for males became more pronounced.
Figure 5e,f show that for the top and bottom in the 3 m wide space, the overall search speed for targets on the bottom was faster than on the top. On the bottom, the fastest search speed was for targets in the middle column, with the right column slightly faster than the left column. For males, the optimal search points on the top and bottom were located at positions 2 or 3 in each column, while for females, the optimal search points were at positions 1 or 3. Compared to the 1.5 m wide space, as the distractor target load increased, the difficulty and search time for target recognition gradually increased.
Figure 5g,h reflect the search speeds of the males and females on the top and bottom in the 3 m wide space. The males generally had slightly faster search speeds than the females, although in some positions, the females were faster than the males.
Figure 6a,b show that in the 6 m wide space, the optimal search points for males on the left and right walls were located at positions 4 or 5, while for females, the optimal search point was at position 5. The point shifted inward compared to the 3 m wide space, while the other patterns remained consistent with those observed in the 3 m wide space. Figure 6c,d present results that followed the same pattern as seen in the 3 m wide space.
Figure 6e,f indicate that for the top and bottom in the 6 m wide space, targets closer to the center were recognized more quickly. The recognition speed order was middle column > (left adjacent column, right adjacent column) > (left column, right column). There were no significant differences in recognition speed between the top and bottom. The optimal search area for both walls was located at position 4, and this shifted inward compared to the 1.5 m and 3 m wide spaces. Compared to the 3 m wide space, as the number of distractor targets increased, the recognition speed of targets at the same position gradually decreased.
Figure 6g,h show that males had significantly faster recognition speeds in the 6 m-wide space than females. This difference was more pronounced than in the 1.5 m and 3 m-wide spaces, suggesting that as the target load and recognition difficulty increased, the male advantage in target search speed became more pronounced.
From the analysis above, it can be observed that for targets on the left and right walls, when the target load remained constant, as the space width increased, the optimal search area for the targets gradually moved inward. Moreover, as the search area extended from the optimal search zone toward the sides, the target search time increased. During this process, the target’s spatial position and the area within the field of view changed, which affected the target search time.
To explore the impact of these factors on the target search time, we conducted the following analysis:
As shown in Figure 7, a 3D Cartesian coordinate system was established with the center point of the central plane as the origin, dividing the space into four quadrants. In each quadrant, the spatial distance d from each icon's center to the origin was calculated, and the ratio of the icon's area to the area of the plane on which it was located was denoted as the field of view area ratio p. Correlation analyses were then performed between d, p, and the average search time. The significance values of the Spearman correlation tests were all below 0.05, indicating significant correlations between the variables. Fitted curves were then established with the spatial distance d and the field of view area ratio p as independent variables and the average search time as the dependent variable. The results of the fitted curves are shown in Table 1:
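The steps above (computing d and p per icon, testing their Spearman correlation with the average search time, then fitting curves of the form reported in Table 1) can be sketched as follows. The data, helper names, and icon records are hypothetical; the paper reports only the resulting significance levels and fitted curves.

```python
# Sketch of the Experiment A correlation-and-fitting step (hypothetical data).
import numpy as np
from scipy.stats import spearmanr

def spatial_distance(icon_center):
    """Euclidean distance d from an icon's center to the origin
    (the center point of the central plane)."""
    return float(np.linalg.norm(icon_center))

def view_area_ratio(icon_area, plane_area):
    """Field of view area ratio p: icon area / area of its host plane."""
    return icon_area / plane_area

# Hypothetical per-icon records:
# (center xyz in m, icon area in m^2, host plane area in m^2, mean search time in s)
icons = [
    ((1.0, 0.5,  2.0), 0.040, 36.0, 1.21),
    ((2.0, 1.0,  4.0), 0.030, 36.0, 1.35),
    ((3.0, 1.2,  6.0), 0.020, 36.0, 1.52),
    ((4.0, 1.4,  8.0), 0.015, 36.0, 1.70),
    ((5.0, 1.5, 10.0), 0.012, 36.0, 1.88),
]

d = [spatial_distance(c) for c, _, _, _ in icons]
p = [view_area_ratio(a, s) for _, a, s, _ in icons]
t = [rec[3] for rec in icons]

rho_d, p_d = spearmanr(d, t)   # distance vs. mean search time
rho_p, p_p = spearmanr(p, t)   # area ratio vs. mean search time
print(f"d vs t: rho={rho_d:.3f}")
print(f"p vs t: rho={rho_p:.3f}")

# Fitted curves in the same forms as Table 1: cubic in d, linear in p.
cubic_coef = np.polyfit(d, t, 3)
linear_coef = np.polyfit(p, t, 1)
```

With these toy records, search time rises monotonically with distance and falls monotonically with area ratio, so the Spearman coefficients come out +1 and −1 respectively; real data would of course be noisier.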
Subsequently, a Gray Relational Analysis (GRA) was conducted to calculate the relational coefficient for each independent variable. After normalization, the corresponding weight coefficients were obtained. The resulting influence function curves for each quadrant are shown in Table 2.
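A minimal GRA sketch of this weighting step is shown below, assuming mean-normalization and the conventional distinguishing coefficient ρ = 0.5; the paper does not list its exact GRA settings, so these, and the sample values, are illustrative choices.

```python
# Grey Relational Analysis sketch: grade each factor series against a
# reference series, then normalize the grades into weight coefficients.
import numpy as np

def gra_weights(reference, factors, rho=0.5):
    """Return one weight per factor series, normalized to sum to 1.

    reference : 1-D array (e.g., average search time per position)
    factors   : 2-D array, one row per candidate factor (e.g., d and p)
    rho       : distinguishing coefficient, conventionally 0.5
    """
    ref = np.asarray(reference, dtype=float)
    fac = np.asarray(factors, dtype=float)

    # Normalize each series by its mean so that units cancel.
    ref_n = ref / ref.mean()
    fac_n = fac / fac.mean(axis=1, keepdims=True)

    delta = np.abs(fac_n - ref_n)             # absolute difference sequences
    d_min, d_max = delta.min(), delta.max()

    xi = (d_min + rho * d_max) / (delta + rho * d_max)  # relational coefficients
    grades = xi.mean(axis=1)                  # grey relational grade per factor
    return grades / grades.sum()              # normalized weight coefficients

t = [1.21, 1.35, 1.52, 1.70, 1.88]                 # mean search time (s)
d = [2.29, 4.58, 6.81, 9.05, 11.28]                # spatial distance (m)
p = [0.0011, 0.00083, 0.00056, 0.00042, 0.00033]   # view area ratio

w = gra_weights(t, [d, p])
print({"d": round(float(w[0]), 3), "p": round(float(w[1]), 3)})
```

The two returned weights sum to 1 and can be used directly as the combination coefficients of the per-factor fitted curves.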

4.2. Experiment B: The Impact of Luminance Contrast on Visual Search Efficiency

Experimental Results
According to preliminary statistics, participants unanimously considered the experimental scenes in VR to be close to real-world environments, and the accuracy rate of target direction identification was 100% for all participants. The visual target search time (from entering the scene to the first detection of the target) for each participant was then exported through the ErgoLAB platform. A total of 12 male and 12 female participants took part in this experiment, from which 4800 data points were expected; in practice, 4750 were collected. After processing the data separately for males and females, the small amount of missing data was supplemented with the average values of the corresponding conditions.
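The mean-imputation step described above can be sketched with pandas as follows; the column names and values are hypothetical, and the grouping keys stand in for whatever condition variables define "the corresponding conditions".

```python
# Fill each missing search time with the mean of its matching
# (gender, position) cell, as a stand-in for the per-condition averages.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "gender":   ["M", "M", "M", "F", "F", "F"],
    "position": ["L1", "L1", "L1", "L1", "L1", "L1"],
    "search_t": [1.2, 1.4, np.nan, 1.6, np.nan, 1.8],
})

# Group by the experimental condition and replace NaN with the group mean.
df["search_t"] = (
    df.groupby(["gender", "position"])["search_t"]
      .transform(lambda s: s.fillna(s.mean()))
)
print(df)
```

Here the missing male value becomes the male-cell mean (1.3 s) and the missing female value the female-cell mean (1.7 s), so the two gender subsets are imputed independently.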
Data Analysis
Through statistical analysis, it was found that the differences in gender, upper/lower, and left/right orientations between the low-luminance group and high-luminance group were consistent with the reaction patterns observed in the same luminance groups in Experiment A. Therefore, these will not be reiterated here.
After conducting the Shapiro–Wilk test, the data were found to conform to the assumption of normality. On this basis, we further explored the impact of luminance on search time. A one-way analysis of variance was performed, yielding a significance of p = 0.01 (p < 0.05), so the null hypothesis was rejected: luminance has a significant effect on search time. We therefore conducted a detailed analysis based on the data presented in Figure 8.
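This test sequence (normality check per group, then one-way ANOVA across the luminance groups) can be sketched with scipy; the three samples below are synthetic draws, not the study's data.

```python
# Shapiro–Wilk normality check and one-way ANOVA on search time
# across the three luminance groups (Cs, Cl, Ch), on synthetic data.
import numpy as np
from scipy.stats import shapiro, f_oneway

rng = np.random.default_rng(0)
cs = rng.normal(1.6, 0.15, 30)   # equal-luminance group
cl = rng.normal(1.5, 0.15, 30)   # target darker than distractors
ch = rng.normal(1.3, 0.15, 30)   # target brighter than distractors

# Shapiro–Wilk per group: p > 0.05 means normality is not rejected.
for name, g in [("Cs", cs), ("Cl", cl), ("Ch", ch)]:
    w, p_sw = shapiro(g)
    print(f"{name}: W={w:.3f}, p={p_sw:.3f}")

# One-way ANOVA: p < 0.05 indicates luminance affects search time.
f, p = f_oneway(cs, cl, ch)
print(f"ANOVA: F={f:.2f}, p={p:.4f}")
```

With group means this far apart relative to their spread, the ANOVA rejects the null hypothesis, mirroring the p = 0.01 result reported in the text.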
From the above data, it can be observed that, compared to the condition in which the search target and the distractors had the same luminance, the search speed increased when the target's luminance was either lower or higher than that of the distractors. As the depth of the target increased, however, the effect of luminance contrast on search speed gradually diminished; for deep targets, the search speed in the Cl group even fell below that in the Cs group. Luminance variation also affected the optimal search area on each plane. For the left and right planes, compared to the Cs group, the optimal search area in the Ch group shifted to the outermost region (closer to the eye), and the average search time increased linearly from the outer to the inner area. In contrast, the optimal search area for the Cl group shifted inward; unlike the Cs group, where the search time followed a concave trend from outer to inner areas, the Cl group showed a search time that first increased, then decreased, and then increased again. For the upper and lower planes, compared to the Cs group, the optimal search area in the Ch group likewise shifted to the outermost region, with the average search time increasing linearly from outer to inner areas, whereas the Cl group showed no change in the optimal search area and a search time trend similar to that of the Cs group.
By comparison, it was found that the average search time for the target significantly varied under different luminance contrast conditions. To explore the specific relationship between these factors, we treated spatial distance d, field of view area ratio p, and luminance contrast as independent variables, and average search time as the dependent variable. Using the Cs group as the baseline, a comparative analysis was conducted between the Cl and Ch groups.
Similar to Experiment A, the space was divided into four quadrants. Considering the impact of icon orientation, the positions of the icons within each quadrant were further subdivided into left (right) and upper (lower) sections, as shown in Figure 7b. Correlation testing showed that, except for the icons in the lower sections of the third and fourth quadrants, where the significance of the luminance factor against the average search time exceeded 0.05, all other target positions showed a significant correlation between the independent and dependent variables. Using the Cs group as the reference, with luminance contrast as an interaction term, regression equations between the variables were derived, as shown in Table 3.
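A regression of this shape, with the Cs group as the reference level and luminance contrast interacting with d and p, can be sketched using statsmodels formulas. The data, variable names, and coefficients below are illustrative; the study's actual model specifications appear in its Table 3.

```python
# Treatment-coded regression with luminance contrast as an interaction term,
# fitted on synthetic data where contrast shifts the slope on d.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "d":     rng.uniform(1, 12, n),               # spatial distance (m)
    "p":     rng.uniform(0.0003, 0.002, n),       # view area ratio
    "group": rng.choice(["Cs", "Cl", "Ch"], n),   # luminance contrast group
})
# Synthetic response: each contrast group changes the weight on d.
slope = df["group"].map({"Cs": 0.08, "Cl": 0.06, "Ch": 0.04})
df["t"] = 1.0 + slope * df["d"] - 150 * df["p"] + rng.normal(0, 0.05, n)

# Cs is the reference level; the group:d and group:p interaction terms
# capture how luminance contrast adjusts each factor's influence.
model = smf.ols("t ~ C(group, Treatment('Cs')) * (d + p)", data=df).fit()
print(model.params.round(3))
```

In the fitted output, the `[T.Ch]:d` interaction coefficient comes out negative, reflecting that the high-contrast group's search time grows more slowly with distance than the baseline group's, which is the kind of weight adjustment the text describes.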
From the above results, it can be seen that, for targets at the same position, both higher and lower target luminance increased the search speed, with the improvement more pronounced for high luminance than for low luminance. For targets at different positions, luminance variation adjusted the influence of the field of view area ratio and the spatial distance on the average search time, changing the weight of each factor; this adjustment trended toward improved search efficiency.

5. Discussion

This study explores the factors influencing visual search efficiency in three-dimensional spaces, with a focus on analyzing the effects of spatial dimensions, target positions, luminance, and gender differences on visual search performance. Using virtual reality technology and eye-tracking devices, we thoroughly examined the differences in search efficiency for targets at different positions under various spatial dimensions and luminance conditions, and investigated the impact of gender on visual search tasks. The results indicate that spatial dimensions, position, and luminance significantly affect visual search efficiency, and that gender differences further amplify this effect under certain conditions.

5.1. The Influence of Spatial Dimensions and Position on Visual Search Efficiency

The results indicate that the position of the target significantly affects visual search efficiency. Among the four surfaces (top, bottom, left, and right) in the study, targets located on the right and bottom surfaces exhibited higher recognition speeds in three-dimensional space, with both male and female participants demonstrating stronger search abilities in these positions. These findings align with the spatial attention theory proposed by Zhang et al. [41] and the optimal field of view range summarized by Jin Lianghai [40], both of which suggest that the lower and right areas of a scene tend to receive greater attention, making targets in these regions easier to locate. Compared to the right-field and lower-field advantage zones identified in two-dimensional visual search studies [2], this study similarly found that the dominant search areas in three-dimensional space are also located in the right and lower visual fields, consistent with conclusions drawn from two-dimensional studies. These findings suggest that the positional advantage effect in visual search transcends spatial dimensions. This cross-dimensional advantage effect may be associated with the global processing mechanism of the visual system, wherein the system prioritizes integrating information from regions with positional advantages [54]. Future research could further explore the specific neural mechanisms underlying this effect and investigate how experimental designs can better reveal the physiological basis behind this phenomenon.
Further analysis revealed that as the spatial width increased, the optimal search area for the target gradually shifted inward. Particularly in the 6-m wide space, the optimal search area was almost entirely located at the center. This suggests that changes in spatial dimensions significantly alter the relationship between a target’s visual characteristics—such as its distance from the observer and the area it occupies in the visual field—and search efficiency. In two-dimensional studies, it has been demonstrated that a decrease in the visual field area of a target or an increase in its eccentricity reduces the search speed, while an increase in visual area or a decrease in eccentricity enhances the search speed [32,33,34]. In the present study, both factors were found to jointly influence search speed. The findings indicate that, for the left and right surfaces in each spatial configuration, as the target moves inward, its distance from the visual origin decreases, and the proportion of the target’s area in the visual field gradually diminishes, resulting in increased search time. This suggests that within this range, the proportion of the target’s area in the visual field has a more pronounced effect on search time. Conversely, as the target moves outward, its area increases, but its distance from the visual origin also grows, leading to longer search times. This indicates that in this range, distance becomes the more dominant factor affecting search time. To further investigate this phenomenon, a spatial coordinate system was established to analyze the relationship between spatial distance, visual field area proportion, and search time. The results show that the weights of the influences of spatial distance and visual field area on search speed vary across different spatial positions. 
These findings provide new insights for future research on factors affecting spatial visual search and offer a theoretical foundation for practical applications of visual search in spatial contexts, such as wayfinding and navigation.

5.2. The Impact of Luminance on Visual Search Efficiency

This study also examined the impact of luminance on visual search efficiency. The results indicated that as the difference in luminance between the target and distractors increased, the search time decreased significantly. Notably, when the target’s luminance exceeded that of the distractors, the search efficiency improved substantially. Previous research in two-dimensional planes has demonstrated that search speed increases with greater luminance contrast between the target and the background [35,55]. Similarly, this study found that in three-dimensional spaces, the presence of luminance differences between the target and distractors enhanced the search speed, with the high-luminance group showing a more pronounced improvement than the low-luminance group. This finding aligns with conclusions from two-dimensional studies, where higher-luminance targets are associated with faster search speeds. However, this study also revealed that as the target depth increases and the target’s visual field area decreases, the influence of luminance differences on search speed diminishes. Specifically, in target regions farther from the observer, the search speed of the low-luminance group in certain positions fell below that of the equal-luminance group, while the high-luminance group’s speed advantage gradually declined. These results suggest that visual search is not solely influenced by the direct effect of target luminance but is also closely tied to the target’s spatial position, viewing distance, and the size of its visual field area.
Further analysis of the impact of brightness contrast on the location of optimal search regions on each surface revealed that a higher luminance contrast shifts the optimal search region outward in space, while a lower luminance contrast causes it to move inward. This phenomenon indicates that luminance differences not only enhance search efficiency but also influence the position of optimal search regions. This finding suggests that in designing spatial search environments, luminance can be strategically adjusted to position target objects in areas that are more easily recognizable by the human eye, thereby improving overall search performance.

5.3. The Impact of Gender Differences on Visual Search Efficiency

Gender differences were a key focus of this study, and the experimental results showed that men generally demonstrated a higher visual search efficiency than women in three-dimensional space. This advantage was particularly pronounced under high-load and difficult target search conditions. Previous studies by Ma Xiaohui [25] and Liao [56] have shown that men have superior cognitive and recognition abilities for targets in spatial scenes. The findings of this study are consistent with these conclusions, suggesting that men typically exhibit a greater search efficiency in spatial tasks. Building on prior analyses of gender differences, we speculate that these differences may be related to variations in visual attention allocation strategies. Men are likely to adopt spatial strategies during the visual search process, while women may rely more on target feature information [57,58]. This distinction in strategy use could explain the observed disparities in search efficiency between genders.
However, in certain cases, women demonstrated faster target search speeds than men in specific regions, particularly in areas closer to the observer or located in lower spatial positions. Previous studies have shown that attention distribution tends to decrease gradually from the center of the visual field outward [34,59]. Based on this, we speculate that this phenomenon may be related to differences in gaze height and visual angle preferences between men and women. Women typically have a lower average eye height than men, which might enable them to perform searches more quickly in regions aligned with their gaze height. Therefore, the impact of gender differences is not only reflected in overall efficiency but is also influenced by the specific position of the target and the spatial layout. This finding offers a new perspective for future design optimizations that consider gender differences.

6. Conclusions

This study systematically explored the factors influencing visual search efficiency in fundamental and widely applicable spatial environments using virtual reality technology and eye-tracking devices. It revealed the significant roles of spatial dimensions, target position, brightness contrast, and gender differences in visual search. The key findings are as follows:
① In three-dimensional space, individuals exhibit faster search speeds on the right and bottom surfaces compared to the left and top surfaces.
② Target search speed is jointly influenced by spatial distance and visual field area. Brightness differences between the target and distractors enhance search speed to varying degrees.
③ The optimal search regions on the left and right surfaces shift inward as spatial width increases. Changes in target brightness can also alter the location of these optimal search regions.
④ Men generally exhibit higher search speeds than women in spatial tasks. However, women outperform men in certain specific regions, particularly in areas closer to the observer or at lower spatial positions.
The findings of this study establish a connection between visual search theories in two-dimensional planes and three-dimensional spaces, providing a theoretical foundation for optimizing visual search tasks in three-dimensional environments. This research offers valuable references for the design and practical applications of various scenarios, such as wayfinding signage placement and spatial visual search training.
This study has certain limitations. First, although the experimental environment provided an immersive experience with the aid of virtual reality technology, individual differences in perceptual abilities within virtual spaces may exist. Second, the participants in this study were limited to university students. Future research should broaden the participant pool to include a wider range of age groups, such as children and older adults. Additionally, increasing the sample size would further enhance the statistical power of the findings.

Author Contributions

Conceptualization, W.W. and M.Z.; methodology, W.W. and Z.W.; software, W.W.; validation, W.W., M.Z. and Z.W.; formal analysis, Q.F.; investigation, W.W.; resources, M.Z. and Q.F.; data curation, W.W.; writing—original draft preparation, W.W.; writing—review and editing, M.Z. and Z.W.; visualization, Q.F.; supervision, M.Z.; project administration, M.Z. and Q.F.; funding acquisition, M.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data supporting the results reported in this study are available from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ashman, A.F. The Relationship Between Planning and Simultaneous and Successive Synthesis; University of Alberta: Edmonton, AB, Canada, 1978. [Google Scholar]
  2. Ren, Y. The Spatial Effect in the Course of Visual Search. Master’s Thesis, Liaoning Normal University, Dalian, China, 2006. [Google Scholar]
  3. Wolfe, J.M.; Horowitz, T.S. Five Factors That Guide Attention in Visual Search. Nat. Hum. Behav. 2017, 1, 58. [Google Scholar] [CrossRef] [PubMed]
  4. Carr, K.T. An investigation into the effects of a simulated thermal cueing aid upon air-to-ground search performance. Vis. Search 1988, 361–370. [Google Scholar] [CrossRef]
  5. Chan, A.H.S.; Yu, R. Validating the Random Search Model for Two Targets of Different Difficulty. Percept. Mot. Skills 2010, 110, 167–180. [Google Scholar] [CrossRef] [PubMed]
  6. Treisman, A.M.; Gelade, G. A Feature-Integration Theory of Attention. Cognit. Psychol. 1980, 12, 97–136. [Google Scholar] [CrossRef] [PubMed]
  7. Duncan, J.; Humphreys, G. Beyond the Search Surface: Visual Search and Attentional Engagement. J. Exp. Psychol. Hum. Percept. Perform. 1992, 18, 578–588. [Google Scholar] [CrossRef] [PubMed]
  8. Wolfe, J.M.; Cave, K.R.; Franzel, S.L. Guided Search: An Alternative to the Feature Integration Model for Visual Search. J. Exp. Psychol. Hum. Percept. Perform. 1989, 15, 419–433. [Google Scholar] [CrossRef] [PubMed]
  9. Buetti, S.; Cronin, D.A.; Madison, A.M.; Wang, Z.; Lleras, A. Towards a Better Understanding of Parallel Visual Processing in Human Vision: Evidence for Exhaustive Analysis of Visual Information. J. Exp. Psychol. Gen. 2016, 145, 672–707. [Google Scholar] [CrossRef] [PubMed]
  10. Wang, A.; Lu, J.; Liu, X. Effect Mechanism of Attention Allocation Strategy on Task Performance and Visual Behavior of Interface. Comput. Syst. Appl. 2022, 31, 10–20. [Google Scholar]
  11. Varjo, J.; Hongisto, V.; Haapakangas, A.; Maula, H.; Koskela, H.; Hyönä, J. Simultaneous Effects of Irrelevant Speech, Temperature and Ventilation Rate on Performance and Satisfaction in Open-Plan Offices. J. Environ. Psychol. 2015, 44, 16–33. [Google Scholar] [CrossRef]
  12. Pierrette, M.; Parizet, E.; Chevret, P.; Chatillon, J. Noise Effect on Comfort in Open-Space Offices: Development of an Assessment Questionnaire. Ergonomics 2015, 58, 96–106. [Google Scholar] [CrossRef]
  13. Peng, Y. Theory of Architectural Spatial Composition; China Architecture & Building Press: Beijing, China, 2008. [Google Scholar]
  14. Cheng, D. Architecture Form Space and Order: Tianjin, China; John Wiley & Sons: Hoboken, NJ, USA, 2008. [Google Scholar]
  15. Kobayashi, T.; Jadram, N.; Sugaya, M. Evaluation of the Effect of Three-Dimensional Shape in VR Space on Emotion Using Physiological Indexes. In Virtual, Augmented and Mixed Reality; Springer: Berlin/Heidelberg, Germany, 2024; Volume 14706, pp. 213–223. [Google Scholar]
  16. Li, H.; Yi, P.; Hong, Y.; Wang, L. Research on search behavior of evacuation signs under the influence of interference of different complexity. J. Saf. Environ. 2024, 24, 2348–2356. [Google Scholar]
  17. Mingming, M.; Jianhua, G.; Wenhang, L.; Lin, H.; Xiaohui, M.; Yabin, L. Layout Optimization of the Directional Emergency Evacuation Signs Based on Virtual Reality Eye-Tracking Experiment. Geomat. Inf. Sci. Wuhan Univ. 2020, 45, 1386–1394. [Google Scholar]
  18. Schrom-Feiertag, H.; Settgast, V.; Seer, S. Evaluation of Indoor Guidance Systems Using Eye Tracking in an Immersive Virtual Environment. Spat. Cogn. Comput. 2017, 17, 163–183. [Google Scholar] [CrossRef]
  19. López, G.; De Oña, J.; Garach, L.; Baena, L. Influence of Deficiencies in Traffic Control Devices in Crashes on Two-Lane Rural Roads. Accid. Anal. Prev. 2016, 96, 130–139. [Google Scholar] [CrossRef]
  20. Wakide, I. Standardized guide signs at Yokohama station. Jpn. Railw. Eng. 2005, 45, 16–18. [Google Scholar]
  21. Matsubara, H. A User-Specific Passenger Guidance System Aimed at Universal Design. Jpn. Railw. Eng. 2005, 45, 1–3. [Google Scholar]
  22. Lei, B.; Xu, J.; Li, M.; Li, H.; Li, J.; Cao, Z.; Hao, Y.; Zhang, Y. Enhancing Role of Guiding Signs Setting in Metro Stations with Incorporation of Microscopic Behavior of Pedestrians. Sustainability 2019, 11, 6109. [Google Scholar] [CrossRef]
  23. Vilar, E.; Rebelo, F.; Noriega, P.; Teles, J.; Mayhorn, C. The Influence of Environmental Features on Route Selection in an Emergency Situation. Appl. Ergon. 2013, 44, 618–627. [Google Scholar] [CrossRef]
  24. Vilar, E.; Rebelo, F.; Noriega, P.; Teles, J.; Mayhorn, C. Signage Versus Environmental Affordances: Is the Explicit Information Strong Enough to Guide Human Behavior During a Wayfinding Task? Hum. Factors Ergon. Manuf. Serv. Ind. 2015, 25, 439–452. [Google Scholar] [CrossRef]
  25. Ma, X.; Zhou, J.; Gong, J.; Huang, L.; Li, W.; Zou, Y. VR Eye-Tracking Perception Experiment and Layout Evaluation for Indoor Emergency Evacuation Signs. J. Geo-Inf. Sci. 2019, 21, 1170–1182. [Google Scholar]
  26. GB 55031-2022; General Code for Civil Building. Ministry of Housing and Urban Rural Development of the People’s Republic of China: Beijing, China, 2022.
  27. GB 55037-2022; General Specification for Building Fire Protection. Ministry of Housing and Urban Rural Development of the People’s Republic of China: Beijing, China, 2022.
  28. Dijkstra, J.J.; Chen, Q.Q.; De Vries, B.B.; Jessurun, A.J. Measuring Individual’s Egress Preference in Wayfinding through Virtual Navigation Experiments. Pedestr. Evacuation Dyn. 2014, 2012, 371–383. [Google Scholar]
  29. Zhao, H. Discussion on Safety Evacuation Design of Commercial Buildings. Fire Tech. Prod. Inf. 2005, 9–11. [Google Scholar]
  30. Duncan, J.; Humphreys, G.W. Visual Search and Stimulus Similarity. Psychol. Rev. 1989, 96, 433–458. [Google Scholar] [CrossRef] [PubMed]
  31. Previc, F.H.; Naegele, P.D. Target-Tilt and Vertical-Hemifield Asymmetries in Free-Scan Search for 3-D Targets. Percept. Psychophys. 2001, 63, 445–457. [Google Scholar] [CrossRef]
  32. Castiello, U.; Umiltà, C. Size of the Attentional Focus and Efficiency of Processing. Acta Psychol. 1990, 73, 195–209. [Google Scholar] [CrossRef] [PubMed]
  33. Eriksen, C.W.; St James, J.D. Visual Attention within and around the Field of Focal Attention: A Zoom Lens Model. Percept. Psychophys. 1986, 40, 225–240. [Google Scholar] [CrossRef]
  34. Carrasco, M.; Chang, I. The Interaction of Objective and Subjective Organizations in a Localization Search Task. Percept. Psychophys. 1995, 57, 1134–1150. [Google Scholar] [CrossRef]
  35. Shieh, K.-K.; Lin, C.-C. Effects of Screen Type, Ambient Illumination, and Color Combination on VDT Visual Performance and Subjective Preference. Int. J. Ind. Ergon. 2000, 26, 527–536. [Google Scholar] [CrossRef]
  36. Lin, C.-C. Effects of Screen Luminance Combination and Text Color on Visual Performance with TFT-LCD. Int. J. Ind. Ergon. 2005, 35, 229–235. [Google Scholar] [CrossRef]
  37. Wang, A.-H.; Chen, M.-T. Effects of Polarity and Luminance Contrast on Visual Performance and VDT Display Quality. Int. J. Ind. Ergon. 2000, 25, 415–421. [Google Scholar] [CrossRef]
  38. Guan, X.; Zhao, H.; Hu, S. Measurement of Spatial Luminance Influence on Visual Clarity of Road Lighting. China Illum. Eng. J. 2010, 21, 17–24. [Google Scholar]
  39. Ji, Z.; Shao, H.; Lin, Y. The Effect of Adaptation Time, Adaptation Luminance and Glare on Contrast of Human's Eye. China Illum. Eng. J. 2006, 15, 1–4. [Google Scholar]
  40. Jin, L.; Min, L.; Chen, S.; Zheng, X.; Jiang, X.; Chen, Y. Visual Space Model and Eye Motion Verification of Visual Guiding System in Public Space. J. Eng. Stud. 2017, 9, 430–438. [Google Scholar] [CrossRef]
  41. Zhang, Y.; Zheng, X.J.; Hong, W.; Mou, X.Q. A Comparison Study of Stationary and Mobile Eye Tracking on EXITs Design in a Wayfinding System; IEEE: Piscataway, NJ, USA, 2015; pp. 649–653. [Google Scholar]
  42. Hedges, L.V.; Nowell, A. Sex Differences in Mental Test Scores, Variability, and Numbers of High-Scoring Individuals. Science 1995, 269, 41–45. [Google Scholar] [CrossRef] [PubMed]
  43. Liu, X.; Jin, T.; Wu, X.; Gu, H.; Zhao, D. Effect of Emergency Icon Presentation and Brightness Contrast on Cognitive Performance. Sci. Technol. Eng. 2024, 24, 3291–3297. [Google Scholar]
  44. Jin, T.; Ming, C.; Zhou, S.; He, J. Impact Mechanism of Icon Layout on Visual Search Performance. J. Northeast. Univ. 2021, 42, 1579–1584. [Google Scholar]
  45. Wang, Y. Feature-Oriented Visual Search ERP Research. Master’s Thesis, Southeast University, Nanjing, China, 2022. [Google Scholar]
  46. Teng, S. Research on Visual Saliency of Guide Signs in Underground Commercial Street Based on Eye Movement Experiment. Master’s Thesis, China University of Mining and Technology, Beijing, China, 2022. [Google Scholar]
  47. Chen, Y.; Cui, Z.; Hao, L. Virtual Reality in Lighting Research: Comparing Physical and Virtual Lighting Environments. Light. Res. Technol. 2019, 51, 820–837. [Google Scholar] [CrossRef]
  48. Zong, Z. VR simulation research on colour and light environment design of cruise cabin. Master’s Thesis, Harbin Engineering University, Harbin, China, 2023. [Google Scholar]
  49. Kobes, M.; Helsloot, I.; De Vries, B.; Post, J.G. Building Safety and Human Behaviour in Fire: A Literature Review. Fire Saf. J. 2010, 45, 1–11. [Google Scholar] [CrossRef]
  50. Bode, N.W.F.; Codling, E.A. Human Exit Route Choice in Virtual Crowd Evacuations. Anim. Behav. 2013, 86, 347–358. [Google Scholar] [CrossRef]
  51. Tang, C.-H.; Wu, W.-T.; Lin, C.-Y. Using Virtual Reality to Determine How Emergency Signs Facilitate Way-Finding. Appl. Ergon. 2009, 40, 722–730. [Google Scholar] [CrossRef]
  52. Kobes, M.; Helsloot, I.; de Vries, B. Exit choice, (pre-)movement time and (pre-)evacuation behaviour in hotel fire evacuation—Behavioural analysis and validation of the use of serious gaming in experimental research. Procedia Eng. 2010, 3, 37–51. [Google Scholar] [CrossRef]
  53. GB-T 10000-2023; The Chinese Adult Body Size. The State Bureau of Quality and Technical Supervision: Beijing, China, 2023.
  54. Yang, X. The Processing of Configural Superiority Effect in the Ventral Visual Pathway: A fMRI and TMS Study. Master’s Thesis, University of Electronic Science and Technology of China, Chengdu, China, 2020. [Google Scholar]
  55. Ojanpää, H.; Näsänen, R. Effects of Luminance and Colour Contrast on the Search of Information on Display Devices. Displays 2003, 24, 167–178. [Google Scholar] [CrossRef]
  56. Liao, H.; Dong, W. An Exploratory Study Investigating Gender Effects on Using 3D Maps for Spatial Orientation in Wayfinding. ISPRS Int. J. Geo-Inf. 2017, 6, 60. [Google Scholar] [CrossRef]
  57. Lawton, C.A. Gender differences in way-finding strategies: Relationship to spatial ability and spatial anxiety. Sex Roles 1994, 30, 765–779. [Google Scholar] [CrossRef]
  58. Huo, X. Gender Differences of Visual Search in Scene Perception. Master’s Thesis, Ningxia University, Yinchuan, China, 2014. [Google Scholar]
  59. Chapman, A.F.; Störmer, V.S. Feature Similarity Is Non-Linearly Related to Attentional Selection: Evidence from Visual Search and Sustained Attention Tasks. J. Vis. 2022, 22, 4. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Icon configuration. (a) C-shaped icon (Lanchester ring) size. (b) Adjacent icon spacing.
Figure 2. Experimental process diagram.
Figure 3. Position labeling diagram (using a 12 m × 3 m × 3 m space as an example).
Figure 4. Data statistics for a 12 m × 1.5 m × 3 m Space. (a) Male left–right comparison in a 1.5 m space. (b) Female left–right comparison in a 1.5 m space. (c) Male–female left side comparison in a 1.5 m space. (d) Male–female right side comparison in a 1.5 m space. (e) Up–down comparison chart in a 1.5 m space. (f) Male–female up–down comparison in a 1.5 m space.
Figure 5. Data statistics for a 12 m × 3 m × 3 m space. (a) Male left–right comparison in a 3 m space. (b) Female left–right comparison in a 3 m space. (c) Male–female left side comparison in a 3 m space. (d) Male–female right side comparison in a 3 m space. (e) Male up–down comparison chart in a 3 m space. (f) Female up–down comparison in a 3 m space. (g) Male–female upward comparison in a 3 m space. (h) Male–female downward comparison in a 3 m space.
Figure 6. Data statistics for a 12 m × 6 m × 3 m space. (a) Male left–right comparison in a 6 m space. (b) Female left–right comparison in a 6 m space. (c) Male–female left side comparison in a 6 m space. (d) Male–female right side comparison in a 6 m space. (e) Male up–down comparison in a 6 m space. (f) Female up–down comparison in a 6 m space. (g) Male–female upward comparison in a 6 m space. (h) Male–female downward comparison in a 6 m space.
Figure 7. Three-dimensional Cartesian coordinate system. (a) Three-dimensional coordinate system. (b) Spatial quadrant division.
Figure 8. Comparison of average search time. (a) Left side search time comparison chart. (b) Right side search time comparison chart. (c) Top side search time comparison chart. (d) Bottom side search time comparison chart.
Table 1. Fitted curves for each quadrant.
Quadrant | Fitted curve vs. field-of-view area ratio p | Fitted curve vs. spatial distance d
First quadrant | t = 0.0002p + 1.1973 (R² = 0.516) | t = 3.3316 − 0.8538d + 0.1096d² − 0.0042d³ (R² = 0.622)
Second quadrant | t = 0.0004p + 1.2642 (R² = 0.598) | t = 4.0398 − 0.9416d + 0.1004d² − 0.0029d³ (R² = 0.631)
Third quadrant | t = 0.0004p + 1.2520 (R² = 0.601) | t = 3.9696 − 0.9051d + 0.0932d² − 0.0025d³ (R² = 0.659)
Fourth quadrant | t = 0.0002p + 1.1884 (R² = 0.491) | t = 3.3780 − 0.8607d + 0.1078d² − 0.0039d³ (R² = 0.614)
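The Table 1 distance fits can be evaluated programmatically. The sketch below is illustrative only (not the authors' code); the coefficient names are mine, and d is assumed to be the spatial distance in metres, consistent with the 12 m experimental space.

```python
# Illustrative evaluation of the Table 1 cubic fits t(d) for each quadrant.
# Coefficients are stored as (c0, c1, c2, c3) with t = c0 - c1*d + c2*d^2 - c3*d^3.
CUBIC_FITS = {
    "first":  (3.3316, 0.8538, 0.1096, 0.0042),
    "second": (4.0398, 0.9416, 0.1004, 0.0029),
    "third":  (3.9696, 0.9051, 0.0932, 0.0025),
    "fourth": (3.3780, 0.8607, 0.1078, 0.0039),
}

def search_time(quadrant: str, d: float) -> float:
    """Predicted search time (s) at spatial distance d for one quadrant."""
    c0, c1, c2, c3 = CUBIC_FITS[quadrant]
    return c0 - c1 * d + c2 * d**2 - c3 * d**3
```

For example, `search_time("first", 3.0)` evaluates the first-quadrant fit at a 3 m target depth; at d = 0 each fit reduces to its constant term.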
Table 2. Final impact functions for each quadrant.
Quadrant | GRA (d, p) | Normalized weights (d, p) | Final influence function
First quadrant | 0.826, 0.727 | 0.53, 0.47 | F(t) = 0.0001p + 2.3284 − 0.4525d + 0.0581d² − 0.0022d³
Second quadrant | 0.814, 0.728 | 0.53, 0.47 | F(t) = 0.0002p + 2.7353 − 0.4990d + 0.0532d² − 0.0015d³
Third quadrant | 0.814, 0.727 | 0.53, 0.47 | F(t) = 0.0002p + 2.6923 − 0.4797d + 0.0494d² − 0.0013d³
Fourth quadrant | 0.826, 0.727 | 0.53, 0.47 | F(t) = 0.0001p + 2.3488 − 0.4562d + 0.0571d² − 0.0021d³
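Each final influence function in Table 2 is consistent with a weighted sum of the two single-variable fits in Table 1, using the normalized GRA weights (0.53 for d, 0.47 for p). A quick numerical check, written as an illustrative sketch rather than the authors' code:

```python
# Verify that the constant term of each Table 2 function equals the
# GRA-weighted combination of the Table 1 fits: F = w_p * t(p) + w_d * t(d).
W_D, W_P = 0.53, 0.47

# Table 1 coefficients per quadrant: ((p-fit slope, p-fit intercept),
# (cubic c0, c1, c2, c3) with t = c0 - c1*d + c2*d^2 - c3*d^3).
TABLE1 = {
    "first":  ((0.0002, 1.1973), (3.3316, 0.8538, 0.1096, 0.0042)),
    "second": ((0.0004, 1.2642), (4.0398, 0.9416, 0.1004, 0.0029)),
    "third":  ((0.0004, 1.2520), (3.9696, 0.9051, 0.0932, 0.0025)),
    "fourth": ((0.0002, 1.1884), (3.3780, 0.8607, 0.1078, 0.0039)),
}

# Constant terms of the final functions as printed in Table 2.
TABLE2_CONST = {"first": 2.3284, "second": 2.7353, "third": 2.6923, "fourth": 2.3488}

def combined_constant(quadrant: str) -> float:
    """Constant term of the weighted influence function for one quadrant."""
    (_, const_p), (c0, _, _, _) = TABLE1[quadrant]
    return W_P * const_p + W_D * c0

# Each combination matches Table 2 to within rounding.
for q, expected in TABLE2_CONST.items():
    assert abs(combined_constant(q) - expected) < 1e-3
```

The same weighting reproduces the other coefficients, e.g. 0.53 × 0.8538 ≈ 0.4525 for the first quadrant's linear d term.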
Table 3. Regression equations for positions in each quadrant.
Quadrant | Position | Group | Regression equation
First quadrant | Right | Cs | t = 96.584p − 0.223d + 2.310
First quadrant | Right | Cl | t = 96.584p − 0.223d + 2.159
First quadrant | Right | Ch | t = 96.584p − 0.223d + 2.043
First quadrant | Upper | Cs | t = 91.641p − 0.278d + 2.872
First quadrant | Upper | Cl | t = 91.641p − 0.278d + 2.669
First quadrant | Upper | Ch | t = 91.641p − 0.278d + 2.571
Second quadrant | Left | Cs | t = 98.429p − 0.229d + 2.470
Second quadrant | Left | Cl | t = 98.429p − 0.229d + 2.337
Second quadrant | Left | Ch | t = 98.429p − 0.229d + 2.246
Second quadrant | Upper | Cs | t = 74.916p − 0.252d + 2.900
Second quadrant | Upper | Cl | t = 74.916p − 0.252d + 2.676
Second quadrant | Upper | Ch | t = 74.916p − 0.252d + 2.547
Third quadrant | Left | Cs | t = 92.127p − 0.216d + 2.422
Third quadrant | Left | Cl | t = 92.127p − 0.216d + 2.290
Third quadrant | Left | Ch | t = 92.127p − 0.216d + 2.189
Third quadrant | Lower | Cs | t = 37.735p − 0.167d + 2.207
Third quadrant | Lower | Cl | —
Third quadrant | Lower | Ch | t = 37.735p − 0.167d + 1.999
Fourth quadrant | Right | Cs | t = 63.451p − 0.190d + 2.226
Fourth quadrant | Right | Cl | t = 63.451p − 0.190d + 2.074
Fourth quadrant | Right | Ch | t = 63.451p − 0.190d + 1.996
Fourth quadrant | Lower | Cs | t = 96.584p − 0.223d + 2.310
Fourth quadrant | Lower | Cl | —
Fourth quadrant | Lower | Ch | t = 96.584p − 0.223d + 2.043
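Within each position in Table 3, the slope terms on p and d are shared across luminance groups and only the intercept changes, so the table can be encoded compactly. The sketch below is mine, not the authors' implementation: the variable names are assumptions, the linear form t = ap − bd + c follows the reconstruction used above, and the dash entries (no fitted Cl model) are represented as missing keys.

```python
# Table 3 as a lookup: (quadrant, position) -> (a, b, per-group intercepts),
# with t = a*p - b*d + c. Groups with no fitted equation are simply omitted.
REGRESSIONS = {
    ("first", "right"):  (96.584, 0.223, {"Cs": 2.310, "Cl": 2.159, "Ch": 2.043}),
    ("first", "upper"):  (91.641, 0.278, {"Cs": 2.872, "Cl": 2.669, "Ch": 2.571}),
    ("second", "left"):  (98.429, 0.229, {"Cs": 2.470, "Cl": 2.337, "Ch": 2.246}),
    ("second", "upper"): (74.916, 0.252, {"Cs": 2.900, "Cl": 2.676, "Ch": 2.547}),
    ("third", "left"):   (92.127, 0.216, {"Cs": 2.422, "Cl": 2.290, "Ch": 2.189}),
    ("third", "lower"):  (37.735, 0.167, {"Cs": 2.207, "Ch": 1.999}),   # no Cl fit
    ("fourth", "right"): (63.451, 0.190, {"Cs": 2.226, "Cl": 2.074, "Ch": 1.996}),
    ("fourth", "lower"): (96.584, 0.223, {"Cs": 2.310, "Ch": 2.043}),  # no Cl fit
}

def predict_time(quadrant, position, group, p, d):
    """Predicted search time for a luminance group at (p, d); None if no fit exists."""
    a, b, intercepts = REGRESSIONS[(quadrant, position)]
    c = intercepts.get(group)
    return None if c is None else a * p - b * d + c
```

Note that within every position the intercepts decrease from Cs to Ch, matching the finding that a larger target–distractor luminance difference shortens search time.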
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Wang, W.; Zhang, M.; Wang, Z.; Fan, Q. The Impact of Spatial Dimensions, Location, Luminance, and Gender Differences on Visual Search Efficiency in Three-Dimensional Space. Buildings 2025, 15, 656. https://doi.org/10.3390/buildings15050656