Article

Establishment and Saliency Verification of a Visual Translation Method for Cultural Elements of High-Speed Railways: A Case Study of the BZ Railway Line

Wenyan Bian, Junjie Li, Ruyue Zhao, Xijun Wu and Wei Wu
School of Architecture and Design, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(17), 8520; https://doi.org/10.3390/app12178520
Submission received: 30 June 2022 / Revised: 4 August 2022 / Accepted: 23 August 2022 / Published: 25 August 2022

Featured Application

(1) In contrast to the complex environment of a real high-speed railway station, this study used a combination of eye tracking and VR to test and analyze the validity and significance of translation results in a laboratory environment. (2) A total of 94 sets of data were collected, of which 92 were valid. (3) This study emphasized the development of a visual saliency verification framework for a cultural element translation method. (4) This study provides new ideas for the design and layout of station symbol systems.

Abstract

The high-speed railway station symbol system, generated from regional culture translations, not only improves transfer efficiency but also reveals the area’s unique urban cultural context. In this study, we used an eye-tracking technique and virtual reality technology to examine the visual cognitive preferences associated with the existing cultural translation method used by the Beijing–Zhangjiakou high-speed railway. Then, considering the design and layout of the existing station symbol system, we analyzed the visual saliency of different elements, such as images, words, and symbols, in three types of spaces in the Taizicheng high-speed railway station. The experiment was conducted in the physical laboratory of the School of Architecture and Design at Beijing Jiaotong University. A total of 94 students from different majors were selected to participate in the experiment, with 92 data sets eventually being deemed valid. The experiment data showed the following. First, the overall significance ranking of the three scenes in the Taizicheng station was: S1 (81.10%) > S2 (64.57%) > S3 (49.57%). The cognitive correctness rankings of the number positions in the three scenes were: S1: 5 > 2 > 3 > 1 = 4; S2: 4 > 2 > 3 > 1 > 5; S3: 1 > 3 > 2 > 5 > 4. Second, the significance ranking of the transliteration elements in S1 was: Images > Words > Sculptures > Patterns > Colors; in S2: Patterns > Colors > Words > Images > Sculptures; and in S3: Colors > Images > Words > Patterns > Sculptures. The results underscore the validity of the Beijing–Zhangjiakou cultural translation and offer a reference for station layout and spatial optimization. Finally, they provide new ideas for the design and layout of station symbol systems.

1. Introduction

1.1. Research Background

The BZ (Beijing–Zhangjiakou) railway is the first railroad designed and built independently in China and thus carries important historical and cultural significance. In order to adapt to the era of high-speed development, the Beijing–Zhangjiakou railway was transformed into the first intelligent high-speed railroad in China: the Beijing–Zhangjiakou high-speed railway. It is not only an important part of China’s high-speed railroad network, but also a key transportation line for the 2022 Beijing Winter Olympic Games, carrying rich cultural connotations and significance for the times, including one hundred years of Beijing–Zhangjiakou history, high-speed railway technology, and the humanistic spirit represented by the Winter Olympics culture (see Figure 1) [1,2,3,4,5].
The culture surrounding the Beijing–Zhangjiakou high-speed railway fully embodies the spiritual core of China’s railroad construction and development process; extension of the Beijing–Zhangjiakou high-speed railway culture was necessary for its recent rapid development [6]. The Beijing–Zhangjiakou railway carries with it one hundred years of history and essential aspects of the national memory. It is the brilliant result of the effort and resiliency of generations [7,8,9,10]. An effective interpretation of the Beijing–Zhangjiakou high-speed railway culture in this new era can mainly be accomplished through the visual presentation of elements of Beijing–Zhangjiakou in the stations, including images, words, and symbols generated by the translation of regional culture along the Beijing–Zhangjiakou route [11].
In order to verify the effectiveness of such cultural translation, this research used a combination of eye tracking and virtual reality (VR) to examine the visual cognitive preferences elicited by the existing cultural translation scheme, and then combined that information with the design and layout of the actual station guidance system. The visual salience of three types of spaces was explored, as well as that of elements such as images, words, and symbols appearing in the Taizicheng railway station. These elements were implemented to ensure travel efficiency and illustrate the integration of regional culture into the stations along the line, providing new ideas for the design and layout of station guidance systems.

1.2. Cultural Production and Translation

Transcreation is accomplished by integrating and recreating a subject through archetypal information (according to Jung, an archetype is a summary of an experience and can be divided into two categories, image and situation; this research studied the image category) and later constructing a new symbolic system [12]. Established cultural transliteration methods involve words, images, interactions, patterns, devices, and so on. Han et al. used the form and color of Weixian paper cuts to form a guidance system for the Beijing–Zhangjiakou high-speed railway station, based on the regional culture of Beijing and Zhangjiakou and enhancing the connection between the city’s cultural lineage and the station [13]. Yujia et al. analyzed the use of regional cultural elements in subway guidance design (e.g., color, graphics, and space) from the perspective of regional cultural characteristics and the psychology of wayfinding cognition in subway spaces [14,15,16]. Zhao et al. explored the design of a guidance system for the Tokyo subway space, demonstrating that user-experience theory and methods can improve the quality of subway guidance system design [17].
The effectiveness of a cultural translation can be evaluated in terms of attention, recognition, and preference [18]. Eye movements reflect subjects’ mental activities and cognitive processes [19], and eye-movement indices can be regarded as subjects’ visual evaluations of stimuli [20]. Eye tracking has been used to conduct experiments in the fields of interior decoration [21,22], advertising [23,24,25], reading [26,27], and urban planning [28]. Based on such eye tracking and subsequent data analysis, Fu et al. improved the design of existing high-speed train waiting screens and tickets [29]. Li et al. used eye tracking to investigate differences in the effectiveness of web ads with various presentation methods and explored the consistency of their subjective memory scores through eye-tracking indicators [30]. Cheng et al. analyzed the visual saliency of wayfinding markers in indoor public spaces within real commercial complexes based on mobile eye tracking; the results provided theoretical support and supplemental data for the design and optimization of indoor architectural environments [31]. This body of research indicates that eye tracking is a scientifically rigorous and, to a certain extent, appropriate methodological tool for effectively evaluating cultural translation.

2. Methodology

2.1. Eye-Tracking Combined with VR

In contrast to the complex environment of a real high-speed railway station, this study used a combination of eye tracking and VR to test and analyze the validity and significance of the translation results in a laboratory environment. VR reduced the interference of the real environment on the experimental data by providing a realistic presentation of the station scene, resulting in more valid research findings [32,33,34,35]. Eye tracking refers to the process of measuring and recording subjects’ eye movements while they observe experimental materials; it has a wide range of applications in the field of human visual cognition [36,37,38,39]. An analysis of subjects’ eye-tracking index data can accurately reflect the visual information selected by those subjects and reveal their mental activities during viewing and selection [40,41,42,43]. To achieve an efficient comparative analysis of eye-tracking data, we focused on specific areas called areas of interest (AOIs) [44]. We compiled and selected eye-tracking indicators [45,46,47] and categorized and interpreted them by drawing upon and integrating the conclusions of previous studies (see Table 1).
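To make the indicators in Table 1 concrete, the following minimal sketch (Python, with a hypothetical fixation-record format; the export format of the eye-tracking platform used in the study is not specified here) shows how the AOI-level indices could be computed from a chronological list of fixations, with AOI FC and AOI TFD expressed as percentages, consistent with how they are reported in Tables 6 and 12.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    start: float     # fixation onset time (s)
    duration: float  # fixation duration (s)
    x: float         # horizontal gaze position (normalized 0-1)
    y: float         # vertical gaze position (normalized 0-1)

def in_aoi(f, aoi):
    """aoi is a rectangle (x_min, y_min, x_max, y_max) in the same coordinates."""
    x0, y0, x1, y1 = aoi
    return x0 <= f.x <= x1 and y0 <= f.y <= y1

def aoi_indices(fixations, aoi):
    """Compute the Table 1 indicators for one area of interest (AOI)."""
    fixations = sorted(fixations, key=lambda f: f.start)
    hits = [f for f in fixations if in_aoi(f, aoi)]
    total_dur = sum(f.duration for f in fixations)
    aoi_dur = sum(f.duration for f in hits)
    return {
        "FC": len(fixations),                                   # fixation count (whole stimulus)
        "TFD": total_dur,                                       # total fixation duration (s)
        "AOI FC (%)": 100 * len(hits) / len(fixations) if fixations else 0.0,
        "AOI TFD (%)": 100 * aoi_dur / total_dur if total_dur else 0.0,
        "AOI FF (s)": hits[0].duration if hits else None,       # duration of first fixation in the AOI
        "AOI MFD (s)": aoi_dur / len(hits) if hits else None,   # mean fixation duration in the AOI
    }

# Example with three hypothetical fixations and one AOI covering the lower-left quadrant
fixes = [Fixation(0.0, 0.21, 0.2, 0.3), Fixation(0.3, 0.18, 0.7, 0.6), Fixation(0.6, 0.25, 0.1, 0.4)]
print(aoi_indices(fixes, (0.0, 0.0, 0.5, 0.5)))
```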

2.2. Experiment and Questionnaire Design

The experiment in this study was used to establish a method for capturing and analyzing data on the various types of eye-tracking indicators collected while subjects observed pictures in a VR environment. The goal was to explore their visual cognitive preferences with regard to the existing cultural translation scheme of the Beijing–Zhangjiakou high-speed railway, and subsequently to analyze the visual saliency of different elements, such as images, words, and symbols, in this type of space, in conjunction with the design and layout of the guidance system used in the Taizicheng railway station. The process was divided into four parts.

2.2.1. Experiment 1: Exploring the Visual Cognitive Preferences Related to the Existing Beijing–Zhangjiakou High-Speed Railway Culture Transcreation Scheme

Five representative pictures of Beijing–Zhangjiakou culture were selected from the established Beijing–Zhangjiakou high-speed railway culture database for use as experiment materials. The pictures were uniformly processed as grayscale pictures to avoid the visual interference produced by color factors [48]. According to the existing Beijing–Zhangjiakou high-speed railway cultural translation scheme, each representative picture was translated into five styles of symbols: (1) the current style of “line and surface combination”, and the deformation styles of (2) “surface”, (3) “outline”, (4) “line”, and (5) “error” (see Table 2). The symbols were randomly sorted, numbered from “A” to “E”, and set as the corresponding AOIs. After observing each picture for 15 s, subjects were asked to select the symbol they thought corresponded most accurately to the picture, according to the textual prompt (see Figure 2).
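The random ordering of the five symbol styles described above could be implemented as in the short sketch below (a minimal illustration; the actual stimulus-presentation software and the concrete label assignments used in the experiment are not specified in the paper).

```python
import random

STYLES = ["line and surface combination", "surface", "outline", "line", "error"]

def randomized_options(seed=None):
    """Shuffle the five symbol styles and assign them to the display labels 'A'-'E'."""
    rng = random.Random(seed)  # a fixed seed makes the assignment reproducible per question group
    order = STYLES[:]
    rng.shuffle(order)
    return dict(zip("ABCDE", order))

# Hypothetical assignments for two question groups; each group gets its own random order
print(randomized_options(seed=1))
print(randomized_options(seed=2))
```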

2.2.2. Experiment 2: Exploring the Visual Saliency of Three Types of Spaces inside and outside the Taizicheng Railway Station

Taking the Taizicheng railway station as the base case, we completed 3D models of three scenes inside and outside the station and, in each scene, placed the numbers “1” to “5” on the ground, top, front, left, and right elevations, respectively, according to the different positions of the interface. We then generated a panorama of each scene. Subjects were given 10 s to memorize the numbers and their positions within the scene. Then, a slide of the question with partial scene pictures labeled “A” to “E” was displayed. According to the text prompt, the subjects gave verbal feedback on their cognitive results (see Figure 3).
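The paper does not state its exact scoring procedure for this task, so the sketch below is only an assumed formalization: each displayed label “A” to “E” corresponds to one true number position, and a subject’s verbal answer is marked correct when the recalled number matches the ground truth; per-number correctness rates such as those in Figure 6 would then be obtained by averaging over subjects.

```python
def position_correctness(ground_truth, answers):
    """
    ground_truth: dict mapping display label 'A'-'E' to the number actually shown there.
    answers:      dict mapping display label 'A'-'E' to the number the subject recalled.
    Returns per-number correctness (0/1) and the subject's overall correct fraction.
    """
    per_number = {true_num: int(answers.get(label) == true_num)
                  for label, true_num in ground_truth.items()}
    overall = sum(per_number.values()) / len(per_number)
    return per_number, overall

# Hypothetical ground truth and one subject's answers for a single scene
truth = {"A": 3, "B": 1, "C": 5, "D": 2, "E": 4}
reply = {"A": 3, "B": 2, "C": 5, "D": 2, "E": 1}
print(position_correctness(truth, reply))  # ({3: 1, 1: 0, 5: 1, 2: 1, 4: 0}, 0.6)
```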

2.2.3. Experiment 3: Studying the Visual Saliency Characteristics of Different Elements in Interior and Exterior Spaces of the Taizicheng Railway Station

The 3D model produced for Experiment 2 was used, keeping the perspective of each scene unchanged. According to the relevant design guidelines [48] for the Taizicheng railway station (in terms of location, size, and number of signs), five types of cultural transliteration elements (i.e., images, patterns, words, colors, and sculptures) were placed in each scene and set as the corresponding AOIs. Subjects observed the scene without the elements for 30 s. Through textual prompts, subjects then entered the same scene with the five types of elements and observed it for 30 s. Eye-tracking data were exported to analyze the results (see Figure 4).

2.2.4. Subjective Questionnaire Design

The questionnaire mainly consisted of surveys reflecting the participants’ subjective feelings regarding their favorite scenes. In order to obtain more comprehensive and accurate feedback, the questionnaire was designed based on the semantic differential principle, which is commonly used in many countries around the world [49]. The rating scale was quantified into numerical values using a seven-point evaluation model (i.e., −3, −2, −1, 0, 1, 2, 3, representing “negative” to “positive”); the higher the number, the more positive the corresponding feeling. The evaluation scale was described verbally so that the subjects could better understand it and offer the most accurate and true evaluations (see Table 3 and Table 4).
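As a small sketch of how ratings on this seven-point semantic differential scale could be aggregated (the aggregation by simple summation is an assumption, though it is consistent with the scene-favorite totals reported in Section 3.4):

```python
from statistics import mean

def aggregate_ratings(ratings):
    """ratings: integers on the seven-point scale from -3 (negative) to 3 (positive)."""
    if any(r < -3 or r > 3 for r in ratings):
        raise ValueError("ratings must lie on the -3..3 scale")
    return {"n": len(ratings), "total": sum(ratings), "mean": round(mean(ratings), 2)}

# Hypothetical ratings from ten subjects for one scene before optimization
print(aggregate_ratings([0, 1, -1, 2, 0, 1, 1, -2, 0, 1]))  # {'n': 10, 'total': 3, 'mean': 0.3}
```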

2.3. Subject Selection and Experiment Site

To ensure an independent and quiet testing environment, the experiments were conducted in the physical laboratory of the School of Architecture and Design at Beijing Jiaotong University. A VR headset (HTC VIVE Pro Eye) was selected as the experiment apparatus, and the accompanying eye-tracking data processing platform was used. A total of 94 college students from different majors were recruited as subjects to make the findings more credible.
After the experiment, each subject’s data were collated and summarized, including the collection of all subjects’ information and the statistics from the paper record files. A total of 94 sets of data were collected, of which 92 were valid. The eye-tracking data recorded in the experiment were used to generate heat maps and track maps. The heat maps used warm and cool colors to indicate the time and position changes of the subjects’ fixation; warmer colors indicated higher fixation and, conversely, cooler colors indicated lower fixation. The track maps indicated the locations of the subjects’ fixation points, as well as the fixation duration sequence (by connecting the dots).
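Such heat maps can be approximated generically by accumulating a duration-weighted Gaussian kernel at each fixation point; the sketch below (NumPy) illustrates the idea and is not the rendering pipeline of the eye-tracking platform used in the study.

```python
import numpy as np

def fixation_heatmap(fixations, width, height, sigma=40.0):
    """
    fixations: iterable of (x, y, duration) in pixel coordinates of the panorama frame.
    Returns a (height, width) array normalized to [0, 1]; higher values correspond to
    the warmer (more fixated) regions of the heat map.
    """
    yy, xx = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width), dtype=float)
    for x, y, dur in fixations:
        heat += dur * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2.0 * sigma ** 2))
    if heat.max() > 0:
        heat /= heat.max()  # normalize so a color map can be applied directly
    return heat

# Example: three hypothetical fixations on a 1920x1080 frame
hm = fixation_heatmap([(960, 540, 0.25), (980, 560, 0.30), (300, 700, 0.15)], 1920, 1080)
print(hm.shape, round(float(hm.max()), 2))
```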

3. Results

In the subjective questionnaire, the subjects reported positive emotions, high comfort levels, and focused attention during the VR experiment, which helped ensure the quality of the data collected.

3.1. Data from and Results of Experiment 1

The visualization results for the representative pictures are shown in Table 5. The G1 viewpoints were mainly concentrated in the area of the city tower and the parapet. The G2 viewpoints focused on the plaque inscription and parapet areas. The G3 viewpoints focused on the Great Wall and projection areas. The visual range of the G4 viewpoints was the most concentrated, with the focus on the central area of the gate tower. The G5 viewpoints focused on the word and symbol areas.
In general, the subjects’ visual ranges were concentrated, with a preference for textual information, building form and structure, and architectural features.
To prevent viewing habits from affecting the experiment data and results, the different styles of transliteration symbols for the five groups of questions were randomly ordered [50]. Higher AOI FC and AOI TFD values indicated that a region contained a greater amount of information, or more difficult information, and that subjects required a longer time to comprehend its content [51]. The eye-tracking data from the subjects during symbol selection (see Table 6) showed that, overall, the line and surface combination style had the highest AOI MFD (0.218 s), AOI FC (22.465%), and AOI TFD (23.941%), indicating that this style of symbol attracted sufficient attention and substantial interest. For the error-style transliteration symbols, AOI FF (0.216 s) was relatively long, but AOI FC (10.699%) and AOI TFD (10.618%) were low, indicating that subjects had more difficulty interpreting the information conveyed by this style during their observation and were reluctant to spend too much time on it.
According to the AOI results and the final selection of symbols (see Figure 5), most subjects first fixated on the line and surface combination symbols (171 times, accounting for 37% of the total), and most also selected the line and surface combination symbols (145 times, accounting for 31% of the total). In summary, the current line and surface combination style agreed with the visual perception and aesthetic preferences of the subjects and conveyed the image information accurately; thus, this style was the most suitable for the symbols.

3.2. Data from and Results of Experiment 2

The three scenes in Experiment 2 were all from the Taizicheng railway station: Scene 1 (S1), the transfer lane inside; Scene 2 (S2), the waiting hall inside; and Scene 3 (S3), the landscape pavilion outside. The visualization results for the subjects in terms of scene memory are shown in Table 7. The visual range in S1 was more concentrated; the fixation points were mainly distributed in a “+” shape, with the center of the picture serving as the midpoint. The visual range in S2 was more dispersed than that in S1, and the distribution of viewpoints in the horizontal direction was greater than in the vertical direction. The visual range in S3 was more dispersed; the fixation points were mainly distributed horizontally in a “−” shape.
In terms of human visual characteristics, the horizontal field of view tends to be much greater than the vertical, and the eyes move faster and tire less in the horizontal direction than in the vertical direction [52]. Thus, given the limited time, subjects in the present research tended to prioritize their information search in the horizontal direction, followed by the vertical direction.
Viewed further in conjunction with the FC and SC indicators (see Table 8), the FCs in S1 were approximately twice those in S2 and S3, and the SCs were approximately 88% of those in S2 and S3. This indicated that the subjects had more difficulty searching for target information in S2 and S3, needing to scan the scenes extensively to find the targets; this was likely attributable to the complexity of the spatial structure.
The cognitive task data for the subjects showed that the overall cognitive correctness of the three scenes was ranked as “S1 (81.104%) > S2 (64.566%) > S3 (49.566%).” In S1, the significance of the five digits was high and similar, and the cognitive correctness was ranked from highest to lowest as “5 > 2 > 3 > 1 = 4,” with the number 5 only slightly more significant than the others (84.87%). In S2, the digits began to show more pronounced differences, ranked as “4 > 2 > 3 > 1 > 5,” with the number 4 being the most significant (76.09%). In S3, the differences in numerical significance were the most prominent, with a ranking of “1 > 3 > 2 > 5 > 4” and the number 1 being the most significant (77.17%) (see Figure 6). With the same observation times, subjects were able to complete their searches quickly and obtain more information about the scene in S1.
In summary, the higher the cognitive correctness, the greater the significance. Visual saliency was greater in spaces with simple environmental structures and reduced in spaces with more complex environmental structures.

3.3. Data from and Results of Experiment 3

According to the related literature on the layout of symbol systems [53], as applied to the Taizicheng railway station (in terms of location, size, and number of signs), information of the image type was mostly arranged on the space facades as publicity displays. Pattern types were mostly arranged along passenger travel routes as guidance signs, and word types were mostly used for crowd diversion in the middle and upper regions of the space. Color decorations were primarily applied to create atmosphere in the station, and sculptures were most often used to illustrate regional historical heritage.
As a result, the translational elements (images, patterns, words, colors, and sculptures) and the spatial layout of each scene in this experiment were marked with the corresponding AOI (see Figure 7).
The heat maps (see Table 9) showed that in S1, the optimized spatial fixation concentration area was significantly increased as compared to the unoptimized area, and basically focused on the optimized area. In S2, the optimized fixation concentration area was increased in the vertical direction as compared to the unoptimized area, and basically focused on the optimized area (except for the dome). In S3, the focus of fixation in the optimized area was more horizontal than in the unoptimized area, and primarily focused on the optimized area.
The track maps (see Table 10) showed that in S1, the spatially optimized fixation points were more concentrated and the saccadic distance was shorter than in the unoptimized area, especially the ceiling. In S2, the spatially optimized fixation points were more concentrated on both sides of the optimized rather than the unoptimized area, making the saccadic distance longer and causing the information in the middle section (i.e., the area from the center to both sides) to be more obviously neglected. In S3, the optimized design significantly reduced the fixation points in the unoptimized ground area, substantially increased the fixation points in the optimized ceiling area, and markedly shortened the saccadic distance of the optimized scene.
Overall, the spatial optimization of the scene began to shift the focus of the subjects’ viewing from being scattered before optimization to being concentrated and focused afterwards.
The analysis was then combined with the scene-level eye-movement indices. Extreme values at both ends of the data, which affected its stability, were removed using the truncated mean method, yielding average indices that more accurately reflected the data [54,55,56]. According to the data (see Table 11), S1 showed an increase of approximately 40% in the number of FCs and a decrease of approximately 3% in the number of SCs, as compared to the unoptimized scenario. S2 showed an increase of approximately 9% in the number of FCs and a decrease of approximately 2% in the number of SCs, as compared to the unoptimized scenario. S3 showed an increase of approximately 12% in the number of FCs and a decrease of approximately 1% in the number of SCs, as compared to the unoptimized scenario.
Overall, the number of FCs increased and the number of SCs decreased after (as compared to before) spatial optimization, and more saccadic time was converted into fixation time. This indicated that the subjects’ attention was more focused and they paid greater visual attention in the spatially optimized scenes; this effect was most obvious in S1.
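The truncated (trimmed) mean mentioned above simply discards a fixed proportion of the lowest and highest values before averaging; a minimal sketch follows (the 10% trim proportion and the sample values are illustrative only, as the study does not state which proportion was used).

```python
def truncated_mean(values, trim=0.10):
    """Drop the lowest and highest `trim` fraction of the values, then average the rest."""
    data = sorted(values)
    k = int(len(data) * trim)
    core = data[k:len(data) - k] if k > 0 else data
    return sum(core) / len(core)

# Hypothetical per-subject fixation counts with one outlier at each end (trim=0.10, n=10 -> drop 1 per end)
counts = [3, 44, 47, 50, 52, 55, 58, 60, 62, 240]
print(truncated_mean(counts))  # 53.5, versus a raw mean of 67.1 distorted by the outliers
```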
The duration of fixation not only indicated the attractiveness of the target, but also the difference in cognitive load [57]. As shown in Table 12, the significance ranking of the transliteration element in S1 was “Images > Words > Sculptures > Patterns > Colors.” In S2, the significance of the transliteration element was “Patterns > Colors > Words > Images > Sculptures.” In S3, the significance of the transliteration element was “Colors > Images > Words > Patterns > Sculptures”.
In summary, images had a high visual significance in S1 and a low visual significance in S2. Patterns had a high visual significance in S2 and a low visual significance in S3. Words were significant in every scene, most of all in S1. Colors had a high significance in scenes with complex environmental spaces and a low significance in scenes where the environmental space was simple. Sculptures were more significant in scenes with a simple environmental space and far less significant in scenes where the environmental space was complex. It is worth noting that the average duration of the fixation points on sculptures in S1 was longer, indicating that the transliteration of sculptures required a certain cognitive load and offered more readability and room for reflection. In S2, words carried more weight; in S3, where external influences were greater and overall visual salience was lower, sculptures mattered more to the subjects.

3.4. Data from and Results of the Subjective Questionnaire

The statistical results of the questionnaire were obtained on a 1 to 3 scoring scale. For the scene favorites survey (see Figure 8), the scores for the unoptimized and optimized S1 were 16 and 113, respectively, an improvement of 6.06 times. For the unoptimized and optimized S2, the scores were 24 and 154, an improvement of 5.42 times. For the unoptimized and optimized S3, the scores were 80 and 183, an improvement of 1.29 times. Thus, the optimized scenes were more recognized and favored by the subjects.
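The “improvement of X times” figures above correspond to the relative increase of the optimized score over the unoptimized score; the short check below reproduces the reported multiples from the scores given in the text.

```python
def relative_improvement(before, after):
    """Relative increase of the optimized score over the unoptimized score."""
    return (after - before) / before

scores = {"S1": (16, 113), "S2": (24, 154), "S3": (80, 183)}
for scene, (before, after) in scores.items():
    print(scene, round(relative_improvement(before, after), 2))
# Prints 6.06 for S1, 5.42 for S2, and 1.29 for S3, matching the values reported above.
```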

4. Discussion

By analyzing the data from the various eye-tracking indicators of 92 subjects observing stimulus pictures in a VR environment, the experimental findings were determined as follows.
(1) The subjects preferred to view the textual information and architectural form features in the representative pictures. The eye-tracking data proved that the line and surface combination style of the transliteration symbols agreed best with the subjects’ visual perception and aesthetic preferences, conveying the image information more accurately; thus, it is more suitable for symbols.
(2) The results showed that the higher the cognitive correctness, the greater the significance. The overall significance ranking of the scenes was: S1 (81.10%) > S2 (64.57%) > S3 (49.57%). Based on the numbers of FCs and SCs, the subjects scanned the scenes extensively when searching for target information in S2 and S3, which indicated that the simpler the structure of the spatial environment, the higher the visual saliency.
(3) Comparing the number of FCs and SCs before and after optimization showed that subjects in the spatially optimized scenes had a more focused attention, a clearer observation focus, and made a greater visual effort. The significance of the transliteration elements of the three scenes in the Taizicheng railway station were as follows: S1: Images > Words > Sculptures > Patterns > Colors; S2: Patterns > Colors > Words > Images > Sculptures; and S3: Colors > Images > Words > Patterns > Sculptures.
(4) In the subjective questionnaire, the respondents’ ratings confirmed that positive emotions, a higher comfort level, and more focused attention in the VR environment ensured the quality of the experiment data. The indoor scenes were more significant, with ratings five to six times higher than before optimization. This indicated that the optimized layout of the scenes was more recognized and preferred by the subjects.

5. Future Research and Next Step

This study was conducted with university students and completed in a laboratory setting. It emphasized the development of a visual saliency verification framework for a cultural element translation method. Now that the basic research methodology has been developed, the differences between real and virtual environments and shifts in perception across different age groups will be compared and studied. A more complete reference for station guidance system layout and spatial optimization will then be proposed, providing new ideas for the design and layout of high-speed rail station guidance systems.

Author Contributions

Conceptualization, W.B. and J.L.; methodology, W.B. and J.L.; software, W.B. and J.L.; validation, W.B., W.W., and J.L.; formal analysis, W.B. and J.L.; investigation, W.B. and J.L.; resources, J.L.; data curation, R.Z.; writing—original draft preparation, W.B.; writing—review and editing, W.B.; visualization, W.B.; supervision, X.W.; project administration, W.W.; funding acquisition, R.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by National Key Research and Development Program Project: “Demonstration of the full scheme design technology of the supporting visualization of Beijing–Zhangjiakou high-speed railway for the Winter Olympics” (Grant No. 2020YFF0304106) and the Fundamental Research Funds for the Central Universities (Grant No. 2022YJS126).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent was obtained from the participants to publish this paper.

Data Availability Statement

Not applicable.

Acknowledgments

The publication of this manuscript was financially supported by the School of Architecture and Design, Beijing Jiaotong University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mao, Y. Mr. Zhan Tianyou’s life in China’s railroad construction. Archit. J. 1959, 4, 2–3. [Google Scholar]
  2. Wang, H. The overall innovative design of the intelligent high-speed railway from Beijing to Zhangjiakou. Railw. Stand. Des. 2020, 64, 7–11. [Google Scholar]
  3. Duan, H.L. A Study on Beijing-Suiyuan Railway (1905–1937). Ph.D. Thesis, Inner Mongolia Normal University, Hohhot, China, 2011. [Google Scholar]
  4. Zhou, Z. From Jingzhang railway to Jingzhang high-speed railway’s 100-year leap and butterfly change. Arch. World 2020, 6, 31–36. [Google Scholar]
  5. Feng, X.; Ou, N.; Ma, G. Study and application of station-city integrated mode of Beijing-Zhangjiakou high-speed railway. World Archit. 2021, 11, 48–53. [Google Scholar]
  6. Chu, G. Discussion on the culture expression of Beijing—Zhangjiakou high-speed railway from perspective of station building. Railw. Investig. Surv. 2020, 46, 12–18. [Google Scholar]
  7. Zhu, Q.; Li, X. A century of review and reflection on the Beijing-city section of the Jingzhang railway. Beijing Plan. Rev. 2017, 4, 91–94. [Google Scholar]
  8. Li, H. Thoughts on heritage protection and heritage of Jingzhang Railway. Constr. Archit. 2018, 24, 48–49. [Google Scholar]
  9. Sun, Y.; Zhao, Y. Study on protection and update of Beijing-Zhangjiakou railway site. Contemp. Archit. 2020, 4, 32–36. [Google Scholar]
  10. Yao, X. The origin of the policy of independent construction of Beijing-Zhang railroad in the late Qing Dynasty. Beijing Arch. 2019, 2, 49–52. [Google Scholar]
  11. Ou, N. Design innovation of Beijing—Zhangjiakou high-speed railway station. Railw. Stand. Des. 2020, 64, 164–169. [Google Scholar]
  12. Liu, Q. Traditional Visual Modeling Elements Translation in Contemporary Spatial Form. Ph.D. Thesis, Tianjin University, Tianjin, China, 2016. [Google Scholar]
  13. Geng, H.; Huang, G. Design scheme of regional culture-based guidance signage system for Beijing Zhangjiakou high-speed railway stat. Railw. Comput. Appl. 2021, 30, 66–71. [Google Scholar]
  14. Li, Y. Exploration of Subway Guidance System under the Role of Regional Visual Symbols. Ph.D. Thesis, Central Academy of Fine Arts, Beijing, China, 2017. [Google Scholar]
  15. Zhuo, Y. Analysis of subway guiding system based on the function of regional visual symbols. Ind. Eng. Des. 2019, 1, 46–49. [Google Scholar]
  16. Zhao, J.; Li, S. Application of regional cultural elements in the design of metro guide system. Packag. Eng. 2020, 41, 226–230. [Google Scholar]
  17. Zhao, X.; Wang, J. Visual guiding system based on user experience analysis of Tokyo subway. Packag. Eng. 2019, 40, 88–93. [Google Scholar]
  18. Liu, X. The Construction of Cultural Translation Model and its Influence on the Mobile User Experience. Ph.D. Thesis, Huazhong University of Science & Technology, Wuhan, China, 2017. [Google Scholar]
  19. Jacob, R.J.K.; Karn, K.S. Eye tracking in human-computer interaction and usability research: Ready to deliver the promises. In The Mind’s Eye; North-Holland: Amsterdam, The Netherlands, 2003; pp. 573–605. [Google Scholar]
  20. Epelboim, J.; Steinman, R.M.; Kowler, E.; Edwards, M.; Pizlo, Z.; Erkelens, C.J.; Collewijn, H. The function of visual search and memory in sequential looking tasks. Vis. Res. 1995, 35, 3401–3422. [Google Scholar] [CrossRef]
  21. Fu, X.; Sheng, L.; Tang, Z. Interactive design of the information on waiting screen of high-speed railway based on the eye-tracking technology. Packag. Eng. 2020, 41, 143–148. [Google Scholar]
  22. Wang, C.; Chen, Y.; Zheng, S.; Yuan, Y.; Wang, S. Research on generating an indoor landmark salience model for self-location and spatial orientation from eye-tracking data. ISPRS Int. J. Geo Inf. 2020, 9, 97. [Google Scholar] [CrossRef]
  23. Li, J.; Wu, J.; Lam, F.; Zhang, C.; Kang, J.; Xu, H. Effect of the degree of wood use on the visual psychological response of wooden indoor spaces. Wood Sci. Technol. 2021, 55, 1485–1508. [Google Scholar] [CrossRef]
  24. Meuleners, L.; Roberts, P.; Fraser, M. Identifying the distracting aspects of electronic advertising billboards: A driving simulation study. Accid. Anal. Prev. 2020, 145, 105710. [Google Scholar] [CrossRef]
  25. Keyzer, F.D.; Dens, N.; Pelsmacker, P.D. The processing of native advertising compared to banner advertising: An eye-tracking experiment. Electron. Commer. Res. 2021. [Google Scholar] [CrossRef]
  26. Simmonds, L.; Bellman, S.; Kennedy, R.; Nenycz-Thiel, M.; Bogomolova, S. Moderating effects of prior brand usage on visual attention to video advertising and recall: An eye-tracking investigation. J. Bus. Res. 2020, 111, 241–248. [Google Scholar] [CrossRef]
  27. Li, J.; Song, J.; Huang, Y.; Wang, Y.; Zhang, J. Effects of different interaction modes on fatigue and reading effectiveness with mobile phones. Int. J. Ind. Ergon. 2021, 85, 103189. [Google Scholar] [CrossRef]
  28. Lever, M.W.; Shen, Y.; Joppe, M. Reading travel guidebooks: Readership typologies using eye-tracking technology. J. Destin. Mark. Manag. 2019, 14, 100368. [Google Scholar] [CrossRef]
  29. Rzeszewski, M.; Kotus, J. Usability and usefulness of internet mapping platforms in participatory spatial planning. Appl. Geogr. 2019, 103, 56–69. [Google Scholar] [CrossRef]
  30. Cheng, L.; Yang, Z.; Wang, X. A study on eye movements of different presentations of web-ads. J. Psychol. Sci. 2007, 3, 584–587. [Google Scholar]
  31. Sun, C.; Yang, Y. A study on visual saliency of way-finding landmarks based on eye-tracking experiments as exemplified in harbin kaide shopping center. Archit. J. 2019, 605, 24–29. [Google Scholar]
  32. Zhao, Q. Virtual reality overview. Sci. Sin. 2009, 39, 2–46. [Google Scholar]
  33. Jiang, X.Z.; Li, Z.H. Present situation of VR researching at home and abroad. J. Liaoning Tech. Univ. 2004, 2, 238–240. [Google Scholar]
  34. Gao, Y.; Liu, Y.; Cheng, D.; Wang, Y. A review on development of head mounted display. J. Comput. Aided Des. Comput. Graph. 2016, 28, 896–904. [Google Scholar]
  35. Yang, Q.; Zhong, S. A review of foreign countries on the development and evolution trends of virtual reality technology. J. Dialectics Nat. 2021, 43, 97–106. [Google Scholar]
  36. Rayner, K. Eye movements in reading and information processing: 20 years of research. Psychol. Bull. 1998, 124, 372. [Google Scholar] [CrossRef] [PubMed]
  37. Han, Y. The development of oculomotor and eye movement experiment method. J. Psychol. Sci. 2000, 4, 454–457. [Google Scholar]
  38. Yan, G.; Tian, H. A review of eye movement recording methods and techniques. Chin. J. Appl. Psychol. 2004, 10, 55–58. [Google Scholar]
  39. Alipour, H.; Namazi, H.; Azarnoush, H.; Jafari, S. Complexity-based analysis of the influence of visual stimulus color on human eye movement. Fractals 2019, 27, 1950002. [Google Scholar] [CrossRef]
  40. Yarbus, A.L. Eye movements during perception of complex objects. In Eye Movements and Vision; Springer: Boston, MA, USA, 1967; pp. 171–211. [Google Scholar]
  41. Salim, S.S. SCOUT and affective interaction design: Evaluating physiological signals for usability in emotional processing. In Proceedings of the 2010 2nd International Conference on Computer Engineering and Technology, Chengdu, China, 16–18 April 2010; IEEE: Piscataway, NJ, USA, 2010; pp. V1-201–V1-205. [Google Scholar]
  42. Park, H.; Lee, S.; Lee, M.; Chang, M.S.; Kwak, H.W. Using eye movement data to infer human behavioral intentions. Comput. Hum. Behav. 2016, 63, 796–804. [Google Scholar] [CrossRef]
  43. Graham, D.J.; Jeffery, R.W. Predictors of nutrition label viewing during food purchase decision making: An eye tracking investigation. Public Health Nutr. 2012, 15, 189–197. [Google Scholar] [CrossRef] [PubMed]
  44. Yan, G.; Xiong, J.; Zang, C.; Yu, L.; Cui, L.; Bai, X. Review of eye-movement measures in reading research. Adv. Psychol. Sci. 2013, 21, 589–605. [Google Scholar] [CrossRef]
  45. Zhang, X.; Ye, W. Review of oculomotor measures in current reading research. Stud. Psychol. Behav. 2006, 4, 236–240. [Google Scholar]
  46. Ren, Y.; Han, Y.; Sui, X. The saccades and its mechanism in the process of visual search. Adv. Psychol. Sci. 2006, 14, 340–345. [Google Scholar]
  47. Han, Y. The sequential quality of eye movement in observing different shapes and colours. J. Psychol. Sci. 1997, 1, 40–43. [Google Scholar]
  48. He, H. Railroad Passenger Station Guiding Sign System Design Guide; China Railway Publishing: Beijing, China, 2010. [Google Scholar]
  49. Zhuang, W. SD method related to evaluation of architectural space objectives. J. Tsinghua Univ. 1996, 36, 42–47. [Google Scholar]
  50. Guo, S.; Zhao, N.; Zhang, J.; Xue, T.; Liu, P.; Xu, S.; Xu, D. Landscape visual quality assessment based on eye movement: College student eye-tracking experiments on tourism landscape pictures. Resour. Sci. 2017, 39, 1137–1147. [Google Scholar]
  51. Locher, P.; Krupinski, E.A.; Mello-Thoms, C. Visual interest in pictorial art during an aesthetic experience. Spat. Vis. 2008, 21, 55. [Google Scholar] [CrossRef]
  52. Lv, J.; Chen, J.; Xu, J. Ergonomics; Tsinghua University Press: Beijing, China, 2009. [Google Scholar]
  53. Liu, Q.; Xu, Y. Research on the humanized design of high-speed railroad station guidance system. J. Chang. Norm. Univ. 2017, 36, 189–190. [Google Scholar]
  54. Engbert, R. Microsaccades: A microcosm for research on oculomotor control, attention, and visual perception. Prog. Brain Res. 2006, 154, 177–192. [Google Scholar]
  55. Engbert, R.; Kliegl, R. Microsaccades uncover the orientation of covert attention. Vis. Res. 2003, 43, 1035–1045. [Google Scholar] [CrossRef]
  56. Martinez-Conde, S.; Macknik, S.L.; Troncoso, X.G.; Hubel, D.H. Microsaccades: A neurophysiological analysis. Trends Neurosci. 2009, 32, 463–475. [Google Scholar] [CrossRef]
  57. Walter, K.; Bex, P. Cognitive load influences oculomotor behavior in natural scenes. Sci. Rep. 2021, 11, 12405. [Google Scholar] [CrossRef]
Figure 1. Cultural theme segmentation for the Beijing–Zhangjiakou high-speed railway.
Figure 2. Experiment 1 process.
Figure 3. Experiment 2 process.
Figure 4. Experiment 3 process.
Figure 5. Comparison of the selection and fixation levels of different symbol types.
Figure 6. S1–S3 cognitive correctness.
Figure 7. S1–S3 layout schematic.
Figure 8. S1–S3 before and after optimization.
Table 1. Categories and Interpretations of Eye-Tracking Indexes.
Eye-Tracking Category | Eye-Tracking Index | Abbreviation | Interpretation of Eye-Tracking Index
Fixation | Fixation count | FC | Total number of focus points.
Fixation | Total fixation duration | TFD | Sum of the duration of each fixation point in the region. The higher the number, the less efficient the search.
Fixation | AOI fixation count | AOI FC | The number of fixation points in the region. The higher the number, the more important or cognitively difficult the area is for the subject.
Fixation | Total AOI fixation duration | AOI TFD | The total duration of fixation on the region. The longer the duration, the more time the subject has allocated attention to that area, possibly demonstrating greater interest or a higher cognitive load.
Fixation | Time to first AOI fixation | AOI FF | Duration of the first fixation in the region. The shorter the duration, the easier it is to obtain the perceived information and assess how substantially a particular feature attracts attention.
Fixation | Means of AOI fixation duration | AOI MFD | The average of the gaze time at each fixation point in the region. The longer the time, the more attention-grabbing the target or more difficult it is to extract information.
Saccade | Saccadic count | SC | The jump distance from one fixation point to another. The greater the distance, the longer the search process.
Table 2. Styles of Transliteration Symbols. (Image table: rows show the five symbol styles, i.e., line and surface combination, surface, outline, line, and error; columns G1–G5 show the corresponding symbol image for each representative picture.)
Table 3. Scene Favorites Survey. (For each test scenario S1–S3, before and after scene optimization, subjects rated the scene image on a negative–positive evaluation scale scored from −3 to 3.)
Table 4. Subjective Feelings Survey.
Test Item | Evaluation Type | Test Sub-Item | Evaluation Scale | Parameter Units
Subjective perception (after the experiment) | Cognitive (physiological) | VR environment adaptability | Uncomfortable—Adaptable | Score [−3, −2, −1, 0, 1, 2, 3]
Subjective perception (after the experiment) | Cognitive (physiological) | Attention span | Distracted—Focused | Score [−3, −2, −1, 0, 1, 2, 3]
Subjective perception (after the experiment) | Feelings (psychological) | Emotion | Depressed—Excited | Score [−3, −2, −1, 0, 1, 2, 3]
Table 5. Visualization Results from Representative Pictures. (Image table: for each representative picture G1–G5, the picture itself, its heat map, and its track map are shown.)
Table 6. Comparison of Eye-Tracking Index Data for the Five Types of Symbols. (Eye-tracking dependent variables within 15 s; average values, N = 92 per style.)
Group | Index | Line and Surface Combination | Surface | Outline | Line | Error
G1 | AOI FF (s) | 0.192 | 0.215 | 0.224 | 0.180 | 0.221
G1 | AOI MFD (s) | 0.213 | 0.191 | 0.203 | 0.213 | 0.198
G1 | AOI FC (%) | 20.415 | 10.042 | 18.644 | 28.882 | 10.171
G1 | AOI TFD (%) | 21.634 | 9.732 | 18.825 | 30.416 | 9.854
G2 | AOI FF (s) | 0.235 | 0.221 | 0.241 | 0.217 | 0.198
G2 | AOI MFD (s) | 0.204 | 0.215 | 0.239 | 0.209 | 0.210
G2 | AOI FC (%) | 13.839 | 14.837 | 27.869 | 9.943 | 12.972
G2 | AOI TFD (%) | 13.993 | 14.787 | 30.345 | 10.220 | 13.002
G3 | AOI FF (s) | 0.196 | 0.181 | 0.163 | 0.229 | 0.218
G3 | AOI MFD (s) | 0.207 | 0.212 | 0.152 | 0.224 | 0.199
G3 | AOI FC (%) | 18.670 | 23.146 | 10.277 | 15.672 | 11.958
G3 | AOI TFD (%) | 19.217 | 24.686 | 9.121 | 17.488 | 12.786
G4 | AOI FF (s) | 0.227 | 0.197 | 0.196 | 0.220 | 0.211
G4 | AOI MFD (s) | 0.227 | 0.198 | 0.199 | 0.193 | 0.197
G4 | AOI FC (%) | 29.778 | 15.687 | 15.582 | 11.137 | 11.440
G4 | AOI TFD (%) | 32.306 | 15.513 | 15.207 | 11.331 | 11.173
G5 | AOI FF (s) | 0.201 | 0.198 | 0.230 | 0.197 | 0.232
G5 | AOI MFD (s) | 0.240 | 0.204 | 0.201 | 0.224 | 0.197
G5 | AOI FC (%) | 29.620 | 11.952 | 10.176 | 21.960 | 6.954
G5 | AOI TFD (%) | 32.554 | 11.815 | 9.749 | 22.718 | 6.275
All groups | AOI FF (s) | 0.210 | 0.203 | 0.211 | 0.209 | 0.216
All groups | AOI MFD (s) | 0.218 | 0.204 | 0.199 | 0.213 | 0.200
All groups | AOI FC (%) | 22.465 | 15.133 | 16.510 | 17.519 | 10.699
All groups | AOI TFD (%) | 23.941 | 15.306 | 16.650 | 18.435 | 10.618
Table 7. Visualization of Eye-Tracking Data by Scene. (Image table: for each scene S1–S3, the panorama, heat map, and track map are shown.)
Table 8. Comparison of Eye-Tracking Index Data by Scene. (Eye-tracking dependent variables within 10 s; average values, N = 92 per scene.)
Index | S1 | S2 | S3
FC (N) | 22 | 11 | 13
SC (N) | 69 | 78 | 77
Table 9. Comparison of Heat Maps before and after Optimization. (Image table: for each scene S1–S3, the panorama before space optimization and the heat maps before and after space optimization are shown.)
Table 10. Comparison of Track Maps before and after Optimization. (Image table: for each scene S1–S3, the panorama after space optimization and the track maps before and after space optimization are shown.)
Table 11. Comparison of Eye-Tracking Index Data before and after Optimization. (Eye-tracking dependent variables within 30 s; average values, N = 90 per scene.)
Condition | Index | S1 | S2 | S3
Unoptimized | FC (N) | 44 | 55 | 50
Unoptimized | SC (N) | 243 | 248 | 250
Optimized | FC (N) | 62 | 60 | 56
Optimized | SC (N) | 236 | 243 | 248
Table 12. Comparison of Eye-Tracking Index Data for Each Type of Transcribed Element. (Eye-tracking dependent variables within 30 s; average values, N = 92 per element.)
Scene | Index | Images | Patterns | Words | Colors | Sculptures
S1 | AOI FC (%) | 24.518 | 7.602 | 24.274 | 6.941 | 12.394
S1 | AOI TFD (%) | 24.384 | 8.744 | 24.150 | 6.463 | 13.897
S1 | AOI FF (s) | 0.160 | 0.186 | 0.167 | 0.156 | 0.213
S1 | AOI MFD (s) | 0.156 | 0.157 | 0.156 | 0.144 | 0.183
S2 | AOI FC (%) | 8.647 | 20.414 | 9.993 | 16.265 | 4.132
S2 | AOI TFD (%) | 8.723 | 19.100 | 11.737 | 15.010 | 4.790
S2 | AOI FF (s) | 0.126 | 0.160 | 0.209 | 0.115 | 0.141
S2 | AOI MFD (s) | 0.144 | 0.139 | 0.178 | 0.130 | 0.110
S3 | AOI FC (%) | 12.440 | 5.764 | 10.334 | 16.582 | 1.087
S3 | AOI TFD (%) | 11.914 | 7.029 | 9.857 | 16.123 | 1.169
S3 | AOI FF (s) | 0.095 | 0.155 | 0.106 | 0.147 | 0.139
S3 | AOI MFD (s) | 0.110 | 0.124 | 0.122 | 0.137 | 0.050
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

