Article

Trust in the Automatic Train Supervision System and Its Effects on Subway Dispatchers’ Operational Behavior

1 School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing 100044, China
2 State Key Laboratory of Advanced Rail Autonomous Operation, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(7), 3839; https://doi.org/10.3390/app15073839
Submission received: 12 February 2025 / Revised: 19 March 2025 / Accepted: 27 March 2025 / Published: 1 April 2025
(This article belongs to the Section Transportation and Future Mobility)

Abstract

The evolution of Fully Automatic Operation (FAO) subway control systems has increased the importance of dispatchers’ trust in automation. This study focuses on the dynamics of trust formation among Chinese subway dispatchers and its downstream effects on their reliance behavior. Through the structural equation modeling of questionnaire data, we identify key factors influencing trust and its operational implications. Interface transparency and system capability redundancy directly strengthen trust, while dispatchers’ system knowledge indirectly shapes trust through interface interpretation. Trust strongly correlates with reliance behavior during routine monitoring, which in turn significantly influences reliance behavior in emergencies. Two mediation pathways approaching significance suggest that both capability redundancy and trust may influence emergency reliance behavior through routine reliance behavior. This implies that frequent daily use of automation systems enhances dispatchers’ understanding of system reliability. Our findings underscore the need for transparent interface design, adaptive redundancy strategies, and training programs that bridge system knowledge with operational practices to enhance both trust and scenario-specific reliance. Practical recommendations include designing user-friendly interfaces, implementing adaptive redundancy, and developing targeted training programs to improve dispatchers’ system knowledge and operational efficiency.

1. Introduction

The emergence of autonomous systems, such as driverless cars [1,2] and Fully Automatic Operation (FAO) subway systems [3], has significantly enhanced the operational efficiency of transportation systems while reducing the physical burden on human operators. Automation technology not only improves the reliability of operating systems but also introduces profound changes in the human–machine dynamic. By monitoring and supporting human operations, automation systems can reduce risks. However, in large, complex socio-technical systems, human professionals remain the critical backup, serving as the last line of defense in the event of system failure [4]. This reliance on humans is particularly important because automation inherently places operators in scenarios of uncertainty and incomplete knowledge, making it a pressing challenge to ensure their effectiveness across varying levels of automation [5].
Despite its benefits, automation does not always simplify or enhance work efficiency. Research on human–automation interaction reveals that it can lead to significant issues, such as the suppression of operators’ ability to detect critical signals or alarms [6] and the degradation of operational skills [7]. These problems may leave personnel ill-prepared to protect the system during automation failures. Moreover, increased automation can introduce new failure modes and alter personnel workloads [8], further complicating the human role in automated systems.
In rail transit systems, for example, dispatchers are primarily responsible for ensuring that trains adhere to their schedules [9]. While advances in signal systems have enabled the automatic correction of minor timing deviations, dispatchers must still maintain a high level of vigilance to prevent catastrophic accidents. A notable case is the head-on collision of two trains in Bad Aibling, Bavaria, in February 2016, which was attributed to a dispatcher’s error and an over-reliance on the safety redundancy of the signaling system [10]. This incident underscores the risks of excessive trust in automation and highlights the need for a balanced approach to human–automation interaction.
The challenges posed by automation extend beyond operational failures. Both excessively high and low workloads can introduce new cognitive and attentional demands [11], necessitating changes in operator training. Furthermore, since automation systems are not infallible, humans remain a crucial component in ensuring system safety. However, humans are not immune to the influence of automation systems. The degree of human reliance on automation in different contexts has thus become a topic of significant interest [12].
With the rapid advancement of rail transportation technology, Automatic Train Operation (ATO) systems have been widely adopted in many countries [13]. Automatic Train Supervision (ATS), a foundational component of ATO and its next-generation evolution, the FAO system, provides essential monitoring, control, and decision-making capabilities that underpin the functionality and reliability of both ATO and FAO systems. Currently, ATS is primarily used for routine monitoring and emergency response by dispatchers. In the future, within FAO systems, ATS will remain the primary channel through which dispatchers obtain information [14]. The level of trust that dispatchers place in the ATS system will significantly influence the performance of FAO systems.
Given the critical role of ATS in rail automation, it is essential to investigate the factors that shape dispatchers’ trust in ATS and how this trust influences their reliance behaviors. Understanding these dynamics can contribute to refining the theory of automation trust in safety-critical systems and optimizing the performance of FAO systems. Therefore, this study aims to explore the influencing factors of ATS automation trust and its impact on dispatchers’ reliance behaviors, providing both theoretical insights and practical implications for the design and operation of future rail automation systems.

2. Theoretical Background, Related Work, and Hypotheses

2.1. Foundations of Automation Trust in ATS Systems

Trust is a fundamental concept across numerous domains, including social psychology, where the study of interpersonal trust provides valuable insights for defining automation trust [15]. Empirical research has revealed significant similarities between automation trust and interpersonal trust [16]. For instance, human reactions to technology often mirror reactions to other humans [17], and the dynamics of trust in complex environments show strong parallels between interpersonal and automation contexts [18]. Several studies have identified the core features of automation trust within human–automation partnerships [19,20,21]. First, trust requires a delegator (e.g., the operator) to grant trust, a trustee (e.g., the automation system) to receive trust, and a task with potential risks [22]. Second, the trustee must be motivated to perform the task, typically driven by the designer’s intended use of the system. Finally, the possibility of task failure introduces uncertainty and risk. In this study, we adopt the widely accepted definition of automation trust by Lee and See as follows: “the attitude that an agent will help achieve individual goals in situations characterized by uncertainty and vulnerability” [23]. Empirical research further highlights that trust in automation is influenced by the following three main dimensions: the automation system, the operator, and the environment [24].
The human–machine interface serves as the critical link between operators and automation systems. Transparency, a key aspect of interface design, determines the operator’s ability to comprehend, perceive, and anticipate the automation system’s behavior [25]. Higher transparency enables operators to maintain better performance and trust, especially when the automation system encounters abnormalities [26]. Empirical studies in human–machine collaboration have consistently shown that interface transparency directly influences operators’ trust in automation agents [27]. In the context of automated decision making, providing transparency about the logic behind system recommendations has been found to improve the accuracy of automation use without increasing decision time or subjective workload [28]. However, excessive transparency, such as detailed visualizations of decision outcomes, may lead to faster but less accurate decisions [29]. For ATS systems, interface transparency reflects the extent to which dispatchers can review historical records, understand current system status, and predict future trends [30,31,32]. As the primary tool for decision making and information access, the perceived transparency of the ATS system over time is likely to shape dispatchers’ trust in automation, forming the basis of Hypothesis 1 in this study.
Automation trust is inherently human-centered, with operator factors playing a pivotal role in its formation [33]. Research has consistently shown that operators’ familiarity with the capabilities and limitations of their systems significantly influences their trust levels [34,35,36]. This finding has been replicated in safety-critical domains, such as spacecraft rendezvous and docking missions [37]. Dispatchers, whose roles demand high operational skills, system knowledge, and experience [38], are particularly affected by their understanding of automation systems. For example, a study using observational methods found that railway signalers’ trust in automation was heavily influenced by their comprehension of the system [39]. In summary, a holistic understanding of the ATS system is essential for building trust in it (Hypothesis 2a of this study). Additionally, a transparent interface serves as a crucial bridge between dispatchers’ system knowledge and the current system status (Hypothesis 2b of this study).
In safety-critical fields such as subway dispatching, environmental risks are mitigated primarily through system safety redundancies [40]. Operators’ awareness of the system’s capabilities and safety redundancies plays a crucial role in building trust. For instance, a study on seafloor visual inspection found that explicitly informing operators of the system’s false alarm rate increased their trust in automation [41]. Similarly, in a virtual decision-making experiment, the reliability of automated agents strongly influenced trust, particularly under conditions involving personal safety and economic costs [22]. Long et al. further demonstrated that system dependability is the most reliable predictor of initial trust [42]. However, operators often underestimate system reliability [43]. While subway operation systems maintain high reliability through redundant automation designs, this very redundancy can also disrupt trust in automation, forming the basis of Hypothesis 3 in this study.

2.2. Trust-Driven Reliance Behavior in Operational Contexts

Automation reliance, the primary behavioral outcome of automation trust, refers to the level of dependence on automation, and it is measured through users’ interactions with automated systems [44]. The relationship between trust and reliance behavior is complex, as trust and its influencing factors can have varied impacts on reliance [45]. Research has shown that reliance patterns vary significantly depending on the type of automation error. For instance, a high frequency of false alarms leads users to under-rely on automation during alarm states while over-relying on it during non-alarm states. Conversely, frequent automation failures result in over-reliance on alarms and under-reliance on automation in non-alarm states [46]. This phenomenon is further supported by a simulated seafloor sonar detection study, where participants’ trust and reliance decreased as the false alarm rate increased [41]. A similar correlation was observed in the context of anti-phishing tools, where user-calibrated trust significantly and positively influenced both current and future reliance on detection systems [47]. Given the complexity of scheduling tasks [48], the effect of trust on reliance behavior may differ across operational contexts. This study proposes that (4a) dispatchers’ trust in the ATS system affects their reliance behavior under routine monitoring, (4b) their trust in the ATS system affects their reliance behavior during emergency response, and (4c) their reliance behavior under routine monitoring further influences their reliance behavior during emergency response.
While trust is a critical factor in automation reliance, Lee and See emphasize that it is not the sole determinant; reliance is also influenced by a combination of other factors [23]. For example, even when operators distrust an automation system, they may still rely on it under conditions of high workload and time pressure to complete tasks. Conversely, in situations where trust is high, operators might reduce their reliance on automation and opt for manual control due to concerns about economic costs or safety risks [49]. In such cases, trust acts as an intermediate variable influencing behavior. Additionally, a disconnect can occur between subjective trust evaluations and objective reliance behaviors. Operators may perceive their trust levels as unchanged, while their actual reliance behaviors shift significantly [50]. For subway dispatchers who rely on the ATS system as their primary information channel, reliance on the system is further influenced by its capability redundancy status. This study suggests that (5a) the ATS’s capability redundancy affects dispatchers’ reliance behavior under routine monitoring, and (5b) the ATS’s capability redundancy also affects their reliance behavior during emergency response.

3. Research Methodology and Empirical Analysis

3.1. Summary of Core Constructs and Hypotheses

Drawing on the established literature and the current state of practical work, we propose the following nine research hypotheses, which are summarized in Figure 1.
Hypothesis 1.
The Transparency of the ATS Interface (TAI) affects the Dispatcher’s Trust in ATS system (DTA) (H1: TAI ⇒ DTA).
Hypothesis 2a.
The dispatcher’s ATS System Knowledge (ASK) directly affects the Dispatcher’s Trust in ATS system (DTA) (H2a: ASK ⇒ DTA).
Hypothesis 2b.
The dispatcher’s ATS System Knowledge (ASK) affects their understanding of the Transparency of the ATS Interface (TAI) (H2b: ASK ⇒ TAI).
Hypothesis 3.
The ATS’s Capability Redundancy (ACR) directly affects the Dispatcher’s Trust in ATS system (DTA) (H3: ACR ⇒ DTA).
Hypothesis 4a.
The Dispatchers’ Trust in the ATS system (DTA) affects their Reliance Behavior on ATS under Routine monitoring (RBR) (H4a: DTA ⇒ RBR).
Hypothesis 4b.
The Dispatchers’ Trust in the ATS system (DTA) affects their Reliance Behavior on ATS under Emergency response (RBE) (H4b: DTA ⇒ RBE).
Hypothesis 4c.
The dispatchers’ Reliance Behavior on ATS under Routine monitoring (RBR) affects their Reliance Behavior on ATS under Emergency response (RBE) (H4c: RBR ⇒ RBE).
Hypothesis 5a.
The ATS’s Capability Redundancy (ACR) affects dispatchers’ Reliance Behavior on ATS under Routine monitoring (RBR) (H5a: ACR ⇒ RBR).
Hypothesis 5b.
The ATS’s Capability Redundancy (ACR) affects dispatchers’ Reliance Behavior on ATS under Emergency response (RBE) (H5b: ACR ⇒ RBE).
Figure 1 presents the hypothesized model, illustrating the influence paths among the six key constructs in this study. To further clarify the theoretical foundation of these constructs, Table 1 provides their detailed definitions, factor codes, and supporting sources from the literature.
Table 2 summarizes the hypothesized relationships between predictors and dependent variables.

3.2. Research Design: Pre-Tests and Final Questionnaire Development

All methods adhered to the ethical principles and code of conduct of the American Psychological Association (2017), a widely accepted standard for questionnaire and human factors research. The experimental protocols were approved by the State Key Laboratory of Advanced Rail Autonomous Operation at Beijing Jiaotong University, which was upgraded from the former State Key Lab of Rail Traffic Control and Safety. The laboratory maintains a comprehensive management organization and ethics committee. All participants provided informed consent before the study and were free to withdraw at any time, with the option to delete their data. Data security was ensured throughout the research process.
To explore the conceptual framework of the hypotheses and their influence paths shown in Figure 1, we initially developed 33 items based on the constructs outlined in Table 1. These items were evaluated by 20 manager-level dispatchers from the Beijing subway, who assisted in refining the wording so that each question was simple, unambiguous, valid, and distinct from the others.
To verify the reliability and validity of the scale, a preliminary version of the questionnaire was distributed to 79 senior dispatchers (mean length of service: 8.6 years, SD = 3.6, min = 5, max = 38) who had participated in the China Dispatcher Skills Competition. After the reliability testing, cluster analysis, and post-response interviews, the questionnaire was further streamlined. The final version of the questionnaire is provided in Appendix A. The six latent variables and their corresponding questionnaire items (observed variables) are detailed in the left section of Table A1 in Appendix B (items 1–18). The first half of the questionnaire comprised 18 questions assessed using a 7-point Likert scale, with at least one reverse question per latent variable to evaluate response quality.
The second part of the questionnaire focused on task complexity under two working conditions: routine monitoring and emergency response. Based on the literature and the dispatchers’ actual work experience [57], task complexity was measured across 10 dimensions for both conditions [58]. Pre-experiment interviews confirmed significant variability in task complexity between these conditions. The specific task complexity items are included in Appendix A (items 21–40), with their alignment to the Task Complexity dimensions shown in the right section of Table A1 in Appendix B.

3.3. Data Collection and Sample Screening

During the formal questionnaire phase, 214 senior front-line dispatchers (average length of service = 13.3 years, SD = 6.8, min = 5, max = 25) from Beijing, Tianjin, Guangzhou, Shijiazhuang, and Suzhou participated. To ensure data quality, participants completed the questionnaire in a quiet, distraction-free meeting room under the supervision of duty managers or higher-level supervisors. The demographic characteristics of the survey participants, including age, length of service, and educational background, are summarized in Table 3.
The formal questionnaire tested the initial research hypotheses, with questions presented in random order. Participants rated their agreement with descriptive statements on a 7-point Likert scale. Questionnaire items were fine-tuned to account for regional differences in terminology.
Questions 2, 4, 9, 11, 13, 15, 16, 17, and 18 (see Appendix B, Table A1) were reverse-scored. Samples were excluded if more than two dimensions of automation trust exhibited conflicting reverse scale scores or if response times were excessively long (indicating interruptions) or short (indicating careless responses). After screening, a final valid sample of 160 was obtained, with an average response time of 1408 s (SD = 529).
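For readers who wish to reproduce this screening step, the sketch below shows reverse-scoring on a 7-point Likert scale (x becomes 8 − x) and response-time filtering in Python. The column names, file name, and numeric time cutoffs are hypothetical placeholders, not the study’s actual values.

```python
import pandas as pd

# Hypothetical item/column names; the real item order is given in Table A1.
REVERSE_ITEMS = ["Q2", "Q4", "Q9", "Q11", "Q13", "Q15", "Q16", "Q17", "Q18"]

def reverse_score(df: pd.DataFrame, items: list, scale_max: int = 7) -> pd.DataFrame:
    """Recode reverse-worded items on a 1..scale_max Likert scale: x -> (scale_max + 1) - x."""
    out = df.copy()
    out[items] = (scale_max + 1) - out[items]
    return out

def screen_by_duration(df: pd.DataFrame, col: str = "duration_s",
                       low: float = 300.0, high: float = 3600.0) -> pd.DataFrame:
    """Drop implausibly fast or slow responses; these cutoffs are illustrative only."""
    return df[(df[col] >= low) & (df[col] <= high)]

responses = pd.read_csv("responses.csv")  # hypothetical file
valid = screen_by_duration(reverse_score(responses, REVERSE_ITEMS))
```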

3.4. Structural Equation Modeling and Hypothesis Validation

Exploratory factor analysis (EFA) was conducted on the final valid data to assess construct validity. Using JAMOVI 2.3.21, principal components extraction with varimax rotation was performed. Components with eigenvalues greater than 1.0 were retained based on the scree test and Kaiser’s criterion. Item loadings exceeded the minimum threshold of 0.55, a level deemed suitable for principal component analysis [59]. Reliability was evaluated using Cronbach’s α.
The EFA extracted the following six factors: three trust impact factors, two reliance behavior outcomes, and one trust factor comprising marker variables. The initial scale data for SEM analysis excluded Automation Capability Redundancy Q1 due to its low factor loading (0.353). All other items exhibited ideal factor loadings, as shown in Table 4. The Kaiser–Meyer–Olkin test for sampling adequacy was 0.80, Bartlett’s test of sphericity was significant (p < 0.001), and the six factors accounted for 76.9% of the total variance, supporting the continuation of structural equation modeling analysis.
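Although the analysis was run in JAMOVI, the same style of EFA can be scripted; the sketch below uses the Python factor_analyzer package under the assumption that the cleaned item responses are available as a data frame. The file name and item labels are illustrative, not the study’s actual data.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("trust_items.csv")  # hypothetical: the retained Likert items

# Sampling adequacy and sphericity, mirroring the checks reported above.
chi2, p = calculate_bartlett_sphericity(items)
_, kmo_total = calculate_kmo(items)
print(f"Bartlett chi^2 = {chi2:.1f}, p = {p:.3g}; KMO = {kmo_total:.2f}")

# Principal-components extraction with varimax rotation, retaining six factors.
efa = FactorAnalyzer(n_factors=6, rotation="varimax", method="principal")
efa.fit(items)
print(pd.DataFrame(efa.loadings_, index=items.columns).round(2))
print("Cumulative variance explained:", efa.get_factor_variance()[2][-1])

def cronbach_alpha(block: pd.DataFrame) -> float:
    """Cronbach's alpha for one construct's items."""
    k = block.shape[1]
    item_var = block.var(axis=0, ddof=1).sum()
    total_var = block.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```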
The measurement model fitted the data well; the fit indices and the reliability of each construct are presented in Table 5. All factor loading values were significantly greater than 0.55, and all AVE values exceeded 0.50, indicating that convergent validity was adequately achieved [60].
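Convergent validity checks of this kind rest on two standard quantities computed from each construct’s standardized loadings λᵢ: the Average Variance Extracted, AVE = (Σλᵢ²)/k, and composite reliability, CR = (Σλᵢ)² / ((Σλᵢ)² + Σ(1 − λᵢ²)). A minimal sketch follows, using illustrative loadings rather than the values in Table 5.

```python
import numpy as np

def ave(loadings) -> float:
    """Average Variance Extracted: mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings) -> float:
    """Composite reliability: (sum(lam))^2 / ((sum(lam))^2 + sum(1 - lam^2))."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return float(num / (num + np.sum(1.0 - lam ** 2)))

# Illustrative loadings for a three-item construct (not the values in Table 5).
lam = [0.78, 0.81, 0.74]
print(f"AVE = {ave(lam):.2f}, CR = {composite_reliability(lam):.2f}")
```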
Structural equation modeling (SEM) was conducted to examine the influence of factors on trust and its outcomes, thereby testing the hypotheses. The analysis was performed using JAMOVI 2.3.21 with the semlj 1.1.6 module. The initial hypothetical structural model revealed two insignificant or weak path coefficients: between automation trust and reliance behavior in emergencies (β = −0.049, p = 0.583, Hypothesis 4b) and between personal knowledge and automation trust (β = 0.090, p = 0.431, Hypothesis 2a), as shown in Table 6. Contrary to the prior hypotheses, personal knowledge did not significantly affect trust, leading to the rejection of Hypothesis 2a (the dispatcher’s ATS system knowledge directly affects the dispatcher’s trust in the ATS system). Similarly, Hypothesis 4b (dispatchers’ trust in the ATS system affects their reliance behavior on ATS under emergency response) was also rejected.
These insignificant parameters were subsequently removed from the model. The goodness-of-fit (GOF) of the measurement and structural models was evaluated using recommended indices for sample sizes under 250 and observed variables between 12 and 30 [60]. The fine-tuned model demonstrated good overall fit: χ2(111, N = 160) = 220 (p < 0.001), RMSEA = 0.078, TLI = 0.930, CFI = 0.943, and NFI = 0.893. The standardized path estimates for the model are presented in Figure 2, with detailed coefficients between latent variables provided in Table 7.
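The fine-tuned model can also be expressed in lavaan-style syntax. The sketch below uses the Python semopy package as a stand-in for the semlj module; item codes follow Table A1 with dots replaced by underscores, the data file is hypothetical, and ACR.Q1 is excluded for its low factor loading as described above.

```python
import pandas as pd
from semopy import Model, calc_stats

# Lavaan-style specification of the fine-tuned model (H2a and H4b paths removed).
MODEL_DESC = """
# measurement model
TAI =~ TAI_Q1 + TAI_Q2 + TAI_Q3
ASK =~ ASK_Q1 + ASK_Q2 + ASK_Q3
ACR =~ ACR_Q2 + ACR_Q3
DTA =~ DTA_Q1 + DTA_Q2 + DTA_Q3
RBR =~ RBR_Q1 + RBR_Q2 + RBR_Q3
RBE =~ RBE_Q1 + RBE_Q2 + RBE_Q3
# structural model
TAI ~ ASK
DTA ~ TAI + ACR
RBR ~ DTA + ACR
RBE ~ RBR + ACR
"""

data = pd.read_csv("sem_items.csv")  # hypothetical: scored item-level data
model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())    # path estimates, standard errors, p-values
print(calc_stats(model))  # chi-square, CFI, TLI, RMSEA, and other fit indices
```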
The structural equation modeling analysis supported seven hypotheses, as illustrated in Figure 2 and detailed in Table 7. Key findings include the significant influence of interface transparency on dispatchers’ trust in the ATS system (Hypothesis 1) and the role of dispatchers’ knowledge mastery in shaping their understanding of interface transparency (Hypothesis 2b). Additionally, the capability redundancy of the ATS system was found to directly enhance dispatchers’ trust (Hypothesis 3). Trust, in turn, primarily influenced reliance behavior during routine monitoring operations (Hypothesis 4a), which further affected reliance behavior during emergency responses (Hypothesis 4c). Finally, the capability redundancy of the ATS system was confirmed to impact reliance behavior in both routine monitoring and emergency scenarios (Hypotheses 5a,b). These results highlight the interconnected roles of interface design, system knowledge, and capability redundancy in fostering trust and shaping reliance behavior in automation systems.

4. Discussion

4.1. Key Factors Shaping Dispatchers’ Trust in ATS Systems

The transparency of the ATS interface significantly enhances dispatchers’ trust in the system, supporting Hypothesis 1. A transparent interface provides operators with clear and accessible information, which is critical for building trust, especially under high time pressure [29]. This aligns with traditional research showing that transparent interfaces improve operator trust and subjective acceptance of automation systems, provided operators are proficient in their use [61]. Additionally, the capability redundancy of the ATS system directly and positively affects dispatchers’ trust, supporting Hypothesis 3. This finding is consistent with studies highlighting the importance of system reliability redundancy in fostering automation trust [43,62]. Furthermore, dispatchers’ knowledge of the ATS system significantly influences their perception of interface transparency, supporting Hypothesis 2b. This suggests that a deeper understanding of the system enhances operators’ ability to interpret and utilize interface information effectively [27].
However, Hypothesis 2a, which posited that dispatchers’ knowledge directly affects their trust in the ATS system, was not supported. This contrasts with Balfe’s study, which identified understanding of automation as the strongest dimension for constructing trust [39]. The discrepancy may stem from differences in methodology: while Balfe’s study used observational methods with a small sample, this study examined the impact of long-term accumulated knowledge on trust in a larger sample. Additionally, subway dispatchers’ work is highly complex and information-intensive, which may lead to a disconnect between knowledge and trust [63]. This phenomenon is also observed in other domains, such as self-driving vehicles, where operators may express difficulty in fully grasping automation-related knowledge but may still trust and rely on the system [64,65].

4.2. Impact of Trust on Dispatchers’ Behavioral Responses

Trust in the ATS system significantly influences dispatchers’ reliance behavior during routine monitoring, supporting Hypothesis 4a. This aligns with studies showing that operators’ trust in automation systems increases their reliance on the system, particularly when the interface effectively displays system status [41]. Furthermore, reliance behavior during routine monitoring positively affects reliance behavior during emergency responses, supporting Hypothesis 4c. This suggests that daily interactions with the ATS system shape operators’ confidence and behavior in high-stakes scenarios [57]. Additionally, the capability redundancy of the ATS system significantly impacts reliance behavior in both routine and emergency scenarios, supporting Hypotheses 5a,b. This finding corroborates traditional research emphasizing the role of system reliability in shaping operator dependence [66,67].
However, Hypothesis 4b, which proposed that trust directly affects reliance behavior during emergency responses, was not supported. This may be because, in emergency scenarios, dispatchers prioritize situational awareness and immediate action over their trust in the system. Interestingly, reliance behavior during emergencies is indirectly influenced by trust and capability redundancy through routine monitoring, as evidenced by the marginally significant mediation effects of ACR ⇒ RBR ⇒ RBE (p = 0.059) and DTA ⇒ RBR ⇒ RBE (p = 0.064). This indicates that daily reliance behavior serves as a bridge between trust, capability redundancy, and emergency responses. The lack of a direct effect of trust on emergency reliance may also reflect the unique demands of subway dispatching, where operators must balance trust in automation with their own expertise and situational judgment [68,69].
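These indirect effects are products of two path coefficients (e.g., the ACR ⇒ RBR path times the RBR ⇒ RBE path), and their significance is commonly assessed with bootstrap confidence intervals. A sketch of that procedure, reusing the hypothetical semopy model and data from the sketch above:

```python
import numpy as np
import pandas as pd
from semopy import Model

def bootstrap_indirect(data: pd.DataFrame, desc: str, path_a, path_b,
                       n_boot: int = 500, seed: int = 0):
    """95% bootstrap CI for an indirect effect, i.e., the product of two paths.

    path_a and path_b are (lval, rval) pairs matching rows of model.inspect(),
    e.g. ("RBR", "ACR") for the ACR -> RBR regression path.
    """
    rng = np.random.default_rng(seed)
    products = []
    for _ in range(n_boot):
        sample = data.sample(len(data), replace=True, random_state=rng)
        m = Model(desc)          # re-instantiate: fitting mutates the model
        m.fit(sample)
        est = m.inspect()

        def coef(lval, rval):
            row = est[(est["lval"] == lval) & (est["op"] == "~") & (est["rval"] == rval)]
            return float(row["Estimate"].iloc[0])

        products.append(coef(*path_a) * coef(*path_b))
    return np.percentile(products, [2.5, 97.5])

# e.g. the ACR => RBR => RBE pathway (MODEL_DESC and data from the earlier sketch):
# ci = bootstrap_indirect(data, MODEL_DESC, ("RBR", "ACR"), ("RBE", "RBR"))
```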

4.3. Task Complexity in Emergency vs. Routine Monitoring Situations: A Comparative Discussion

A non-parametric repeated measures ANOVA (Friedman test) was conducted on task complexity data across conditions in Part 2 of the study. The results revealed that all dimensions of contingency tasks were significantly more challenging than those in routine conditions, except for the third dimension, ambiguity, as shown in Table 8. This suggests that while most aspects of emergency response tasks are inherently more complex, the level of ambiguity remains consistent across both routine and emergency scenarios. This finding aligns with the nature of subway dispatching, where ambiguity in information interpretation is a constant challenge, regardless of the operational context.
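A per-dimension version of this two-condition comparison can be scripted as below. Since scipy’s friedmanchisquare requires at least three related samples, the Wilcoxon signed-rank test is used here as the two-level analogue of the Friedman test reported above; the file name and column naming scheme are hypothetical.

```python
import pandas as pd
from scipy import stats

tc = pd.read_csv("task_complexity.csv")  # hypothetical wide-format scores

DIMENSIONS = ["Size", "Variety", "Ambiguity", "Relationship", "Variability",
              "Unreliability", "Novelty", "Incongruity", "Action Complexity",
              "Temporal Demand"]

for dim in DIMENSIONS:
    # Paired, per-dispatcher comparison of emergency vs. routine ratings.
    stat, p = stats.wilcoxon(tc[f"Emergency {dim}"], tc[f"Routine {dim}"])
    print(f"{dim}: W = {stat:.1f}, p = {p:.4g}")
```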
The interviews further highlighted that emergency response tasks require greater flexibility and adaptability from dispatchers. The significant differences in task complexity between routine monitoring and emergency response conditions lead to substantial changes in task content and workflow [48]. In emergency situations, dispatchers tend to rely on multiple channels for the repeated confirmation of the same information, as depending solely on the ATS system is often impractical [70]. This multi-channel approach reflects the heightened need for accuracy and reliability during high-stakes scenarios. Conversely, in routine monitoring environments, the security and completeness of the operation system are relatively robust, allowing dispatchers to place stronger reliance on the ATS system based on their established trust in its performance. However, the trust developed under routine conditions may not seamlessly translate to emergency response situations, where operational demands and task dynamics undergo drastic changes. This discrepancy underscores the importance of designing automation systems that can adapt to varying levels of task complexity and operational contexts.

4.4. Future Directions and Practical Implications

To address automation trust calibration, this study recommends designing transparent ATS interfaces that provide clear, real-time feedback, enabling dispatchers to maintain an appropriate level of trust. Integrating targeted training programs to enhance dispatchers’ system knowledge can further optimize trust calibration. For instance, our research team is currently developing a next-generation ATS human–machine interface (see Figure 3) that features transparent information channels, prototype completion, and ongoing interaction design and human factors validation. This interface aims to enhance dispatchers’ understanding of system operations and improve trust calibration. Additionally, we are designing an adaptive training system (see Figure 4) that incorporates adaptive learning modules and usability testing, ensuring dispatchers can effectively build and maintain system proficiency. Future research should explore the interaction between system transparency, feedback mechanisms, and trust calibration, aiming to identify the optimal level of transparency that balances trust with cognitive workload. Advancements in artificial intelligence and deep learning also offer promising avenues for automating trust assessment and calibration. While this study focused on Chinese subway dispatchers, extending the research to other regions and operational contexts will help establish a more comprehensive understanding of automation trust in safety-critical systems.

4.5. Limitations and Generalizability

This study has several limitations that should be acknowledged. First, the heterogeneity of ATS systems—ranging from differences in system logic and interface design to terminology and operational rules across cities—constrained the selection of subjects and limited the sample size. These variations, while reflective of real-world conditions, may affect the generalizability of the findings. Second, due to sample size constraints, the study focused on a limited set of factors derived from pre-experiment interviews and questionnaires. Future research with larger and more diverse samples could explore a three-dimensional automation trust model encompassing system factors (e.g., failure history and safety redundancy), dispatcher factors (e.g., individual tendencies and skill levels), and environmental factors (e.g., task complexity and multitasking demands). Finally, the study’s focus on Chinese subway dispatchers raises questions about cross-cultural applicability, necessitating further validation in other regions and operational contexts.

5. Conclusions

This study investigates the factors influencing the formation of automation trust in ATS systems among subway dispatchers and its behavioral consequences. Through structural equation modeling, we validated the direct positive impact of interface transparency on automation trust, highlighting its critical role in fostering dispatchers’ confidence in the system. While dispatchers’ knowledge of the operational system did not directly affect their trust in the ATS, it significantly enhanced their understanding of the interface information, underscoring the importance of system familiarity. Furthermore, the capability redundancy of the automation system not only directly influenced dispatchers’ trust but also shaped their reliance behavior across different operational conditions. Notably, reliance behavior during emergency responses was more closely tied to routine behavior and system safety redundancy than to automation trust itself. This divergence likely stems from the inherent differences in task complexity and operational demands between routine monitoring and emergency scenarios. These findings provide valuable insights for designing transparent interfaces, optimizing system redundancy, and developing targeted training programs to enhance automation trust and operational efficiency in safety-critical systems.

Author Contributions

Conceptualization, J.W. and W.F.; Data curation, J.W.; Formal analysis, J.W.; Funding acquisition, W.F.; Investigation, J.W.; Methodology, J.W.; Project administration, W.F.; Resources, W.F.; Supervision, W.F.; Validation, J.W. and H.B.; Visualization, J.W. and H.B.; Writing—original draft, J.W.; Writing—review and editing, J.W., H.B. and W.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 72271015 and 62203039); the Systematic major projects of China State Railway Group (Grant No. P2023J003); the Natural Science Foundation of Beijing Municipality, China (Grant No. L241057 and L191018); the Ministry of Education’s Industry–University Cooperation Collaborative Education Project, 2020 (Grant No. 202002SJ08); and the National Key Research and Development Program of China (Grant No. 2023YFF0615904).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Academic Committee, State Key Laboratory of Advanced Rail Autonomous Operation, Beijing Jiaotong University (protocol code L191018-2021070901, 9 July 2021).

Informed Consent Statement

Informed consent was obtained from all participants for their involvement in the study. Written informed consent was also obtained from all participants for the publication of this paper.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy and confidentiality agreements with the metro operator, which protect sensitive operational details and dispatchers’ personal information. Data sharing requires written permission from the data provider.

Acknowledgments

We acknowledge Beiyuan Guo and Hanzhao Qiu for their help with the design, adaptation, and implementation of the experiment. We acknowledge all the participants who cooperated by adjusting their physiological and psychological state and actively mobilized their individual abilities to fully participate in the experiment. We are also grateful to the reviewers and editors who made valuable corrections during the publication process.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of the data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Automatic Train Supervision System Automation Trust Questionnaire

Instructions: Please rate in square brackets how much you agree with the following statements based on your work experience, understanding of the ATS system, and other factors.
Scores range from 1 to 7 points:
1 = completely disagree;
2 = strongly disagree;
3 = basically disagree;
4 = not sure (neutral attitude);
5 = basically agree;
6 = strongly agree;
7 = completely agree.
Please note that the questionnaire contains a considerable number of negatively worded items, and there is no time limit for answering. We hope you will answer the questionnaire carefully, as your responses will significantly benefit scientific research and the subsequent improvement of the engineered system.
[     ] 1. I can easily understand and access the ATS system’s interface and accurately build a mental model of the signal and field status.
[     ] 2. The human–computer interaction experience of the current system is poor and the user interface is not user-friendly.
[     ] 3. I have confidence in the ATS system I use, subjectively trusting it to perform routine monitoring tasks and trusting it to give me sufficient help in emergency response, as a teacher would.
[     ] 4. The ATS system has not resulted in significant improvements in my productivity.
[     ] 5. The ATS system I use has never had a fatal crash that affected operations.
[     ] 6. The ATS system provides me with the most critical decision-making information for my emergency response work.
[     ] 7. For practically all of my normal operations monitoring duties, I rely on the ATS system.
[     ] 8. The information is efficiently ranked and filtered by the ATS system I use at work.
[     ] 9. The ATS system makes me more nervous and disorganized during emergency response.
[     ] 10. The ATS system I use at work pushes the current information I need to me in a timely manner, or presents the appropriate information quickly when I make an information query.
[     ] 11. The ATS system I used at work was unable to quickly and accurately detect anomalies in the field.
[     ] 12. I have a thorough and in-depth understanding of the hardware and software systems of the ATS system and the knowledge associated with them.
[     ] 13. When the ATS system disagrees with my decision-making judgment, I trust my own comprehensive judgment more.
[     ] 14. My work pressure is significantly reduced due to the adoption of the ATS system.
[     ] 15. Almost none of the functions of the ATS system are used in the emergency response work.
[     ] 16. The ATS system I work with is completely inconsistent with the operating rules in terms of language expressions such as textual terms and abbreviations.
[     ] 17. When I use the ATS system at work, I use other channels (such as phone, OA, instant messaging software WeChat, etc.) to check the accuracy of the information presented and actions performed by the ATS system.
[     ] 18. I believe that the hardware and software system backups for ATS systems are far from adequate to maintain reliable operation of the system.
19. Please rank 3 terms that enhance your trust in the ATS system in descending order of impact.
20. Please rank 3 terms that reduce your trust in the ATS system in descending order of impact.
[     ] 21. The vast majority of emergency response work involves many personnel and equipment.
[     ] 22. The work content of the emergency response is extensive.
[     ] 23. There needs to be a distinct and standardized operational process for most emergency responses.
[     ] 24. The primary purpose of emergency response is to intervene in conflicts between personnel, equipment, and the environment.
[     ] 25. Most emergency response work is carried out in rapidly changing situations.
[     ] 26. In most emergency response situations, personnel, equipment, and environments are unreliable.
[     ] 27. Most emergency response work involves dealing with unconventional, unprecedented events.
[     ] 28. A lack of coordination between the environment, equipment, and employees creates most emergency response work scenarios.
[     ] 29. Most emergency response work entails intricate operating procedures.
[     ] 30. Most work done in emergency response situations is under intense time pressure.
[     ] 31. The vast majority of routine monitoring work involves many personnel and equipment.
[     ] 32. The work content of the routine monitoring is extensive.
[     ] 33. There needs to be a distinct and standardized operational process for most routine monitoring work.
[     ] 34. The primary purpose of routine monitoring is to intervene in conflicts between personnel, equipment, and the environment.
[     ] 35. Most routine monitoring work is carried out in rapidly changing situations.
[     ] 36. In most routine monitoring situations, personnel, equipment, and environments are unreliable.
[     ] 37. Most routine monitoring work involves dealing with unconventional, unprecedented events.
[     ] 38. A lack of coordination between the environment, equipment, and employees creates most routine monitoring work scenarios.
[     ] 39. Most routine monitoring work entails intricate operating procedures.
[     ] 40. Most work done in routine monitoring situations is under intense time pressure.

Appendix B. Overview of Questionnaire Setup

To reduce the influence of subjects’ subjective intentions on the validity of the questionnaire research, the automation trust items were presented in a randomized order in the distributed questionnaires. To increase the subjects’ engagement when answering, half of the items in the automation trust section were set as reverse scales. The correspondence between the questionnaire items and the research dimensions is shown in Table A1.
Table A1. Correspondence between questionnaire items and research dimensions.
No. | Research Dimension (Automation Trust in ATS) | No. | Research Dimension (Task Complexity)
1 | ASK.Q2 | 21 | Emergency TC-1-Size
2 | TAI.Q1 (Scale Reversed) | 22 | Emergency TC-2-Variety
3 | DTA.Q2 | 23 | Emergency TC-3-Ambiguity
4 | RBR.Q2 (Scale Reversed) | 24 | Emergency TC-4-Relationship
5 | ACR.Q2 | 25 | Emergency TC-5-Variability
6 | RBE.Q2 | 26 | Emergency TC-6-Unreliability
7 | RBR.Q1 | 27 | Emergency TC-7-Novelty
8 | TAI.Q3 | 28 | Emergency TC-8-Incongruity
9 | RBE.Q3 (Scale Reversed) | 29 | Emergency TC-9-Action Complexity
10 | TAI.Q2 | 30 | Emergency TC-10-Temporal Demand
11 | ACR.Q3 (Scale Reversed) | 31 | Routine Monitoring TC-1-Size
12 | ASK.Q1 | 32 | Routine Monitoring TC-2-Variety
13 | DTA.Q3 (Scale Reversed) | 33 | Routine Monitoring TC-3-Ambiguity
14 | RBR.Q3 | 34 | Routine Monitoring TC-4-Relationship
15 | RBE.Q1 (Scale Reversed) | 35 | Routine Monitoring TC-5-Variability
16 | ASK.Q3 (Scale Reversed) | 36 | Routine Monitoring TC-6-Unreliability
17 | DTA.Q1 (Scale Reversed) | 37 | Routine Monitoring TC-7-Novelty
18 | ACR.Q1 (Scale Reversed) | 38 | Routine Monitoring TC-8-Incongruity
19 | Open-Ended Questions on Enhancing Trust in Automation | 39 | Routine Monitoring TC-9-Action Complexity
20 | Open-Ended Questions on Reducing Trust in Automation | 40 | Routine Monitoring TC-10-Temporal Demand
Notes: TAI, Transparency of the ATS Interface; ASK, dispatcher’s ATS System Knowledge; ACR, ATS’s Capability Redundancy; DTA, Dispatcher’s Trust in the ATS system; RBR, Reliance Behavior on ATS under Routine monitoring; RBE, Reliance Behavior on ATS under Emergency response; TC, Task Complexity.

References

  1. Zeng, J.; Li, Z. Trust towards autonomous driving, worthwhile travel time, and new mobility business opportunities. Appl. Econ. 2024, 1–15. [Google Scholar] [CrossRef]
  2. Vasile, L.; Seitz, B.; Staab, V.; Liebherr, M.; Däsch, C.; Schramm, D. Influences of personal driving styles and experienced system characteristics on driving style preferences in automated driving. Appl. Sci. 2023, 13, 8855. [Google Scholar] [CrossRef]
  3. Kim, H. Trustworthiness of unmanned automated subway services and its effects on passengers’ anxiety and fear. Transp. Res. Part F Traffic Psychol. Behav. 2019, 65, 158–175. [Google Scholar]
  4. Tilbury, J.; Flowerday, S. Humans and Automation: Augmenting Security Operation Centers. J. Cybersecur. Priv. 2024, 4, 388–409. [Google Scholar] [CrossRef]
  5. Ferraris, D.; Fernandez-Gago, C.; Lopez, J. A model-driven approach to ensure trust in the IoT. Hum.-Centric Comput. Inf. Sci. 2020, 10, 50. [Google Scholar]
  6. Parasuraman, R.; Mouloua, M.; Molloy, R. Effects of adaptive task allocation on monitoring of automated systems. Hum. Factors 1996, 38, 665–679. [Google Scholar] [PubMed]
  7. Sauer, J.; Chavaillaz, A. The use of adaptable automation: Effects of extended skill lay-off and changes in system reliability. Appl. Ergon. 2017, 58, 471–481. [Google Scholar] [CrossRef] [PubMed]
  8. Bowden, V.K.; Griffiths, N.; Strickland, L.; Loft, S. Detecting a single automation failure: The impact of expected (but not experienced) automation reliability. Hum. Factors 2023, 65, 533–545. [Google Scholar]
  9. Feng, J.; Shang, J.; Ibrahim, A.N.H.; Borhan, M.N.B.; Tao, Y. Analysis of Safe Operation Behavior for Dispatching Fully Automated Operation Urban Rail Transit Lines Based on FTA. In Proceedings of the The International Conference on Artificial Intelligence and Logistics Engineering, Guiyang, China, 16–17 April 2024; pp. 142–151. [Google Scholar]
  10. Kippnich, M.; Kowalzik, B.; Cermak, R.; Kippnich, U.; Kranke, P.; Wurmb, T. Katastrophen-und Zivilschutz in Deutschland. AINS-Anästhesiologie Intensivmed. Notfallmedizin Schmerzther. 2017, 52, 606–617. [Google Scholar]
  11. Ferraro, J.C.; Mouloua, M. Effects of automation reliability on error detection and attention to auditory stimuli in a multi-tasking environment. Appl. Ergon. 2021, 91, 103303. [Google Scholar]
  12. Fahim, M.A.A.; Khan, M.M.H.; Jensen, T.; Albayram, Y. Human vs. Automation: Which One Will You Trust More If You Are About to Lose Money? Int. J. Hum.–Comput. Interact. 2022, 39, 2420–2435. [Google Scholar] [CrossRef]
  13. Dimitrova, E.; Tomov, S. Automatic Train Operation for Mainline. In Proceedings of the 2021 13th Electrical Engineering Faculty Conference (BulEF), Varna, Bulgaria, 8–11 September 2021; pp. 1–4. [Google Scholar] [CrossRef]
  14. Singh, P.; Dulebenets, M.A.; Pasha, J.; Gonzalez, E.D.S.; Lau, Y.-Y.; Kampmann, R. Deployment of autonomous trains in rail transportation: Current trends and existing challenges. IEEE Access 2021, 9, 91427–91461. [Google Scholar] [CrossRef]
  15. Ajenaghughrure, I.B.; Sousa, S.D.C.; Lamas, D. Measuring trust with psychophysiological signals: A systematic mapping study of approaches used. Multimodal Technol. Interact. 2020, 4, 63. [Google Scholar] [CrossRef]
  16. Calhoun, C.; Bobko, P.; Gallimore, J.; Lyons, J. Linking precursors of interpersonal trust to human-automation trust: An expanded typology and exploratory experiment. J. Trust Res. 2019, 9, 28–46. [Google Scholar] [CrossRef]
  17. Schaefer, K. The Perception and Measurement of Human-Robot Trust; University of Central Florida: Orlando, FL, USA, 2013.
  18. Pushparaj, K.; Ky, G.; Ayeni, A.J.; Alam, S.; Duong, V.N. A quantum-inspired model for human-automation trust in air traffic controllers derived from functional Magnetic Resonance Imaging and correlated with behavioural indicators. J. Air Transp. Manag. 2021, 97, 102143. [Google Scholar] [CrossRef]
  19. Malekshahi Rad, M.; Rahmani, A.M.; Sahafi, A.; Nasih Qader, N. Social Internet of Things: Vision, challenges, and trends. Hum.-Centric Comput. Inf. Sci. 2020, 10, 52. [Google Scholar] [CrossRef]
  20. Rheu, M.; Shin, J.Y.; Peng, W.; Huh-Yoo, J. Systematic review: Trust-building factors and implications for conversational agent design. Int. J. Hum.–Comput. Interact. 2021, 37, 81–96. [Google Scholar] [CrossRef]
  21. Kohn, S.C.; De Visser, E.J.; Wiese, E.; Lee, Y.-C.; Shaw, T.H. Measurement of trust in automation: A narrative review and reference guide. Front. Psychol. 2021, 12, 604977. [Google Scholar] [CrossRef]
  22. Liehner, G.L.; Brauner, P.; Schaar, A.; Ziefle, M. Delegation of Moral Tasks to Automated Agents—The Impact of Risk and Context on Trusting a Machine to Perform a Task. IEEE Trans. Technol. Soc. 2022, 3, 46–57. [Google Scholar] [CrossRef]
  23. Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef]
  24. French, B.; Duenser, A.; Heathcote, A. Trust in Automation A Literature Review; IEEE: New York, NY, USA, 2018. [Google Scholar]
  25. Selkowitz, A.R.; Lakhmani, S.G.; Chen, J.Y.C. Using agent transparency to support situation awareness of the Autonomous Squad Member. Cogn. Syst. Res. 2017, 46, 13–25. [Google Scholar] [CrossRef]
  26. Wang, K.; Hou, W.; Hong, L.; Guo, J. Smart Transparency: A User-Centered Approach to Improving Human–Machine Interaction in High-Risk Supervisory Control Tasks. Electronics 2025, 14, 420. [Google Scholar] [CrossRef]
  27. Eloy, L.; Doherty, E.J.; Spencer, C.A.; Bobko, P.; Hirshfield, L. Using fNIRS to identify transparency-and reliability-sensitive markers of trust across multiple timescales in collaborative human-human-agent triads. Front. Neuroergonomics 2022, 3, 838625. [Google Scholar] [CrossRef]
  28. Li, J.; Liu, J.; Wang, X.; Liu, L. The Impact of Transparency on Driver Trust and Reliance in Highly Automated Driving: Presenting Appropriate Transparency in Automotive HMI. Appl. Sci. 2024, 14, 3203. [Google Scholar] [CrossRef]
  29. Bhaskara, A.; Duong, L.; Brooks, J.; Li, R.; McInerney, R.; Skinner, M.; Pongracic, H.; Loft, S. Effect of automation transparency in the management of multiple unmanned vehicles. Appl. Ergon. 2021, 90, 103243. [Google Scholar] [CrossRef]
  30. Motamedi, S.; Wang, P.; Zhang, T.; Chan, C.-Y. Acceptance of Full Driving Automation: Personally Owned and Shared-Use Concepts. Hum. Factors 2020, 62, 288–309. [Google Scholar] [CrossRef]
  31. Brauner, P.; Philipsen, R.; Valdez, A.C.; Ziefle, M. What happens when decision support systems fail?—The importance of usability on performance in erroneous systems. Behav. Inf. Technol. 2019, 38, 1225–1242. [Google Scholar] [CrossRef]
  32. Cheng, X.; Guo, F.; Chen, J.; Li, K.; Zhang, Y.; Gao, P. Exploring the Trust Influencing Mechanism of Robo-Advisor Service: A Mixed Method Approach. Sustainability 2019, 11, 4917. [Google Scholar] [CrossRef]
  33. Parasuraman, R.; Riley, V. Humans and Automation: Use, misuse, disuse, abuse. Hum. Factors 1997, 39, 230–253. [Google Scholar] [CrossRef]
  34. Khastgir, S.; Birrell, S.; Dhadyalla, G.; Jennings, P. Calibrating trust through knowledge: Introducing the concept of informed safety for automation in vehicles. Transp. Res. Pt. C-Emerg. Technol. 2018, 96, 290–303. [Google Scholar] [CrossRef]
  35. Brdnik, S.; Podgorelec, V.; Šumak, B. Assessing perceived trust and satisfaction with multiple explanation techniques in XAI-enhanced learning analytics. Electronics 2023, 12, 2594. [Google Scholar] [CrossRef]
  36. Zhang, X.; Song, Z.; Huang, Q.; Pan, Z.; Li, W.; Gong, R.; Zhao, B. Shared eHMI: Bridging Human–Machine Understanding in Autonomous Wheelchair Navigation. Appl. Sci. 2024, 14, 463. [Google Scholar] [CrossRef]
  37. Niu, J.W.; Geng, H.; Zhang, Y.J.; Du, X.P. Relationship between automation trust and operator performance for the novice and expert in spacecraft rendezvous and docking (RVD). Appl. Ergon. 2018, 71, 1–8. [Google Scholar] [CrossRef] [PubMed]
  38. Chen, L.; Zhang, L.; Zheng, W. Research on Quantitative Evaluation of Emergency Handling Workload of Railway Dispatcher. In Proceedings of the 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), Bilbao, Spain, 24–28 September 2023; pp. 61–66. [Google Scholar]
  39. Balfe, N.; Sharples, S.; Wilson, J.R. Understanding Is Key: An Analysis of Factors Pertaining to Trust in a Real-World Automation System. Hum. Factors 2018, 60, 477–495. [Google Scholar] [CrossRef] [PubMed]
  40. Li, Z.; Li, X.; Jiang, B. How people perceive the safety of self-driving buses: A quantitative analysis model of perceived safety. Transp. Res. Rec. 2023, 2677, 1356–1366. [Google Scholar]
  41. Knocton, S.; Hunter, A.; Connors, W.; Dithurbide, L.; Neyedli, H.F. The Effect of Informing Participants of the Response Bias of an Automated Target Recognition System on Trust and Reliance Behavior. Hum. Factors 2021, 65, 189–199. [Google Scholar]
  42. Long, S.K.; Lee, J.; Yamani, Y.; Unverricht, J.; Itoh, M. Does automation trust evolve from a leap of faith? An analysis using a reprogrammed pasteurizer simulation task. Appl. Ergon. 2022, 100, 103674.
  43. Hutchinson, J.; Strickland, L.; Farrell, S.; Loft, S. The Perception of Automation Reliability and Acceptance of Automated Advice. Hum. Factors 2022, 65, 1596–1612.
  44. Stowers, K.; Oglesby, J.; Sonesh, S.; Leyva, K.; Iwig, C.; Salas, E. A framework to guide the assessment of human–machine systems. Hum. Factors 2017, 59, 172–188.
  45. Klingbeil, A.; Grützner, C.; Schreck, P. Trust and reliance on AI—An experimental study on the extent and costs of overreliance on AI. Comput. Hum. Behav. 2024, 160, 108352.
  46. Sanchez, J.; Rogers, W.A.; Fisk, A.D.; Rovira, E. Understanding reliance on automation: Effects of error type, error distribution, age and experience. Theor. Issues Ergon. Sci. 2014, 15, 134–160.
  47. Chen, Y.; Zahedi, F.M.; Abbasi, A.; Dobolyi, D. Trust calibration of automated security IT artifacts: A multi-domain study of phishing-website detection tools. Inf. Manag. 2021, 58, 103394.
  48. Niu, K.; Liu, W.; Zhang, J.; Liang, M.; Li, H.; Zhang, Y.; Du, Y. A Task Complexity Analysis Method to Study the Emergency Situation under Automated Metro System. Int. J. Environ. Res. Public Health 2023, 20, 2314.
  49. Jin, M.; Lu, G.; Chen, F.; Shi, X.; Tan, H.; Zhai, J. Modeling takeover behavior in level 3 automated driving via a structural equation model: Considering the mediating role of trust. Accid. Anal. Prev. 2021, 157, 106156.
  50. Fenn, Z. Lassoing the Loop: An Examination of Factors Influencing Trust in Automation. Master’s Thesis, University of Basel, Basel, Switzerland, 2020.
  51. Gulati, S.; Sousa, S.; Lamas, D. Design, development and evaluation of a human-computer trust scale. Behav. Inf. Technol. 2019, 38, 1004–1015.
  52. Chancey, E.T. The Effects of Alarm System Errors on Dependence: Moderated Mediation of Trust with and Without Risk. Ph.D. Thesis, Old Dominion University, Norfolk, VA, USA, 2016.
  53. Mayer, R.C.; Davis, J.H. The effect of the performance appraisal system on trust for management: A field quasi-experiment. J. Appl. Psychol. 1999, 84, 123–136.
  54. Madsen, M.; Gregor, S. Measuring human-computer trust. In Proceedings of the 11th Australasian Conference on Information Systems, Brisbane, Australia, 6–8 December 2000; pp. 6–8.
  55. Singh, I.L.; Molloy, R.; Parasuraman, R. Automation-induced “complacency”: Development of the complacency-potential rating scale. Int. J. Aviat. Psychol. 1993, 3, 111–122.
  56. Merritt, S.M.; Ako-Brew, A.; Bryant, W.J.; Staley, A.; McKenna, M.; Leone, A.; Shirase, L. Automation-Induced Complacency Potential: Development and Validation of a New Scale. Front. Psychol. 2019, 10, 225–237.
  57. Guo, Z.; Zou, J.; He, C.; Tan, X.; Chen, C.; Feng, G. The Importance of Cognitive and Mental Factors on Prediction of Job Performance in Chinese High-Speed Railway Dispatchers. J. Adv. Transp. 2020, 2020, 7153972.
  58. Liu, P.; Li, Z. Task complexity: A review and conceptualization framework. Int. J. Ind. Ergon. 2012, 42, 553–568.
  59. Comrey, A.L.; Lee, H.B. A First Course in Factor Analysis; Psychology Press: Hove, UK, 2013.
  60. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis; Cengage Learning: Andover, UK, 2019.
  61. Muthumani, A.; Diederichs, F.; Galle, M.; Schmid-Lorch, S.; Forsberg, C.; Widlroither, H.; Feierle, A.; Bengler, K. How visual cues on steering wheel improve users’ trust, experience, and acceptance in automated vehicles. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Orlando, FL, USA, 16–20 July 2020; pp. 186–192.
  62. Lopez, J.; Watkins, H.; Pak, R. Enhancing Component-Specific Trust with Consumer Automated Systems through Humanness Design. Ergonomics 2022, 66, 291–302.
  63. Lee, Y.; Ha, M.; Kwon, S.; Shim, Y.; Kim, J. Egoistic and altruistic motivation: How to induce users’ willingness to help for imperfect AI. Comput. Hum. Behav. 2019, 101, 180–196.
  64. Lee, J.; Abe, G.; Sato, K.; Itoh, M. Developing human-machine trust: Impacts of prior instruction and automation failure on driver trust in partially automated vehicles. Transp. Res. Part F Traffic Psychol. Behav. 2021, 81, 384–395.
  65. Neuhuber, N.; Lechner, G.; Kalayci, T.E.; Stocker, A.; Kubicek, B. Age-related differences in the interaction with advanced driver assistance systems-a field study. In Proceedings of the International Conference on Human-Computer Interaction, Copenhagen, Denmark, 19–24 July 2020; pp. 363–378.
  66. Miller, D.; Sun, A.; Ju, W. Situation Awareness with Different Levels of Automation. In Proceedings of the 2014 IEEE International Conference on Systems, Man and Cybernetics, San Diego, CA, USA, 5–8 October 2014; IEEE: New York, NY, USA, 2014; pp. 688–693.
  67. Karpinsky, N.D.; Chancey, E.T.; Palmer, D.B.; Yamani, Y. Automation trust and attention allocation in multitasking workspace. Appl. Ergon. 2018, 70, 194–201.
  68. Pipkorn, L.; Victor, T.W.; Dozza, M.; Tivesten, E. Driver conflict response during supervised automation: Do hands on wheel matter? Transp. Res. Part F Traffic Psychol. Behav. 2021, 76, 14–25.
  69. Nordhoff, S.; Stapel, J.; He, X.; Gentner, A.; Happee, R. Perceived safety and trust in SAE Level 2 partially automated cars: Results from an online questionnaire. PLoS ONE 2021, 16, e0260953.
  70. Sato, T. Exploring the Effects of Task Priority on Attention Allocation and Trust Towards Imperfect Automation: A Flight Simulator Study. Master’s Thesis, Old Dominion University, Norfolk, VA, USA, 2020.
Figure 1. Conceptual framework of the hypotheses and their influence paths. Notes: TAI, Transparency of the ATS Interface; ASK, dispatcher’s ATS System Knowledge; ACR, ATS’s Capability Redundancy; DTA, Dispatcher’s Trust in the ATS system; RBR, Reliance Behavior on ATS under Routine monitoring; RBE, Reliance Behavior on ATS under Emergency response.

Figure 2. Standardized path estimates for the structural model.

Figure 3. Next-generation ATS human–machine interface under development, featuring transparent information channels; the prototype is complete, and interaction design and human factors validation are ongoing.

Figure 4. Training system under development by the research team, featuring adaptive learning modules; usability testing is ongoing.
Table 1. Conceptual definitions and sources of research constructs.

| Construct | Factor Code | Conceptual Definition and Sources |
|---|---|---|
| Dispatcher’s Trust in the ATS System | DTA | How much the dispatcher subjectively believes in the ATS system [39,51]. |
| ATS’s Capability Redundancy | ACR | Redundancy in the ATS system’s capability to ensure safety that is perceptible to dispatchers [31,52,53]. |
| Dispatcher’s ATS System Knowledge | ASK | The dispatcher’s familiarity with the ATS system, associated hardware- and software-related information, and operational dispatching rules and regulations [3,54]. |
| Transparency of the ATS Interface | TAI | The degree to which the user interface allows the dispatcher to review past history records, understand the current system status, and assist in predicting future trends [30,31,32]. |
| Reliance Behavior on ATS under Routine Monitoring | RBR | The extent to which the dispatcher relies on and benefits from the ATS system during routine monitoring conditions [55,56]. |
| Reliance Behavior on ATS under Emergency Response | RBE | The extent to which the dispatcher relies on and benefits from the ATS system during emergency response conditions [55,56]. |
Table 2. Hypotheses and variable relationships.

| Hypothesis Code | H1 | H2a | H2b | H3 | H4a | H4b | H4c | H5a | H5b |
|---|---|---|---|---|---|---|---|---|---|
| Predictor Code | TAI | ASK | ASK | ACR | DTA | DTA | RBR | ACR | ACR |
| Dependent Code | DTA | DTA | TAI | DTA | RBR | RBE | RBE | RBR | RBE |
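For readers who wish to reproduce this kind of path structure, the sketch below shows how the measurement and structural models implied by Table 2 could be specified in Python with the semopy package. This is an illustrative assumption rather than the authors’ actual code: the paper does not state which SEM software was used, and the item names (e.g., TAI_Q1) and the data file survey_responses.csv are hypothetical placeholders.

```python
# Minimal sketch, assuming the semopy package; item names and the CSV
# file are hypothetical placeholders, not the authors' actual materials.
import pandas as pd
import semopy

# lavaan-style description: "=~" defines latent measurement models,
# "~" defines the structural (regression) paths of hypotheses H1-H5b.
# ACR_Q1 is omitted because it was dropped in the factor analysis (Table 4).
MODEL_DESC = """
TAI =~ TAI_Q1 + TAI_Q2 + TAI_Q3
ASK =~ ASK_Q1 + ASK_Q2 + ASK_Q3
ACR =~ ACR_Q2 + ACR_Q3
DTA =~ DTA_Q1 + DTA_Q2 + DTA_Q3
RBR =~ RBR_Q1 + RBR_Q2 + RBR_Q3
RBE =~ RBE_Q1 + RBE_Q2 + RBE_Q3
TAI ~ ASK
DTA ~ TAI + ASK + ACR
RBR ~ DTA + ACR
RBE ~ DTA + RBR + ACR
"""

data = pd.read_csv("survey_responses.csv")  # one column per questionnaire item
model = semopy.Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # loadings, path estimates, SEs, z and p values
```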
Table 3. Survey participant demographics.

| Age | Years of Service | Years of Experience in Dispatching | Educational Background |
|---|---|---|---|
| 24–33: 99 (46.3%) | 1–10: 84 (39.3%) | 1–9: 150 (70.1%) | Bachelor’s Degree: 189 (88.3%) |
| 34–43: 102 (47.7%) | 11–20: 100 (46.7%) | 10–19: 57 (26.6%) | Associate Degree: 17 (7.9%) |
| 44–53: 11 (5.1%) | 21–30: 27 (12.6%) | 20–29: 4 (1.9%) | Graduate Degree: 8 (3.7%) |
| 54–63: 2 (0.9%) | 31–40: 3 (1.4%) | 30–39: 3 (1.4%) | |
Table 4. Final factor loadings, Cronbach’s α, and explained variance after excluding low-loading items (with pre-exclusion values for excluded items).

| Item | Mean | SD | F1 | F2 | F3 | F4 | F5 | F6 |
|---|---|---|---|---|---|---|---|---|
| ACR Q1 (excluded due to low factor loading) | 3.701 | 1.758 | | | | | | 0.353 |
| ACR Q2 | 4.206 | 1.530 | | | | | | 0.870 |
| ACR Q3 | 3.775 | 1.890 | | | | | | 0.853 |
| ASK Q1 | 4.713 | 1.334 | | | | | 0.769 | |
| ASK Q2 | 5.112 | 1.534 | | | | | 0.863 | |
| ASK Q3 | 5.037 | 1.378 | | | | | 0.890 | |
| TAI Q1 | 3.938 | 1.482 | | | | 0.785 | | |
| TAI Q2 | 4.169 | 1.727 | | | | 0.932 | | |
| TAI Q3 | 3.681 | 1.531 | | | | 0.851 | | |
| DTA Q1 | 3.619 | 1.693 | | | 0.844 | | | |
| DTA Q2 | 3.737 | 1.523 | | | 0.897 | | | |
| DTA Q3 | 3.337 | 1.533 | | | 0.820 | | | |
| RBR Q1 | 4.875 | 1.444 | 0.868 | | | | | |
| RBR Q2 | 5.088 | 1.490 | 0.946 | | | | | |
| RBR Q3 | 4.938 | 1.512 | 0.842 | | | | | |
| RBE Q1 | 5.675 | 1.315 | | 0.932 | | | | |
| RBE Q2 | 5.094 | 1.268 | | 0.805 | | | | |
| RBE Q3 | 5.506 | 1.274 | | 0.834 | | | | |
| Explained Variance (%) | | | 14.14 | 13.62 | 13.32 | 13.26 | 13.17 | 9.42 |
| Cumulative Explained Variance (%) | | | 14.14 | 27.76 | 41.08 | 54.34 | 67.51 | 76.93 |
| Cronbach’s α | | | 0.918 | 0.910 | 0.893 | 0.898 | 0.891 | 0.860 |
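The reliability coefficients in the last row of Table 4 follow the standard Cronbach’s α formula, α = k/(k − 1) · (1 − Σs²ᵢ / s²ₜ), where k is the number of items, s²ᵢ the variance of item i, and s²ₜ the variance of the summed scale score. A minimal sketch in Python/NumPy (the function name and data layout are our own illustration):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items in the scale
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# e.g., the three RBR items for the 214 respondents would be passed as
# a 214 x 3 matrix:
# cronbach_alpha(df[["RBR_Q1", "RBR_Q2", "RBR_Q3"]].to_numpy())
```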
Table 5. Confirmatory factor analysis of the SEM measurement model.

| Latent | Observed | Estimate | SE | 95% CI Lower | 95% CI Upper | β | z | p | AVE |
|---|---|---|---|---|---|---|---|---|---|
| ACR | ACR.Q2 | 1 | 0 | 1 | 1 | 0.950 | | | 0.753 |
| | ACR.Q3 | 1.052 | 0.122 | 0.813 | 1.292 | 0.809 | 8.612 | <0.001 | |
| ASK | ASK.Q1 | 1 | 0 | 1 | 1 | 0.798 | | | 0.735 |
| | ASK.Q2 | 1.209 | 0.102 | 1.009 | 1.408 | 0.838 | 11.891 | <0.001 | |
| | ASK.Q3 | 1.207 | 0.093 | 1.025 | 1.389 | 0.932 | 13.015 | <0.001 | |
| TAI | TAI.Q1 | 1 | 0 | 1 | 1 | 0.869 | | | 0.779 |
| | TAI.Q2 | 1.204 | 0.081 | 1.046 | 1.362 | 0.898 | 14.938 | <0.001 | |
| | TAI.Q3 | 1.043 | 0.072 | 0.902 | 1.184 | 0.877 | 14.482 | <0.001 | |
| DTA | DTA.Q1 | 1 | 0 | 1 | 1 | 0.858 | | | 0.732 |
| | DTA.Q2 | 0.927 | 0.070 | 0.789 | 1.064 | 0.884 | 13.229 | <0.001 | |
| | DTA.Q3 | 0.870 | 0.071 | 0.732 | 1.009 | 0.824 | 12.336 | <0.001 | |
| RBR | RBR.Q1 | 1 | 0 | 1 | 1 | 0.859 | | | 0.790 |
| | RBR.Q2 | 1.125 | 0.071 | 0.985 | 1.265 | 0.938 | 15.769 | <0.001 | |
| | RBR.Q3 | 1.056 | 0.074 | 0.911 | 1.200 | 0.866 | 14.339 | <0.001 | |
| RBE | RBE.Q1 | 1 | 0 | 1 | 1 | 0.913 | | | 0.751 |
| | RBE.Q2 | 0.868 | 0.065 | 0.740 | 0.996 | 0.822 | 13.272 | <0.001 | |
| | RBE.Q3 | 0.911 | 0.065 | 0.784 | 1.037 | 0.859 | 14.107 | <0.001 | |

Note: blank z and p cells correspond to reference indicators whose unstandardized loading is fixed at 1 for scale identification.
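For reference, the AVE column reports average variance extracted, conventionally computed following Fornell and Larcker from the n standardized loadings λᵢ of a construct:

$$ \mathrm{AVE} = \frac{1}{n} \sum_{i=1}^{n} \lambda_i^{2} $$

Values above 0.5 are usually taken to indicate adequate convergent validity; all six constructs in Table 5 exceed this threshold.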
Table 6. Initial hypothetical model: standardized path coefficients between latent variables and their significance levels.

| Dep | Pred | Estimate | SE | 95% CI Lower | 95% CI Upper | β | z | p | Hypothesis Code | Validation Status |
|---|---|---|---|---|---|---|---|---|---|---|
| DTA | TAI | 0.204 | 0.115 | −0.021 | 0.429 | 0.182 | 1.778 | 0.075 | H1 | |
| DTA | ASK | 0.122 | 0.155 | −0.181 | 0.425 | 0.090 | 0.788 | 0.431 | H2a | Rejected |
| TAI | ASK | 0.662 | 0.102 | 0.461 | 0.862 | 0.548 | 6.463 | <0.001 | H2b | |
| DTA | ACR | 0.206 | 0.098 | 0.015 | 0.398 | 0.208 | 2.113 | 0.035 | H3 | |
| RBR | DTA | 0.172 | 0.076 | 0.024 | 0.321 | 0.201 | 2.269 | 0.023 | H4a | |
| RBE | DTA | −0.040 | 0.074 | −0.185 | 0.104 | −0.049 | −0.549 | 0.583 | H4b | Rejected |
| RBE | RBR | 0.277 | 0.084 | 0.112 | 0.442 | 0.286 | 3.290 | 0.001 | H4c | |
| RBR | ACR | 0.168 | 0.076 | 0.020 | 0.316 | 0.197 | 2.228 | 0.026 | H5a | |
| RBE | ACR | 0.187 | 0.074 | 0.042 | 0.332 | 0.227 | 2.522 | 0.012 | H5b | |
Table 7. Fine-tuned model: standardized path coefficients between latent variables and their significance levels.

| Dep | Pred | Estimate | SE | 95% CI Lower | 95% CI Upper | β | z | p | Hypothesis Code | Validation Status |
|---|---|---|---|---|---|---|---|---|---|---|
| DTA | TAI | 0.251 | 0.097 | 0.060 | 0.442 | 0.225 | 2.575 | 0.010 | H1 | Supported |
| TAI | ASK | 0.665 | 0.103 | 0.464 | 0.866 | 0.549 | 6.482 | <0.001 | H2b | Supported |
| DTA | ACR | 0.224 | 0.088 | 0.053 | 0.396 | 0.227 | 2.560 | 0.010 | H3 | Supported |
| RBR | DTA | 0.170 | 0.076 | 0.021 | 0.319 | 0.197 | 2.238 | 0.025 | H4a | Supported |
| RBE | RBR | 0.267 | 0.082 | 0.106 | 0.429 | 0.276 | 3.245 | 0.001 | H4c | Supported |
| RBR | ACR | 0.169 | 0.075 | 0.022 | 0.317 | 0.199 | 2.253 | 0.024 | H5a | Supported |
| RBE | ACR | 0.174 | 0.071 | 0.035 | 0.314 | 0.211 | 2.451 | 0.014 | H5b | Supported |
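As a worked illustration, the two mediation pathways via routine reliance can be expressed with the product-of-coefficients method applied to the standardized estimates in Table 7; note that the significance of such indirect effects is normally assessed with bootstrapped confidence intervals rather than these point estimates alone:

$$ \beta_{\mathrm{ACR}\rightarrow\mathrm{RBR}\rightarrow\mathrm{RBE}} = 0.199 \times 0.276 \approx 0.055, \qquad \beta_{\mathrm{DTA}\rightarrow\mathrm{RBR}\rightarrow\mathrm{RBE}} = 0.197 \times 0.276 \approx 0.054 $$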
Table 8. Variability in task complexity under different operating conditions. Descriptive statistics are given per condition; significance was assessed with a non-parametric repeated-measures (Friedman) test.

| Task Complexity Factor | N | Mean (Emergency) | Std. Error Mean (Emergency) | SD (Emergency) | Mean (Normal) | Std. Error Mean (Normal) | SD (Normal) | Friedman χ² | p | Mean Difference (I-J) |
|---|---|---|---|---|---|---|---|---|---|---|
| 1—Size | 160 | 5.319 | 0.112 | 1.416 | 4.856 | 0.127 | 1.609 | 11.636 | <0.001 | 0.463 |
| 2—Variety | 160 | 5.650 | 0.099 | 1.255 | 5.069 | 0.114 | 1.446 | 26.042 | <0.001 | 0.581 |
| 3—Ambiguity | 160 | 2.594 | 0.118 | 1.493 | 2.650 | 0.118 | 1.489 | 0.821 | 0.365 | −0.056 |
| 4—Relationship | 160 | 5.369 | 0.104 | 1.316 | 4.669 | 0.131 | 1.651 | 15.042 | <0.001 | 0.700 |
| 5—Variability | 160 | 5.963 | 0.076 | 0.964 | 4.200 | 0.148 | 1.866 | 70.738 | <0.001 | 1.763 |
| 6—Unreliability | 160 | 3.931 | 0.138 | 1.742 | 3.288 | 0.137 | 1.732 | 15.696 | <0.001 | 0.644 |
| 7—Novelty | 160 | 3.856 | 0.131 | 1.663 | 3.331 | 0.134 | 1.692 | 19.463 | <0.001 | 0.525 |
| 8—Incongruity | 160 | 4.819 | 0.119 | 1.500 | 4.231 | 0.139 | 1.753 | 7.353 | 0.007 | 0.588 |
| 9—Action Complexity | 160 | 4.894 | 0.111 | 1.399 | 3.881 | 0.139 | 1.757 | 37.586 | <0.001 | 1.013 |
| 10—Temporal Demand | 160 | 5.756 | 0.098 | 1.238 | 4.094 | 0.145 | 1.832 | 67.930 | <0.001 | 1.663 |
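The Friedman χ² values in Table 8 come from a non-parametric repeated-measures comparison of matched ratings across conditions. A minimal NumPy/SciPy sketch of the classical (uncorrected) Friedman statistic is shown below; it is our own illustration, and because it omits the tie correction that rank-based tests on Likert data usually apply, it will not exactly reproduce the table’s values:

```python
import numpy as np
from scipy.stats import chi2, rankdata

def friedman_chi2(*conditions):
    """Uncorrected Friedman chi-square for matched samples.

    Each argument is a 1-D array of one condition's scores; rows
    (respondents) must be matched across conditions.
    """
    X = np.column_stack(conditions)              # n respondents x k conditions
    n, k = X.shape
    ranks = np.apply_along_axis(rankdata, 1, X)  # rank conditions per respondent
    R = ranks.sum(axis=0)                        # rank sum of each condition
    stat = 12.0 / (n * k * (k + 1)) * np.sum(R**2) - 3.0 * n * (k + 1)
    return stat, chi2.sf(stat, df=k - 1)         # chi-square with k-1 df

# e.g., for the "Variability" factor with n = 160 paired ratings:
# stat, p = friedman_chi2(emergency_scores, normal_scores)
```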
