Article

Use of Simulation for Pre-Training of Drone Pilots

by Alexander Somerville 1, Timothy Lynar 2, Keith Joiner 1 and Graham Wild 3,*
1 School of Engineering & Technology, UNSW Canberra, Australian Capital Territory, Campbell 2612, Australia
2 School of Systems & Computing, UNSW Canberra, Australian Capital Territory, Campbell 2612, Australia
3 School of Science, UNSW Canberra, Australian Capital Territory, Campbell 2612, Australia
* Author to whom correspondence should be addressed.
Drones 2024, 8(11), 640; https://doi.org/10.3390/drones8110640
Submission received: 20 September 2024 / Revised: 30 October 2024 / Accepted: 30 October 2024 / Published: 4 November 2024

Abstract:
This study investigates the effectiveness of simulator-based training systems in enhancing human drone piloting skills and performance. The study utilized a true-experimental research design to assess the impact of simulation training on accuracy, efficiency, and workload perception among human drone pilots. Leveraging historical simulation practices in conventional crewed aviation and incorporating instructivist educational principles, this research evaluates the potential for structured simulator training to improve real-world drone operation proficiency. Performance evaluation focused on the precision with which the participants were able to return the aircraft to a defined point in space after conducting a standard flight maneuver. Results indicate a significant improvement in flight performance among participants undergoing simulator training, reflected in a 32% reduction in mean final displacement. This highlights the value of integrating advanced simulation technologies and instructivist methodologies into drone pilot training programs to meet the evolving needs of both industry and academia.

1. Introduction

Remotely Piloted Aircraft Systems (RPAS), commonly known as drones, have transformed various industries by offering innovative solutions for tasks ranging from inspection to delivery [1]. Given that the “civilian use of drones has accelerated in the past four decades” [2] (p. 2), training is critical to ensure safe operations and to avoid common safety occurrences noted in the literature [3,4]. The integration of simulators, virtual reality (VR), and structured training approaches has emerged as a cornerstone of drone education and skill development [5,6,7]. These technologies, as with their crewed-aircraft counterparts, provide trainees with immersive and risk-free environments in which to hone their piloting skills and enhance their confidence in real-world operations [8]. As the demand for skilled drone operators continues to rise across sectors such as construction, agriculture, and public safety, understanding the efficacy and implications of these training methodologies becomes paramount.
In recent years, the use of simulators and particularly VR-based training systems has gained traction within the multifaceted domain of drone operations. Studies have demonstrated the efficacy of simulator-based training in improving piloting skills and performance outcomes among novice and expert pilots [7]. Similarly, workload perception tools have been developed to assess pilots’ cognitive workload during training and operational tasks, highlighting the importance of managing workload for safe and effective operations [5]. Evidence also exists suggesting that increased workload during high-skill tasks can negatively impact performance [9], and that training can reduce this workload [10]. Furthermore, VR-based training environments have provided students and professionals with opportunities to enhance their navigation skills and confidence in operating drones for inspection and surveillance purposes [6]. By investigating the effectiveness of simulator-based training systems, this research aims to inform educational programs and curriculum development in drone piloting training. Moreover, the findings of this study can guide industry stakeholders in adopting best practices for training and skill development, ultimately enhancing operational efficiency, safety, and productivity in RPAS-related tasks.
The aim of this study is to investigate the effectiveness of simulator-based training systems in enhancing drone piloting skills and performance. By employing a quasi-experimental research design, this study seeks to evaluate the impact of these training approaches on performance metrics such as accuracy, efficiency, and workload perception among RPAS pilots. Additionally, the study aims to assess the feasibility and practicality of integrating these technologies into drone education programs to meet the evolving needs of industry and academia. The specific research questions addressed by this study are:
  • Does the use of simulator-based training enhance drone piloting skills and performance metrics such as accuracy?
  • How does perceived workload impact the objective performance of ab initio drone pilots in real-world flight?
  • Do traditional education techniques used in conventional aviation achieve positive learning outcomes for RPAS pilots?
This study contributes to drone pilot training by systematically evaluating the effectiveness of simulator-based training through a quasi-experimental research design. By integrating instructivist educational principles and leveraging traditional simulation practices from crewed aviation, this study demonstrates the feasibility of incorporating advanced simulation technologies into drone education programs.
The remainder of this paper is structured as follows: First, the relevant theory and background context on education and training methods, with their application to drone training, are presented. Next, the methodology is covered, specifically the experimental design that was implemented. This is followed by the presentation of results, their discussion, and, finally, the conclusion.

2. Background

The use of simulation in aviation training can help shape the contemporary framework for RPAS pilot training. The inception of simulation-based training can be traced back to the early 20th century, marked by the development of the first [crewed] flight simulators, such as the “Link Trainer” [11]. These fundamental technologies set the stage for simulated learning environments that closely resemble real-world conditions without the associated risks and costs [12]. However, simulation for drone training programs remains an underdeveloped approach, despite its proven effectiveness in skill acquisition and competency development within controlled environments [13]. By leveraging the principles and practices developed over decades in crewed aviation, RPAS training programs can rapidly evolve, incorporating advanced simulation technologies to cater to the unique demands of RPAS aviation, thus enabling the transition of simulation-based educational practices from traditional aviation to drone operations.
In transitioning from the general application of simulators in aviation to the specific facets of drone operation, it is necessary to consider the justification for utilizing simulation in the training of fine motor skills for RPAS pilots, given the historical emphasis on procedural training in crewed flight simulation. Crewed aviation simulation has traditionally focused on procedural and situational training due to the complexity of cockpit operations and the criticality of decision-making under various flight conditions [14]. However, the operation of drones/RPAS requires a distinct skill set, where fine motor control plays a significant role in ensuring precise control inputs. Simulation-based training for such skills is supported by the cognitive theory of skill acquisition, which hypothesizes that the development of fine motor skills benefits from repetitive practice and immediate feedback, conditions that simulators can readily provide [15]. Advancements in simulation technology have enabled models that can accurately replicate the response aspects of drone control systems, making them potentially suitable for training the nuanced control skills needed for drone piloting [13]. Figure 1 presents a Venn diagram, showing the overlap and inherent difference of simulator training applied to crewed and uncrewed aviation.
Simulation-based training in an RPAS simulator, incorporating instructivist educational principles, allows for a structured approach to the training of remote pilots prior to conduct of flight in the real world. Instructivism, as a pedagogical theory, is an educator-focused theory which understands learning as being a transfer of knowledge or skills from expert to novice. This theory, which is still the dominant principle by which pilots of crewed aircraft are trained [16], is less often applied in modern technology-based training [17]. The theory supports a structured, directly guided approach to education, which makes it ideal for, and prevalent in, highly proceduralized and safety-critical domains. The method of instructivism incorporating simulation as utilized in this research is illustrated in Figure 2. Instructivism, in this research, explicitly conveys the techniques and standards required, allowing for a clear understanding and application of drone piloting by the learner. The research is also framed within contextual learning [18], as the learner engages in hands-on and immersive experiences, which enhance mastery of flight procedures. It is important to note that there is a wide variety of educational methods or frameworks, and each of them could be applied to RPAS training in various ways, to produce different educational outcomes. A snapshot of this is given in Table 1.
As with crewed aviation, and associated training and assessment, it is likely that there are factors beyond the mechanics which will impact objective performance after training. One such factor in crewed flight, and crewed-flight training, is the workload of the pilot. Given the potential of workload to impact flight performance in crewed flight, and other highly skilled tasks, it is reasonable to infer that the same is likely true in RPAS operations. Subjective task loading, and therefore workload, is mostly measured using tools such as the NASA Task Load Index (NASA-TLX). The NASA-TLX is a widely utilized tool [19] that assesses perceived workload through factors such as mental demand, physical demand, temporal demand, performance, effort, and frustration level [20]. The ‘traditional’ measure for the TLX requires a two-pass process during administration, so that the paired-comparison weights for each dimension can be calculated [20]. However, a unitary measure (i.e., score) can be produced by summing the scales to give a “raw”-TLX (RTLX = SUM/6). This simplified approach nonetheless results in comparable means and standard deviations [21]. The sum of the dimensions of the TLX, each of which is a Likert-style scale (i.e., a measure which produces ordinal data), is nonetheless treated as interval data in population analysis [22,23]. The NASA-TLX has previously been demonstrated for use in measuring pilot workload perception during drone flight simulation [24].
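To make the RTLX computation concrete, the following minimal Python sketch averages the six subscale ratings. The function name and the assumption of a 0–100 rating scale are illustrative only, not details from the study:

```python
def raw_tlx(ratings):
    """Raw (unweighted) TLX score: the mean of the six subscale ratings.

    `ratings` holds the six NASA-TLX dimensions (mental demand, physical
    demand, temporal demand, performance, effort, frustration), each
    assumed here to be recorded on a 0-100 scale.
    """
    if len(ratings) != 6:
        raise ValueError("NASA-TLX has exactly six dimensions")
    # RTLX = SUM / 6, avoiding the two-pass paired-comparison weighting
    return sum(ratings) / 6
```

For example, ratings of 60, 40, 50, 70, 30, and 50 yield an RTLX of 50.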

3. Literature Review

One of the primary advantages of using flight simulators for drone training is the ability to replicate complex flight scenarios that pilots may encounter in real-world operations. Training is important, as it has been shown to be useful in preventing negligence in drone operation [25]. Albeaino et al. [5] highlighted that VR-based flight training simulators can significantly reduce the risks associated with inexperienced pilots by allowing them to practice in a safe environment, thereby increasing overall training efficiency. Similarly, Go et al. [26] emphasize that interactive training methods using mixed reality can facilitate easier and safer control of drone flight, which is crucial for novice pilots. These findings underscore the importance of simulation in preparing pilots for the challenges they will face during actual flights.
Moreover, the use of simulators is not limited to basic flight training; they are also employed in specialized applications such as drone-mediated building inspections and emergency response scenarios. Bruzzone et al. [27] discuss how simulation can support training in critical environments, particularly in emergency management contexts where drone operation is essential. This is echoed by Law et al. [28] who note that drones have been successfully utilized in healthcare logistics, demonstrating the need for pilots to be trained in relevant situations. The ability to simulate various operational contexts allows trainees to gain experience in handling diverse situations, which is vital for effective drone operation. Furthermore, the development of gamified training approaches, as explored by Cardona-Reyes et al. [29], has shown promise in engaging students and improving their learning outcomes through interactive simulations [30].
In addition to improving pilot skills, flight simulators also serve as a platform for research and development in drone technology. For instance, the integration of artificial intelligence and machine learning into simulation environments allows for the testing of new algorithms and control systems in a risk-free manner [31]. This not only aids in the advancement of drone technology but also ensures that pilots are trained in the latest systems and procedures. The continuous evolution of simulation technology, including the use of haptic feedback and motion tracking, further enhances the realism of training scenarios, as highlighted by Ren et al. [32]. Additionally, the optimization of PID controllers using meta-heuristic algorithms has shown significant improvements in stabilizing quadcopter movements, contributing to more effective training and development environments [33].
In terms of specific flight training, there is a dearth of literature on using simulators during, or prior to, an RPAS training program. The concept appears to have been initially suggested by Weldon and Kozak [34]. Recent speculation by Maqsud [35] indicates that drone simulators will provide a cost-effective, safe, and efficient environment for training. Other related work has looked at how simulator performance is associated with video game usage [36]. Some qualitative assessment has been undertaken on the implementation of drone simulator training [37]. Liu et al. [38] and Aláez et al. [39] have each demonstrated the concept through single-case studies. The only substantial quantitative study, conducted by Rostáš et al. [40], quantifies stress on the pilot but not the use of the flight simulator to improve flight performance. This highlights a clear gap in the literature in terms of quantitative assessment of the effectiveness of using simulators for human drone pilot training.

4. Materials and Methods

The implemented experimental methodology is a post-test-only control-group design. This is a true experimental design, rather than the similar static-group comparison design, which is quasi-experimental. This design allows for causal attribution to the treatment, which, in this case, is exposure to a computer-based drone flight simulator. The measurement was then performed on both the treatment group and the control group (not exposed to a computer-based drone flight simulator) when they were required to fly a real drone outdoors in the real world. The experimental design is illustrated in Figure 3. Specific details are explained throughout this section.
The computer used to run the drone simulator ran the Windows 10 operating system, with an Intel i7 (2.5 GHz) CPU, an Nvidia RTX 3060 (12 GB) GPU, and 16 GB (DDR4) RAM. The simulation was displayed on a 27″ flat-panel monitor, with a resolution of 1920 × 1080 pixels and a refresh rate of 60 Hz. The drone simulator software used was the DJI Flight Simulator [41], specifically the Free Trial Version (discontinued but still available). The simulated aircraft used was the DJI Spark. Its similar mass, performance, and design were considered an adequate analogue for the real RPAS used. The RPAS used was the DJI Mavic Mini SE. The RPAS weighed 249 g and used a standard quad-copter planform. The controller used for both RPAS and simulator was the DJI Mavic Mini Controller (Model: MR1SS5). For both simulator and RPAS, controller Mode 2 was used, which assigns the throttle and yaw controls to the left stick, and the pitch and roll controls to the right stick. Control Mode 2 is generally preferred for multi-rotor RPAS [42]. Only a single drone was utilized in the simulator, given that only one drone is used in the real-world flight training activities.
The intervention group completed flight patterns (i.e., circuits), both left and right of origin, for a total of six maneuvers. The maneuvers, shown in Figure 4, were: a square in the horizontal plane, without yawing the aircraft; a square in the vertical plane; and a square in the horizontal plane, with use of yaw to maintain nose-forward flight. These maneuvers involve many of the practical competencies required of remote pilots of helicopter (multirotor class) category RPAS under the Part 101 Manual of Standards Instrument 2019 [43]. Direct and specific guidance, and selective error catching, were provided on the right-of-reference, and therefore first, of each of these patterns. No guidance was provided on the left-of-reference patterns. Conducting the pattern so as to return to the starting reference was the informed goal. Subsequently, both the intervention and control groups were assessed and instructed in real-world RPAS flight, with identical guidance and error catching. The real-world RPAS flight maneuvers were identical to, and conducted in the same order as, those in the RPAS simulator. During the RPAS flight, objective flight data were gathered on both control and intervention groups, and subjective data (i.e., NASA-TLX) were gathered immediately after a participant completed the flight. Data were available for 81 participants (n = 81), of which 39 (n = 39) received no prior simulator training (i.e., the control group) and 42 (n = 42) received the simulator pre-training (i.e., the intervention group). This post-test-only control-group design allowed for direct comparison of the outcomes for the two groups, isolating the training effect of the simulated circuit experience on real-world RPAS proficiency.
Flight data (i.e., objective performance data) from each participant, from both intervention and control, were used to calculate the relative displacement of the RPAS between the start and finish points for each maneuver. The starting position for each subsequent circuit was updated to the finishing position of the previous, allowing for the assessment of the pilot’s ability to accurately fly the pattern and return to a reference point over multiple flights—with each flight’s conclusion serving as the reference for the next. The mean of these displacement measurements was then calculated to provide a single objective score, representing the mean relative displacement across all circuits. The basic idea of a mean displacement from a single flight is illustrated in Figure 5. It is important to note that the task of the pilot is to overcome any drift associated with wind, because that is what real-world drone pilots need to overcome when flying surveys around houses or powerlines, etc. [44]. The same is true for pilots in traditional flight training who are required to actively conduct drift management [45]. This being the case, this metric of displacement is a direct analogue of a traditional pilot’s ability to correct for drift caused by wind or other environmental factors during flight, as required in, for example, the CASA Manual of Standards Part 61 Schedule 2—Aircraft Rating Standards (Section 4)—A3 (Control aeroplane in normal flight)—A3.6 (Perform circuits and approaches) [46].
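The chained displacement metric described above can be sketched in a few lines of Python. This is an illustrative reconstruction under stated assumptions (planar (x, y) coordinates in metres; the function names are ours), not the authors' implementation:

```python
import math

def displacement(p, q):
    """Euclidean distance (metres) between two (x, y) ground positions."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def mean_relative_displacement(finishes, origin=(0.0, 0.0)):
    """Mean per-circuit displacement, where each circuit's start point is
    the previous circuit's finishing position (the first circuit starts
    at `origin`), as in the chained-reference scheme described above."""
    starts = [origin] + finishes[:-1]
    return sum(displacement(s, f) for s, f in zip(starts, finishes)) / len(finishes)
```

For example, three circuits finishing at (1, 0), (1, 1), and (1, 1) give per-circuit displacements of 1 m, 1 m, and 0 m, for a mean relative displacement of 2/3 m.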
A subjective raw task loading score (rTLX) was calculated using the standard method previously outlined. The summing of the multiple dimensions of the TLX, each of which is a Likert-style scale (i.e., a measure which produces ordinal data), is nonetheless considered interval data [22,23]. This continuous scale reflects the overall workload experience. As only this single scale, and not the internal six dimensions, was to be used in the final analysis, Cronbach’s Alpha was calculated to assess the scale’s internal consistency [47,48]; that is, to assess that the TLX internal scales are sufficiently intercorrelated as to allow the single rTLX score to reflect the underlying variable [49]—the “task loading”. In order to avoid the attenuation paradox, no artificial method (e.g., “alpha if item deleted”) was used. For exploratory research, Nunnally [50] states that a reliability of 0.5 or 0.6 is acceptable, though the changed value in the second edition [51] is more commonly chosen and cited [52].
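Cronbach's Alpha for the six TLX subscales can be computed directly from its standard definition. The sketch below uses only the Python standard library; the data layout (one list of scores per subscale) is an assumption for illustration:

```python
import statistics

def cronbach_alpha(subscales):
    """Cronbach's Alpha: (k / (k - 1)) * (1 - sum of item variances / total variance).

    `subscales` is a list of k lists, each holding one subscale's scores
    across all n participants (here, k = 6 TLX dimensions).
    """
    k = len(subscales)
    item_variances = sum(statistics.variance(s) for s in subscales)
    # Per-participant total scores across all subscales
    totals = [sum(participant) for participant in zip(*subscales)]
    return k / (k - 1) * (1 - item_variances / statistics.variance(totals))
```

Perfectly parallel subscales yield an alpha of 1.0; weakly intercorrelated subscales drive the value toward 0.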
The null hypothesis (H0) for this research was that, when controlling for task loading, there will be no significant difference in mean relative displacement between control and intervention groups. Conversely, the alternative hypothesis (H1) posited that the intervention group would have lower mean relative displacements, indicating an improved performance once the influence of subjective task loading is considered. The null hypothesis is
H0: μ1 = μ2,
where μ1 and μ2 are the adjusted means of the relative displacements for the control and intervention groups, after accounting for task loading. The alternative hypothesis being
H1: μ2 < μ1,
where μ2, the adjusted mean relative displacement for the intervention group, is lower than that of the control group, after accounting for task loading, thereby indicating improved flight performance. Hypothesis testing was conducted using Analysis of Covariance (ANCOVA), which allows examination of the intervention effect while accounting for the influence of task loading.
Prior to conducting ANCOVA, underlying assumptions of the normality and homoskedasticity of the objective performance data were checked. The Shapiro–Wilk test was used to test the distribution of the data. Levene’s test was used to check that the variance did not differ between control and intervention. The data were found to be both non-normal and heteroskedastic. In order to stabilize variance and normalize the distribution, which was right-skewed, a log transformation using the natural logarithm (base e) was applied. Log transformation is adequate for dealing with data that are heteroskedastic and is particularly suited for treating data that are non-normally distributed and right-skewed [53].
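The assumption checks and natural-log transformation described above can be sketched as follows, using SciPy. This is a minimal sketch under stated assumptions: the function name, the pooled normality test, and the decision rule are ours, not the study's exact procedure:

```python
import numpy as np
from scipy import stats

def check_and_transform(control, intervention, alpha=0.05):
    """Test normality (Shapiro-Wilk) and homoskedasticity (Levene); if
    either assumption fails on the right-skewed data, apply a natural-log
    transform to stabilize variance and normalize the distribution."""
    pooled = np.concatenate([control, intervention])
    _, p_normality = stats.shapiro(pooled)
    _, p_variance = stats.levene(control, intervention)
    if p_normality < alpha or p_variance < alpha:
        return np.log(control), np.log(intervention)
    return control, intervention
```

After transformation, the tests would be rerun on the returned arrays, as was done in the study.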
Following data transformation and retesting, ANCOVA was conducted using JASP [54] software (version 0.16). The ANCOVA model was fitted to the data, with group (i.e., control or intervention) as the fixed factor, the transformed performance data as the dependent variable, and the subjective task loading data as the covariate. The model is therefore defined as:
Y = β0 + β1X + β2Z + ε,
where Y is the dependent objective flight performance score (mean relative displacement), X is the categorical independent variable (group), Z is the covariate (TLX), and ε is the error term. Regression of the dependent flight score variable on the covariate (TLX score) was completed to obtain residuals, representing variance in the flight score not explained by the TLX. The group (control and intervention) means are then adjusted by removing the effect of the TLX (the covariate), in order to isolate the variance that is attributable to the group factor alone. The primary analysis is then performed on the adjusted means so as to determine the significance of the difference between the groups, while accounting for the influence of the TLX covariate. Statistical significance was set at the 0.05 alpha level, and the partial eta-squared (ηp2) effect size was calculated to quantify the intervention impact. The associated p value is derived from the F statistic, given in Table 2 [55]. The individual sum-of-squares terms involve complex matrix algebra [56].
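The ANCOVA F test for the group factor is equivalent to comparing the full linear model (intercept, group, TLX) against the reduced model without the group term. The sketch below, using only NumPy least squares, is an illustrative reconstruction of that model comparison, not the JASP procedure used in the study:

```python
import numpy as np

def ancova_group_F(y, group, z):
    """F statistic and partial eta-squared for the group factor in a
    one-way ANCOVA: full model y ~ 1 + group + z versus reduced model
    y ~ 1 + z, with z as the covariate."""
    n = len(y)
    X_full = np.column_stack([np.ones(n), group, z])
    X_reduced = np.column_stack([np.ones(n), z])

    def sse(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return float(np.sum((y - X @ beta) ** 2))

    sse_full, sse_reduced = sse(X_full), sse(X_reduced)
    df_resid = n - X_full.shape[1]              # n - 3 parameters
    F = (sse_reduced - sse_full) / (sse_full / df_resid)
    # partial eta-squared: SS_group / (SS_group + SS_error)
    eta_p2 = (sse_reduced - sse_full) / sse_reduced
    return F, df_resid, eta_p2
```

The p value then follows from the F distribution with (1, df_resid) degrees of freedom.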
Given the nature of the ‘trial’, and the origins of the data, no a priori power calculations were conducted. In consideration of the risk of misinterpretation that post hoc power calculations can introduce, as outlined by Hoenig and Heisey [57], no such analysis was conducted after the fact. Such analysis would not provide meaningful insight beyond that provided by the calculation of effect size and confidence intervals from the obtained data.

5. Results

The flight performance scores (mean relative displacement) are shown below in Table 3. Again, the metric utilized is the individual mean relative displacement, which was determined by averaging each of the relative displacements (shown in Figure 5) for the six flights completed. For each group, there is then an overall group mean, a group standard deviation, and a group median (Table 3). The data required assumption checking, and ultimately transformation, prior to ANCOVA. This is understandable given that a displacement can only have a minimum value of zero, and small displacements on average are expected, with a large variance possible. The log transformation of this dependent data (flight performance) addressed the non-normal distribution but did not fully address the heteroskedasticity. The heteroskedasticity of the data is due to the fact that the intervention not only produced a significant reduction in the mean displacement score, from 1.475 m to 0.998 m, but also resulted in a significant reduction in the standard deviation, from 0.906 m to 0.478 m. Prior to transformation, the Shapiro–Wilk test (W = 0.864, p < 0.001) showed significant deviation from normality, indicative of the right-skewed distribution of the data. Following transformation, results of the Shapiro–Wilk test (W = 0.983, p = 0.342) indicate that the data conform to a normal distribution. Levene’s test for equality of variances prior to transformation showed significant heteroskedasticity between control and intervention data (F (1,79) = 18.263, p < 0.001), which was reduced but nonetheless still significant after transformation (F (1,79) = 6.875, p = 0.01). These results suggest that transformation partially mitigated the inequality of variances between the groups, though it did not achieve homoskedasticity.
However, this does not pose a problem, as parametric ANCOVA is robust to violation of the underlying assumption of either normality or homoskedasticity, though not to simultaneous violations of both assumptions [58]. Therefore, the analysis proceeded with the transformed data.
Prior to calculation of the subjective raw task loading scores (rTLX) from the internal dimensions, Cronbach’s Alpha was calculated. The analysis revealed a point estimate of 0.721 (95% CI [0.614, 0.803]) for the unidimensional reliability, indicating that the Task Load Index (TLX) scales are sufficiently interrelated to provide an adequate measure of the underlying task load experienced by participants. The calculated mean rTLX score for the control group was 28.29 (SD = 10.02), and for the intervention group was 34.89 (SD = 14.45).
Results of the ANCOVA, with group (i.e., control or intervention) as the fixed factor, the transformed performance data as the dependent variable, and the subjective task loading data (rTLX) as the covariate, are shown in Table 4. The ‘Group’ factor, which represents the independent variable, yields an F (1,78) = 7.643 (p = 0.007), indicating a significant impact of group assignment on the transformed performance scores. The effect size (ηp2 = 0.089) suggests that the influence of group on performance was small to moderate. Figure 6 shows the means of the transformed flight performance data and 95% confidence intervals for the control (M = 0.207, CI [−1.006, 1.420]) and intervention groups (M = −0.110, CI [−1.055, 0.835]). These data are representative of the post-transformation data and are therefore not directly interpretable in terms of ‘real’ performance. The task load (‘TLX’) did not significantly influence the transformed performance (F (1,78) = 0.942, p = 0.335) in this context. The residual analysis, with a sum of squares (SS) of 23.473 and degrees of freedom (df = 78), indicates that there is variability in the transformed performance scores that is not explained by group assignment or task loading.

6. Discussion

6.1. Findings

Results of the analysis clearly show that the intervention group performed better in the real-world ‘assessment’ than did the control group. The pre-analysis data, which were not transformed for use in ANCOVA, are best suited to direct interpretation. The untransformed mean relative displacement of the aircraft for the intervention group is ~32% better than that of the control group. The ANCOVA further reinforces this, as shown in Figure 6. This difference between the groups easily reaches the level of significance. These findings, though unique in the existing RPAS literature, are largely aligned with the experience of crewed simulation training. Crewed simulation in lower-fidelity simulators is largely concerned with the training of procedures, whilst higher-fidelity simulators can be used for training fine-motor skills. However, the high cost of such simulators often limits their use. RPAS simulators appear to offer a unique opportunity for the training of these fine-motor skills, due to the ability to achieve “higher” fidelity at relatively lower cost. RPAS flight is not without cost or risk, and simulations may therefore serve a valuable role. Further, the same ability to fully control the training environment has clear potential.
Contrary to the literature on workload in complex skill tasks, the TLX was not a significant influence on the objective flight performance of the RPAS pilots. The lack of predictive value from the TLX as a covariate of the dependent variable, i.e., flight performance, may stem from the intervention and control groups fundamentally evaluating different task loadings. The control group is ultimately evaluating an absolute task loading, whereas the intervention group is evaluating itself relative to its task loading in the simulator. It may be that such things as environment, and performance relative to and between the simulator and flight, differ to such an extent as to result in participants providing different evaluations. That is, the reason a higher task loading score is present for the intervention group relative to the control group is that they perceive the real-world flying of drones to be a higher task load than the simulated flying of drones. Additionally, it may be the case that, for example, the environment of the simulator room, which is fully climate controlled, is more comfortable than the harsh Australian outdoors. It would theoretically be possible to examine the effects of individual task loading dimensions on flight performance. However, the assumptions which allow for the treatment of the main TLX score as a continuous variable certainly do not apply to an individual dimension.
The use of flight simulation for RPAS pilots is clearly shown to have potential by the results of the analysis. That these results were achieved with only short exposure to the simulator, and using [relatively] inexpensive computer hardware (a PC with at least a GTX 1000-series card), software (the DJI Flight Simulator is free), and flight peripherals (the same controller used to fly the real drone), adds to the evidence for the practicality of the implementation. When considering the results, it is notable that the present implementation made use of ‘old-fashioned’ educational theory: instructivism. Much of the modern literature, in aviation specifically and the education space more broadly, is showing the benefits of modern education theories such as constructivism. There is potential that the use of such theory may improve future interventions. However, as explained by the previously outlined justification for the use of instructivism, there is much value in translating crewed training into the RPAS space. There are also other simulator technologies which might enhance RPAS simulation, such as virtual reality (VR). In considering these technologies for RPAS simulators, researchers would do well to consider whether they offer training benefits beyond those of PC-based aviation training devices (PCATDs).

6.2. Limitations and Assumptions

Several limitations and assumptions of this study need to be discussed. Firstly, the study's quasi-experimental design, whilst robust, inherently limits the ability to establish causality definitively. The inability to use random assignment (due to ethics limitations) means that potential confounding variables, such as prior experience with drones or individual differences in learning styles, were not fully controlled. Secondly, the use of a specific simulator and drone model (the DJI Flight Simulator and DJI Mavic Mini 2 SE) assumes that the findings generalize across different types of RPAS and simulation technologies, which may not be the case; that is, there are implications for external validity, as in most research in the area. The study also assumes that the subjective workload assessments (via the NASA rTLX) accurately reflect the participants' cognitive load, despite potential variations in individual perceptions and environmental factors. Next, the transformation of the data to address non-normality and heteroskedasticity, while statistically necessary, introduces its own biases and affects the interpretability of the results. Finally, the reliance on a single semester of simulator training may not capture the long-term effects and retention of skills, which are critical for real-world applications.
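The transformation referred to above can be sketched as follows: a log transform pulls in the positively skewed tail of displacement data, and back-transforming the mean and confidence limits yields a geometric mean with an asymmetric interval in metres, of the kind shown in Figure 6b. The displacement values below are invented for illustration, not the study's data:

```python
import math
import statistics as stats

# Illustrative displacement samples in metres (NOT the study's data):
# positively skewed, as final-displacement measurements often are.
displacements = [0.4, 0.6, 0.7, 0.9, 1.1, 1.4, 2.2, 3.5]

logs = [math.log(d) for d in displacements]     # transform to log space
m, s = stats.mean(logs), stats.stdev(logs)
half = 1.96 * s / math.sqrt(len(logs))          # approx. 95% CI half-width

# Back-transform: exp(mean of logs) is the geometric mean, and the
# interval becomes asymmetric in metres.
print(f"geometric mean ≈ {math.exp(m):.2f} m, "
      f"95% CI ≈ [{math.exp(m - half):.2f}, {math.exp(m + half):.2f}] m")
```

Note that the back-transformed estimate is a geometric, not arithmetic, mean, which is one reason such transformations affect the interpretability of results.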

6.3. Recommendations

For trainers, it is essential to continuously update and refine training programs to incorporate the latest advancements in simulation technology and educational theory. Emphasizing the use of high-fidelity simulators can significantly enhance the training experience, providing pilots with realistic and immersive scenarios that better prepare them for real-world operations. Trainers should also focus on developing comprehensive curricula that address both technical skills and cognitive workload management, ensuring that pilots are well-equipped to handle the complexities of RPAS operations.
Policy makers should consider establishing standardized training protocols and certification processes that mandate the use of advanced simulation technologies in drone pilot training. By setting clear guidelines and requirements, policy makers can ensure a consistent level of training quality across the industry, ultimately enhancing safety and operational efficiency.
Governments play a crucial role in supporting the integration of advanced training technologies into RPAS education programs. Providing funding and resources for the development and implementation of high-fidelity simulators can significantly enhance the capabilities of training institutions. Governments should also foster collaborations between academia, industry, and regulatory bodies to ensure that training programs are aligned with the evolving needs of the RPAS sector. By investing in the education and training of drone pilots, governments can help build a skilled workforce that is capable of safely and effectively operating RPAS in various applications.
Conducting detailed cost–benefit analyses of different training technologies and methodologies would help identify the most efficient and effective approaches for widespread adoption. Collaborating with regulatory bodies to develop standardized training protocols and certification processes for RPAS pilots would ensure consistency and quality across training programs. Finally, it is essential to consider and understand the role of human factors and ergonomics in RPAS training and operations to design more effective training systems and control interfaces, ultimately enhancing pilot performance and safety.

6.4. Future Work

Future research should examine the integration of advanced technologies such as augmented reality (AR) and artificial intelligence (AI) into RPAS training programs, as these could provide more immersive and adaptive training environments, potentially enhancing skill acquisition and retention. Longitudinal studies are necessary to assess the sustained impact of simulator-based training on pilot performance over time, determining the long-term benefits and any potential decay in skills. Additionally, developing and testing a wider range of training scenarios, including emergency situations and complex operational environments, would help ensure that pilots are well prepared for a variety of real-world challenges. Exploring the feasibility of personalized training programs that adapt to the individual needs and learning pace of each pilot, possibly through AI-driven analytics, could further enhance training effectiveness. Cross-disciplinary approaches, such as applying training methodologies from high-skill domains like surgery or military operations, might uncover new strategies for effective skill development. Investigating the influence of different environmental conditions, such as weather and terrain, on training effectiveness would assist in the design of more realistic and comprehensive training programs. Further research into the effectiveness of workload-management tools beyond the NASA-TLX in improving pilot performance is also warranted, potentially leading to new metrics tailored specifically to RPAS operations. Finally, establishing mathematical models from the experimental data may enable more in-depth analysis and the application of AI/ML techniques.

7. Conclusions

The study’s findings underscore the efficacy of simulator-based training in enhancing the flight performance of drone pilots, demonstrating a clear advantage for those who undergo structured simulation training before engaging in real-world operations. The significant impact of the intervention group’s training on their performance, as indicated by the reduced mean relative displacement compared to the control group, validates the integration of advanced simulation technologies in drone pilot training programs. Contrary to expectations, task loading, as measured by the rTLX, did not have a significant impact on flight performance. Despite the challenges posed by heteroskedasticity and the non-normal distribution of the data, this research highlights the potential of instructivist principles and simulators in improving the skills of RPAS pilots.

Author Contributions

Conceptualization, A.S. and G.W.; methodology, A.S. and G.W.; validation, A.S., G.W. and K.J.; formal analysis, A.S. and G.W.; investigation, A.S. and G.W.; resources, G.W.; data curation, A.S.; writing—original draft preparation, A.S.; writing—review and editing, A.S., T.L., G.W. and K.J.; visualization, A.S. and G.W.; supervision, G.W., T.L. and K.J.; project administration, G.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was approved by the Human Research Advisory Panel of the University of New South Wales (Approval HC220421, 31 August 2022).

Informed Consent Statement

Participant consent was waived by the Human Research Advisory Panel of the University of New South Wales, as the data and the task upon which this research is based were part of, and originally solely for the purpose of, an undergraduate laboratory course. All data were deidentified prior to provision to the researchers.

Data Availability Statement

The datasets presented in this article are not readily available due to restrictions on their distribution to parties external to the original research, in accordance with the ethics approval. Requests to access the datasets should be directed in the first instance to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Venn diagram highlighting the core aspects of aviation that can be trained using simulators in crewed and uncrewed (RPAS) aviation, as well as both.
Figure 2. The method of instructivism as incorporated into this research with the use of simulation.
Figure 3. The post-test-only control-group design implemented, with the intervention being exposure to the drone flight simulator, and the measurement performed on the participants' flights with real drones.
Figure 4. Vertical (a), horizontal (b), and yawing (c) square-circuit RPAS flight paths. Note, (a) is a front view, while (b) and (c) are top views.
Figure 5. Example intended flight path for yawing square circuit, shown as blue arrows (a), a hypothetical realistic flight path by a human pilot, shown as yellow arrows (b), and resultant final displacement for the circuit once complete, shown with orange arrows (c).
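The final-displacement measure illustrated in Figure 5c can be sketched as the straight-line distance between the intended return point and the aircraft's actual position once the circuit is complete. The coordinates below are hypothetical, purely to illustrate the metric:

```python
import math

def final_displacement(start, end):
    """Straight-line distance (m) between the intended start/return point
    and where the aircraft actually ends the circuit (cf. Figure 5c)."""
    return math.dist(start, end)

# Hypothetical yawing square circuit: intended to return to the origin,
# but the flown path (yellow arrows in Figure 5b) drifts slightly.
start = (0.0, 0.0, 1.5)   # x, y, altitude in metres
end = (0.9, -0.4, 1.7)    # where the pilot actually finished

print(round(final_displacement(start, end), 3))  # 1.005
```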
Figure 6. Transformed mean relative displacement of control and intervention, with 95% confidence intervals (a) and equivalent mean relative displacement in meters with 95% confidence intervals via a log transform (b).
Table 1. Educational frameworks or methods and their application to RPAS training.
| Framework/Method | Description | Application in RPAS Training |
| --- | --- | --- |
| Instructivist Educational Principles | Structured, educator-focused approach to knowledge transfer | Provides clear guidance and standards for drone piloting, ensuring a solid foundation before real-world application |
| Cognitive Theory of Skill Acquisition | Emphasizes repetitive practice and immediate feedback for skill development | Facilitates the development of fine motor skills necessary for precise control inputs in drone operations |
| Contextual Learning | Hands-on and immersive experiences to enhance mastery | Engages learners in realistic scenarios, improving their ability to handle real-world drone piloting tasks |
| Constructivist Learning Theory | Learners build their own knowledge through experiences and reflections | Encourages problem-solving and critical thinking in dynamic flight scenarios |
| Scenario-Based Training | Uses realistic scenarios to teach skills and decision-making | Prepares pilots for real-world challenges by simulating complex flight situations |
| Competency-Based Training | Focuses on achieving specific competencies rather than time-based training | Ensures pilots meet defined skill levels and performance standards before advancing |
Table 2. Computation table for the ANCOVA analysis.
| Cases | Sum of Squares | df | Mean Square | F |
| --- | --- | --- | --- | --- |
| Groups | SSG | k-1 | MSG = SSG/(k-1) | MSG/MSRes |
| TLX | SSTLX | M-2k | MSTLX = SSTLX/(M-2k) | MSTLX/MSRes |
| Residuals | SSRes | M-k-1 | MSRes = SSRes/(M-2k) | |
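The computation summarized in Table 2 can be sketched as a model comparison: each Type III sum of squares is the increase in residual sum of squares when that term is dropped from the full model (group factor plus TLX covariate). A minimal NumPy illustration on synthetic data follows; the group sizes, effect sizes, and noise level are invented, not the study's values:

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares from an OLS fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

rng = np.random.default_rng(1)
n = 40                                   # observations per group (synthetic)
group = np.r_[np.zeros(n), np.ones(n)]   # 0 = control, 1 = intervention
tlx = rng.uniform(20, 80, 2 * n)         # covariate ratings (synthetic)
y = 1.5 - 0.5 * group + rng.normal(0, 0.5, 2 * n)  # displacement (synthetic)

ones = np.ones_like(y)
full = np.column_stack([ones, group, tlx])

# Type III SS: how much the residual SS grows when a term is dropped.
ss_res = rss(full, y)
ss_group = rss(np.column_stack([ones, tlx]), y) - ss_res
ss_tlx = rss(np.column_stack([ones, group]), y) - ss_res

df_res = len(y) - full.shape[1]          # M - k - 1 with k = 2 groups, 1 covariate
f_group = (ss_group / 1) / (ss_res / df_res)
f_tlx = (ss_tlx / 1) / (ss_res / df_res)
print(f"F(group) = {f_group:.2f}, F(TLX) = {f_tlx:.2f}, df_res = {df_res}")
```

With two groups, one covariate, and 80 observations, the residual degrees of freedom work out to 77, consistent with the residual row of Table 2.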
Table 3. Descriptive statistics of the mean displacement results for the control and intervention groups, mean (group mean of the individual mean displacements), standard deviation, and median.
| Group | Mean (M) [m] | Standard Deviation (SD) [m] | Median (Md) [m] |
| --- | --- | --- | --- |
| Control | 1.475 | 0.906 | 1.086 |
| Intervention | 0.998 | 0.478 | 0.897 |
Table 4. ANCOVA Results.
| Cases | Sum of Squares 1 | df | Mean Square | F | p | η2 | ηp2 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Groups | 2.300 | 1 | 2.300 | 7.643 | 0.007 | 0.089 | 0.089 |
| TLX | 0.284 | 1 | 0.284 | 0.942 | 0.011 | 0.011 | 0.012 |
| Residuals | 23.473 | 78 | 0.301 | | | | |
1 Type III Sum of Squares.
