1. Introduction
Today, e-learning is used in many educational contexts. The related scientific research has focused on establishing which factors are important for the effective management and implementation of online training [1,2,3,4,5,6,7]. One of the most fruitful lines of research on online learning focuses on evaluating the impact of training on participants' satisfaction with their experience [3]. This approach to online training evaluation is employed in this study. We are aware that when considering student satisfaction with online learning, we are addressing a complex and uncertain set of circumstances because online courses involve multiple commitments [8]. A negative perception can result in unfavourable learning outcomes, including decreased motivation and persistence [3], and consequently in dissatisfaction with online training.
Satisfaction can be determined by the degree of congruence between users' previous expectations associated with their online training experiences and the results obtained at the end of the training [9]. Meta-analyses by Allen et al. [1], Williams [6], and Kauffman [3] clarify the factors that predict satisfaction with online training. Allen et al. [1] reviewed 450 studies, basing their analysis and subsequent publication on 24 studies published between 1989 and 1999. The requirement for inclusion in their meta-analysis was that a study compared student satisfaction in a distance education course with student satisfaction in a course conducted using traditional classroom methods. The researchers omitted case studies on distance education that did not use a control group, that did not provide sufficient statistical information to calculate effect size, that focused on teacher and administrator attitudes, or that examined persistence, although the latter may be an indicator of satisfaction, as Croxton [10] and Kranzow [11] conclude. The interaction level in live courses and the communication channel employed (i.e., a preference for audiovisual interaction over written instructions) proved to be good indicators of satisfaction. Williams [6] compared 25 studies published between 1990 and 2003 that examined the performance of students in the health professions. The approach to examining the effectiveness of and satisfaction with distance education in healthcare courses was based on the types of students involved, the teaching models used, and the components of the implemented didactic design. Real-time and synchronous teaching models offered a greater degree of interaction and tutoring by teachers. The greatest influence on satisfaction levels was the type of interaction and information offered (i.e., components of the didactic design).
More recently, Kauffman [3] performed a narrative synthesis of over 25 studies published between 2001 and 2013. The analysis used three categories: learning outcomes, instructional design, and learner characteristics. Students were satisfied with online courses characterized as structured, interactive (i.e., using a constructivist instructional design), relevant (i.e., practically significant), and having tutors who facilitated interaction and feedback. The study concluded that the factors most favourable to student satisfaction with online courses were the suitability of didactic methods, tutor/student support, and course structure/design [12].
The results of these meta-analyses indicate that distance learning does not diminish student satisfaction compared with face-to-face classes [1] and that such satisfaction may even be greater [3]. There is even a small positive effect on the performance of distance students compared with classroom students [6]. The strongest explanatory factors concern what we term the pedagogical design of the course, tutorial performance, and the design of the virtual environment; we leave the timing indicator for more thorough future assessment [3].
In the scientific literature, three prototypical groups are recognized: a larger proportion of students who are satisfied with their experience, a smaller proportion who are genuinely dissatisfied, and ambivalent students who simultaneously express positive and negative feelings regarding their online experience [8]. An example of this ambivalence is a student who appreciates the convenience of online learning but misses face-to-face interaction with the tutor. Thus, when students provide evaluations that are closer to the extremes of the satisfaction scale, their course assessments tend to be more intuitive. As their level of ambivalence increases, they become more analytical and specific, making separate and independent judgements regarding course quality [8].
Online students who receive support from tutors and peers and who maintain a high degree of communication with both of these actors tend to display higher degrees of satisfaction with online courses [13,14,15,16,17]. Swan [13] found that clarity of design, interaction with the tutor, and active discussion among participants are three factors that significantly influence student satisfaction and perceived learning in online asynchronous courses. Higher levels of personal activity in the course and perceived interaction with tutors and peers result in greater satisfaction and perceived learning. The importance of interaction among participants is also a key factor in Arbaugh's [18] research. In a study by Swan [13], most students believed that their levels of interaction with course materials, their peers, and the tutor were greater than in traditional face-to-face courses.
Khalid [19] analysed satisfaction with online courses using Garrison, Anderson, and Archer's Community of Inquiry (CoI) model [20,21], a model that explains success in online teaching-learning processes. He found that only teaching presence and social presence were predictors of course satisfaction. Unlike in Rubin et al. [15] and Bulu [22], cognitive presence was not found to be a significant predictor of satisfaction. Khalid's study was conducted with Malaysian students, while the other two studies were conducted with American and Turkish students, respectively. It appears that for Malaysian students, a high or low level of cognitive presence does not influence course satisfaction.
The findings of Artino [23] indicate that the satisfaction of students with online courses can be explained in part by their beliefs and motivational attitudes towards learning. Artino's regression model explained 54% of the variance with seven predictors: four control variables (including experience with technology and online learning) and three components of academic self-regulation (perception of instruction quality, self-efficacy, and the attributed value of the task).
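A variance-explained figure of the kind Artino reports comes from an ordinary-least-squares regression and its R². The following is a minimal sketch with simulated data; the predictor names, coefficients, and sample size are illustrative assumptions, not Artino's data or model:

```python
import numpy as np

# Simulated illustration only: three hypothetical predictors of satisfaction
# (e.g., self-efficacy, task value, perceived instruction quality).
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
true_beta = np.array([0.6, 0.4, 0.3])
y = X @ true_beta + rng.normal(scale=0.7, size=n)  # simulated satisfaction

# Ordinary least squares with an intercept column.
X1 = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

# R^2: proportion of variance in satisfaction explained by the predictors.
residual = y - X1 @ coef
r2 = 1 - residual @ residual / ((y - y.mean()) @ (y - y.mean()))
```

With real questionnaire data, `X` would hold the respondents' predictor scores and `y` their satisfaction ratings; `r2` then corresponds to the proportion of variance the model explains.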
Traditionally, the transfer of learning has been considered the ultimate goal of any training process [24]. By transfer of learning, we understand the productive application of new knowledge, skills, and attitudes to the field of work [25,26,27,28,29,30]. Consequently, the motivation to transfer is defined as the conscious desire to use what is learned in training in the workplace [31,32,33,34].
Transfer factors are classified into three major groups by most studies: personal factors, training design factors, and workplace and organizational factors [35]. Satisfaction and transfer are reinforced when online training is perceived by workers as a flexible and high-quality process [7] with adequate feedback channels that help define learning objectives, improve results, and self-regulate learning [36].
In studies conducted to evaluate satisfaction with online training, the satisfaction-with-training variable has been found to correlate with transfer [37,38,39]. The educator's challenge is not only to impart knowledge that the participants wish to learn but also to ensure that participants react favourably to their training [40]. Singleton [41] evaluated a workplace training programme, using the CoI model presented above in the course design. Her results also indicated that the CoI model (cognitive, social, and teaching presences) was a good basis for designing online courses for the workplace, with a variation: she divided teaching presence into two presences, creating a design presence to distinguish instructional design from content delivery.
Many of the activities performed in the workplace, and much of what we learn there, are unintentional and fall under what is known as workplace learning [42]. Moore and Klein concluded that practitioners use various methods to encourage workplace learning, such as sharing knowledge, chatting and asking questions, encouraging or promoting informal learning activities, and creating and curating materials and learning objects to support their teams' informal learning. However, training in organizations mostly takes the form of formal training [42].
In addition, Riley [43] obtained a negative correlation between specific job satisfaction variables and the overall wellness of counsellor educators. She suggested that paying too much attention to one area, such as job satisfaction, could negatively affect overall wellness.
A variety of instruments are used to measure satisfaction with online training courses. Elliott and Shin [44] propose an instrument with 20 items. They start by evaluating satisfaction with online experiences through a single overall assessment item included at the end of a questionnaire. Such an assessment tool is of limited value because it does not always reflect satisfaction or dissatisfaction with all the elements involved in online training (e.g., tutoring, didactic design, communication, platform use, administrative management of the training). If a student experienced a problem with one aspect of the course (for example, an inability to access the course platform for two days due to a technical failure), he or she could evaluate the entire course as unsatisfactory based on only that attribute. In presenting their alternative to these single-item assessment instruments, Elliott and Shin conclude that the five most critical factors affecting student satisfaction are the value of course content, the registration process, teaching excellence, the opportunity to take desired classes, and the student placement rate.
Arbaugh [18] used a Likert-type questionnaire with responses ranging from 1 (strongly disagree) to 7 (strongly agree). The questionnaire consisted of seven scales: usefulness, ease of use, course flexibility, programme flexibility, difficulty of interaction, performance of the tutor in the interaction, and use of the course's website, with student satisfaction being the dependent variable. Similarly, Swan [13] applied a four-choice Likert-type response instrument with five scales: course satisfaction, perceived learning, personal activity in the course, perceived interaction with the tutor, and perceived interaction with peers. Lee, Srinivasan, Trail, Lewis, and López [14] designed a questionnaire on perceptions of support and satisfaction with an online course that included the following dimensions: instructional support, peer support, technical support, and course satisfaction. Three of the dimensions were positively correlated with satisfaction with the course.
One of the most prominent studies of student satisfaction with virtual courses is by Sun, Tsai, Finger, Chen, and Yeh [45]. This study revealed seven critical factors that affect students' perceived satisfaction with e-learning: the student's anxiety about the computer, the e-learning tutor's attitude, course flexibility, course quality, perceived usefulness, perceived ease of use, and diversity in assessments. Zambrano [7] replicated the study by Sun et al. [45] in a Spanish-speaking context. He obtained similar results, noting that the factors most strongly related to student satisfaction were course flexibility and course quality, whereas anxiety did not correlate significantly with satisfaction. Similarly, Arbaugh [18] found that course flexibility and the ability to develop an interactive environment played stronger roles in student satisfaction than ease or frequency of use of the environment. However, satisfaction with the learning environment was found to affect satisfaction with the course [15].
Focusing on transfer to the workplace, the Questionnaire on Work and Learning Habits for Future Professionals, validated in the Spanish context [46], offers 47 items and four dimensions: self-perception, information management, learning process management, and communication, although its focus is more on trainees' construction of Personal Learning Environments [47].
Torres-Gordillo and Cobos-Sanchiz [48] presented the Questionnaire to Evaluate Online Training in the Workplace (CEFOAL). This instrument is adapted to the socioeconomic and cultural index (ISEC) of the Spanish context. Its design was based on a literature review, from which a matrix of the main elements of the didactic process of online training was obtained [49,50,51]. The results of an exploratory factor analysis (EFA) of the instrument by the authors of this study [5] revealed five factors: pedagogical design, tutor performance, virtual environment design, timing, and transfer of learning. Pedagogical design includes items related to the teaching-learning process. Tutor performance focuses on student-tutor interaction. Virtual environment design refers to the structure and resources offered by the platform for online training. Timing refers to the adequacy of the time provided during training. Finally, transfer of learning is understood as the application in the workplace of the knowledge acquired in the training.
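The number of factors retained in an EFA of this kind is commonly guided by the eigenvalues of the item correlation matrix (the Kaiser criterion: retain factors whose eigenvalues exceed 1). The sketch below uses simulated data, not the CEFOAL sample; the sample size, loadings, and five-factor structure are illustrative assumptions:

```python
import numpy as np

# Simulate questionnaire responses driven by five latent factors,
# with four items loading on each factor (illustrative values only).
rng = np.random.default_rng(1)
n, items_per_factor, n_factors = 500, 4, 5
n_items = items_per_factor * n_factors

assign = np.repeat(np.arange(n_factors), items_per_factor)
loadings = np.zeros((n_items, n_factors))
loadings[np.arange(n_items), assign] = rng.uniform(0.6, 0.9, n_items)

scores = rng.normal(size=(n, n_factors))                  # latent factor scores
items = scores @ loadings.T + rng.normal(scale=0.5, size=(n, n_items))

# Eigenvalues of the item correlation matrix, largest first.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
n_retained = int(np.sum(eigvals > 1))                     # Kaiser criterion
```

On data with a clear five-factor structure like this, five eigenvalues stand well above 1 while the rest fall below it, which is the pattern an EFA scree inspection would show.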
In Spain, there is a need to measure the satisfaction with and impact of online training [5,48] due to the increase in online training over the last two decades, and few instruments assess what the CEFOAL measures. However, the factorial structure of the latter instrument has not been confirmed. Other instruments measure only partial aspects of what the CEFOAL intends. This instrument integrates satisfaction, impact, and transfer of learning in the workplace in online courses. Therefore, the aim of this study is to validate the factor structure of the CEFOAL by confirmatory factor analysis (CFA) to provide evidence supporting the validity of the dimensions established by its authors or, where appropriate, to propose an alternative structure. The relevance of the problem lies in the attempt to simplify this instrument to obtain a version that is easier to apply and more likely to be completed by training-course participants without excessive loss of information about the main aspects assessed. The added value of this research is therefore reflected in this attempt to offer a validated and confirmed instrument that can be applied to any online training course when three important elements are to be determined: satisfaction, impact, and transfer of learning in the workplace. Furthermore, this research responds to the demand of most educational institutions today to evaluate their training processes and the results achieved.
4. Discussion and Conclusions
An important issue related to e-learning is how to verify whether the online training experienced by users meets their expectations. In addressing this issue, we confirmed the factor structure by CFA. Thus, the article provides a valid and reliable tool for evaluating online courses in the work environment.
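The reliability of an instrument of this kind is conventionally summarized with Cronbach's alpha computed over each factor's items. A minimal sketch of that computation; the response matrix is hypothetical, not data from this study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for a (respondents x items) matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert responses: five respondents, four items of one factor.
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
alpha = cronbach_alpha(scores)  # values near 1 indicate consistent items
```

Alpha approaches 1 when the items of a factor co-vary strongly across respondents, which is the property a reliability check on each CEFOAL dimension would look for.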
In this paper, we reviewed different instruments used to evaluate student satisfaction with online training [7,13,14,15,16,17,18,19,23,29,35,39,46]. The instrument we present contributes to our scientific field by addressing relevant factors, such as pedagogical design, tutor performance, timing, virtual environment design, and transfer of learning, that are regarded in the scientific literature as determining levels of satisfaction with online courses. This instrument can be used to evaluate any online training process.
In comparison with the instruments reviewed in this article [7,13,14,15,16,17,18,19,23,29], and especially those focused on the Spanish context [35,39,46], the CEFOAL integrates new factors in a single instrument, with high loadings on each factor, which makes it an innovative instrument. Unlike the Questionnaire of Transfer Factors [35] and the FET (Factors for the Evaluation of Transfer) model [38,39], which focus only on the transfer of learning, the CEFOAL is a more complete instrument for the evaluation of online training in any context. The questionnaire by Feixas et al. [35] focuses exclusively on the university environment. With respect to pedagogical design, the most significant improvements are in the objectives fulfilled, their correspondence with the contents, and the coordination between the different content sections.
Where the CEFOAL truly attains high values, however, is in tutor performance, virtual environment design, and timing, which the other instruments do not address. Although we focus on e-learning and virtual environments in which independent learning (involving, e.g., self-regulation, self-motivation, time management, and multitasking) plays an important role [3], tutoring remains a primary element in online training. In addition, online feedback can be provided quickly and transmitted through agile channels [3,36]. The CEFOAL yields very high values for the communication channels and the resources offered to support learners' self-directed learning.
Another contribution of our instrument is its incorporation of a factor related to timing. One of the learner complaints most often mentioned in the scientific literature relates to frustration or poor design [3], which implies extra hours of dedication beyond what was planned by the teachers or training designers. Measuring the time dedicated to the course with the timing factor is an added value of the instrument. Timing was neither incorporated into the reviewed instruments nor even identified as necessary to include for further study [3]. Thus, the inclusion of this factor in evaluations of online training is an important contribution.
In addition, the instruments analysed in our literature review were developed mainly based on samples of students from different academic levels, particularly the bachelor's and master's degree levels. Our instrument also incorporates the factor of transfer of learning to the job. This feature makes it valuable in the context of non-formal training, unlike other instruments aimed only at university training [35]. Our instrument can be used to evaluate the satisfaction of workers with continuous online training received in institutions and companies. The design of training and learning (pedagogical design) appears to be a factor related to transfer, a result also found by Feixas et al. [35], and the correlation between these factors was among the highest (.727) observed. Another important correlation is between transfer of learning and the design of the virtual environment, which reflects how sensitive students are to the online learning environment [15]. Therefore, the incorporation of transfer of learning is prominent, at a level comparable to that of instruments devoted specifically to transfer [38,39], within an instrument that also assesses satisfaction with and the impact of training, reaching acceptable values.
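Inter-factor correlations such as the .727 between pedagogical design and transfer are Pearson coefficients computed between respondents' factor scores. A sketch with hypothetical scores for six respondents; the values, and therefore the resulting coefficient, are illustrative only, not the study's data:

```python
import numpy as np

# Hypothetical mean factor scores for six respondents (illustrative values).
pedagogical_design = np.array([4.2, 3.1, 4.8, 2.5, 3.9, 4.5])
transfer_of_learning = np.array([4.0, 3.3, 4.6, 2.9, 3.5, 4.4])

# Pearson correlation between the two factors.
r = np.corrcoef(pedagogical_design, transfer_of_learning)[0, 1]
```

A coefficient well above zero, as here, indicates that respondents who rate the pedagogical design highly also tend to report greater transfer of learning.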
Regarding this study's limitations, the collected data were drawn from assessments provided by participants in the studied training processes. We were unable to test the transfer factor in situ, a limitation that Quesada-Pallarès [33] notes, observing the temporal and economic cost of studying learning transfer and the difficulties involved in gaining access to companies. The difficulty of collecting data in situ, a common problem for the scientific community, is an undesirable handicap that substantially affects socio-educational research in general.
With these results and the current increase in online courses in companies and training institutions, the need for online evaluation is more visible than ever. What was perceived a few years ago as a limitation of being online is today being overcome by the need to incorporate online evaluation in all training institutions and companies. Companies themselves analyse their investment in training when they have evidence that learning is transferred to the workplace [58]. In such environments, this instrument would be very valuable for improving training actions and for strengthening companies' confidence in the profitability of putting training into practice, both for economic improvement and for improving products. Both universities and other institutions that offer on-the-job training can consider the suitability of applying the CEFOAL to determine satisfaction, impact, and transfer of learning.
This research should continue with more studies using different participant samples (e.g., students training in different topics), implementing varied didactic designs, organizing training times in different ways, or involving more tutoring (thereby developing new tutoring methods). Additionally, how feedback quality affects satisfaction represents a further topic for consideration. Input from workers regarding their satisfaction with the transfer of learning to their jobs can also be assessed. More research is also required to determine whether the use of technology and online training meets the expectations of students with disabilities [3]. As the authors contend, an important challenge for institutions is to design courses to meet the needs and expectations of all students.