#### *3.2. Data Collection*

The data was collected through online surveys from individuals studying at the British University in Dubai (BUiD) during the winter semester of 2019/2020, from 15 January to 20 February 2020. Of the 400 questionnaires circulated, 372 were answered, an aggregated response rate of 93%. These 372 questionnaires were filled out correctly and found to be useful, while 28 were rejected because of missing values. The required sample size for a population of 1500 was 306 respondents; since the 372 valid responses exceed this requirement, the sample size was deemed adequate according to [71]. This sample could therefore be analyzed using structural equation modeling [72] to verify the hypotheses. It must be noted that the hypotheses were based on current theories and were adjusted to the e-learning context. To assess the measurement model, the researchers used structural equation modeling (SEM) [72]. Further treatment was performed using a final path model.
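The required sample of 306 for a population of 1500 matches the widely used Krejcie and Morgan formula. Since [71] is not reproduced here, the sketch below is an assumption about the calculation behind that figure, not the authors' own procedure:

```python
import math

def required_sample_size(population: int, chi_sq: float = 3.841,
                         p: float = 0.5, margin: float = 0.05) -> int:
    """Krejcie & Morgan (1970) required sample size for a finite population.

    chi_sq is the chi-square value for 1 df at 95% confidence; p = 0.5
    maximizes the variance term; margin is the desired precision.
    """
    numerator = chi_sq * population * p * (1 - p)
    denominator = margin ** 2 * (population - 1) + chi_sq * p * (1 - p)
    return math.ceil(numerator / denominator)

# For the study's population of 1500, the formula reproduces the
# required sample of 306 cited in the text.
print(required_sample_size(1500))  # → 306
```

Any valid-response count at or above this figure, such as the 372 obtained here, satisfies the criterion.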

#### *3.3. Students' Personal Information/Demographic Data*

The assessment of personal/demographic data is covered in Table 2. Of the respondents, 53% were male and 47% were female. A total of 33% of students were aged 18 to 29 years, while 67% were aged over 29. In terms of academic background, 39% of students were from the Faculty of Engineering and IT, 35% were from the Faculty of Education, and 26% were from the Faculty of Business and Law. The majority of respondents came from educated families and held university degrees: 49% of participants had bachelor's degrees, 42% had master's degrees, and 9% had doctoral degrees. A purposive sampling approach was used, as per [3], since the respondents were willing to volunteer and were easily approachable. The sample comprised students of different ages from different faculties, enrolled in diverse programs at different levels. The demographic data was evaluated with the aid of IBM SPSS Statistics ver. 23. Table 2 depicts the complete demographic data of the respondents.


**Table 2.** Demographic data of the respondents.

#### *3.4. Study Instrument*

This research used a survey instrument to validate the hypotheses. The questionnaire consisted of 30 items measuring seven constructs. Table 3 depicts the sources of the constructs. Questions from prior studies were adapted to enhance their appropriateness for this research.

**Table 3.** Constructs and their sources.

Note: TPACK = Technological pedagogical content knowledge; TSE = Technology self-efficacy; PEOU = Perceived ease of use; PU = Perceived usefulness; POS = Perceived organizational support; CTRLM = Controlled motivation; CU = Continuous intention to use e-learning platform.

#### *3.5. Pilot Study for the Questionnaire*

A pilot study was conducted to check the reliability of the questionnaire items. Approximately 40 students and teachers were chosen at random from the study population, corresponding to 10% of the aggregated sample size of this study (400 students and teachers) and thus adhering to common research criteria. To judge the outcomes of the pilot study, internal reliability was computed with Cronbach's alpha test in IBM SPSS Statistics ver. 23, and the measurement items showed appropriate results. A value of 0.7 was taken as an acceptable reliability coefficient for social science research [14]. Tables 4 and 5 show the Cronbach's alpha values for the seven measurement scales for teachers and students.
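The pilot reliability check can also be reproduced outside SPSS. The sketch below computes Cronbach's alpha from its definition on a synthetic respondent-by-item matrix (the data is invented for illustration; the study's actual pilot responses are not available here):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of scale total).
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative data: 40 pilot respondents, 4 correlated 5-point Likert items
# built from a shared base score plus small item-specific noise.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(40, 1))
items = np.clip(base + rng.integers(-1, 2, size=(40, 4)), 1, 5).astype(float)
alpha = cronbach_alpha(items)
print(round(alpha, 3))  # high, because the items share a common base score
```

Scales whose alpha meets the 0.7 cut-off cited above would be retained; items dragging alpha below it would be candidates for revision.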

**Table 4.** Cronbach's alpha values for the pilot study (Cronbach's alpha ≥ 0.70) for teachers (Model A).


Note: TPACK = Technological pedagogical content knowledge; TSE = Technology self-efficacy; PEOU = Perceived ease of use; PU = Perceived usefulness; POS = Perceived organizational support; CU = Continuous intention to use e-learning platform.

#### *3.6. Survey Structure*

The questionnaire survey given to students and teachers had two sections. The first section gathered personal data about the students and teachers. The second section contained groups of questions related to the main factors of the proposed models: the teachers' questionnaire had six sub-sections corresponding to the six factors proposed in Model A, and the students' questionnaire had five sub-sections corresponding to the five factors proposed in Model B. The 42 items were evaluated on a five-point Likert scale: (1) strongly disagree, (2) disagree, (3) neutral, (4) agree, and (5) strongly agree.

**Table 5.** Cronbach's alpha values for the pilot study (Cronbach's alpha ≥ 0.70) for students (Model B).


Note: TSE = Technology self-efficacy; PEOU = Perceived ease of use; PU = Perceived usefulness; CTRLM = Controlled motivation; CU = Continuous intention to use e-learning platform.

#### **4. Findings and Discussion**

#### *4.1. Data Analysis*

The data analysis in this research was conducted using partial least squares structural equation modeling (PLS-SEM) with the help of SmartPLS V.3.2.7 software [15]. The assessment approach had two steps, a measurement model and a structural model, which allowed the collected data to be studied [16]. There were various reasons for choosing PLS-SEM in this study. First, as the research is an extension of a current theory, PLS-SEM was considered the best option [17]. Second, complex models within exploratory research can be effectively tackled with the help of PLS-SEM [18]. Third, PLS-SEM analyzes a complete model as a single unit, so there is no need to divide it [19]. Lastly, PLS-SEM provides concurrent analysis of the measurement model as well as the structural model, leading to more accurate calculations [20].

#### *4.2. Convergent Validity*

To review the measurement model, it was suggested by [16] that construct reliability (including composite reliability (CR), Dijkstra–Henseler's rho (ρA), and Cronbach's alpha (CA)) and validity (including convergent and discriminant validity) must be considered. To determine construct reliability, Cronbach's alpha (CA) values were examined; as Tables 6 and 7 show, they lie between 0.782 and 0.895, above the threshold value of 0.7 [78]. According to Tables 6 and 7, the outcomes also show that composite reliability (CR) has values from 0.796 to 0.882, evidently greater than the recommended value of 0.7 [79]. Additionally, researchers should appraise construct reliability by means of the Dijkstra–Henseler's rho (ρA) reliability coefficient [80]. Like CA and CR, the reliability coefficient ρA must reach 0.70 or higher in exploratory studies and values above 0.80 or 0.90 in further stages of research [78,81,82]. According to Tables 6 and 7, the reliability coefficient ρA of each measurement construct is above 0.70. Based on these outcomes, construct reliability is verified and all the constructs were considered accurate.

Convergent validity can be measured by testing the average variance extracted (AVE) as well as the factor loadings [16]. Tables 6 and 7 show that all factor loadings exceeded the threshold value of 0.7 and that the AVE values were higher than the threshold value of 0.5, ranging from 0.509 to 0.718. Based on these results, convergent validity was established for all the constructs.
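For readers reproducing these checks outside SmartPLS, CR and AVE follow directly from the standardized factor loadings. The loadings below are hypothetical, chosen only to illustrate the formulas and the 0.7/0.5 thresholds cited above:

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) for standardized loadings λ."""
    s = loadings.sum() ** 2
    e = (1 - loadings ** 2).sum()   # summed error variances
    return s / (s + e)

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return (loadings ** 2).mean()

# Hypothetical loadings for one reflective construct, all above 0.7.
lam = np.array([0.72, 0.78, 0.81, 0.75])
print(round(composite_reliability(lam), 3))       # → 0.85
print(round(average_variance_extracted(lam), 3))  # → 0.586
```

Both values clear the respective cut-offs (CR ≥ 0.7, AVE > 0.5), mirroring the pattern reported in Tables 6 and 7.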


**Table 6.** Convergent validity results that ensure acceptable values (factor loading, Cronbach's alpha, composite reliability (CR), Dijkstra–Henseler's rho (ρA) ≥ 0.70 and average variance extracted (AVE) > 0.5) (Model A).

**Table 7.** Convergent validity results that ensure acceptable values (factor loading, Cronbach's alpha, composite reliability (CR), Dijkstra–Henseler's rho (ρA) ≥ 0.70 and AVE > 0.5) (Model B).




#### *4.3. Discriminant Validity*

Two criteria were suggested for measuring discriminant validity: the Fornell–Larcker criterion and the heterotrait–monotrait ratio (HTMT) [16]. As per the findings in Tables 8 and 9, the Fornell–Larcker criterion is satisfied, as the square root of each construct's AVE exceeds that construct's correlations with the other constructs [83].

**Table 8.** Fornell–Larcker Scale (Model A).


Note: TPACK = Technological pedagogical content knowledge; TSE = Technology self-efficacy; PEOU = Perceived ease of use; PU = Perceived usefulness; POS = Perceived organizational support; CU = Continuous intention to use e-learning platform.

**Table 9.** Fornell–Larcker Scale (Model B).

Note: TSE = Technology self-efficacy; PEOU = Perceived ease of use; PU = Perceived usefulness; CTRLM = Controlled motivation; CU = Continuous intention to use e-learning platform.

Tables 10 and 11 show the outcomes of the HTMT ratio: all values fall below the threshold of 0.85 [27], confirming the HTMT criterion. These outcomes establish discriminant validity. Overall, the analysis indicated that the measurement model is valid and reliable, and the collected data was therefore appropriate for further evaluation of the structural model.
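While the Fornell–Larcker criterion compares each construct's √AVE against its correlations with other constructs, HTMT is computed directly from the item correlation matrix. The sketch below uses an invented two-construct, four-item correlation matrix to illustrate the 0.85 threshold; it is not the study's data:

```python
import numpy as np

def htmt(corr: np.ndarray, idx_i: list, idx_j: list) -> float:
    """Heterotrait–monotrait ratio from an item correlation matrix.

    idx_i / idx_j are the item column indices of constructs i and j.
    HTMT = mean between-construct item correlation, divided by the
    geometric mean of the two mean within-construct correlations.
    """
    hetero = corr[np.ix_(idx_i, idx_j)].mean()

    def mono(idx):  # mean off-diagonal within-construct correlation
        block = corr[np.ix_(idx, idx)]
        k = len(idx)
        return (block.sum() - k) / (k * (k - 1))

    return hetero / np.sqrt(mono(idx_i) * mono(idx_j))

# Hypothetical matrix: items 0-1 measure construct A, items 2-3 construct B;
# within-construct correlations 0.8, between-construct correlations 0.4.
corr = np.array([[1.0, 0.8, 0.4, 0.4],
                 [0.8, 1.0, 0.4, 0.4],
                 [0.4, 0.4, 1.0, 0.8],
                 [0.4, 0.4, 0.8, 1.0]])
print(htmt(corr, [0, 1], [2, 3]))  # → 0.5, below the 0.85 threshold
```

A value below 0.85, as in every cell of Tables 10 and 11, indicates that the two constructs are empirically distinct.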


**Table 10.** Heterotrait–Monotrait Ratio (HTMT) (Model A).

Note: TPACK = Technological pedagogical content knowledge; TSE = Technology self-efficacy; PEOU = Perceived ease of use; PU = Perceived usefulness; POS = Perceived organizational support; CU = Continuous intention to use e-learning platform.

**Table 11.** Heterotrait–Monotrait Ratio (HTMT) (Model B).
Note: TSE = Technology self-efficacy; PEOU = Perceived ease of use; PU = Perceived usefulness; CTRLM = Controlled motivation; CU = Continuous intention to use e-learning platform.

#### *4.4. Model Fit*

SmartPLS provides the following fit measures, which indicate model fit in PLS-SEM: the standardized root mean square residual (SRMR), the exact fit criteria of squared Euclidean distance (d\_ULS) and geodesic distance (d\_G), Chi-square, the Normed Fit Index (NFI), and RMS Theta [84]. SRMR indicates the difference between the observed correlations and the correlation matrix implied by the model [85], and values smaller than 0.08 are assumed to indicate good model fit [86]. NFI values higher than 0.90 indicate a good model fit [87]. The NFI is the ratio of the Chi-square value of the proposed model to that of the null model (also known as the benchmark model) [88]. The NFI increases with larger numbers of parameters and is therefore not recommended as a model-fit indicator [85]. The squared Euclidean distance (d\_ULS) and the geodesic distance (d\_G) indicate the discrepancy between the empirical covariance matrix and the covariance matrix implied by the composite factor model [80,85]. RMS Theta measures the degree of correlation among the outer model residuals and is appropriate for reflective models only [88]. The nearer the RMS Theta value is to zero, the better the PLS-SEM model; values of less than 0.12 are assumed to indicate a good fit, while higher values suggest an absence of fit [89]. The saturated model evaluates the correlations between all constructs, as recommended by [29], while the estimated model also takes the model structure into consideration. As given in Tables 12 and 13, the RMS Theta value was 0.073 in both Model A and Model B, which suggests that the goodness-of-fit of the PLS-SEM models was sufficient to establish global PLS model validity.
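Under one common convention (lower triangle of the correlation matrices, diagonal included), SRMR reduces to a few lines of code. The observed and model-implied matrices below are invented for illustration and are not the study's matrices:

```python
import numpy as np

def srmr(observed: np.ndarray, implied: np.ndarray) -> float:
    """Standardized root mean square residual between an observed and a
    model-implied correlation matrix (lower triangle, diagonal included)."""
    mask = np.tril_indices_from(observed)
    resid = observed[mask] - implied[mask]
    return np.sqrt((resid ** 2).mean())

# Hypothetical 3-variable observed vs. model-implied correlations.
obs = np.array([[1.00, 0.52, 0.40],
                [0.52, 1.00, 0.33],
                [0.40, 0.33, 1.00]])
imp = np.array([[1.00, 0.50, 0.42],
                [0.50, 1.00, 0.30],
                [0.42, 0.30, 1.00]])
print(round(srmr(obs, imp), 4))  # → 0.0168, well below the 0.08 cut-off
```

Small residuals between the two matrices drive SRMR toward zero, which is why values under 0.08 are read as good fit.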


**Table 12.** Model fit indicators (Model A).

**Table 13.** Model fit indicators (Model B).


#### *4.5. Hypotheses Testing Using PLS-SEM*

The interdependence between the different theoretical constructs in the structural model was studied using structural equation modeling with SmartPLS [38,39]; this constitutes the analysis of the proposed hypotheses. As shown in Tables 14 and 15, approximately 83% and 71% of the variance in continuous intention to use the e-learning platform was explained, which indicates the high predictive power of Models A and B [37]. For all the proposed hypotheses, the PLS-SEM technique provided the beta (β) values, *t*-values, and *p*-values stated in Tables 16 and 17. The empirical data supports hypotheses H1, H2, H3, H4, H5, and H6. The standardized path coefficients and path significances are shown in Figures 3 and 4.

**Table 14.** R² of the endogenous latent variables (Model A).


**Table 15.** R² of the endogenous latent variables (Model B).


In Model A, technological pedagogical content knowledge (TPACK), technology self-efficacy (TSE), perceived ease of use (PEOU), perceived usefulness (PU), and perceived organizational support (POS) have significant effects on continuous intention to use the e-learning platform (CU) ((β = 0.336, *p* < 0.001), (β = 0.426, *p* < 0.05), (β = 0.589, *p* < 0.05), (β = 0.625, *p* < 0.05), and (β = 0.553, *p* < 0.001), respectively); hence, H1, H2, H3, H4, and H5 are supported.

In Model B, technology self-efficacy (TSE), perceived ease of use (PEOU), perceived usefulness (PU), and controlled motivation (CTRLM) have significant effects on continuous intention to use the e-learning platform (CU) ((β = 0.290, *p* < 0.001), (β = 0.357, *p* < 0.05), (β = 0.465, *p* < 0.05), and (β = 0.243, *p* < 0.05), respectively); hence, H2, H3, H4, and H6 are supported.
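The *t*- and *p*-values behind these path coefficients come from SmartPLS bootstrapping. As a rough illustration of the idea, the sketch below bootstraps a single standardized path on synthetic data, reducing the path to a simple correlation, which is an intentional simplification of full PLS path estimation:

```python
import numpy as np

def bootstrap_path(x, y, n_boot=2000, seed=1):
    """Bootstrap a standardized path coefficient (here a simple
    correlation) and derive the t-value used for significance testing."""
    rng = np.random.default_rng(seed)
    n = len(x)
    beta = np.corrcoef(x, y)[0, 1]         # point estimate of the path
    boots = np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, n, n)          # resample respondents with replacement
        boots[b] = np.corrcoef(x[i], y[i])[0, 1]
    se = boots.std(ddof=1)                 # bootstrap standard error
    return beta, beta / se                 # path coefficient, t-value

# Synthetic stand-ins for, e.g., PEOU and CU scores of 372 respondents.
rng = np.random.default_rng(0)
x = rng.normal(size=372)
y = 0.5 * x + rng.normal(scale=0.9, size=372)
beta, t = bootstrap_path(x, y)
print(round(beta, 2), t > 1.96)  # t above 1.96 implies p < 0.05, two-tailed
```

A |t| above roughly 1.96 (or 2.58) corresponds to the *p* < 0.05 (or *p* ≤ 0.01) significance levels flagged in Tables 16 and 17.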

**Table 16.** Hypotheses testing of the research model (significant at \*\* *p* ≤ 0.01, \* *p* < 0.05) (Model A).


Note: TPACK = Technological pedagogical content knowledge; TSE = Technology self-efficacy; PEOU = Perceived ease of use; PU = Perceived usefulness; POS = Perceived organizational support; CU = Continuous intention to use e-learning platform.

**Table 17.** Hypotheses testing of the research model (significant at \*\* *p* ≤ 0.01, \* *p* < 0.05) (Model B).


Note: TSE = Technology self-efficacy; PEOU = Perceived ease of use; PU = Perceived usefulness; CTRLM = Controlled motivation; CU = Continuous intention to use e-learning platform.

**Figure 3.** Path coefficient of the model (significant at \*\* *p* ≤ 0.01, \* *p* < 0.05) (Model A).

**Figure 4.** Path coefficient of the model (significant at \*\* *p* ≤ 0.01, \* *p* < 0.05) (Model B).

#### **5. Discussion and Conclusions**

This study proposed two unique CU models that take into consideration factors affecting both instructors' and students' attitudes. The two models can be theoretically extended to enhance other technology-supported educational environments and instructional processes. The first research model, of instructors' CU, was proposed on the basis of social cognitive theory, along with personal, behavioral, and environmental elements that are closely related to instructors' CU. In general, the results of the SEM analysis supported all the proposed hypotheses. From a practical perspective, this study has shown that POS is the most influential factor affecting instructors' CU of e-learning platforms.

Refs. [90–92] seem to agree with the current conclusion in stating that POS could motivate staff members, leading to improvement of the organization. However, a study by [93] placed emphasis on staff members' personalities and readiness to change. This implies that a lack of organizational support may have negative consequences. A study by [60] proposed that when instructors feel there is no adequate organizational support, they are less likely to continue using the technology, especially in an educational atmosphere where instructors are supposed to implement various in-class pedagogical changes to facilitate a better learning environment for the students.

In fact, POS is not the only factor affecting instructors' CU; instructors' TPACK is another key factor. Most previous research has shown that organizational support may affect users' motivation to use the technology. A study by [94] put emphasis on the effect of TPACK in facilitating the e-learning process for both teachers and adult learners. This is in line with the results obtained from this study, where TPACK affected teachers' performance to a great extent. It is assumed that when teachers' content, technological, and pedagogical knowledge is high, their ability to adapt the teaching material to the newly used technology will be more practical and effective. Teachers are also more motivated when there is reliable technical support and IT staff that can facilitate the process of establishing new computer-supported knowledge [66,68].

Similarly, CTRLM has an effect on students' CU. CTRLM, along with technology self-efficacy (TSE), are the key factors with the greatest impact on students' CU. The study has shown that CTRLM is deeply connected with the willingness to use the e-learning teaching platform: the higher the motivation, the more effective the results obtained. Previous studies have tackled the effect of CTRLM on students' performance and have shown that both pedagogical and non-pedagogical elements affect motivation [95–97]. These studies have indicated that technology development has had a positive effect on motivation, as it urges students to engage with the new learning platform. This raises motivation and class involvement even further, increasing students' willingness to learn and, thus, to use the technology continuously [98,99].
