Article

Using Artificial Intelligence to Predict Students’ Academic Performance in Blended Learning

1
College of Science and Theoretical Studies, Saudi Electronic University, Riyadh 11673, Saudi Arabia
2
College of Computing and Informatics, Saudi Electronic University, Riyadh 11673, Saudi Arabia
3
College of Sciences & Human Studies, Prince Mohammad Bin Fahd University, Al Khobar 31952, Saudi Arabia
*
Author to whom correspondence should be addressed.
Sustainability 2022, 14(18), 11642; https://doi.org/10.3390/su141811642
Submission received: 31 July 2022 / Revised: 11 September 2022 / Accepted: 12 September 2022 / Published: 16 September 2022

Abstract

University electronic learning (e-learning) has witnessed phenomenal growth, especially in 2020, due to the COVID-19 pandemic, which forced many universities to move to online and blended learning environments. This type of education is significant because it ensures that all students receive the required learning. Purely statistical evaluations, however, are limited in their ability to predict the quality of a university's e-learning. This paper presents a statistical analysis to identify the most common factors that affect students' performance, and then uses artificial neural networks (ANNs) to predict students' performance within the blended learning environment of Saudi Electronic University (SEU). Accordingly, this study generated a dataset from SEU's Blackboard learning management system. Students' performance was tested against a set of four factors: the mode of study (face-to-face or virtual), the percentage of live lectures attended, midterm exam scores, and the percentage of solved assessments. The results showed that all four factors influence academic performance. We then proposed a new ANN model to predict students' performance from these four factors, using the Firefly Algorithm (FFA) to train the ANNs. The proposed model's performance was evaluated through different statistical tests, such as error functions, statistical hypothesis tests, and ANOVA tests.

1. Introduction

E-learning is an educational strategy that uses telecommunication technology to deliver information for education and training [1]. It is defined as the ability of a system to transfer, manage, support, and supervise learning materials electronically [2]. E-learning encompasses more than just computer-based teaching through the Internet or online information transmission [3]. Its two key elements are electronic technology and the learning experience; the term "e-learning" refers to both the technology and the method of instruction [4]. In the last few years, we have seen steady growth in the use of e-learning methods, made possible by the availability of digital channels and the cost-effectiveness of e-learning. The main advantages of e-learning, such as the absence of constraints (in terms of date, room, and duration of training), teacher availability, and cost-effectiveness in course delivery and management, prompted educational and training institutions to adopt e-learning by implementing an expanding list of technology-enabled platforms [5]. According to a systematic study by the World Health Organization, 29% of studies evaluating knowledge improvement and 40% of studies evaluating skills improvement showed a benefit of e-learning compared to traditional learning [6].
Because of the sudden worldwide outbreak of the COVID-19 pandemic, education systems across the globe were forced to switch from traditional learning to e-learning. UNESCO recommended that educational institutes gear up with online learning tools [7]. Due to COVID-19, e-learning strategies have become more popular and are now the most accessible means of education [8]. However, despite the successful usage of e-learning systems, some challenges face the e-learning process, such as content transmission and delivery as well as enabling technologies [9]. From the point of view of Almaiah et al. [10], students' acceptance of e-learning is considered one of the main challenges for the success of e-learning systems, while a study conducted by Al-Arabi et al. [11] considers technical issues the main criterion for the success of an e-learning system.
The blended learning system is an educational approach that combines e-learning with face-to-face teaching. Currently, blended learning is the most popular teaching method adopted by educational institutions due to its provision of flexible, timely, and continuous learning. In addition, blended learning increases the interaction between teachers and their students [12,13]. The importance and benefits of a blended learning approach to improving teaching and learning are demonstrated in various studies, and many researchers consider it the "new normal" [13,14,15]. The increasing popularity of blended learning has created a demand for improving its efficiency. Current research focuses on improving the quality of blended learning by using accurate prediction mechanisms for various aspects of blended learning efficiency, such as quality, the probability of successful completion, participant satisfaction, detection of learning styles, etc. Predicting student performance at entry and in subsequent periods enables universities and schools to develop and refine effective intervention plans, benefiting both management and educators [16]. Several studies in the literature have addressed issues related to e-learning prediction and forecasting. Ioanna et al. [17] proposed a student achievement prediction method that dynamically predicts students' final achievement and clusters them into two virtual groups according to their performance, using multiple feed-forward ANNs. Using Fuzzy C-Means as a clustering algorithm, El Aissaoui et al. [18] proposed a generic approach for automatically detecting learning styles according to a given learning styles model. Another study, by Viloria et al. [19], models student engagement with the study material using data mining approaches to enhance e-learning processes. They used prediction rules whose interpretation helps detect weaknesses in the educational process and evaluate the quality of the study material. To improve learning systems in an e-learning environment, Novais et al. [20] proposed an intelligent learning system that monitors patterns of students' behavior during e-assessments to support the teaching procedure within school environments.
There are opportunities to investigate the effect of technology on student grades in a blended learning environment where ANNs and statistical analysis are merged. An ANN is a parallel, iterative technique built from small processing units called neurons; the perceptron, a network of primary neurons, is the basic unit of a multilayer neural network. This paper describes a new ANN model that uses several factors to predict students' final grades early. Multilayer perceptron neural networks (MLPNNs) were chosen for their predictive efficiency and their generalizability. In addition, we used nature-inspired metaheuristic algorithms to train the ANN models. In parallel, we tested the effectiveness of the new factors, which make up the dataset of this study, for predicting student scores.
In recent years, metaheuristic optimization algorithms have become very popular in artificial intelligence (AI), optimization, engineering applications, data mining, and machine learning [21,22,23,24]. These metaheuristic algorithms are now among the most widely used techniques for optimization and have several advantages over conventional techniques, the two most important being flexibility and simplicity. Metaheuristic algorithms are highly flexible and can deal with diverse objective functions, whether discrete, continuous, or mixed [23,25]. Moreover, these algorithms are simple to apply, yet able to solve complex problems; they have solved several real-world optimization problems in domains such as engineering, AI, and operations research. Various metaheuristic optimization algorithms, such as Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and FFA, have been used to find the optimal parameters of ANN models [26]. In this study, the FFA is applied as an optimization method to enhance the performance of the ANN models. Note that ANNs have effective, robust, and prominent features for capturing the nonlinear relationships between parameters and responses in a complex system, and FFA has proven to have a high ability to search for the globally optimal solution. Employing FFA and MLPNNs as tools to predict and improve students' academic performance is therefore both interesting and effective. As a result, an MLPNN was integrated with FFA to construct a hybrid model for predicting students' academic performance in blended learning.

2. Literature Review

Sayed and Baker [27] introduced an ANN model as a type of supervised learning. They explained how to employ ANNs to create a convergent mathematical model using e-learning interactions and social analytics. Paul and Jefferson [28] compared teaching strategies over eight years, examining score variations between genders and classifications to ascertain whether a certain teaching method had a larger influence on particular groups. There was no discernible difference in performance between face-to-face and online students; their results show that, regardless of gender or class position, environmental science concepts may be similarly conveyed to non-STEM majors on both conventional and online platforms. In their approach, Lau et al. [29] combined traditional statistical analysis with ANN modeling to predict student performance, using the Levenberg–Marquardt algorithm as the backpropagation training rule. The performance of the neural network model was assessed through statistical techniques; despite its drawbacks, the model achieved a decent overall prediction accuracy of 84.8%. To choose attributes with high influence on student performance, Khasanah and Harwati [30] used feature selection. They applied Bayesian networks and decision trees, comparing the results to determine which provided the best predictions. The results demonstrated that student attendance and first-semester grade point average were top-ranked by all feature selection methods, and that the Bayesian network outperformed the decision tree with a greater accuracy rate. Maheswari and Preethi [31] employed deep learning to improve students' placement performance; such an approach can benefit teachers, students, and educational institutions. To forecast students' placement performance, they used the logistic regression algorithm and examined which algorithm provided the best accuracy for the dataset. Sultana et al. [32] used educational data mining to assess student performance, predicting performance and learning behavior by uncovering hidden knowledge from learning records. Using various classification systems, they investigated student performance and identified the one that produced the best results. Oyedeji et al. [33] utilized machine learning algorithms to examine data collected from past student tests, including each student's age, family history, and attitude toward learning. They compared linear regression for supervised learning, linear regression with deep learning, and ANNs, with linear regression for supervised learning yielding the best mean average error. The cumulative grade point average (CGPA) upon graduation was used by Ibrahim and Rusli [34] to gauge academic success. They created three prediction models: ANNs, decision trees, and linear regression. They demonstrated that all three models produced more than 80% accuracy and that ANNs performed better than the other two models. Mandal et al. [35] determined ANN weights by minimizing a cost or error function. To train the ANN, the FFA, a powerful metaheuristic optimization technique inspired by fireflies' natural movement toward greater light, was used; the simulation results demonstrate the high computational efficiency of the firefly-based training process. Based on this literature review, the present study uses a new ANN model (MLPNN with FFA) to analyze the predictability of student performance and verifies the generality of the prediction through new factors: mid-exams, assignments, attendance, and virtual/face.

3. Mathematical Modeling

3.1. Firefly Algorithm

Swarm intelligence algorithms are heuristic algorithms inspired by phenomena observed in nature. Such algorithms search for optimal solutions in the search space by conducting a cooperative population-based search. The FFA is a practical swarm intelligence algorithm proposed by Yang in 2008 [36]. The FFA and its variants have recently emerged as powerful instruments for solving optimization problems in different disciplines, such as engineering optimization, machine learning, path planning, and production scheduling.
The FFA simulates the fireflies’ flashing behavior and effectively determines the optimal solutions (both global and local) [23]. Every natural firefly exhibits luminous behavior to attract mates. A firefly’s attractiveness depends on several factors, including the light intensity it emits, its size, and its location. The idealized flashing behavior follows this scenario. First, each firefly is attracted to brighter fireflies. Second, a firefly’s attraction is decided by its brightness, which depends on the value of an objective function [37]. This means that any less bright firefly moves towards a brighter one, and if there is no brighter firefly, it moves randomly [9]. The position updates are iterated until the solution is within tolerance of the optimum value or the maximum number of iterations is reached. Mathematically, the new position of firefly i relative to a brighter firefly j is given by Equation (1) [20,38]:
$$x_i^{t+1} = x_i^t + \beta_0 e^{-\gamma^* r_{ij}^2} \left( x_j^t - x_i^t \right) + \alpha_t \epsilon_i^t, \quad i = 1, 2, \ldots, N$$
where $x_i^{t+1}$ (a solution vector to the problem) represents the new position of firefly i at time t + 1, and N represents the total number of fireflies. The value of $\beta_0$ is taken as 1, $\alpha_t$ is a randomization parameter over [0, 1], $\gamma^*$ is the light absorption coefficient, distributed over [0, ∞), $r_{ij}^2$ is the squared distance between fireflies i and j, and $\epsilon_i^t$ is a random vector at time t. Figure 1 illustrates the FFA structure.
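The position update in Equation (1) can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation; the function name `firefly_step`, the brightness convention (higher intensity is brighter), and the uniform random component are our own assumptions:

```python
import math
import random

def firefly_step(x, intensities, beta0=1.0, gamma=1.0, alpha=0.2):
    """One FFA iteration per Equation (1): each firefly moves toward
    every brighter one; a minimal sketch for illustration only."""
    n, dim = len(x), len(x[0])
    new_x = [row[:] for row in x]
    for i in range(n):
        for j in range(n):
            if intensities[j] > intensities[i]:  # firefly j is brighter
                r2 = sum((x[j][d] - x[i][d]) ** 2 for d in range(dim))
                # attractiveness decays exponentially with squared distance
                beta = beta0 * math.exp(-gamma * r2)
                for d in range(dim):
                    eps = random.uniform(-0.5, 0.5)  # random component epsilon
                    new_x[i][d] += beta * (x[j][d] - x[i][d]) + alpha * eps
    return new_x
```

With `gamma = 0` and `alpha = 0`, a dimmer firefly jumps exactly onto the brighter one, which makes the attraction term easy to verify by hand.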

3.2. ANN

MLPNN is a feed-forward ANN with three types of layers (an input layer, hidden layers, and an output layer), as shown in Figure 2 [39,40,41,42]. In this study, we used a single hidden layer containing ten hidden neurons, with the sigmoid as the hidden activation function, as determined by:
$$y_i = \frac{1}{1 + e^{-\sum_k w_{ki} x_k}}$$
where $x_k$ is the input value of neuron k and $w_{ki}$ is the input weight between input neuron k and hidden neuron i.
The output layer of the proposed MLPNN has one output neuron, representing the final grade. It uses a hyperbolic tangent transfer function with an output ranging over [−1, 1] (Equation (3)):
$$y_j = \frac{2}{1 + e^{-2 \sum_i w_{ij}^* y_i}} - 1$$
where $w_{ij}^*$ is the output weight between hidden neuron i and output neuron j, and $y_j$ is the value of output neuron j. Figure 2 presents the structure of the MLPNN.
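The forward pass described by Equations (2) and (3) can be sketched as follows. This is a minimal illustration that assumes bias terms are omitted, as in the text; the helper name `mlpnn_forward` and the weight layout are hypothetical:

```python
import math

def mlpnn_forward(inputs, w_hidden, w_out):
    """Forward pass of the described MLPNN: sigmoid hidden layer
    (Equation (2)) and tanh-style output layer (Equation (3))."""
    # hidden layer: y_i = 1 / (1 + exp(-sum_k w_ki * x_k))
    hidden = []
    for w_vec in w_hidden:  # one weight vector per hidden neuron
        s = sum(w * x for w, x in zip(w_vec, inputs))
        hidden.append(1.0 / (1.0 + math.exp(-s)))
    # output layer: y_j = 2 / (1 + exp(-2 * sum_i w*_ij * y_i)) - 1
    s = sum(w * h for w, h in zip(w_out, hidden))
    return 2.0 / (1.0 + math.exp(-2.0 * s)) - 1.0
```

For the network in this study, `w_hidden` would hold ten weight vectors of length four (one per input factor) and `w_out` ten output weights.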
Training an ANN means establishing the ideal values of all its parameters, i.e., the “input weights” and “output weights”. In the supervised learning approach, this training process uses the observed data (training data) together with optimization algorithms (see Figure 3). Error functions, such as the root mean squared error (RMSE), are used as the fitness function for testing the performance of the MLPNN models, whereas the correlation coefficient (R) is used to assess the quality of the predictions. The equations of RMSE and R are as follows [43,44]:
$$\mathrm{RMSE} = \sqrt{\frac{\sum (O - E)^2}{n}}$$
$$R = \frac{n \sum OE - \sum O \sum E}{\sqrt{\left[ n \sum O^2 - \left( \sum O \right)^2 \right] \left[ n \sum E^2 - \left( \sum E \right)^2 \right]}}$$
where n is the size of the dataset; $\sum O$ is the sum of all observed data; $\sum E$ is the sum of all expected data; $\sum O^2$ is the sum of all squared observed data; $(\sum O)^2$ is the square of the sum of all observed data; $\sum E^2$ is the sum of all squared expected data; and $(\sum E)^2$ is the square of the sum of all expected data.
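Both metrics can be computed directly from their summation forms. A minimal sketch (the function names `rmse` and `pearson_r` are our own, not from the study):

```python
import math

def rmse(observed, expected):
    """Root mean squared error between observed and expected values."""
    n = len(observed)
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(observed, expected)) / n)

def pearson_r(observed, expected):
    """Correlation coefficient R in the summation form given above."""
    n = len(observed)
    s_o, s_e = sum(observed), sum(expected)
    s_oe = sum(o * e for o, e in zip(observed, expected))
    s_o2 = sum(o * o for o in observed)
    s_e2 = sum(e * e for e in expected)
    num = n * s_oe - s_o * s_e
    den = math.sqrt((n * s_o2 - s_o ** 2) * (n * s_e2 - s_e ** 2))
    return num / den
```

A perfect model gives RMSE = 0 and R = 1; anti-correlated predictions give R = −1.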

4. Research Methodology and Framework

The major goal of the current study is to predict the impact of students’ activity on their final grades during the semester. Other goals include exploring the effects of virtual versus face-to-face teaching and the factors affecting student achievement. The data were collected from the results of SEU students across several years and different subjects. The study examined the following four hypotheses:
H1.
There is no effect of attendance on final exams at the significance level (α = 0.05).
H2.
There is no effect of assignments on final exams at the significance level (α = 0.05).
H3.
There is no effect of mid-exams on final exams at the significance level (α = 0.05).
H4.
There is no effect of virtual/face on final exams at the significance level (α = 0.05).
Table 1 shows the descriptive statistics of the input variables (mid-exams, assignments, attendance, and virtual/face) and the output variable (final exams). The number of observations is 234 students. For the output variable, the final exams have Mean = 35, standard deviation = 15.11, maximum = 50, skewness = −1.051, and kurtosis = 0.141. Among the input variables, the mid-exams have Mean = 16.06, standard deviation = 6.22, maximum = 40, skewness = −0.674, and kurtosis = 0.159. The assignments have Mean = 21.2, standard deviation = 6.25, maximum = 25, skewness = −2.349, and kurtosis = 4.729. The attendance variable has Mean = 2.71, standard deviation = 2.942, maximum = 18, skewness = 1.867, and kurtosis = 4.557. Finally, the virtual/face variable has Mean = 0.49, standard deviation = 0.501, maximum = 1, skewness = 0.034, and kurtosis = −2.016.
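Descriptive statistics of this kind can be reproduced from the raw scores with the plain moment formulas. The helper `describe` below is our own sketch, not the study's routine; note that statistical packages often apply small-sample corrections, so their skewness and kurtosis values can differ slightly from these uncorrected moments:

```python
import math

def describe(xs):
    """Mean, standard deviation, skewness, and excess kurtosis via
    uncorrected population moments; an illustrative sketch only."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    sd = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in xs) / (n * sd ** 3)
    # subtracting 3 reports excess kurtosis (0 for a normal distribution)
    kurt = sum((x - mean) ** 4 for x in xs) / (n * sd ** 4) - 3.0
    return {"mean": mean, "sd": sd, "skew": skew, "kurtosis": kurt, "max": max(xs)}
```

A symmetric sample yields skewness 0, and a flat sample yields negative excess kurtosis, matching the sign conventions used in Table 1.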

5. Results and Discussions

Studying the dataset with various statistical analysis techniques is crucial for assessing students’ performance; this step allows researchers to understand the dataset before applying the ANNs. Multicollinearity tests and OLS, fixed-effects, and random-effects regressions are used to test the effect of the significant input variables on students’ performance. The input variables are mid-exams, assignments, attendance, and virtual/face. To achieve our goal, we conducted the following statistical analysis using the R software:
  • Multicollinearity tests: Table 2 shows the correlations between the input and output variables. “No multicollinearity” refers to the absence of perfect multicollinearity, i.e., an exact (non-stochastic) linear relation among the input values. According to the conducted tests, we removed input variables that were strongly related to other input variables. The results show weak correlations among the remaining input variables, all below 50%.
  • Furthermore, there is a strong correlation between assignments and mid-exams, and a strong negative correlation between attendance and final exams.
  • The first plot in Figure 4, “Residuals vs. Fitted,” aids in evaluating linearity and homoscedasticity. The x-axis shows the fitted values, and the y-axis shows the residuals, i.e., the distances between the fitted values on the red line and the observations. If the residuals are spread evenly around the zero line, linearity holds, and the residuals can be considered approximately normally distributed if the red trend line is roughly flat and near zero. Homoscedasticity refers to the absence of a clear pattern in the residuals. The second plot is a scatterplot of two sets of quantiles against one another, called a Q-Q plot (quantile–quantile plot). Plotting the theoretical quantiles of the normal distribution on the x-axis and the quantiles of the residual distribution on the y-axis allows one to determine whether the residuals are normally distributed: it is acceptable to infer a normal distribution if the Q-Q plot forms a diagonal line, which is generally true for the observed values. The third plot, the scale-location plot (sometimes referred to as the spread-location plot), verifies whether the residuals are spread similarly across the ranges of the predictors; a horizontal line with evenly spaced points is ideal, which is the case in the scenario we present. The final plot, the residuals vs. leverage plot, is a diagnostic that shows which observations in the regression model are most influential. Each observation from the dataset is represented by a single point, with its leverage on the x-axis and its standardized residual on the y-axis. Leverage measures how much the regression coefficients would change if a particular observation were left out; high-leverage observations have a significant impact on the coefficients, which would change dramatically if those observations were removed. The residual is the difference between a predicted value and the observation’s actual value. Any point outside Cook’s distance (the red dashed lines) is regarded as influential. In our model, there are no influential points, so the least-squares estimation satisfies the preceding assumptions.
  • OLS, fixed-effects, and random-effects models for the input variables: Multiple regression is a statistical approach used to analyze the relationship between a single output variable and several input variables. Ordinary least squares (OLS) aims to predict the single output value from the known input variables, with a weight assigned to each predictor denoting its relative contribution to the overall prediction.
    $$Y_i = \alpha + \sum_{i=1}^{n} \beta_i x_i + \varepsilon_i$$
    where $Y_i$ is the output value (final exams) of the ith observation; $x_i$, $i = 1, 2, \ldots, n$, are the input values (virtual/face, mid-exams, assignments, and attendance); $\beta_i$, $i = 1, 2, \ldots, n$, are the regression coefficients; $\alpha$ is a constant; and $\varepsilon_i$ represents the unobserved error.
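The OLS estimate can be obtained from the normal equations $(X'X)\beta = X'y$. The minimal solver below (the helper `ols_fit` is our own sketch, not the R routine used in the study) illustrates the computation with an intercept column prepended:

```python
def ols_fit(X, y):
    """OLS coefficients via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination with partial pivoting; a sketch."""
    rows = [[1.0] + list(r) for r in X]  # prepend intercept column
    p = len(rows[0])
    # build X'X and X'y
    xtx = [[sum(r[a] * r[b] for r in rows) for b in range(p)] for a in range(p)]
    xty = [sum(r[a] * yi for r, yi in zip(rows, y)) for a in range(p)]
    # forward elimination
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(xtx[r][c]))
        xtx[c], xtx[piv] = xtx[piv], xtx[c]
        xty[c], xty[piv] = xty[piv], xty[c]
        for r in range(c + 1, p):
            f = xtx[r][c] / xtx[c][c]
            for k in range(c, p):
                xtx[r][k] -= f * xtx[c][k]
            xty[r] -= f * xty[c]
    # back substitution
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][k] * beta[k] for k in range(r + 1, p))) / xtx[r][r]
    return beta  # [intercept, b1, ..., bn]
```

For this study, each row of `X` would hold the four predictors (virtual/face, mid-exams, assignments, attendance) and `y` the final exam scores.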
We investigate our proposed model using three approaches: OLS, fixed effects, and random effects [45,46]. Table 3 shows how input variable bias may result in inaccurate results. The OLS analysis is conducted to investigate the direct impact of the input factors on the output variable. In the OLS model, the input variables virtual/face, midterm exams, and assignments positively affect final exams at a significance level < 1%; therefore, we reject H2, H3, and H4. However, attendance has an adverse effect on final exams at a significance level < 5%; therefore, we reject H1. The R-squared is 70%, the adjusted R-squared is 69.63%, and the F-statistic is 134.6 at a significance level < 1%.
The fixed-effects model is an estimator that can be used to analyze panel data; it does not allow lags of the output variable to be included as input variables [47]. In this study, we divided our cross-sectional data into two groups based on virtual/face, and then applied the fixed and random effects. In the fixed-effects model, the input variables virtual/face, midterm exams, and assignments positively affect final exams at a significance level < 1%; therefore, we reject H2, H3, and H4. However, attendance has no significant effect on final exams at the 5% level; therefore, we accept H1. The R-squared is 64.2%, the adjusted R-squared is 24.85%, and the F-statistic is 49.76, at a significance level of less than 1%.
The random-effects model, a variance components model, is also used to analyze panel data. In contrast to the fixed-effects model, the individual-specific effect is a random variable that is unrelated to the input variables. The input variables virtual/face, midterm exams, and assignments have a beneficial effect on final exams at a significance level < 1%; therefore, we reject H2, H3, and H4. However, attendance has an adverse effect on final exams at a significance level < 5%; therefore, we reject H1. The R-squared is 69.23%, the adjusted R-squared is 68.69%, and the F-statistic is 518.514 at a significance level < 1%.
We conduct a Hausman test to compare the fixed-effects and random-effects models, with the null hypothesis being that random effects are preferred over fixed effects [48]. The test checks whether the unique errors are correlated with the regressors; under the null hypothesis, they are not. We use the fixed-effects model if the p-value is significant (for instance, p-value < 0.05); otherwise, the random-effects model is applied. The Hausman test yielded χ² = 1.8935, df = 4, and p-value = 0.7553. As a result, we accept the null hypothesis “H0: the model is a random effect”, because the p-value > 5%.
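The reported p-value can be checked directly from the chi-square statistic: for even degrees of freedom, the chi-square survival function has a closed form, so no statistics library is needed. The helper `chi2_sf_even_df` is our own illustration, not part of the study's workflow:

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-square survival function for even df, via the closed form
    P(X > x) = exp(-x/2) * sum_{k < df/2} (x/2)^k / k!."""
    assert df % 2 == 0, "closed form sketched here holds for even df"
    half = x / 2.0
    return math.exp(-half) * sum(half ** k / math.factorial(k) for k in range(df // 2))

# Hausman statistic reported in the text: chi^2 = 1.8935 with df = 4
p_value = chi2_sf_even_df(1.8935, 4)  # ~0.755, so random effects are retained
```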
All techniques used to predict students’ performance, such as ANNs, take into account different types of data, for example, demographic characteristics and grades from selected tasks. However, each case study has its own characteristics and nature, so different techniques can be selected as the best option to predict students’ behavior. Lu et al. [49] applied principal component regression in a blended learning system to predict students’ final performance. Xu et al. [50] used K-nearest neighbors (KNNs) and gradient boosting decision trees (GBDTs) in a blended learning system to predict student performance and the possibility of early intervention. Raga and Raga [51] proposed a deep ANN model for the prediction of student performance in blended learning. Yang et al. [52] proposed a model for predicting students’ academic performance in a blended learning system using activities such as homework, quizzes, and video-based learning. This work differs from previous studies in the student activities used to predict students’ final grades and in the structural form of the MLPNN model that was established, in addition to the use of swarm optimization algorithms to improve the performance of the proposed MLPNN model. In this study, we proposed an MLPNN model with four input neurons, ten hidden neurons, and one output neuron to predict students’ performance in blended learning. The input data represent virtual/face, midterm exams, assignments, and attendance, while the output data represent the final grades. Furthermore, several studies have investigated the possibility of improving the accuracy of ANNs by coupling them with the Firefly Algorithm [53,54]. FFA was used during training to determine the optimal parameters of the model. By choosing the minimum value of the RMSE function, the ANN was trained over 50 trials, with 1000 iterations and a population of 50 in each trial. As a result, the minimum error is RMSE = 0.39, as shown in Figure 5.
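The hybrid MLPNN-FFA training can be sketched end to end: each firefly is a flat vector of the 50 network weights (4 × 10 input weights plus 10 output weights), and brightness corresponds to a lower RMSE on the training data. This is a minimal sketch under assumed hyperparameters, not the study's implementation:

```python
import math
import random

def mlp_predict(w, x, n_in=4, n_hid=10):
    """MLPNN forward pass with weights unpacked from a flat vector w."""
    hidden = []
    for h in range(n_hid):
        s = sum(w[h * n_in + k] * x[k] for k in range(n_in))
        hidden.append(1.0 / (1.0 + math.exp(-s)))  # sigmoid hidden layer
    s = sum(w[n_in * n_hid + h] * hidden[h] for h in range(n_hid))
    return 2.0 / (1.0 + math.exp(-2.0 * s)) - 1.0  # tanh-style output

def fitness(w, data):
    """RMSE of the network over (inputs, target) pairs."""
    return math.sqrt(sum((mlp_predict(w, x) - y) ** 2 for x, y in data) / len(data))

def train_ffa(data, n_fireflies=20, iters=200, beta0=1.0, gamma=1.0, alpha=0.1):
    """FFA training loop: fireflies are weight vectors, brightness is
    lower RMSE; a minimal sketch of the hybrid described above."""
    dim = 4 * 10 + 10  # input weights plus output weights
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_fireflies)]
    for _ in range(iters):
        errs = [fitness(w, data) for w in pop]
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if errs[j] < errs[i]:  # firefly j is brighter (lower error)
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a) + alpha * random.uniform(-0.5, 0.5)
                              for a, b in zip(pop[i], pop[j])]
    best = min(pop, key=lambda w: fitness(w, data))
    return best, fitness(best, data)
```

Final-grade targets would need to be scaled into the output range [−1, 1] before training; the study's own run used 50 fireflies and 1000 iterations per trial.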

6. Conclusions

This study aimed to improve students’ academic performance in blended learning. The data were collected from student activities at SEU. The statistical analysis successfully proved that the selected factors have sufficient effects on students’ performance. Multicollinearity tests and OLS, fixed-effects, and random-effects regressions were used to select these factors. The four effective factors are mid-exams, assignments, attendance, and virtual/face. We then successfully proposed an MLPNN model that effectively predicts student achievement. The structure of our model has four input neurons, ten hidden neurons, and one output neuron; the input neurons represent the four factors, and the output neuron represents the final exams. In parallel, the optimal values of the parameters (the input and output weights) of the MLPNN model were calculated successfully using FFA. Note that FFA has an excellent convergence rate and an intense exploration ability, so the hybrid MLPNN-FFA models are highly robust in prediction accuracy. We therefore recommend that researchers use the hybrid MLPNN-FFA models in their future studies.
The proposed MLPNN model can predict students’ performance in blended learning, improve the learning process, and reduce academic failure rates. It can also help administrators enhance the results of the blended learning system.
A limitation of our study is that it applies only to the blended learning setting. Furthermore, researchers could explore other factors for predicting students’ performance, such as historical exam records. Therefore, this model can be developed by adding more input variables.

Author Contributions

Conceptualization, N.N.H.; methodology, N.N.H. and S.A.; software, N.N.H.; validation, N.N.H. and W.A.K. and S.A.; investigation, K.A.A., A.A. and S.A.; resources, N.N.H., K.A.A. and A.A.; data curation, N.N.H.; writing—original draft preparation, N.N.H., K.A.A., W.A.K., S.A. and A.A.; writing—review and editing, N.N.H., K.A.A., W.A.K., S.A. and A.A.; supervision, N.N.H. and S.A.; project administration, N.N.H.; funding acquisition, N.N.H., K.A.A., W.A.K., S.A. and A.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia, for funding this research work through the project no. 7993.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Flowchart of FFA.
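To make the flowchart concrete, the following is a minimal Python sketch of the firefly algorithm applied to a simple test function. The population size, step parameters, bounds, and test function are illustrative choices of ours, not the settings used in this study.

```python
import numpy as np

def firefly_minimize(f, dim, n=20, iters=150, alpha=0.3, beta0=1.0,
                     gamma=0.1, bounds=(-5.0, 5.0), seed=0):
    """Minimal firefly algorithm: brighter (lower-cost) fireflies attract
    dimmer ones, with attractiveness decaying over squared distance."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n, dim))          # random initial swarm
    cost = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:               # j is brighter: move i toward j
                    beta = beta0 * np.exp(-gamma * np.sum((X[i] - X[j]) ** 2))
                    X[i] += beta * (X[j] - X[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                    X[i] = np.clip(X[i], lo, hi)
                    cost[i] = f(X[i])
        alpha *= 0.97                               # shrink the random step over time
    best = int(np.argmin(cost))
    return X[best], cost[best]

# Example: minimize the 2-D sphere function (optimum at the origin)
x_best, f_best = firefly_minimize(lambda x: float(np.sum(x ** 2)), dim=2)
```

The inner loop is the attraction step of the flowchart; the decaying `alpha` corresponds to gradually reducing the random exploration as the swarm converges.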
Figure 2. Structure of an MLPNN.
Figure 3. Process of training MLPNNs.
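The training process of Figure 3 treats the network weights as a single vector to be optimized against the RMSE. The toy sketch below illustrates that structure with a small four-input network; for brevity it uses a simple random-perturbation search in place of FFA, and the synthetic data, target function, and network size are our own illustrative assumptions.

```python
import numpy as np

def mlp_forward(w, X, n_hidden=3):
    """Tiny 4-input, one-hidden-layer MLP; w is a flat parameter vector."""
    n_in = X.shape[1]
    k = n_in * n_hidden
    W1 = w[:k].reshape(n_in, n_hidden)          # input -> hidden weights
    b1 = w[k:k + n_hidden]                      # hidden biases
    W2 = w[k + n_hidden:k + 2 * n_hidden]       # hidden -> output weights
    b2 = w[-1]                                  # output bias
    return np.tanh(X @ W1 + b1) @ W2 + b2

def rmse(w, X, y):
    return float(np.sqrt(np.mean((mlp_forward(w, X) - y) ** 2)))

rng = np.random.default_rng(42)
# Synthetic stand-ins for the four normalized factors
# (study mode, attendance, midterm score, solved assessments)
X = rng.uniform(0.0, 1.0, size=(80, 4))
y = 0.2 * X[:, 0] + 0.4 * X[:, 2] + 0.4 * X[:, 3]   # hypothetical target

n_params = 4 * 3 + 3 + 3 + 1                        # 19 weights and biases
w = rng.normal(0.0, 0.5, n_params)
best = rmse(w, X, y)
for _ in range(3000):                               # derivative-free search loop
    cand = w + rng.normal(0.0, 0.1, n_params)       # perturb the weight vector
    c = rmse(cand, X, y)
    if c < best:                                    # keep improving candidates
        w, best = cand, c
```

In the paper’s setup, the candidate weight vectors would instead be fireflies, with RMSE on the training data as the (inverse) brightness.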
Figure 4. Normality and Homoscedasticity analysis.
Figure 5. The optimal performance of FFA in training the ANN model in terms of RMSE.
Table 1. Descriptive statistics.

| Variables | Mean | Std. Dev. | Min. | Max. | Skewness (Statistic) | Skewness (Std. Error) | Kurtosis (Statistic) | Kurtosis (Std. Error) |
|---|---|---|---|---|---|---|---|---|
| Final exams | 35 | 15.111 | 0 | 50 | −1.051 | 0.159 | 0.141 | 0.317 |
| Mid-exams | 16.06 | 6.227 | 0 | 25 | −0.674 | 0.159 | 1.389 | 0.317 |
| Assignments | 21.2 | 6.253 | 0 | 25 | −2.349 | 0.159 | 4.729 | 0.317 |
| Attendance | 2.71 | 2.942 | 0 | 8 | 1.867 | 0.159 | 4.557 | 0.317 |
| Virtual/Face | 0.49 | 0.501 | 0 | 1 | 0.034 | 0.159 | −2.016 | 0.317 |
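Statistics of this kind can be reproduced as follows. We assume the bias-corrected (SPSS-style) skewness and kurtosis formulas, which for n = 234 give standard errors of about 0.159 and 0.317, matching the table; the sample data in the example are hypothetical.

```python
import numpy as np

def describe(x):
    """Descriptive statistics as in Table 1, using bias-corrected
    (SPSS-style) skewness and excess kurtosis (an assumed convention)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    mean, sd = x.mean(), x.std(ddof=1)          # sample standard deviation
    z = (x - mean) / sd
    skew = n / ((n - 1) * (n - 2)) * np.sum(z ** 3)
    kurt = (n * (n + 1) / ((n - 1) * (n - 2) * (n - 3)) * np.sum(z ** 4)
            - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))
    se_skew = np.sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
    se_kurt = 2 * se_skew * np.sqrt((n * n - 1) / ((n - 3) * (n + 5)))
    return dict(mean=mean, sd=sd, min=x.min(), max=x.max(),
                skew=skew, se_skew=se_skew, kurt=kurt, se_kurt=se_kurt)

# Hypothetical sample, e.g. five midterm scores
stats = describe(np.array([10.0, 20.0, 30.0, 40.0, 50.0]))
```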
Table 2. Correlation between input and output variables.

| | Final Exam | Virtual/Face | Mid-Exam | Assignment | Attendance |
|---|---|---|---|---|---|
| Final exam | 1 | 0.474 ** | 0.573 ** | 0.633 ** | −0.646 ** |
| Virtual/Face | | 1 | 0.132 * | 0.059 | −0.06 |
| Mid-exam | | | 1 | 0.494 ** | −0.476 ** |
| Assignment | | | | 1 | −0.484 ** |
| Attendance | | | | | 1 |

Note: Signif. codes: ‘**’ 0.01, ‘*’ 0.05 (2-tailed).
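A correlation matrix of this form, with two-tailed significance, can be computed as in the sketch below. The synthetic variables stand in for the SEU dataset (available only on request), so the numbers they produce are illustrative, not the table’s values.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 234                                       # matches the study's sample size
# Hypothetical stand-ins for the five variables
mid = rng.uniform(0, 25, n)                   # midterm scores
assign = rng.uniform(0, 25, n)                # assessment scores
attend = rng.integers(0, 9, n).astype(float)  # missed-attendance count 0..8
mode = rng.integers(0, 2, n).astype(float)    # virtual (0) vs. face-to-face (1)
final = (0.4 * mid + 0.8 * assign - 1.6 * attend
         + 12 * mode + rng.normal(0, 8, n))

data = np.column_stack([final, mode, mid, assign, attend])
R = np.corrcoef(data, rowvar=False)           # 5x5 Pearson correlation matrix

# Two-tailed significance via t = r * sqrt((n-2) / (1-r^2));
# |t| above roughly 2.60 corresponds to p < 0.01 when n = 234
r = R[0, 3]                                   # final exam vs. assignments
t = r * np.sqrt((n - 2) / (1 - r ** 2))
```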
Table 3. OLS, fixed effect, and random effect.

| Variables | OLS Coefficient | OLS Std. Err. | Fixed Effect Coefficient | Fixed Effect Std. Err. | Random Effect Coefficient | Random Effect Std. Err. |
|---|---|---|---|---|---|---|
| (Intercept) | 10.1592 * | 3.3144 | | | 9.7678 * | 3.3421 |
| Virtual/Face | 12.4691 * | 1.0986 | 9.3633 * | 11.0740 | 12.4444 * | 1.1963 |
| Mid exam | 0.4303 * | 0.1104 | 0.4163 * | 0.1518 | 0.4298 * | 0.1096 |
| Assignment | 0.7639 * | 0.1167 | 0.898 * | 0.1700 | 0.7847 * | 0.1169 |
| Attendance | −1.6207 * | 0.2622 | −1.6325 | 0.4080 | −1.6334 * | 0.2643 |
| Observations | 234 | | 234 | | 234 | |
| R-square | 0.7015 | | 0.6420 | | 0.6923 | |
| Adjusted R-square | 0.6963 | | 0.2485 | | 0.6869 | |
| F-statistic/Chisq | 134.6000 * | | 49.7612 * | | 518.514 * | |

Signif. codes: ‘*’ 0.01.
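The OLS column can be reproduced with an ordinary least-squares fit. The following sketch uses synthetic data whose true coefficients resemble those in the table, since the original data are not public; the generated variables and noise level are assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 234
# Synthetic regressors on roughly the same scales as the study's variables
mode = rng.integers(0, 2, n).astype(float)
mid = rng.uniform(0, 25, n)
assign = rng.uniform(0, 25, n)
attend = rng.uniform(0, 8, n)
y = (10.16 + 12.47 * mode + 0.43 * mid + 0.76 * assign
     - 1.62 * attend + rng.normal(0, 6.0, n))

X = np.column_stack([np.ones(n), mode, mid, assign, attend])  # add intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)                  # OLS estimates

resid = y - X @ beta
se = np.sqrt(np.sum(resid ** 2) / (n - X.shape[1])
             * np.diag(np.linalg.inv(X.T @ X)))               # classical std. errors
r2 = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
```

`beta` recovers estimates close to the planted coefficients, and `r2` lands in the vicinity of the R-square values reported above.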
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
