Article

Multi-Perspective Adaptive Paperless Examination Cheating Detection System Based on Image Recognition

1 School of Information Science and Technology, Nantong University, Nantong 226019, China
2 School of Educational Sciences, Nantong University, Nantong 226019, China
3 School of Transportation and Civil Engineering, Nantong University, Nantong 226019, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(10), 4048; https://doi.org/10.3390/app14104048
Submission received: 21 March 2024 / Revised: 28 April 2024 / Accepted: 6 May 2024 / Published: 10 May 2024

Abstract

This paper proposes a multi-perspective adaptive examination cheating behavior detection method to meet the demand for automated monitoring throughout the entire process in paperless online exams. Unlike current dual-perspective cheating behavior detection methods, we expand the monitoring field of view by using three cameras with different perspectives: the overhead perspective, the horizontal perspective, and the face perspective. This effectively covers areas where cheating may occur. An adaptive cheating behavior detection system based on three perspectives is proposed, including a gaze direction recognition model based on Swin Transformer, a cheating tool detection model based on Lightweight-YOLOv5-Coordinate Attention, and a cheating behavior determination model based on Multilayer Perceptron. To reduce computational complexity and ensure efficient processing while expanding the monitoring field of view, the system uses the results of the gaze direction recognition model to adaptively select the cheating behavior detection model from different perspectives, reducing the three-perspective system to dual-perspective. In online simulation tests, our method achieves cheating behavior determination at 35 frames per second, with an average recognition rate of 95%. It has good real-time performance, accuracy, and a large monitoring range.

1. Introduction

With the continuous progress of artificial intelligence technology, different fields of education have been deeply affected. In this context, many countries have formulated policies aimed at promoting the deep integration of intelligent technology and education, empowering teaching with intelligent technology to innovate education systems and improve education quality. In May 2023, the Office of Educational Technology of the United States released a report titled “Artificial Intelligence and the Future of Teaching and Learning” [1], which presents the latest insights and policy recommendations on the development of artificial intelligence in teaching. In January 2024, the Chinese Ministry of Education proposed at the World Digital Education Conference that it would implement artificial intelligence empowerment actions to promote the deep integration of intelligent technology and education [2]. Invigilation, an important part of ensuring educational fairness, thus faces new requirements for intelligent development, and improving traditional forms of invigilation has become an urgent problem. With the support of intelligent technology, intelligent auxiliary monitoring of cheating behavior in examinations can greatly reduce the burden on invigilators, improve the efficiency of invigilation, and maintain the fairness and impartiality of examinations.
Cheating can be accomplished individually or with the assistance of others. Individual cheating refers to the independent use of cheating tools by candidates during an exam, and cheating with the assistance of others refers to the indirect use of cheating tools with the help of third parties. Due to the concealment, complexity, and diversity of cheating behaviors, the development of automatic online examination monitoring faces challenges. To address these issues, methods have been proposed to detect cheating behaviors based on visual, acoustic, physiological, and other signals. Among these, the use of cameras for remote monitoring is the most effective and intuitive method.
According to the location and number of cameras, existing methods can be divided into single- and dual-perspective cheating detection. The common single-perspective method uses the candidate’s face as the monitoring viewpoint [3] and determines cheating behavior by identifying the gaze direction [4,5] or face direction [6,7]. This method has the advantages of fewer sensors, lower cost, and faster processing; however, it provides insufficient evidence of cheating. Dual-perspective methods monitor from two directions simultaneously, such as the face and overhead perspectives [8] or the face and horizontal perspectives [9,10]. They have a wider monitoring range and provide more comprehensive evidence of cheating than single-perspective methods, but they still have shortcomings. Specifically, the combination of the face and overhead perspectives lacks the perspective of the candidate’s field of view: when the candidate looks up at a cheating tool, neither camera can detect it because of a large monitoring blind zone, as shown in Figure 1a. In the combination of the face and horizontal perspectives, the range of view of the glasses camera differs greatly from the actual gaze direction of the candidate; especially when the candidate looks down at the desktop, the close distance between the glasses and the desktop narrows the range of view, producing a blind zone, as shown in Figure 1b.
In response to the above issues, this paper proposes a multi-perspective adaptive paperless examination cheating detection system. Active window detection algorithms are already often embedded in paperless examination systems to monitor whether students cheat by using a hidden second screen on the computer. This study therefore aims to assist the invigilation of paperless examination systems from the visual monitoring perspective.
This paper is organized as follows: Section 2 provides a literature review on single-perspective and dual-perspective cheating behavior detection methods. Section 3 introduces the framework of the three-perspective adaptive examination cheating behavior detection system, which includes a gaze direction recognition model, two object detection models under overhead and horizontal perspectives, and a multimodal cheating behavior determination model based on the BP neural network. Section 4 conducts experiments and analyzes the results. Section 5 draws a research conclusion and proposes research prospects.

2. Related Work

2.1. Single-Perspective Cheating Behavior Detection Method

Single-perspective cheating detection uses information collected by a single camera, such as from the candidate’s face, eyes, mouth, hands, and background environment. Based on the types of information collected, there are five categories of such methods.
1. Gaze or facial direction. Dilini [4], Singh [5], and Alrubaish [11] detected the gaze direction through Support Vector Machine (SVM) and cascade classifiers, and Susithra [6] used a Multilayer Perceptron (MLP). Garg [7] combined a Convolutional Neural Network (CNN) and a cascade classifier for face recognition and tracking. Hossain [12] and Indi [13] used hybrid classifiers to identify cheating behavior based on head posture and gaze direction. Bawarith [14] implemented candidate identity recognition and gaze direction discrimination based on a fingerprint recognizer and an eye tracker.
2. Mouth movements. Mouth movements and sounds form an important basis for determining whether there is communication with others during an examination. Masud [15], Hu [16], and Tejaswi [17] used the opening and closing state of the mouth to determine whether there were communication behaviors. Motwani [18] and Prathish [19] used microphones to detect abnormal sounds. Elshafey [20], Jia [21], Tweissi [22], Maniar [23], and Soltane [24] used Deep Speech, Julius, and Long Short-Term Memory (LSTM) to convert sound signals into text and extract keywords for cheating behavior analysis.
3. Facial expression. The changes in a candidate’s psychological state during cheating are revealed through facial expressions. Gopane [25], Ozdamli [26], and Malhotra [27] proposed cheating behavior determination methods based on these.
4. Cheating tools, such as books and mobile phones. Abozaid [28], Ahmad [29], Ashwinkumar [30], Ozgen [31], and Sapre [32] used Retinanet, YOLO, and Mobilenet-SSD methods to detect cheating tools or abnormal objects in the examination environment.
5. Hand or mouse movement. Fan [33] observed the hand gestures of candidates and combined the temporal and frequency dimensions of movements to distinguish cheating behavior. Li [34] determined whether a candidate was cheating through abnormal mouse movement data.

2.2. Dual-Perspective Cheating Behavior Detection Method

Dual-perspective cheating behavior detection uses two cameras to monitor the candidate from different perspectives. Pandey [35] used the front and rear cameras of mobile devices, as well as microphones, to monitor images and sounds of the examination environment in real time. Kaddoura [9] used a laptop and glasses camera to obtain images of the candidate’s face and field of view and a microphone to obtain sound information. After extracting visual and auditory feature information based on DCNN and a discrete Fourier transform, a soft voting strategy was used to distinguish cheating behavior. Atoum [10] used dual-perspective cameras and microphone devices to achieve the feature extraction of six types of information, namely identity, gaze direction, text detection, voice, activity window, and phone, and determined cheating behavior based on an SVM classifier. Li [36] added sensor devices such as EEG and gaze trackers based on dual-perspective monitoring with laptop and glasses cameras, whose monitoring results were marked as cheating alerts. When the number of alerts reached a threshold, it was transmitted to a review committee for manual judgment.
In summary, dual-perspective combination methods can be divided into face and horizontal, face and overhead, and horizontal and overhead. Compared with single-perspective monitoring, these methods effectively expand the monitoring perspective. However, there are still significant visual monitoring blind zones, as shown in Figure 1a,b.

3. Proposed Three-Perspective Adaptive Paperless Online Cheating Detection System

3.1. Three-Perspective Cheating Detection System Framework

To address the problem of blind zones in dual-perspective cheating behavior detection methods, as shown in Figure 1, we propose a three-perspective adaptive paperless online cheating behavior detection system. Figure 2a shows the monitoring perspective arrangement of the three cameras in this system. The first is the overhead perspective, shown in the blue area in Figure 2a, which detects cheating tools on the desktop. The second is the face perspective, shown in the yellow area, which monitors the candidate’s gaze direction. The third is the horizontal perspective, shown in the green area, which monitors abnormal objects and cheating tools in the candidate’s forward field of view. Figure 2b shows the framework of the proposed system, consisting of a gaze direction recognition model, a cheating tool detection model, and a cheating behavior determination model. Because the types and appearances of cheating tools differ greatly across perspectives, the cheating tool detection model includes separate models for the overhead and horizontal perspectives. Since candidates cannot simultaneously look down and up, the system automatically selects the cheating tool detection model for the appropriate perspective based on the gaze direction recognition result, effectively reducing computational complexity and improving processing speed. The gaze direction recognition and cheating tool detection results are then input to the cheating behavior determination model, which makes decisions based on the BP neural network and outputs the final detection result, as shown in Figure 2b.

3.2. Gaze Direction Recognition Model Based on Swin Transformer

The direction of the candidate’s gaze is an important characteristic used to determine cheating. We use the candidate’s nose as the origin of the coordinate axis, and a step is defined as 45° counterclockwise. The direction of the candidate’s gaze is divided into nine categories: upleft, left, downleft, upright, right, downright, down, normal, and up. All except “normal” are labeled as abnormal gazes indicating possible cheating, as shown in Figure 3.
To accurately identify the gaze direction of the candidate, we adopt the Swin Transformer [37] for gaze direction recognition, as shown in Figure 4. The detected facial RGB image undergoes patch partition layer flattening before four stages of feature extraction. Except for the first stage, which consists of linear embedding and Swin Transformer blocks, the stages of the feature extraction model use patch merging and Swin Transformer blocks. The patch merging layer performs down-sampling to achieve half the height and width of the feature map and twice the depth. The Swin Transformer block implements self-attention weight adjustment between windows through SW-MSA and uses W-MSA for self-attention calculation within each window. After four stages of feature extraction, the normalization layer is used to normalize the feature information, the normalized features are subjected to adaptive average pooling and channel flattening to obtain one-dimensional feature vectors, and the confidence of different gaze directions is output through a fully connected layer.
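The patch merging operation described above (halving the height and width of the feature map while doubling its depth) follows the standard Swin Transformer layer, which can be sketched in PyTorch as follows; the dimensions in the usage line are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class PatchMerging(nn.Module):
    """Swin-style down-sampling: halves H and W, doubles channel depth."""

    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(4 * dim)
        self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)

    def forward(self, x):  # x: (B, H, W, C)
        # Group each 2x2 neighborhood of patches into a single token.
        x0 = x[:, 0::2, 0::2, :]
        x1 = x[:, 1::2, 0::2, :]
        x2 = x[:, 0::2, 1::2, :]
        x3 = x[:, 1::2, 1::2, :]
        x = torch.cat([x0, x1, x2, x3], dim=-1)  # (B, H/2, W/2, 4C)
        return self.reduction(self.norm(x))      # (B, H/2, W/2, 2C)

merged = PatchMerging(dim=96)(torch.randn(1, 56, 56, 96))
```

The linear reduction from 4C to 2C is what yields "half the height and width and twice the depth" at each stage.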

3.3. Cheating Tool Detection Model Based on Lightweight-YOLOv5-CA Network

In paperless online exams, common cheating tools include books, paper, mobile phones, and people. Because the appearance characteristics of these tools differ significantly across perspectives, the proposed cheating tool detection model comprises two object detection models: a horizontal perspective abnormal object detection model and an overhead perspective abnormal object detection model. The training dataset of the former includes books, paper, mobile phones, and people; that of the latter includes books, paper, and mobile phones.
To achieve fast and accurate object detection, we propose a lightweight object detection network Lightweight-YOLOv5-Coordinate Attention (Lightweight-YOLOv5-CA) model based on Group Fast Spatial Pyramid Pooling (GFSPP). As shown in Figure 5, Coordinate Attention (CA) [38] and GFSPP are introduced successively at the end of the YOLOv5 backbone network to expand the receptive field while reducing computation and parameter volume and improving the representation ability of network features. Due to the small size and partial occlusion of the object in cheating tools such as paper and mobile phones, missed and false detection can easily occur. The CA mechanism can improve the representation ability of network features by using the importance of position information. The optimization process is as follows.
Given a feature map X, pooling kernels of sizes (H, 1) and (1, W) are used to aggregate each channel c along the horizontal and vertical directions, as shown in Formulas (1) and (2):

z_c^h(h) = (1/W) ∑_{0 ≤ i < W} x_c(h, i),  (1)

z_c^w(w) = (1/H) ∑_{0 ≤ j < H} x_c(j, w).  (2)

The features in the two directions are concatenated, and a 1 × 1 convolution F_1 is performed to obtain the intermediate feature, as shown in Formula (3):

f = δ(F_1([z^h, z^w])),  (3)

where f ∈ R^{(C/r)×(H+W)}, r is a reduction factor, and δ is a nonlinear activation function. The feature map f is then split into f^h and f^w along the spatial dimension, and the channels are expanded by two 1 × 1 convolutions F_h and F_w to obtain the attention weights, as shown in Formulas (4) and (5):

g^h = σ(F_h(f^h)),  (4)

g^w = σ(F_w(f^w)),  (5)

where σ is the sigmoid function. The output of the CA module is obtained as shown in Formula (6):

y_c(i, j) = x_c(i, j) × g_c^h(i) × g_c^w(j).  (6)
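Formulas (1)–(6) map directly onto a compact PyTorch module. The sketch below is the standard Coordinate Attention block rather than the authors' exact code: the ReLU stands in for the nonlinearity δ, and the reduction factor r = 32 is an assumption:

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Sketch of the CA block of Hou et al.; r = 32 and ReLU are assumptions."""

    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, 1)   # F_1 in Formula (3)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU()                       # delta in Formula (3)
        self.conv_h = nn.Conv2d(mid, channels, 1)  # F_h in Formula (4)
        self.conv_w = nn.Conv2d(mid, channels, 1)  # F_w in Formula (5)

    def forward(self, x):
        b, c, h, w = x.shape
        # Formulas (1)-(2): direction-wise average pooling.
        x_h = x.mean(dim=3, keepdim=True)                     # (B, C, H, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2) # (B, C, W, 1)
        # Formula (3): concatenate, then 1x1 convolution.
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        # Formulas (4)-(5): per-direction attention weights.
        g_h = torch.sigmoid(self.conv_h(y_h))                      # (B, C, H, 1)
        g_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        # Formula (6): reweight the input feature map.
        return x * g_h * g_w

out = CoordinateAttention(64)(torch.randn(2, 64, 16, 24))
```

Broadcasting of the (B, C, H, 1) and (B, C, 1, W) weight tensors reproduces the per-position product of Formula (6).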
The proposed GFSPP combines the ideas of CSPNet [39] and grouped convolution [40] based on multi-scale pooling structure Spatial Pyramid Pooling Fast (SPPF), which is beneficial for the extraction of the feature information of object details while reducing the computational complexity and parameter volume of the convolution process, as shown in Figure 5.
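The paper defines GFSPP only by reference to Figure 5. The following is one plausible sketch that combines SPPF's serial max pooling with grouped 1 × 1 convolutions; the channel widths, kernel size, and group count are assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

class GFSPP(nn.Module):
    """Sketch: SPPF-style serial max pooling with grouped 1x1 convolutions."""

    def __init__(self, c_in, c_out, k=5, groups=4):
        super().__init__()
        c_mid = c_in // 2
        # Grouped convolutions cut parameters/FLOPs by the group count.
        self.cv1 = nn.Conv2d(c_in, c_mid, 1, groups=groups)
        self.pool = nn.MaxPool2d(k, stride=1, padding=k // 2)
        self.cv2 = nn.Conv2d(c_mid * 4, c_out, 1, groups=groups)

    def forward(self, x):
        x = self.cv1(x)
        # Serial pooling: three stacked k=5 pools emulate 5/9/13 receptive fields.
        y1 = self.pool(x)
        y2 = self.pool(y1)
        y3 = self.pool(y2)
        return self.cv2(torch.cat([x, y1, y2, y3], dim=1))

feat = GFSPP(256, 256)(torch.randn(1, 256, 20, 20))
```

As in SPPF, stacking small pools is equivalent to parallel pools with growing kernels but cheaper to compute, which matches the stated goal of enlarging the receptive field at reduced cost.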

3.4. Multimodal Cheating Behavior Determination Model Based on BP Neural Network

Cheating behavior in paperless online exams can be described as the presence of cheating tools in the field of view with an abnormal gaze direction. Therefore, we construct a decision-making model based on two important factors: gaze direction and cheating tools. We use the result of gaze direction recognition and cheating tool detection as inputs to determine the types of cheating behavior based on the BP neural network. Similar to the cheating tool detection model, the cheating behavior determination model is divided into an overhead perspective cheating behavior determination model and a horizontal perspective cheating behavior determination model.
As shown in Figure 6, both determination models adopt a three-layer network structure of input, hidden, and output layers, differing slightly in the numbers of input and output neurons. The overhead cheating behavior determination model has 12 input neurons, comprising the confidences of nine gaze directions and three cheating tools, and nine output neurons, representing different cheating tools appearing at different positions on the desktop. The horizontal perspective cheating behavior determination model has 13 input neurons, comprising the confidences of nine gaze directions and four cheating tools, and four output neurons, representing three types of cheating with the assistance of third parties plus normal behavior. The output neurons of the networks under the two perspectives are defined in Table 1 and Table 2 and illustrated in Figure 7a–f and Figure 8.
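The two determination networks can be sketched as small MLPs in PyTorch. The hidden-layer width and activation below are assumptions; the paper specifies only a three-layer input/hidden/output structure and the input/output counts:

```python
import torch
import torch.nn as nn

def make_determiner(n_in, n_out, n_hidden=32):
    """Three-layer BP network: input -> hidden -> output (hidden width assumed)."""
    return nn.Sequential(
        nn.Linear(n_in, n_hidden),
        nn.Sigmoid(),
        nn.Linear(n_hidden, n_out),
    )

# Overhead: 9 gaze + 3 tool confidences -> 9 desktop cheating outcomes.
overhead = make_determiner(12, 9)
# Horizontal: 9 gaze + 4 tool confidences -> 3 cheating types + normal.
horizontal = make_determiner(13, 4)

scores = horizontal(torch.rand(1, 13))
```

The input vector simply concatenates the confidence outputs of the gaze recognition and cheating tool detection models, so no extra feature engineering is needed between the stages.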

3.5. Adaptive Three-Perspective Cheating Behavior Determination Model

A candidate cannot cheat from both the horizontal and overhead perspectives simultaneously. Hence, we simplify the three-perspective cheating detection system into an adaptive dual-perspective system driven by gaze direction recognition: the gaze direction recognition result is used to adaptively select the cheating behavior determination model. The algorithm flow is shown in Figure 9, and the correspondence between gaze directions and cheating behavior determination models is summarized in Table 3.
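The adaptive selection reduces to a small lookup. The label strings and the exact gaze-to-perspective mapping below are assumptions inferred from the paper's examples (downward and sideways gazes trigger the overhead model, upward gazes the horizontal model; the authoritative mapping is Table 3):

```python
from typing import Optional

# Hypothetical mapping, inferred from the paper's examples rather than Table 3.
OVERHEAD_GAZES = {"left", "right", "downleft", "downright", "down"}
HORIZONTAL_GAZES = {"upleft", "up", "upright"}

def select_determiner(gaze: str) -> Optional[str]:
    """Pick which perspective's models to run; None means no further checks."""
    if gaze == "normal":
        return None
    if gaze in OVERHEAD_GAZES:
        return "overhead"
    if gaze in HORIZONTAL_GAZES:
        return "horizontal"
    raise ValueError(f"unknown gaze label: {gaze}")
```

Because only one perspective's detector and determiner run per frame, the three-camera system carries roughly the computational cost of a dual-perspective one.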

4. Experimental Results and Analysis

4.1. Experimental Environment and Data

We built a three-perspective adaptive cheating behavior determination system using the Python 3.7 language and PyTorch 1.12.1 framework under the Ubuntu 18.04 system. The server hardware configuration used for training each model consisted of an NVIDIA TITAN XP GPU and Intel Xeon 10 CPU. The gaze direction recognition dataset consisted of 3000 images from 17 students in a simulated online examination environment, as well as some public data on conventional gaze direction. As shown in Figure 10, taking one of the participants as an example, the nine gaze directions corresponding to the dataset are upleft, left, downleft, up, normal, down, upright, right, and downright. Considering the influence of lens reflection on gaze direction recognition, 8 of 17 students wore glasses. To overcome the influence of different pupil colors on gaze direction classification, 750 images from the MPIIFaceGaze dataset [41] were added to the 2250 self-built images.
The cheating tool detection model in this paper includes a desktop perspective abnormal object detection model and a glasses perspective abnormal object detection model. The types and appearances of cheating tools vary from different perspectives. Therefore, we established an overhead perspective abnormal object dataset and a horizontal perspective abnormal object dataset. The overhead perspective abnormal object dataset includes 7600 images of books, paper, and mobile phones, as shown in Figure 11a. The horizontal perspective abnormal object dataset includes 8899 images of books, paper, mobile phones, and people, as shown in Figure 11b.
The three-perspective cheating behavior determination dataset used in this article was created from monitoring images of 17 students in a simulated online exam environment. The dataset includes 5687 sets of images from three perspectives—horizontal, face, and overhead—with 70% of the dataset used for training and 30% for testing. Figure 12 shows several examples of data collection.

4.2. Results of Gaze Direction Recognition

The 3000 images of the gaze direction recognition dataset were divided into training, validation, and test sets in a 7:2:1 ratio. The model was trained for 300 epochs, with the weights stored every 10 epochs, and the optimal weights were selected at the end of training. Table 4 gives the accuracy of the nine gaze direction recognitions on the test set. Among the nine gaze directions, the recognition accuracy of upleft, down, up, and normal reached 100%; the accuracy of left and upright exceeded 90%; and the accuracy of downleft and right was between 80% and 90%. This indicates that when the gaze is upward, the eyes are relatively open, and the position and size of the pupil can represent the gaze direction well. When the gaze is downward, the eyes are relatively closed, and the position and size of the pupil are difficult to observe, leading to the relatively weak expression of the downleft and downright gaze directions.
By observing the area of the confusion matrix indicated with red frames shown in Figure 13, it can be seen that 17% of the downleft direction is incorrectly identified as downright, 21% of downright is incorrectly identified as downleft, and 10% of left is incorrectly identified as right. Because downleft, downright, left, and right are abnormal gazes in the overhead perspective, some confusion between these four directions will not affect the accuracy of final cheating behavior identification.
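The 7:2:1 split used above can be reproduced with a deterministic shuffle; this is a generic sketch, not the authors' code:

```python
import random

def split_dataset(items, ratios=(0.7, 0.2, 0.1), seed=0):
    """Shuffle deterministically, then cut into train/val/test by ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train, n_val = round(ratios[0] * n), round(ratios[1] * n)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# With the paper's 3000 gaze images this yields 2100/600/300 examples.
train, val, test = split_dataset(range(3000))
```

Fixing the seed keeps the split reproducible across the 300-epoch training runs and checkpoint comparisons.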

4.3. Result of Cheating Tool Detection

To verify the effectiveness of the proposed Lightweight-YOLOv5-CA object detection network, a 7600-image overhead perspective abnormal object dataset and an 8899-image horizontal perspective abnormal object detection dataset were divided into training, validation, and test sets in a 7:1:2 ratio. In the same experimental environment, a comparative experiment was conducted on the detection performance of traditional YOLOv5 and Lightweight-YOLOv5-CA models. Table 5 presents the abnormal object detection results of the two models under the two perspectives. Through comparison, it can be seen that the Lightweight-YOLOv5-CA model under overhead and horizontal perspectives has better mAP50:90 detection results for books, papers, people, and mobile phones than the traditional YOLOv5 model, with average accuracy being 0.5% and 1.1% higher, respectively. Compared to YOLOv5, the number of parameters has been reduced by 3.5%.
Figure 14 shows an example of the detection results of the Lightweight-YOLOv5-CA model for cheating tools from the overhead perspective. It can be seen that the proposed overhead perspective cheating tool detection model detects books, papers, and mobile phones appearing at different positions on the desktop, achieving effective detection of cheating tools.

4.4. Results of Three-Perspective Adaptive Cheating Behavior Detection

To validate the proposed three-perspective adaptive cheating behavior detection method, we selected existing dual-perspective (overhead and horizontal, face and horizontal, and face and overhead) cheating behavior detection methods as comparison objects. We tested and evaluated the accuracy of cheating behavior detection on our dataset, and the results are summarized in Table 6. Accuracy is calculated as shown in Formula (7), where True Positives (TP) and True Negatives (TN) are correctly classified positive and negative examples, and False Positives (FP) and False Negatives (FN) are incorrectly classified positive and negative examples:
Accuracy = (TP + TN)/(TP + TN + FP + FN).  (7)
Through comparison, it can be seen that the accuracy of the three existing dual-perspective cheating detection methods mentioned above is relatively low. The adaptive three-perspective cheating behavior detection method proposed in this paper avoids the misjudgment caused by a single detection to a certain extent, effectively expands the visual monitoring range, reduces detection blind spots, and improves the robustness and accuracy of cheating behavior detection.
Figure 15 shows the detection results of eight common cheating behaviors. Each set of monitoring images consists of images from the perspectives of overhead, face, and horizontal. It can be seen that when the candidate’s gaze direction is left, right, downleft, downright, or down, a student is often using a tool from the desktop to cheat; when the candidate’s gaze direction is upleft, up, or upright, there is often a third party assisting in cheating. The monitoring images from the three perspectives can clearly and effectively describe the cheating behavior.
To measure the probability of non-cheating students being falsely flagged as cheating, we use the false positive rate (FPR), calculated as shown in Formula (8):
FPR = FP/(FP + TN).  (8)
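Formulas (7) and (8) translate directly into code; the counts in the usage line are placeholders, not the paper's measurements:

```python
def accuracy(tp, tn, fp, fn):
    """Formula (7): proportion of all decisions that were correct."""
    return (tp + tn) / (tp + tn + fp + fn)

def false_positive_rate(fp, tn):
    """Formula (8): share of non-cheating samples flagged as cheating."""
    return fp / (fp + tn)

# Placeholder counts for illustration only.
acc = accuracy(tp=50, tn=45, fp=3, fn=2)
fpr = false_positive_rate(fp=1, tn=99)
```

Reporting FPR separately for the overhead and horizontal classifiers, as the paper does, accounts for the two determiners seeing different subsets of frames.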
This study uses overhead perspective and horizontal perspective classifiers to detect cheating behavior based on the gaze direction. Therefore, the FPR of the classifier in the two perspectives is calculated separately, and the FPR is 1.27% in the overhead perspective and 0.05% in the horizontal perspective.

5. Conclusions

In response to the obvious blind zones in current dual-perspective cheating behavior detection systems, we designed a three-camera monitoring installation plan based on the overhead, face, and horizontal perspectives. To expand the scope of examination monitoring while ensuring real-time system performance, a three-perspective adaptive cheating behavior detection system driven by candidate gaze direction was proposed. Nine gaze directions were categorized as normal, horizontal perspective, or overhead perspective, and based on the gaze direction recognition results, the cheating tool detection model and cheating behavior determination model under the best perspective were automatically selected. To identify cheating tools quickly and accurately, a lightweight object detection network, Lightweight-YOLOv5-CA, was proposed. Online testing verified that the three-perspective capture and evidence collection method can clearly describe cheating behavior, and that the adaptive detection model driven by gaze direction recognition effectively extends the monitoring area while ensuring real-time performance. Specifically, the accuracy of the gaze direction recognition model from the face perspective reached 92.4%; the mAP50 of overhead and horizontal object detection reached 97.4% and 98.8%, respectively; and the accuracy of the final cheating behavior determination model reached 95.4%. The system has been deployed in a school, where it has performed well: it realizes intelligent monitoring, greatly reduces cheating, and saves invigilation resources. In the future, we will promote the system in other schools, explore potential problems in practical application, and enrich the three-perspective dataset to improve the universality of the system.

Author Contributions

Conceptualization, H.W.; Methodology, Z.H.; Software, Y.J.; Validation, Z.H.; Formal analysis, Z.H. and Y.J.; Investigation, Y.J.; Resources, Z.H.; Data curation, Y.J.; Writing—original draft, Y.J.; Writing—review & editing, Z.H., G.W. and H.W.; Supervision, Z.H., G.W. and H.W.; Project administration, H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Postgraduate Education Reform Project of Jiangsu Province, grant number JGKT23_C075.

Institutional Review Board Statement

Ethical review and approval were waived for this study because the recruited volunteers signed informed consent statements for portrait usage.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Office of Educational Technology of the United States. Artificial Intelligence and the Future of Teaching and Learning. 2023. Available online: https://digital.library.unt.edu/ark:/67531/metadc2114121 (accessed on 1 May 2023).
  2. Ministry of Education of the People’s Republic of China. Huai Jinpeng, Minister of Education in China, Delivered a Keynote Speech at the 2024 World Digital Education Conference: Jointly Promote the Application, Sharing, and Innovation of Digital Education. 2024. Available online: http://www.moe.gov.cn/jyb_xwfb/moe_176/202402/t20240201_1113761.html (accessed on 1 February 2024).
  3. Luong, H.H.; Khanh, T.T.; Ngoc, M.D. Detecting Exams Fraud Using Transfer Learning and Fine-Tuning for ResNet50. In Future Data and Security Engineering. Big Data, Security and Privacy, Smart City and Industry 4.0 Applications, Proceedings of the 9th International Conference (FDSE), Ho Chi Minh City, Vietnam, 23–25 November 2022; Springer: Singapore, 2022. [Google Scholar]
  4. Dilini, N.; Senaratne, A.; Yasarathna, T. Cheating detection in browser-based online exams through eye gaze tracking. In Proceedings of the 2021 6th International Conference on Information Technology Research(ICITR), Moratuwa, Sri-Lanka, 1–3 December 2021. [Google Scholar]
  5. Singh, J.; Aggarwal, R.; Tiwari, S. Exam Proctoring Classification Using Eye Gaze Detection. In Proceedings of the 2022 3rd International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 20–22 October 2022. [Google Scholar]
  6. Susithra, V.; Reshma, A.; Gope, B. Detection of anomalous behaviour in online exam towards automated proctoring. In Proceedings of the 2021 International Conference on System, Computation, Automation and Networking(ICSCAN), Puducherry, India, 30–31 July 2021. [Google Scholar]
  7. Garg, K.; Verma, K.; Patidar, K. Convolutional Neural Network based Virtual Exam Controller. In Proceedings of the 2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 13–15 May 2020. [Google Scholar]
  8. Jadi, A. New detection cheating method of online-exams during COVID-19 pandemic. Int. J. Comput. Sci. Netw. Secur. 2021, 21, 123–130. [Google Scholar]
  9. Kaddoura, S.; Gumaei, A. Towards effective and efficient online exam systems using deep learning-based cheating detection approach. Intell. Sys. Appl. 2022, 16, 200153. [Google Scholar] [CrossRef]
  10. Atoum, Y.; Che, L.; Liu, A.X. Automated online exam proctoring. IEEE Trans. Multimed. 2017, 19, 1609–1624. [Google Scholar] [CrossRef]
  11. Alrubaish, F.A.; Humaid, G.A.; Alamri, R.M. Automated detection for student cheating during written exams: An updated algorithm supported by biometric of intent. In Proceedings of the Advances in Data Science, Cyber Security and IT Applications: First International Conference on Computing (ADSCSITA), Riyadh, Saudi Arabia, 10–12 December 2019. [Google Scholar]
  12. Hossain, Z.T.; Roy, P.; Nasir, R. Automated Online Exam Proctoring System Using Computer Vision and Hybrid ML Classifier. In Proceedings of the 2021 IEEE International Conference on Robotics, Automation, Artificial-Intelligence and Internet-of-Things (RAAICON), Dhaka, Bangladesh, 3–4 December 2021. [Google Scholar]
  13. Indi, C.S.; Pritham, V.; Acharya, V. Detection of malpractice in e-exams by head pose and gaze estimation. Int. J. Emerg. Technol. Learn. 2021, 16, 47. [Google Scholar] [CrossRef]
  14. Bawarith, R.; Basuhail, A.; Fattouh, A. E-exam cheating detection system. Int. J. Adv. Comput. Sci. Appl. 2017, 8, 1–6. [Google Scholar] [CrossRef]
  15. Masud, M.M.; Hayawi, K.; Mathew, S.S. Smart online exam proctoring assist for cheating detection. In Proceedings of the Advanced Data Mining and Applications: 17th International Conference (ADMA), Sydney, NSW, Australia, 2–4 February 2022. [Google Scholar]
  16. Hu, S.; Jia, X.; Fu, Y. Research on Abnormal Behavior Detection of Online Examination Based on Image Information. In Proceedings of the 2018 10th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Hangzhou, China, 25–26 August 2018. [Google Scholar]
  17. Tejaswi, P.; Venkatramaphanikumar, S.; Kishore, K.V.K. Proctor net: An AI framework for suspicious activity detection in online proctored examinations. Measurement 2023, 206, 112266. [Google Scholar] [CrossRef]
  18. Motwani, S.; Nagpal, C.; Motwani, M. AI-based proctoring system for online tests. In Proceedings of the 4th International Conference on Advances in Science & Technology (ICAST), Sion, Mumbai, India, 7 May 2021. [Google Scholar]
  19. Prathish, S.; Bijlani, A.N.S.K. An intelligent system for online exam monitoring. In Proceedings of the 2016 International Conference on Information Science (ICIS), Kochi, India, 12–13 August 2016. [Google Scholar]
  20. Elshafey, A.E.; Anany, M.R.; Mohamed, A.S. Dr. Proctor: A Multi-modal AI-Based Platform for Remote Proctoring in Education. In Proceedings of the Artificial Intelligence in Education: 22nd International Conference (AIED), Utrecht, The Netherlands, 14–18 June 2021. [Google Scholar]
  21. Jia, J.; He, Y. The design, implementation and pilot application of an intelligent online proctoring system for online exams. Interact. Technol. Smart Educ. 2022, 19, 112–120. [Google Scholar] [CrossRef]
  22. Tweissi, A.; Etaiwi, W.; Eisawi, D. The Accuracy of AI-Based Automatic Proctoring in Online Exams. Electron. J. E-Learn. 2022, 20, 419–435. [Google Scholar]
  23. Maniar, S.; Sukhani, K.; Shah, K. Automated Proctoring System using Computer Vision Techniques. In Proceedings of the 2021 International Conference on System, Computation, Automation and Networking (ICSCAN), Puducherry, India, 30–31 July 2021. [Google Scholar]
  24. Soltane, M.; Laouar, M.R. A Smart System to Detect Cheating in the Online Exam. In Proceedings of the 2021 International Conference on Information Systems and Advanced Technologies (ICISAT), Tebessa, Algeria, 27–28 December 2021. [Google Scholar]
  25. Gopane, S.; Kotecha, R. Enhancing Monitoring in Online Exams Using Artificial Intelligence. In Proceedings of the International Conference on Data Science and Applications (ICDSA), Kolkata, India, 10–11 April 2021. [Google Scholar]
  26. Ozdamli, F.; Aljarrah, A.; Karagozlu, D. Facial Recognition System to Detect Student Emotions and Cheating in Distance Learning. Sustainability 2022, 14, 13230. [Google Scholar] [CrossRef]
  27. Malhotra, N.; Suri, R.; Verma, P. Smart Artificial Intelligence Based Online Proctoring System. In Proceedings of the 2022 IEEE Delhi section conference (DELCON), New Delhi, India, 11–13 February 2022. [Google Scholar]
  28. Abozaid, A.; Atia, A. Multi-Modal Online Exam Cheating Detection. In Proceedings of the 2022 International Conference on Electrical, Computer and Energy Technologies (ICECET), Prague, Czech Republic, 20–22 July 2022. [Google Scholar]
  29. Ahmad, I.; AlQurashi, F.; Abozinadah, E. A novel deep learning-based online proctoring system using face recognition, eye blinking, and object detection techniques. Int. J. Adv. Comput. Sci. Appl. 2021, 12. [Google Scholar] [CrossRef]
  30. Ashwinkumar, J.S.; Kumaran, H.S.; Sivakarthikeyan, U.; Rajesh, K.P.; Lavanya, R. Deep Learning based Approach for Facilitating Online Proctoring using Transfer Learning. In Proceedings of the 2021 5th International Conference on Computer, Communication and Signal Processing (ICCCSP), Chennai, India, 24–25 May 2021. [Google Scholar]
  31. Ozgen, A.C.; Öztürk, M.U.; Torun, O. Cheating Detection Pipeline for Online Interviews. In Proceedings of the 2021 29th Signal Processing and Communications Applications Conference (SIU), Istanbul, Turkey, 9–11 June 2021. [Google Scholar]
  32. Sapre, S.; Shinde, K.; Shetta, K. AI-ML based smart online examination framework. In Progresses in Artificial Intelligence & Robotics: Algorithms & Applications, Proceedings of the 3rd International Conference on Deep Learning, Artificial Intelligence and Robotics (ICDLAIR), Virtual, 10 April 2022; Springer: Cham, Switzerland, 2022. [Google Scholar]
  33. Fan, Z.; Xu, J.; Liu, W. Gesture based misbehavior detection in online examination. In Proceedings of the 2016 11th International Conference on Computer Science & Education (ICCSE), Nagoya, Japan, 23–25 August 2016. [Google Scholar]
  34. Li, H.; Xu, M.; Wang, Y. A visual analytics approach to facilitate the proctoring of online exams. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (HFCS), Online, 8–13 May 2021. [Google Scholar]
  35. Pandey, A.K.; Kumar, S.; Rajendran, B. e-Parakh: Unsupervised Online Examination System. In Proceedings of the IEEE Region 10 International Conference (TENCON), Osaka, Japan, 16–19 November 2020. [Google Scholar]
  36. Li, X.; Chang, K.M.; Yuan, Y. Massive open online proctor: Protecting the credibility of MOOCs certificates. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, Vancouver, BC, Canada, 14–18 March 2015. [Google Scholar]
  37. Liu, Z.; Lin, Y.; Cao, Y. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021. [Google Scholar]
  38. Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
  39. Wang, C.Y.; Mark Liao, H.Y.; Wu, Y.H. CSPNet: A New Backbone that can Enhance Learning Capability of CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
  40. Xie, S.; Girshick, R.; Dollár, P. Aggregated Residual Transformations for Deep Neural Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition(CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  41. Zhang, X.; Sugano, Y.; Fritz, M. MPIIGaze: Real-world dataset and deep appearance-based gaze estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 1, 162–175. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Defects of dual-perspective examination cheating detection system.
Figure 2. Three perspectives of adaptive paperless online examination cheating behavior detection system.
Figure 3. Examples of gaze directions from face perspective.
Figure 4. Structure of gaze direction recognition network based on Swin Transformer.
Figure 5. Structure of Lightweight-YOLOv5-CA network.
Figure 6. Cheating behavior determination model from different perspectives based on BP neural network.
Figure 7. Examples of overhead perspective cheating behavior determination.
Figure 8. Examples of horizontal perspective cheating behavior determination.
Figure 9. Adaptive three-perspective cheating behavior determination model driven by gaze direction.
Figure 10. Gaze direction dataset.
Figure 11. Abnormal objects of different frames in dataset from different perspectives.
Figure 12. Examples of three-perspective cheating behavior determination.
Figure 13. Confusion matrix of gaze direction recognition.
Figure 14. Examples of cheating tool detection results from different perspectives.
Figure 15. Examples of detection results of adaptive cheating behavior from three perspectives.
Table 1. Output of overhead perspective cheating behavior determination network.
| Network Output | Gaze Direction | Cheating Tools |
|---|---|---|
| Y1 (cheating) | Desktop (left/right/downleft/down/downright) | Paper |
| Y2 (cheating) | Desktop (left/right/downleft/down/downright) | Book |
| Y3 (cheating) | Desktop (left/right/downleft/down/downright) | Phone |
| Y4 (cheating) | Behind computer (left/right/downleft/downright) | Paper |
| Y5 (cheating) | Behind computer (left/right/downleft/downright) | Book |
| Y6 (cheating) | Behind computer (left/right/downleft/downright) | Phone |
| Y7 (cheating) | Desktop (left/right/downleft/down/downright) | Paper, Book, Phone |
| Y8 (cheating) | Behind computer (left/right/downleft/downright) | Paper, Book, Phone |
| Y9 (normal) | Left/right/downleft/down/downright/normal/up/upleft/upright | No |
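The determination step itself is learned by an MLP, but the input–output mapping that Table 1 defines can be written as an explicit rule for illustration. The sketch below is hypothetical (function, region, and label names are not from the paper's code); it only mirrors the table's logic: a cheating label is emitted when an abnormal gaze direction coincides with a detected tool in the matching region.

```python
# Illustrative rule-based rendering of Table 1 (the paper uses an MLP here).
# Gaze directions that count as abnormal for each region of the overhead view.
DESKTOP_GAZES = {"left", "right", "downleft", "down", "downright"}
BEHIND_GAZES = {"left", "right", "downleft", "downright"}


def overhead_decision(gaze: str, region: str, tools: set) -> str:
    """Return the Table 1 output label (Y1..Y9) for one overhead-view frame.

    gaze   -- recognized gaze direction, e.g. "downleft"
    region -- where detected tools lie: "desktop" or "behind_computer"
    tools  -- detected cheating tools, a subset of {"paper", "book", "phone"}
    """
    if not tools:
        return "Y9 (normal)"
    order = ["paper", "book", "phone"]  # fixes Y1..Y3 / Y4..Y6 ordering
    if region == "desktop" and gaze in DESKTOP_GAZES:
        if len(tools) > 1:
            return "Y7 (cheating)"  # multiple tools on the desktop
        return f"Y{1 + order.index(next(iter(tools)))} (cheating)"
    if region == "behind_computer" and gaze in BEHIND_GAZES:
        if len(tools) > 1:
            return "Y8 (cheating)"  # multiple tools behind the computer
        return f"Y{4 + order.index(next(iter(tools)))} (cheating)"
    return "Y9 (normal)"  # tool visible but gaze not directed at it
```

Table 2's horizontal-perspective mapping has the same shape with the upward gaze set and "people and tool" combinations.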
Table 2. Output of horizontal perspective cheating behavior determination network.
| Network Output | Gaze Direction | Cheating Tools |
|---|---|---|
| Y1 (cheating) | Horizontal perspective (upleft/up/upright) | People and paper |
| Y2 (cheating) | Horizontal perspective (upleft/up/upright) | People and book |
| Y3 (cheating) | Horizontal perspective (upleft/up/upright) | People and phone |
| Y4 (normal) | Left/right/downleft/down/downright/center/up/upleft/upright | No |
Table 3. Selection of the cheating behavior determination model based on gaze direction.

| Gaze Direction | Cheating Behavior Determination Model |
|---|---|
| Upleft/up/upright | Glasses |
| Left/right/downleft/down/downright | Desktop |
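This gaze-driven selection is what reduces the three-perspective system to an effective dual-perspective one: the face camera's gaze result decides which of the other two views is processed for each frame. A minimal sketch of that dispatch, with illustrative names and the assumption that any gaze outside the two listed sets triggers no second-view detection:

```python
# Gaze-driven model selection, mirroring Table 3.
GLASSES_GAZES = {"upleft", "up", "upright"}
DESKTOP_GAZES = {"left", "right", "downleft", "down", "downright"}


def select_determination_model(gaze: str) -> str:
    """Choose which cheating behavior determination model runs for a frame."""
    if gaze in GLASSES_GAZES:
        return "glasses"  # upward gaze: run the horizontal-perspective model
    if gaze in DESKTOP_GAZES:
        return "desktop"  # downward/sideways gaze: run the overhead model
    return "none"  # normal gaze: no second-view detection needed (assumption)
```

Because only one of the two heavier detection models runs per frame, the per-frame cost stays close to that of a dual-perspective system while the combined field of view remains that of three cameras.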
Table 4. Accuracy of different gaze directions.

| Type of Gaze Direction | Gaze Direction | Accuracy |
|---|---|---|
| Abnormal (overhead perspective) | Left | 90% |
| | Right | 88% |
| | Down | 100% |
| | Downleft | 83% |
| | Downright | 79% |
| Abnormal (horizontal perspective) | Upleft | 100% |
| | Up | 100% |
| | Upright | 92% |
| Normal | Normal | 100% |
| Average | | 92.4% |
Table 5. Detection results of abnormal objects from different models in the overhead and horizontal perspectives.

| Perspective | Metric | Model | Book | Paper | People | Phone | Average |
|---|---|---|---|---|---|---|---|
| Overhead perspective | mAP50 | YOLOv5 | 99.3% | 99.3% | 99.5% | 91% | 97.3% |
| | | Lightweight-YOLOv5-CA | 99.5% | 99.3% | 99.5% | 91.5% | 97.4% |
| | mAP50:90 | YOLOv5 | 89% | 88.6% | 88.9% | 71.2% | 84.6% |
| | | Lightweight-YOLOv5-CA | 89.8% | 89.37% | 89.1% | 71.6% | 85.1% |
| Horizontal perspective | mAP50 | YOLOv5 | 99.2% | 99.3% | 99.4% | 97% | 98.7% |
| | | Lightweight-YOLOv5-CA | 99.3% | 99.3% | 99.4% | 97.1% | 98.8% |
| | mAP50:90 | YOLOv5 | 88.7% | 88.4% | 88.8% | 75.4% | 85.3% |
| | | Lightweight-YOLOv5-CA | 89.7% | 89.6% | 89% | 77.4% | 86.4% |
Table 6. Comparison results of different cheating detection methods.

| Method | Monitor Perspective | Overhead Object Detection | Gaze Recognition | Horizontal Object Detection | Cheating Behavior Determination |
|---|---|---|---|---|---|
| Ref. [38] | Overhead and Horizontal | 89.1% | × | -- | 81.1% |
| Ref. [12] | Face and Horizontal | × | 82.9% | 89% | 65.3% |
| Ref. [11] | Face and Overhead | 91.87% | 68.9% | × | 81.83% |
| Ref. [28] | Face and Overhead | 60% | 89.95% | × | -- |
| Ref. [10] | Face and Overhead | -- | -- | × | 83.56% |
| Ours | Overhead, Face, and Horizontal | | 92.4% | 98.8% | 95.7% |


Hu, Z.; Jing, Y.; Wu, G.; Wang, H. Multi-Perspective Adaptive Paperless Examination Cheating Detection System Based on Image Recognition. Appl. Sci. 2024, 14, 4048. https://doi.org/10.3390/app14104048


