1. Introduction
Contract cheating is a form of academic misconduct in which students outsource their assignments to a third party and submit them as their own work. Cheating services can be categorised based on how these assignments are outsourced: commercial, non-commercial, or Artificial Intelligence (AI)-enabled. Commercial contract cheating occurs when a student hires a third party to complete their work on a fee-for-service basis [
1]. Non-commercial contract cheating occurs when a student’s friends or family complete the work without financial compensation [
2,
3,
4,
5]. AI-enabled contract cheating involves the use of AI tools without proper authorisation or through inappropriate or undisclosed use. Content created by such methods is not considered original work, as it fails to demonstrate the student’s understanding, learning, or proper acknowledgment. This form of contract cheating has emerged in the university education system with the widespread availability of Generative Artificial Intelligence (GenAI) tools and technologies to the public [
6,
7,
8].
Multiple factors contribute to students’ participation in contract cheating. Some of these include a lack of understanding of assignment criteria or academic integrity requirements; not receiving constructive, meaningful, and timely feedback from academics; a lack of confidence in academic writing; and the pressure to receive good grades [
1,
2,
9,
10,
11,
12,
13]. The increasing number of students involved in contract cheating creates a growing threat to the legitimacy and integrity of the qualifications they attain [
14,
15,
16,
17]. Although student submissions are often passed through plagiarism detection software, some instances of plagiarism remain undetected [
18,
19,
20]. Moreover, authorship detection software is still under development and is not infallible, requiring some level of interpretation [
21,
22,
23]. The motivation for building a framework stems from our personal experience as academics in the Australian higher education sector and the significant prevalence of contract cheating among students [
24]. In our previous research [
25], we investigated the underlying factors driving students to engage in contract cheating, which led to the creation of a Three-Tier Framework (TTF) designed to address and mitigate this issue. The framework considers all stakeholders, emphasising that each party must recognise their roles and responsibilities and work collaboratively to review and improve the relevant processes, policies, and procedures involved [
14]. Both supervised and unsupervised assignment submissions are included in the framework; however, the e-proctoring option is excluded due to the challenges students face in setting up the necessary platforms to join online exams. Furthermore, e-proctoring raises significant privacy concerns, as it intrudes on test-takers' privacy during exams, which in turn adversely affects students' psychological well-being [
18,
26,
27]. For detailed information on the development and structure of the framework, refer to [
25]. A concise overview of the proposed framework is provided in
Section 3. The primary objective of this research is to examine academics’ perceptions of the proposed TTF before its implementation.
The key contributions of this research work are as follows:
Assess academics’ perceptions of the clarity and functionality of the proposed framework.
Evaluate the effectiveness of the proposed framework and identify potential challenges associated with implementing the monitoring process.
Understand academics’ perspectives on the limitations and enablers in detecting and mitigating contract cheating through the proposed framework.
The outline of the paper is as follows. A brief review of related literature is presented in
Section 2, which is followed by a brief description of the proposed framework in
Section 3. A detailed description of the methodology used to collect the survey data and analysis is presented in
Section 4. The results and analysis are presented in
Section 5.
Section 6 summarises the main findings of the research, along with limitations and recommendations.
2. Literature Review
Various studies have explored the challenges associated with contract cheating; however, the literature still lacks a comprehensive framework to address this complex problem effectively. A recent review on contract cheating in higher education by Ashan et al. [
28] highlights the key factors contributing to contract cheating and proposes a conceptual framework to address contract cheating within higher education context [
29]. However, the review does not provide a clear implementation strategy for applying the framework in higher-educational settings. Furthermore, many universities in Australia have classified the unauthorised use of GenAI as a form of contract cheating. This includes cases where GenAI-generated text is used without proper acknowledgment in assessments where GenAI is allowed, as well as instances where students use GenAI despite it being restricted in the assessment [
29]. A framework for AI integration in nephrology academic writing and peer review is introduced in [
30]. In this framework, the authors consider several components, including the use of AI in general and its role in the peer-review process, ethical issues, and the impact and role of AI. Additionally, the framework outlines each component's objectives, action items, stakeholders, and evaluation criteria. However, this work is specific to a particular field and not generic enough to adopt in educational settings. Therefore, there is a pressing need for a new framework that considers the factors involved in both traditional contract cheating and contract cheating introduced by GenAI. The framework we propose considers both types of contract cheating and also emphasises the practical implementation of the framework in higher-educational settings.
Furthermore, Ellis et al. [
31] proposed a framework to address and respond to student cheating using the educational integrity enforcement pyramid. This framework is grounded in the theory of responsive regulation and assessment security, providing a theoretical foundation to identify gaps in institutional strategies to combat academic misconduct. While the framework offers valuable insights, it lacks detailed guidance on the practical application of these strategies. Numerous educational frameworks have been proposed in the literature to enhance strategies for upholding academic integrity, offering valuable theoretical insights and conceptual guidelines. However, they often fall short in offering clear and actionable steps for practical implementation. For example, the authors in refs. [
12,
32] proposed a conceptual framework for authentic design aimed at achieving teaching and learning goals for students. The framework considers cognitive and instructional validity, and its effectiveness was evaluated through a survey. Brauns et al. [
33] emphasised the need for an inclusive approach to science education and proposed a framework to ensure interaction and learning for all students. They used qualitative measures, such as the foundation, reproducibility, reliability, transferability, relevance, and ethics of the framework, to validate it. The authors in [
34,
35] presented various frameworks for e-learning, along with methods for their validation. Similarly, in [
36], the authors argue that a shift has been observed in the global education paradigm from educator-oriented to student-centric, and they further propose a framework to assess and validate teaching competencies and effectiveness. They evaluated their framework by analysing survey feedback from domain experts and using confirmatory factor analysis. A review of the validation of educational assessments for educators and researchers is presented in [
37]. The paper addresses the issue of prioritisation by identifying four key inferences in an assessment activity.
Within the realm of education, frameworks are proposed to prevent and mitigate instances of academic misconduct [
38,
39,
40,
41,
42,
43]. The study by Baird et al. [
9] highlighted the importance of preventing contract cheating by removing the opportunities for it and understanding how the crime is committed. This study focused only on a business capstone unit. Recent studies have explored innovative strategies aligned with the identified gaps in current approaches to mitigate contract cheating. In [
38], an ethical framework is introduced to maintain academic integrity considering eight ethical aspects: (1) everyday ethics, (2) institutional ethics, (3) ethical leadership, (4) professional and collegial ethics, (5) instructional ethics, (6) students’ academic conduct, (7) research integrity and ethics, and (8) publication ethics. This framework addresses only the ethical aspects of maintaining academic integrity. Our previous research study [
25] identified significant gaps, challenges, and problems in existing methods, such as the absence of tools, data, evidence, and knowledge to analyse patterns and behaviours in students’ performance across assignments or units. Additionally, there was a lack of a comprehensive strategy or procedure to address these issues. Another critical shortcoming identified was the absence of dedicated units or integrity offices within institutions to manage and mitigate academic misconduct systematically. Subsequently, a conceptual framework was formulated that integrates and addresses the identified gaps. A brief description of the framework is provided next.
In the literature, various frameworks are introduced to adopt GenAI and combat GenAI in educational settings. In [
39], the authors introduce an AI policy education framework to address the multifaceted implications of AI integration in university teaching and learning. In [
40], the authors highlight the academic misconduct introduced by GenAI and discuss the relevance of the existing crime prevention frameworks for addressing GenAI-facilitated academic misconduct. A framework for GenAI adoption in education is introduced in [
44]. In [
45], a text-mining and scenario modelling framework is introduced for predicting threats to academic integrity. Despite various advancements, a critical gap persists in the absence of standardised frameworks for evaluating the effectiveness of contract cheating mitigation strategies.
In response to these gaps, our framework integrates both adversarial and cooperative approaches while offering specific implementation steps, strategies to engage most of the stakeholders, and mechanisms to monitor the effectiveness of these interventions. The monitoring phase of the framework is designed to discourage students from using AI-enabled tools to generate answers. In addition, instructors are supported with snapshots of students’ progress in the subject through a dashboard.
Research Questions
In this research, we aim to explore academics’ perspectives on the proposed TTF for addressing contract cheating. By examining their insights, the research seeks to evaluate the framework’s clarity, functionality, and effectiveness while identifying potential implementation challenges and opportunities for improving the framework. The following research questions guide this investigation:
What are academics’ perceptions of the clarity and functionality of the proposed TTF in addressing contract cheating?
What are academics’ perceptions of the monitoring process within the proposed TTF for addressing contract cheating?
What are academics’ perceptions of the implementation process of the proposed TTF in addressing contract cheating?
3. Three-Tier Framework (TTF) to Combat Contract Cheating
The paper expands the research analysis to evaluate the TTF introduced in [
25] to address and mitigate contract cheating in academic institutions. The proposed framework is designed with an emphasis on the institutional contexts of the issue to standardise the response process and foster a consistent approach for detecting and mitigating contract cheating.
Figure 1 provides a high-level view of the framework, while
Figure 2 presents a detailed breakdown of the proposed TTF. The proposed framework comprises three interconnected tiers: awareness, monitoring, and evaluation, each serving a specific role in addressing contract cheating.
Figure 1 illustrates the framework’s circular and iterative structure, with each level contributing to a continuous cycle of improvement. By revisiting and revising each stage based on evaluation insights, the framework fosters a sustainable approach to mitigating contract cheating.
Overall, the proposed framework incorporates three key features that demonstrate its adaptability to future changes, including the integration of GenAI and other technologies.
The framework establishes a foundational level of awareness on contract cheating among all stakeholders (level 1), ensuring it remains flexible and responsive to emerging challenges in academic integrity. This flexibility enables the framework to adapt to GenAI and other future technological advancements.
The emphasis on process over product, through the framework’s monitoring process (level 2), is designed to address both traditional contract cheating and academic integrity challenges posed by GenAI technologies in an adaptable manner [
46,
47].
The adaptive framework enables continuous updates to policies and processes (level 3) to address changes, including the emergence of technologies such as GenAI.
At the awareness level, both student awareness and staff proficiency in recognising contract cheating are enhanced through various targeted activities. The monitoring phase records students’ assignment-related activities using a Pre-Defined Template (PDT), allowing for the systematic recording and analysis of data on potential contract cheating instances across three dedicated databases.
The final stage involves regularly reviewing and updating institutional policies, procedures, and services related to preventing contract cheating. An Academic Integrity Office (AIO) assesses instances recorded in the monitoring database at the course level. At the same time, unit coordinators conduct unit-level evaluations to examine students’ performance within the degree program across the courses they are enrolled in. Insights and findings from this level feed back into level 1, creating a continuous improvement loop that reinforces awareness and enhances the overall effectiveness of the framework.
3.1. Level 1—Awareness
Awareness activities on the first tier of the framework will be conducted for students and the staff before the semester starts or during the orientation or induction, aiming to improve awareness of contract cheating, including both traditional contract cheating and contract cheating generated by GenAI [
19,
48,
49]. Some key points are highlighted below, but this is not exhaustive.
Academic integrity, including all forms of contract cheating;
Penalties associated with academic breaches, including those related to contract cheating;
Information on the proper and ethical use of GenAI tools and technologies in education and assessments;
Details of student support services and facilities in the Centre of Learning (CoL);
Special consideration application process.
In parallel with student-awareness programs, training for teaching staff can be conducted on:
Trends, patterns, and irregularities in potential contract cheating cases, with examples;
Assessment design and evaluation techniques that can mitigate contract cheating, including GenAI-associated contract cheating;
Printed copies of institutional policies and a simple flowchart of the procedure to follow if contract cheating is detected.
3.2. Level 2—Monitor
The initiatives within the second tier of the proposed framework are designed to span the entire semester, focusing on monitoring student progress consistently. The primary objective is to ensure continuous intervention when necessary, fostering a proactive approach to maintaining academic integrity. To facilitate these operations, three documents, including two centralised databases, have been introduced as part of the framework’s implementation at this level. The three documents are listed below. In addition to these tools, level 2 of the framework incorporates strategies to actively engage stakeholders, such as students, academics, and administrative staff, ensuring a collaborative approach to upholding academic integrity. This phase is specifically designed to discourage students from relying on GenAI tools, and instructors are encouraged to use scaffolding techniques when designing assignments to support this process. To further support academics, the framework provides instructors with a user-friendly dashboard offering real-time snapshots of students’ progress in the subject. These snapshots include detailed insights into the grouping in Step 2 and the progress tracking in Steps 3 and 4, enabling instructors to make informed decisions and provide feedback.
Monitoring database: a centralised shareable database updated by academics for recording assignment marks, progress of assignment monitoring, assessor input, and suspected cheating;
Academic Integrity Breach Database: to record confirmed academic breaches by AIO;
Software analysis reports from authorship investigation analysis or similar software from a Learning Management System uploaded by specialised administrative staff at AIO.
3.2.1. Monitoring Database: Pre-Defined Template (PDT)
The PDT is designed to monitor student work and gather evidence before making any allegation of academic dishonesty against a student. The template can be customised according to the specific context of the course or application. At Melbourne Institute of Technology (MIT), for example, students are required to complete two supervised assessments, an in-class test (ICT) and a final exam, and two unsupervised assessments, a formative assessment (FA) and Assignment 2 (A2). In addition, class participation and contributions are also evaluated. The PDT ensures a systematic, objective approach to monitoring academic performance, helping to maintain integrity and fairness in the evaluation process. A sample PDT is shown in
Table 1.
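To make the structure of the template concrete, the sketch below models one PDT row as a Python dataclass. This is an illustration only: the field names and types are assumptions based on the assessments described above, not the actual template in Table 1.

```python
from dataclasses import dataclass, field

@dataclass
class PDTRow:
    """One student's row in a hypothetical PDT (field names are assumed)."""
    student_id: str
    fa_mark: float                # formative assessment (unsupervised)
    ict_mark: float               # in-class test (supervised)
    a2_progress: list = field(default_factory=list)  # weekly scores, weeks 6/8/10
    ois1: str = "N"               # "Y" if irregularities observed in Step 1
    group: str = ""               # "A" or "B" after Step 2
    comments: str = ""

# Example: a large gap between unsupervised and supervised marks is flagged.
row = PDTRow("S001", fa_mark=78, ict_mark=45, ois1="Y",
             comments="Large FA/ICT gap; review in Step 2")
```

Modelling the row as a typed record makes the later monitoring steps (comparison, grouping, reporting) straightforward to automate.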
3.2.2. Other Documents
The academic integrity breach database is usually maintained by academic institutions according to their policies, whereas software analysis reports are produced by the IT team on demand during investigations.
3.2.3. Monitoring Steps
A concise overview of the steps outlined in level 2 of the framework (refer to
Figure 2) is provided below.
STEP 1: The tutor records student marks for the FA (unsupervised) and ICT (supervised) in the PDT and compares them to identify irregularities. Observations are recorded in the OIS1 column ("Y" or "N") and remarks in the comments column.
STEP 2: A2 progress will be demonstrated in class over three weeks (weeks 6, 8, and 10) and recorded on a scale of 1 (lowest) to 10 (highest) in the PDT. The students will then be assigned to either Group A or Group B based on the following rules:
- Group A (Successful): if average A2 progress is greater than 5 AND no irregularities are observed in OIS1 AND regular class participation;
- Group B (Unsuccessful): if average A2 progress is less than or equal to 5 OR irregularities are observed in OIS1 OR irregular class participation.
STEP 3: The tutor evaluates A2 considering the grouping in Step 2 and records the observation in OIS2 with comments. Any suspicious cases are reported to the Unit Coordinator (UC).
STEP 4: The final exam and class participation are evaluated against the information in the PDT database. Any suspicious cases are reported to the UC.
STEP 5: The UC performs data analysis at the unit level using data in the PDT. Any suspicious cases are reported to the AIO.
STEP 6: The AIO conducts course-wide data analysis for each student using data in the monitoring database for all units, the academic integrity breach database, and the authorship investigation report. Interviews will be conducted for suspicious cases.
STEP 7: The AIO applies a penalty for confirmed cases using the analysis report and the interview results according to the policy.
STEP 8: The AIO decides and applies the penalty according to the policy after analysing the data in the three databases: course-level data, assessor reports in the centralised databases, and reports from the software tools. Finally, the academic integrity breach database is updated.
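The Step 2 grouping rule above is mechanical enough to express in code. The following is a minimal sketch, not part of the published framework's tooling; the function and parameter names are hypothetical.

```python
def assign_group(a2_progress, ois1_irregular, regular_participation):
    """Assign a student to Group A or B per the Step 2 rules.

    a2_progress: list of weekly A2 progress scores (1-10) from weeks 6, 8, 10.
    ois1_irregular: True if irregularities were recorded in OIS1.
    regular_participation: True if class participation is regular.
    """
    avg = sum(a2_progress) / len(a2_progress)
    # Group A requires ALL three conditions; any failure yields Group B.
    if avg > 5 and not ois1_irregular and regular_participation:
        return "A"  # Successful
    return "B"      # Unsuccessful

print(assign_group([6, 7, 8], False, True))  # average 7, no flags -> A
print(assign_group([4, 5, 5], False, True))  # average <= 5 -> B
```

Encoding the rule this way would also let a dashboard recompute groupings automatically as weekly progress scores are entered.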
3.3. Level 3—Evaluation
The evaluation stage involves consistently evaluating and revising institutional policies, procedures, and services concerning contract cheating, including GenAI-associated cases. The findings from the evaluation are fed back to level 1, the awareness level, so that awareness activities can be updated in line with industry and technology trends.
4. Survey and Analysis Methodology
This study employed a survey questionnaire as a research instrument.
Figure 3 explains the procedural sequence followed to conduct the research.
1. Setting Objectives: This survey questionnaire was developed to evaluate the TTF designed and published in our previous research [
25]. However, this survey evaluates only the first two levels of the framework and was distributed to all academics teaching on both the Melbourne and Sydney campuses of MIT.
2. Ethical Approval: Before starting the research, the study proposal was reviewed and evaluated by the ethics committee at MIT and this study was approved by the MIT Research Ethics Committee.
3. Pre-Survey Information Session: Preceding the survey, a presentation was given to the participants illustrating details about the underlying research and the purpose of the survey. Participants were allowed to ask questions during the presentation.
4. Design of Data Collection Instrument: After gaining an understanding of the objectives, the questionnaire was designed to contain two main sections. The initial section aimed to collect demographic information from the academics taking part in the study, namely their current position, academic experience, working campus, and their level of understanding of the concept of contract cheating in higher education settings. The objective of the second segment of the survey was to obtain the academics’ opinions on the suitability of the proposed framework to combat contract cheating within higher-educational settings. The survey utilised open-ended and closed-ended survey questions (refer to
Appendix A for detailed information) to gain the participants’ insights on the designed framework and their understanding of the topic. The questionnaire provided in the
Appendix A was developed using established guidelines and standard processes [
50,
51], combined with insights from educational experience. This development was followed by reliability testing to ensure its consistency and validity. We performed split-half reliability testing to check internal consistency: the survey questions were divided into two halves and the correlation between the scores from each half was calculated. Some of the closed-ended survey questions were rated along a band of agreement on 5-point Likert-type scales, ranging from ‘Strongly Disagree’ to ‘Strongly Agree’. Informed consent was obtained from participants before data collection began.
5. Data Collection: The survey was distributed to 151 academics, including Tutors, Lecturers, Senior Lecturers, Associate Professors, and Professors, teaching at an institute in Australia. A Google form containing the questionnaire and a link to the recorded pre-survey information session was emailed to the participants, and 32 participants out of 151 (roughly 21.2%) responded to the survey.
We minimised sampling bias by distributing the survey to all academic staff across various levels and employment types, ensuring an inclusive approach. The survey was sent to Professors, Senior Lecturers, Lecturers, and Tutors, both casual and full-time. This method allowed us to collect responses from various perspectives within the academic community. As shown in
Figure 4, this response distribution further supports this: 59.4% of the participants were Lecturers, 28% were Senior Lecturers or higher, and 12.5% were working as Tutors. These proportions indicate participation across the academic hierarchy, reflecting diverse viewpoints and experiences. Although the representation of roles aligns with our goals, we acknowledge a potential limitation: non-response bias, which we were unable to fully mitigate despite broad outreach efforts.
6. Data Analysis and Interpretation: The data analysis starts with a demographic analysis of the respondents, followed by an overall sentiment analysis of academics’ views on contract cheating. Finally, the analysis of academics’ perception of the framework (in three areas: clarity, monitoring, and implementation aspects) is performed using composite, weighted composite, and Z-score analysis. This quantitative analysis is based on the concepts described in
Section 4.1 and
Section 4.2, whereas the open-ended questions are analysed using thematic analysis, which involves identifying, analysing, and interpreting themes in three areas (clarity, monitoring process, and implementation of the framework) within those answers [
52].
4.1. Composite and Weighted Composite Analysis
A composite score (CS) is calculated by taking the sum of the component scores, as in Equation (1):

$$\mathrm{CS} = \sum_{i=1}^{n} s_i \qquad (1)$$

where $n$ is the number of components and $s_i$ is the score of the $i$th component. A weighted composite score (WCS) is calculated by taking the sum of each component score multiplied by its respective weight, as in Equation (2):

$$\mathrm{WCS} = \sum_{i=1}^{n} w_i \, s_i \qquad (2)$$

where $w_i$ is the weight assigned to the $i$th component.

The doubly weighted composite score (DWCS) involves adding an extra level of weighting to the WCS; it is essentially a weighted WCS. The formula for the DWCS can be written as Equation (3):

$$\mathrm{DWCS} = v \cdot \mathrm{WCS} = v \sum_{i=1}^{n} w_i \, s_i \qquad (3)$$

where $v$ is the secondary weight.
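Equations (1)-(3) can be sketched directly in code. The snippet below is an illustration under the assumption that the secondary weight is applied on top of a primary-weighted composite; the example numbers are made up, not survey data.

```python
def composite_score(scores):
    """Eq. (1): plain sum of component scores."""
    return sum(scores)

def weighted_composite_score(scores, weights):
    """Eq. (2): sum of component scores times their weights."""
    return sum(w * s for w, s in zip(weights, scores))

def doubly_weighted_composite_score(scores, primary_w, secondary_w):
    """Eq. (3): secondary weight applied on top of the weighted composite."""
    return secondary_w * weighted_composite_score(scores, [primary_w] * len(scores))

scores = [2, 1, 2, 0, 1]  # five question scores mapped to [-2, 2]
print(composite_score(scores))                             # 6
print(doubly_weighted_composite_score(scores, 1.0, 0.5))   # 3.0
```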
4.2. Normalised Z-Score Analysis
The Z-score, also known as the standard score, is a statistical measure that describes the position of a raw score in terms of its distance from the mean, measured in units of the standard deviation. The Z-score helps in detecting outliers in the data and allows comparison of different groups of data even if they are on different scales. The Z-score is calculated as in Equation (4):

$$Z = \frac{X - \mu}{\sigma} \qquad (4)$$

where $X$ is the composite score or weighted composite score, $\mu$ is the mean of all the composite scores or weighted composite scores (Equation (5)), $\sigma$ is the standard deviation of the composite scores or weighted composite scores (Equation (6)), and $N$ is the number of respondents:

$$\mu = \frac{1}{N} \sum_{i=1}^{N} X_i \qquad (5)$$

$$\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (X_i - \mu)^2} \qquad (6)$$
We also obtained the Kernel Density Estimate (KDE) of all the samples received, using it to estimate a continuous probability density function for the sample responses. The KDE works by placing a Gaussian kernel on each point in the dataset; adding up all these individual kernels results in a smooth estimate of the overall distribution:

$$\hat{f}_h(x) = \frac{1}{n} \sum_{i=1}^{n} K_h(x - X_i)$$

where:

$\hat{f}_h(x)$ is the estimate of the probability density function at point $x$;

$K_h(\cdot)$ is the (Gaussian) kernel function scaled by the bandwidth;

$n$ is the number of data points;

$X_i$ represents the individual data points;

$h$ is the bandwidth (a parameter that controls the width of the kernel).
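A minimal sketch of the Z-score (Equations (4)-(6)) and a Gaussian KDE follows. The bandwidth value and the sample numbers are illustrative assumptions, not the study's data.

```python
import math

def z_scores(xs):
    """Standardise a list of scores: Eq. (5) mean, Eq. (6) std, Eq. (4) Z."""
    n = len(xs)
    mu = sum(xs) / n                                        # Eq. (5)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)   # Eq. (6)
    return [(x - mu) / sigma for x in xs]                   # Eq. (4)

def kde(x, data, h):
    """Gaussian KDE: average of a Gaussian kernel placed on each point."""
    k = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(k((x - xi) / h) for xi in data) / (len(data) * h)

sample = [5, 7, 3, 9, 6]           # illustrative composite scores
print(z_scores(sample))            # [-0.5, 0.5, -1.5, 1.5, 0.0]
print(kde(6.0, sample, h=1.5))     # density estimate at x = 6
```

In practice a library routine (e.g., a prebuilt Gaussian KDE) with automatic bandwidth selection would typically be used instead of a hand-rolled kernel sum.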
5. Results and Analysis
This section outlines the results and analysis of the survey, i.e., step 6 of the survey and analysis methodology (
Figure 3). The survey questionnaire was designed to assess academics’ perception of three key aspects of the framework: the clarity of the framework’s functionality and purpose; the effectiveness of, and challenges in implementing, the monitoring process utilised in level 2; and the detection and mitigation of contract cheating using the proposed framework.
Table 2 illustrates the mapping of the question numbers in the questionnaire (
Appendix A) to the primary areas of investigation, indicating their respective usage in the analysis.
In this study, the split-half procedure yielded a correlation of 0.86, indicating a strong level of internal consistency. This high reliability suggests that the survey items consistently measure the same construct across both halves. Such consistency underscores the robustness of the survey design, ensuring reliable and stable measurement throughout.
5.1. Respondent Demographics
This section outlines the demographics of the academic respondents in terms of position, academic experience, and working campus; details are shown in
Table 3. The academics’ positions vary from Tutor to Professor, and the majority of respondents (59.4%) are Lecturers. Similarly, 75% of the academics have more than six years of academic experience. This suggests that the respondents have considerable experience in academia; they predominantly responded from the Melbourne campus. The sessional staff working at our institute bring valuable experience and insight from teaching at other universities, including on contract cheating matters.
5.2. Awareness
Prior to providing feedback on the proposed framework, respondents were asked about their familiarity with the concept of contract cheating, their comprehension of its implications, and any recent training they may have received on contract cheating, and the details are provided in
Table 4. A notable 56.3% of respondents have engaged in training or research on contract cheating within the last year, indicating an ongoing effort to stay updated on this issue. However, 21% have never participated in such training or research, which may affect their level of awareness.
Figure 5 and
Figure 6 demonstrate how the academics’ level of familiarity with contract cheating fluctuates under their training and experience, providing insights into their depth of understanding of the concepts involved.
As shown in
Figure 5, 6% of the academics who reported being expert practitioners and 59% of those who reported being quite familiar or familiar had been in the field for more than 6 years. In total, 65% had more than 5 years of experience in the field and are familiar with the concepts needed to provide feedback about the proposed framework.
Similarly,
Figure 6 illustrates that 3% of the academics who identified themselves as expert practitioners, 49% of those who described themselves as quite familiar, and 15% of those who described themselves as familiar had received training in the previous year. In total, 67% had undergone recent training on contract cheating, so it can be confidently stated that they possess the necessary familiarity with the concepts required to provide feedback about the proposed framework. These values indicate that around 65% of the surveyed academics had a high level of awareness of the issues.
Figure 7 depicts the extent to which these academics agree with the statement “Contract cheating has a negative impact on student learning”. Six per cent of expert practitioners, 53% of those who describe themselves as quite familiar, and 22% of those who are familiar agreed that contract cheating has a negative impact on student learning. The overwhelming majority, 81% of the respondents, expressed agreement regarding the detrimental effects of contract cheating. Therefore, there is a pressing need for a clear and implementable framework to mitigate and detect contract cheating, safeguarding both institutional reputation and student learning.
5.3. Academic Perception of Framework
In this section, we analyse academics’ perception of the proposed framework in three areas: clarity, monitoring, and implementation. The component scores of these questions (refer to
Table 2) are converted into the [−2, 2] range according to
Table 5. The primary (familiarity) and secondary (experience) weight of the [0.25, 1] range is used according to
Table 6 in the weighted composite analysis. The composite analysis, doubly weighted composite analysis, and Z-score analysis are performed using Equation (
1), Equation (
3), and Equation (
4), respectively.
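The scoring pipeline described above can be sketched as follows. This is a minimal illustration, not the authors’ implementation: the Likert-to-score mapping, the exact forms of Equations (1), (3), and (4), and the example weight values are assumptions for demonstration only.

```python
import statistics

# Hypothetical Likert-to-score mapping into the [-2, 2] range (cf. Table 5).
LIKERT = {"Strongly disagree": -2, "Disagree": -1, "Neutral": 0,
          "Agree": 1, "Strongly agree": 2}

def composite(responses):
    """Composite score: sum of mapped item scores (assumed form of Equation (1))."""
    return sum(LIKERT[r] for r in responses)

def weighted_composite(responses, w_familiarity, w_experience):
    """Doubly weighted composite (assumed form of Equation (3)).

    Both weights are taken from the [0.25, 1] range (cf. Table 6).
    """
    return w_familiarity * w_experience * composite(responses)

def z_scores(scores):
    """Z-score normalisation of a score vector (assumed form of Equation (4))."""
    mu = statistics.mean(scores)
    sigma = statistics.pstdev(scores)
    return [(s - mu) / sigma for s in scores]
```

For five items answered “Agree”, the composite score is 5; with familiarity and experience weights of 1.0 and 0.5, the doubly weighted composite is 2.5.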
5.3.1. Clarity of Framework
The responses from five questions (refer to Table 2) are used to analyse the clarity of the framework in the composite and weighted composite scenarios. These scores are calculated using Equations (1) and (3) and lie in the range [−10, 10].
Figure 8 presents the histogram and Kernel Density Estimation (KDE) of the composite scores for the clarity of the framework. Two of the thirty-two respondents have an overall negative sentiment about the clarity of the framework, with scores of −6 and −2, respectively. Six respondents perceived the framework’s clarity very positively, achieving the maximum possible composite score of 10. The mean composite score is 5.13, indicating a clearly positive sentiment towards the framework’s clarity.
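The KDE curves overlaid on these histograms rest on a Gaussian kernel density estimate, which can be sketched in a few lines. This is a self-contained illustration; the bandwidth choice and evaluation points are hypothetical, not taken from the survey data.

```python
import math

def gaussian_kde(samples, bandwidth, xs):
    """Evaluate a Gaussian kernel density estimate of `samples` at points `xs`.

    Each sample contributes a normal "bump" of width `bandwidth`; the density
    at x is the average of all bumps evaluated at x.
    """
    n = len(samples)

    def kernel(u):  # standard normal kernel
        return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

    return [sum(kernel((x - s) / bandwidth) for s in samples) / (n * bandwidth)
            for x in xs]
```

In practice a plotting library (e.g. seaborn or SciPy’s `gaussian_kde`) performs this estimation, including automatic bandwidth selection; the sketch above only shows the underlying computation.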
Figure 9 presents the histogram and KDE of the weighted composite scores for the clarity of the framework. For this plot, the responses are weighted (refer to Table 6) by the respondents’ familiarity and experience with contract cheating. Unlike the previous case, the minimum weighted composite score is −4.5, recorded by a single respondent. This respondent is concerned about the clarity and usability of the proposed framework; the other familiar and experienced respondents are either neutral or positive about it. The maximum score of 7 (out of a highest possible score of 10) indicates strong agreement with the framework. The mean weighted composite score is 3.31, well above the midpoint of the range, indicating a clearly positive sentiment toward the framework’s clarity.
Table 7 summarises the statistical parameters of the composite and weighted composite responses. The increase in the minimum and decrease in the maximum of the weighted scores indicate that highly experienced and familiar respondents have generally positive sentiments about the framework.
We further compared the Z-normalised scores to gain comparative insight into the sentiments of the respondents.
Figure 10 presents the distribution of the Z-score-normalised composite scores for the clarity of the framework in both scenarios. The approximate bell shape of the two curves and the overlap of their KDEs suggest that the number of data points is sufficient as per the Central Limit Theorem [53]. The overlap, with a slight rightward shift in the peak of the weighted curve, indicates that experienced and familiar respondents find the framework clear and agree with its approach.
From the analysis of all three curves, we can conclude with a sufficient level of confidence that there is a consensus on the clarity of the framework, although individual perceptions vary. The overall sentiment toward the framework is positive and not sensitive to the familiarity and experience of the respondents.
5.3.2. Monitoring Process in Framework
The responses from two questions (refer to Table 2) are used for the analysis of the monitoring process (level 2) of the framework; both the composite and weighted composite scores therefore lie in the range [−4, 4].
Figure 11 presents the histogram and KDE of the composite scores for the monitoring process of the framework. One of the thirty-two respondents has an overall negative sentiment about the monitoring process, with a score of −3, and two respondents are neutral. Seven respondents perceived the monitoring process very positively, achieving the maximum possible composite score of 4. The mean composite score is 2.13, indicating a clearly positive sentiment toward the framework’s monitoring process.
Figure 12 presents the histogram and KDE of the weighted composite scores for the monitoring process of the framework. Unlike the previous case, the minimum weighted composite score is −0.75, recorded by a single respondent. This respondent is concerned about the monitoring process; the other familiar and experienced respondents are neutral or positive about the framework. The maximum score of 3 (out of a highest possible score of 4) indicates strong agreement with the framework. The mean weighted composite score is 1.43, well above the midpoint of the range, indicating a clearly positive sentiment toward the framework’s monitoring process.
Table 8 summarises the statistical parameters of the composite and weighted composite responses. The increase in the minimum and decrease in the maximum of the weighted scores indicate that highly experienced and familiar respondents have generally positive sentiments about the framework. This sentiment is further supported by the quartile values (Q1, Q2, and Q3), each of which indicates the score below which the corresponding proportion of respondents falls.
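Quartile values of the kind reported in Table 8 can be computed for any score vector with Python’s standard library; the scores below are hypothetical examples, not the survey data.

```python
import statistics

# Hypothetical composite scores in the [-4, 4] monitoring range.
scores = [-3, 0, 0, 1, 2, 2, 3, 4, 4, 4]

# Q1, Q2 (the median), and Q3 via the inclusive (Tukey) method,
# which interpolates between the smallest and largest observations.
q1, q2, q3 = statistics.quantiles(scores, n=4, method="inclusive")
print(q1, q2, q3)  # 0.25 2.0 3.75
```

Q2 coincides with the median, and the spread between Q1 and Q3 gives the interquartile range, a robust summary of how dispersed respondents’ scores are.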
We further compared the Z-normalised scores to gain comparative insight into the sentiments of the respondents.
Figure 13 presents the distribution of the Z-score-normalised composite scores for the monitoring process of the framework in both scenarios. The overlap in the peaks of the composite and weighted curves indicates that weighting by experience and familiarity makes no clear difference.
From the analysis of all three curves, we can conclude with a sufficient level of confidence that there is a consensus on the monitoring process used in the framework, although individual perceptions vary. The overall sentiment toward the framework is positive and not sensitive to the familiarity and experience of the respondents.
Through thematic analysis of the responses to the feedback questions, it is evident that respondents consider staff and student awareness to be a crucial step in mitigating contract cheating. Therefore, the framework should include provisions for training both students and staff.
5.3.3. Implementation Challenges in Framework
The responses from three questions (refer to Table 2) are used for the analysis of the implementation of the framework; both the composite and weighted composite scores therefore lie in the range [−6, 6].
Figure 14 presents the histogram and KDE of the composite scores for the implementation of the framework. One of the thirty-two respondents has an overall negative sentiment about the implementation, with a score of −3, and three further respondents scored −1. Only one respondent perceived the implementation very positively, achieving the maximum possible composite score of 6; the remaining respondents were positive or neutral. The mean composite score is 1.81, indicating a positive sentiment toward the framework’s implementation, albeit with some reservations among respondents.
Figure 15 presents the histogram and KDE of the weighted composite scores for the implementation of the framework. Unlike the previous cases, the minimum weighted composite score is −2.25, recorded by a single respondent, with three further respondents also scoring negatively. These four respondents are concerned about the implementation of the framework; the other familiar and experienced respondents are either neutral or positive. The maximum score of 4.5 (out of a highest possible score of 6) indicates the strong agreement of one respondent with the framework. The mean weighted composite score is 1.15, above the midpoint of the range, indicating a positive sentiment toward the framework’s implementation.
Table 9 summarises the statistical parameters of the composite and weighted composite responses. The increase in the minimum and decrease in the maximum of the weighted scores indicate that highly experienced and familiar respondents have generally positive sentiments about the framework.
We further compared the Z-normalised scores to gain comparative insight into the sentiments of the respondents.
Figure 16 presents the distribution of the Z-score-normalised composite scores for the implementation of the framework in both scenarios. The overlap, with a slight leftward shift in the peak of the weighted curve, indicates that experienced and familiar respondents anticipate some challenges in implementing the framework.
From the analysis of all three curves, we can conclude with a sufficient level of confidence that there is a consensus on the implementation of the framework, although individual perceptions vary. The overall sentiment on the implementation is positive, but opinions are divided, and this sentiment is sensitive to the familiarity and experience of the respondents.
“Implementing this framework is a substantial undertaking”. This is the general sentiment echoed by the respondents, who expressed concern about the timelines and the increased workload arising from the new processes involved. Through thematic analysis of the responses to the feedback questions, we identified three key points. Some respondents highlighted the need for clear instructions and guidelines, which are essential for smooth integration of the framework into the existing system. Others emphasised the importance of automated monitoring at level 2 to ensure its effectiveness. Respondents also indicated that implementing the framework will require proper training for both staff and students to adopt the changes. These points are reflected in the recommendations listed below.
5.4. Recommendations
After a thorough analysis of the results, the following precautionary measures are recommended for the institute to consider during the adoption of the framework:
Create clear guidelines and instructions outlining the implementation steps and processes for smooth integration of the framework into the existing system;
Implement process automation for monitoring at level 2 within the learning management system and academic management system to streamline operations and reduce the workload of academic staff;
Develop a dashboard to facilitate data analysis across all three databases, simplify workflows, and reduce the workload;
Conduct a pilot implementation in selected subjects before full deployment of the framework across the entire institution, allowing for refinement of processes and procedures used in the framework.
6. Conclusions and Limitations
The survey analysis provides a comprehensive evaluation of the proposed framework in three critical areas: clarity, the monitoring process, and implementation. The findings indicate a consensus on the clarity and monitoring process of the framework, with respondents expressing a high level of confidence in these components. Overall, the sentiment toward the framework is positive and largely unaffected by the respondents’ familiarity and experience. Although the overall sentiment about the implementation of the framework is positive, some respondents, particularly those with more experience, expressed concerns about certain aspects of the implementation, notably the timelines and the additional workload arising from the new processes involved. Furthermore, respondents highlighted the need for precise instructions and guidelines. Therefore, to ensure the successful adoption of the framework, several key actions are recommended. First, the institution should design and implement clear instructions and guidelines to support its effective integration. Second, an instructor dashboard should be developed to automate the monitoring process; this tool will help reduce administrative burdens and staff workload, addressing one of the concerns raised by the respondents. Finally, an incremental approach with pilot implementations should be adopted to address potential challenges and refine processes. Additionally, the framework’s monitoring phase aims to discourage students from relying on AI-enabled tools to generate answers, while the instructor dashboard provides a snapshot of student progress and identifies students who need educational interventions to prevent contract cheating.
This study has several limitations. First, the framework was evaluated at a single academic institution with a limited number of academic staff. Second, the response rate was low, although the authors made every effort to encourage academic staff to participate in the survey. More research is therefore required to explore the framework’s applicability across a broader range of higher education contexts. Another limitation is that the findings are based on descriptive rather than inferential statistics, as the study aims only to identify patterns and trends. Lastly, level 3 of the framework was not assessed in this study, as the survey participants were exclusively academic staff. The next phase of research will involve gathering insights from academic managers, committee members, and administrative staff to comprehensively evaluate the framework’s implementation phase.