Article

Building Academic Integrity: Evaluating the Effectiveness of a New Framework to Address and Prevent Contract Cheating

by Deepani B. Guruge *, Rajan Kadel, Samar Shailendra and Aakanksha Sharma
Melbourne Institute of Technology (MIT), School of IT and Engineering (SITE), Melbourne, VIC 3000, Australia
* Author to whom correspondence should be addressed.
Societies 2025, 15(1), 11; https://doi.org/10.3390/soc15010011
Submission received: 4 November 2024 / Revised: 1 January 2025 / Accepted: 9 January 2025 / Published: 14 January 2025

Abstract

Academic integrity is a cornerstone of education systems, yet the rise of contract cheating poses significant challenges for higher education institutions. Current approaches to managing contract cheating often lack the comprehensive structure needed to address the complexities of modern cheating methods. The primary objective of this study is to investigate the effectiveness of the proposed Three-Tier Framework (TTF), designed in our previous study to combat contract cheating. The proposed framework comprises three tiers: awareness, monitoring, and evaluation. It engages stakeholders within the system and encourages a proactive and collaborative stance against contract cheating while reinforcing a culture of academic honesty. The evaluation focuses on three key aspects: the clarity of the framework’s functions and objectives, the potential challenges in implementing the proposed monitoring process, and the perceived limitations in detecting and mitigating contract cheating through this framework. Supervised and unsupervised assignments are considered, excluding the option of e-proctoring, as some students encountered difficulties setting up the necessary tools and software for online exams. Survey results reveal a broad consensus among respondents, who expressed strong confidence in the clarity and effectiveness of the framework and its monitoring procedures. These positive perceptions were consistent across respondents, regardless of their prior experience or familiarity with contract cheating. Although the overall feedback was positive, concerns were raised regarding implementing the framework in current educational settings. Specific challenges cited include tight timelines and the increased workload associated with the new procedures, emphasising a need for additional guidance, training, and institutional support to ensure effective adoption. The proposed framework incorporates an instructor dashboard designed to streamline academic workflow and simplify the monitoring process introduced in this framework. The survey results confirm that the framework can be adopted to address the unique needs of academics and diverse educational environments; however, further research is needed to explore its applicability across the broader higher education community.

1. Introduction

Contract cheating is a form of academic misconduct in which students outsource their assignments to a third party and submit them as their own work. Cheating services can be categorised based on how these assignments are outsourced: commercial, non-commercial, or Artificial Intelligence (AI)-enabled. Commercial contract cheating occurs when a student hires a third party to complete their work on a fee-for-service basis [1]. Non-commercial contract cheating occurs when a student’s friends or family complete the work without financial compensation [2,3,4,5]. AI-enabled contract cheating involves the use of AI tools without proper authorisation or through inappropriate or undisclosed use. Content created by such methods is not considered original work, as it fails to demonstrate the student’s understanding, learning, or proper acknowledgment. This form of contract cheating has emerged in the university education system with the widespread availability of Generative Artificial Intelligence (GenAI) tools and technologies to the public [6,7,8].
Multiple factors contribute to students’ participation in contract cheating. Some of these include a lack of understanding of assignment criteria or academic integrity requirements; not receiving constructive, meaningful, and timely feedback from academics; a lack of confidence in academic writing; and the pressure to receive good grades [1,2,9,10,11,12,13]. The increasing number of students involved in contract cheating creates a growing threat to the legitimacy and integrity of the qualifications they attain [14,15,16,17]. Although student submissions are often passed through plagiarism detection software, some instances of plagiarism remain undetected [18,19,20]. Moreover, authorship detection software is still under development and is not infallible, requiring some level of interpretation [21,22,23]. The motivation for building a framework stems from our personal experience as academics in the Australian higher education sector and the significant prevalence of contract cheating among students [24]. In our previous research [25], we investigated the underlying factors driving students to engage in contract cheating, which led to the creation of a Three-Tier Framework (TTF) designed to address and mitigate this issue. The framework considers all stakeholders, emphasising that each party must recognise their roles and responsibilities and work collaboratively to review and improve the relevant processes, policies, and procedures involved [14]. Both supervised and unsupervised assignment submissions are included in the framework; however, the e-proctoring option is excluded due to the challenges students face in setting up the necessary platforms to join online exams. Furthermore, e-proctoring raises significant privacy concerns, as it intrudes on the test-taker’s privacy during exams, which in turn adversely affects students’ psychological well-being [18,26,27]. For detailed information on the development and structure of the framework, refer to [25]. A concise overview of the proposed framework is provided in Section 3. The primary objective of this research is to examine academics’ perceptions of the proposed TTF before its implementation.
The key contributions of this research work are as follows:
  • Assess academics’ perceptions of the clarity and functionality of the proposed framework.
  • Evaluate the effectiveness of the proposed framework and identify potential challenges associated with implementing the monitoring process.
  • Understand academics’ perspectives on the limitations and enablers in detecting and mitigating contract cheating through the proposed framework.
The outline of the paper is as follows. A brief review of the related literature is presented in Section 2, followed by a brief description of the proposed framework in Section 3. A detailed description of the methodology used to collect and analyse the survey data is presented in Section 4. The results and analysis are presented in Section 5. Section 6 summarises the main findings of the research, along with limitations and recommendations.

2. Literature Review

Various studies have explored the challenges associated with contract cheating; however, the literature still lacks a comprehensive framework to address this complex problem effectively. A recent review on contract cheating in higher education by Ahsan et al. [28] highlights the key factors contributing to contract cheating and proposes a conceptual framework to address contract cheating within the higher education context [29]. However, the review does not provide a clear implementation strategy for applying the framework in higher-educational settings. Furthermore, many universities in Australia have classified the unauthorised use of GenAI as a form of contract cheating. This includes cases where GenAI-generated text is used without proper acknowledgment in assessments where GenAI is allowed, as well as instances where students use GenAI despite it being restricted in the assessment [29]. A framework for AI integration in nephrology academic writing and peer review is introduced in [30]. In this framework, the authors consider several components, including the use of AI in general and its role in the peer-review process, ethical issues, and the impact and role of AI. Additionally, the framework outlines each component’s objectives, action items, stakeholders, and evaluation criteria. However, this work is specific to a single field and is not generic enough to be adopted in educational settings. Therefore, there is a pressing need for a new framework that considers the factors involved in both traditional contract cheating and contract cheating introduced by GenAI. The framework we propose considers both types of contract cheating and also emphasises the practical implementation aspects of the framework in higher-educational settings.
Furthermore, Ellis et al. [31] proposed a framework to address and respond to student cheating using the educational integrity enforcement pyramid. This framework is grounded in the theory of responsive regulation and assessment security, providing a theoretical foundation to identify gaps in institutional strategies to combat academic misconduct. While the framework offers valuable insights, it lacks detailed guidance on the practical application of these strategies. Numerous educational frameworks have been proposed in the literature to enhance strategies for upholding academic integrity, offering valuable theoretical insights and conceptual guidelines. However, they often fall short in offering clear and actionable steps for practical implementation. For example, the authors in refs. [12,32] proposed a conceptual framework for authentic design aimed at achieving teaching and learning goals for students. The framework considers cognitive and instructional validity, and its effectiveness was evaluated through a survey. Brauns et al. [33] emphasised the need for an inclusive approach to science education and proposed a framework to ensure interaction and learning for all students. They used qualitative measures, such as the foundation, reproducibility, reliability, transferability, relevance, and ethics of the framework, to validate it. The authors in [34,35] presented various frameworks for e-learning, along with methods for their validation. Similarly, in [36], the authors argue that a shift has been observed in the global education paradigm from educator-oriented to student-centric, and they further propose a framework to assess and validate teaching competencies and effectiveness. They evaluated their framework by analysing survey feedback from domain experts and by using confirmatory factor analysis. A review of the validation of educational assessments for educators and researchers is presented in [37]. The paper addresses the issue of prioritisation by identifying four key inferences in an assessment activity.
Within the realm of education, frameworks have been proposed to prevent and mitigate instances of academic misconduct [38,39,40,41,42,43]. The study by Baird et al. [9] highlighted the importance of preventing contract cheating by removing the opportunities for it and understanding how the crime is committed. This study focused only on a business capstone unit. Recent studies have explored innovative strategies aligned with the identified gaps in current approaches to mitigate contract cheating. In [38], an ethical framework is introduced to maintain academic integrity considering eight ethical aspects: (1) everyday ethics, (2) institutional ethics, (3) ethical leadership, (4) professional and collegial ethics, (5) instructional ethics, (6) students’ academic conduct, (7) research integrity and ethics, and (8) publication ethics. This framework looks only at ethical aspects to maintain academic integrity. Our previous research study [25] identified significant gaps, challenges, and problems in existing methods, such as the absence of tools, data, evidence, and knowledge to analyse patterns and behaviours in students’ performance across assignments or units. Additionally, there was a lack of a comprehensive strategy or procedure to address these issues. Another critical shortcoming identified was the absence of dedicated units or integrity offices within institutions to manage and mitigate academic misconduct systematically. Subsequently, a conceptual framework was formulated that integrates and addresses the identified gaps. A brief description of the framework is provided next.
In the literature, various frameworks have been introduced to adopt GenAI and to combat GenAI misuse in educational settings. In [39], the authors introduce an AI policy education framework to address the multifaceted implications of AI integration in university teaching and learning. In [40], the authors highlight the academic misconduct introduced by GenAI and discuss the relevance of existing crime prevention frameworks for addressing GenAI-facilitated academic misconduct. A framework for GenAI adoption in education is introduced in [44]. In [45], a text-mining and scenario modelling framework is introduced for predicting threats to academic integrity. Despite these advancements, a critical gap persists in the absence of standardised frameworks for evaluating the effectiveness of contract cheating mitigation strategies.
In response to these gaps, our framework integrates both adversarial and cooperative approaches while offering specific implementation steps, strategies to engage most of the stakeholders, and mechanisms to monitor the effectiveness of these interventions. The monitoring phase of the framework is designed to discourage students from using AI-enabled tools to generate answers. In addition, instructors are supported with snapshots of students’ progress in the subject through a dashboard.

Research Questions

In this research, we aim to explore academics’ perspectives on the proposed TTF for addressing contract cheating. By examining their insights, the research seeks to evaluate the framework’s clarity, functionality, and effectiveness while identifying potential implementation challenges and opportunities for improving the functionality of the framework. The following research questions guide this investigation:
  • What are academics’ perceptions of the clarity and functionality of the proposed TTF in addressing contract cheating?
  • What are academics’ perceptions of the monitoring process within the proposed TTF for addressing contract cheating?
  • What are academics’ perceptions of the implementation process of the proposed TTF in addressing contract cheating?

3. Three-Tier Framework (TTF) to Combat Contract Cheating

The paper expands the research analysis to evaluate the TTF introduced in [25] to address and mitigate contract cheating in academic institutions. The proposed framework is designed with an emphasis on the institutional contexts of the issue to standardise the response process and foster a consistent approach for detecting and mitigating contract cheating. Figure 1 provides a high-level view of the framework, while Figure 2 presents a detailed breakdown of the proposed TTF. The proposed framework comprises three interconnected tiers: awareness, monitoring, and evaluation, each serving a specific role in addressing contract cheating. Figure 1 illustrates the framework’s circular and iterative structure, with each level contributing to a continuous cycle of improvement. By revisiting and revising each stage based on evaluation insights, the framework fosters a sustainable approach to mitigating contract cheating.
Overall, the proposed framework incorporates three key features that demonstrate its adaptability to future changes, including the integration of GenAI and other technologies.
  • The framework establishes a foundational level of awareness of contract cheating among all stakeholders (level 1), ensuring it remains flexible and responsive to emerging challenges in academic integrity. This flexibility enables the framework to adapt to GenAI and other future technological advancements.
  • The emphasis on process over product, through the framework’s monitoring process (level 2), is designed to address both traditional contract cheating and academic integrity challenges posed by GenAI technologies in an adaptable manner [46,47].
  • The adaptive framework enables continuous updates to policies and processes (level 3) to address changes, including the emergence of technologies such as GenAI.
At the awareness level, both student awareness and staff proficiency in recognising contract cheating are enhanced through various targeted activities. The monitoring phase records students’ assignment-related activities using a Pre-Defined Template (PDT), allowing for the systematic recording and analysis of data on potential contract cheating instances across three dedicated databases.
The final stage involves regularly reviewing and updating institutional policies, procedures, and services related to preventing contract cheating. An Academic Integrity Office (AIO) assesses instances recorded in the monitoring database at the course level. At the same time, unit coordinators conduct unit-level evaluations of each student’s performance across the units they are enrolled in within the degree program. Insights and findings from this level feed back into level 1, creating a continuous improvement loop that reinforces awareness and enhances the overall effectiveness of the framework.

3.1. Level 1—Awareness

Awareness activities in the first tier of the framework will be conducted for students and staff before the semester starts, or during orientation or induction, aiming to improve awareness of contract cheating, including both traditional contract cheating and GenAI-enabled contract cheating [19,48,49]. Some key topics are highlighted below; the list is not exhaustive.
  • Academic integrity, including all forms of contract cheating;
  • Penalties associated with academic breaches, including those related to contract cheating;
  • Information on the proper and ethical use of GenAI tools and technologies in education and assessments;
  • Details of student support services and facilities in the Centre of Learning (CoL);
  • Special consideration application process.
In parallel with student-awareness programs, training for teaching staff can be conducted on:
  • Trends, patterns, and irregularities in potential contract cheating cases, with examples;
  • Assessment design and evaluation techniques that can mitigate contract cheating, including GenAI-associated contract cheating;
  • Institution policies (provided as printed copies) and a simple flowchart of the procedure to follow if contract cheating is detected.

3.2. Level 2—Monitor

The initiatives within the second tier of the proposed framework are designed to span the entire semester, focusing on monitoring student progress consistently. The primary objective is to ensure continuous intervention when necessary, fostering a proactive approach to maintaining academic integrity. To facilitate these operations, three documents, including two centralised databases, have been introduced as part of the framework’s implementation at this level. The three documents are listed below. In addition to these tools, level 2 of the framework incorporates strategies to actively engage stakeholders, such as students, academics, and administrative staff, ensuring a collaborative approach to upholding academic integrity. This phase is specifically designed to discourage students from relying on GenAI tools, and instructors are encouraged to use scaffolding techniques when designing assignments to support this process. To further support academics, the framework provides instructors with a user-friendly dashboard offering real-time snapshots of students’ progress in the subject. These snapshots include detailed insights into the grouping in Step 2 and progress tracking in Steps 3 and 4, enabling instructors to make informed decisions and provide feedback.
  • Monitoring database: a centralised shareable database updated by academics for recording assignment marks, progress of assignment monitoring, assessor input, and suspected cheating;
  • Academic Integrity Breach Database: to record confirmed academic breaches by AIO;
  • Software analysis reports: reports from authorship investigation or similar software within the Learning Management System, uploaded by specialised administrative staff at the AIO.

3.2.1. Monitoring Database: Pre-Defined Template (PDT)

The PDT is designed to monitor student work and gather evidence before making any allegations of academic dishonesty against students. This PDT template can be customised according to the specific context of the course or application. At Melbourne Institute of Technology (MIT), for example, students are required to complete two supervised assessments, an in-class test (ICT) and a final exam, and two unsupervised assessments, a formative assessment (FA) and Assignment 2 (A2). In addition, class participation and contributions are also evaluated. The PDT ensures that there is a systematic, objective approach to monitoring academic performance, helping to maintain integrity and fairness in the evaluation process. A sample PDT is shown in Table 1.

3.2.2. Other Documents

The academic integrity breach database is usually maintained by academic institutes according to their policy. Software analysis reports, however, need to be produced by the IT team on demand during an investigation.

3.2.3. Monitoring Steps

A concise overview of the steps outlined in level 2 of the framework (refer to Figure 2) is provided below.
  • STEP 1: The tutor records student marks for the FA (unsupervised) and the ICT (supervised) in the PDT and compares them to identify irregularities. Observations are recorded in the OIS1 column (“Y” or “N”), with remarks in the comments column.
  • STEP 2: A2 progress will be demonstrated in class over three weeks (weeks 6, 8, and 10) and recorded on a scale of 1 (lowest) to 10 (highest) in the PDT. The students will then be assigned to either Group A or Group B based on the following rules (a minimal sketch of this grouping logic is given after this list):
    Group A—Successful—if average A2 progress is greater than 5 AND no irregularities are observed in OIS1 AND regular class participation;
    Group B—Unsuccessful—if average A2 progress is less than or equal to 5 OR irregularities are observed in OIS1 OR irregular class participation.
  • STEP 3: The tutor evaluates A2, taking into account the grouping from Step 2, and records the observation in OIS2 with comments. Any suspicious cases are reported to the Unit Coordinator (UC).
  • STEP 4: The final exam and class participation are evaluated using the information in the PDT database. Any suspicious cases are reported to the UC.
  • STEP 5: The UC performs data analysis at the unit level using data in the PDT. Any suspicious cases are reported to the AIO.
  • STEP 6: The AIO conducts course-wide data analysis for each student using data in the monitoring database for all units, the academic integrity breach database, and the authorship investigation report. Interviews are conducted for suspicious cases.
  • STEP 7: The AIO applies a penalty for confirmed cases using the analysis report and the interview results, according to the policy.
  • STEP 8: The AIO decides and applies the penalty according to the policy after analysing the data in the three databases, course-level data, and assessor reports on the centralised databases, as well as reports from software tools. Finally, the academic integrity breach database is updated.
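To make the monitoring steps more concrete, the sketch below (Python) illustrates how the Step 2 grouping rule could be applied to a PDT-style record. The field names, the example record, and the participation flag are illustrative assumptions; the actual PDT columns are those defined in Table 1.

```python
# Illustrative sketch of the Step 2 grouping rule (Group A / Group B).
# Field names and the example record are assumptions for illustration only;
# the actual PDT columns are defined in Table 1.

def assign_group(record: dict) -> str:
    """Assign a student to Group A (successful) or Group B (unsuccessful)."""
    avg_progress = sum(record["a2_progress"]) / len(record["a2_progress"])
    no_irregularities = record["ois1"] == "N"      # "Y"/"N" flag recorded in Step 1
    regular_participation = record["regular_participation"]

    # Group A: average A2 progress > 5 AND no OIS1 irregularities AND regular participation;
    # otherwise Group B (progress <= 5 OR irregularities OR irregular participation).
    if avg_progress > 5 and no_irregularities and regular_participation:
        return "A"
    return "B"

# Hypothetical PDT-style record for one student.
student = {
    "student_id": "s001",
    "fa_mark": 14,                     # formative assessment (unsupervised)
    "ict_mark": 9,                     # in-class test (supervised)
    "ois1": "N",                       # irregularity flag from Step 1
    "a2_progress": [6, 7, 8],          # weeks 6, 8, and 10, each on a 1-10 scale
    "regular_participation": True,
}

print(assign_group(student))  # -> "A"
```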

3.3. Level 3—Evaluation

The evaluation stage involves consistently evaluating and revising institutional policies, procedures, and services concerning contract cheating, including GenAI-associated cases. The findings from the evaluation are fed back to level 1, the awareness level, to update the awareness activities and ensure that they remain aligned with industry and technology trends.

4. Survey and Analysis Methodology

This study employed a survey questionnaire as a research instrument. Figure 3 illustrates the procedural sequence followed in conducting the research.
1. Setting Objectives: This survey questionnaire was developed to evaluate the TTF designed and published in our previous research [25]. However, this survey evaluates only the first two levels of the framework, as it was distributed among all academics teaching at both the Melbourne and Sydney campuses of MIT.
2. Ethical Approval: Before starting the research, the study proposal was reviewed and evaluated by the ethics committee at MIT and this study was approved by the MIT Research Ethics Committee.
3. Pre-Survey Information Session: Preceding the survey, a presentation was given to the participants illustrating details about the underlying research and the purpose of the survey. Participants were allowed to ask questions during the presentation.
4. Design of Data Collection Instrument: After gaining an understanding of the objectives, the questionnaire was designed to contain two main sections. The initial section aimed to collect demographic information from the academics taking part in the study, namely their current position, academic experience, working campus, and their level of understanding of the concept of contract cheating in higher education settings. The objective of the second section of the survey was to obtain the academics’ opinions on the suitability of the proposed framework to combat contract cheating within higher-educational settings. The survey utilised open-ended and closed-ended survey questions (refer to Appendix A for detailed information) to gain the participants’ insights on the designed framework and their understanding of the topic. The questionnaire provided in Appendix A was developed using established guidelines and standard processes [50,51], combined with insights from educational experiences. This development was followed by reliability testing to ensure its consistency and validity. We performed split-half reliability testing to check internal consistency: the survey questions were divided into two halves, and the correlation between the scores from each half was calculated (a minimal sketch of this check is given after this list). Some of the closed-ended survey questions were rated along a band of agreement based on 5-point Likert-type scales, ranging from ‘Strongly Disagree’ to ‘Strongly Agree’. Informed consent was obtained from participants before starting the data collection.
5. Data Collection: The survey was distributed to 151 academics, including Tutors, Lecturers, Senior Lecturers, Associate Professors, and Professors, teaching at an institute in Australia. A Google form containing the questionnaire and a link to the recorded pre-survey information session was emailed to the participants, and 32 participants out of 151 (roughly 21.2%) responded to the survey.
We minimised sampling bias by distributing the survey to all academic staff across various levels and employment types, ensuring an inclusive approach. The survey was sent to Professors, Senior Lecturers, Lecturers, and Tutors, both casual and full-time. This method allowed us to collect responses from various perspectives within the academic community. As shown in Figure 4, 59.4% of the participants were Lecturers, 28% were Senior Lecturers or higher, and 12.5% were working as Tutors. These proportions indicate participation across the academic hierarchy, reflecting diverse viewpoints and experiences. Although the representation of roles aligns with our goals, we acknowledge a potential limitation: non-response bias, which we were unable to fully mitigate despite broad outreach efforts.
6. Data Analysis and Interpretation: The data analysis starts with a demographic analysis of the respondents, followed by an overall analysis of academics’ sentiment towards contract cheating. Finally, the analysis of academics’ perception of the framework (in three areas: clarity, monitoring, and implementation aspects) is performed using composite, weighted composite, and Z-score analysis. This quantitative analysis is based on the concepts described in Section 4.1 and Section 4.2, whereas the open-ended questions are analysed using thematic analysis, which involves identifying, analysing, and interpreting themes in three areas (clarity, monitoring process, and implementation of the framework) within those answers [52].
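As a rough illustration of the split-half reliability check mentioned in step 4, the sketch below splits the Likert items into two halves (an odd/even split is assumed here) and correlates the per-respondent half scores. The data are synthetic placeholders, not the actual survey responses.

```python
import numpy as np

# Illustrative split-half reliability check (assumed odd/even item split;
# synthetic data stand in for the actual survey responses).
rng = np.random.default_rng(0)
n_respondents, n_items = 32, 10
responses = rng.integers(1, 6, size=(n_respondents, n_items))   # 5-point Likert codes 1..5

half1 = responses[:, 0::2].sum(axis=1)    # per-respondent score on odd-numbered items
half2 = responses[:, 1::2].sum(axis=1)    # per-respondent score on even-numbered items

r = np.corrcoef(half1, half2)[0, 1]       # split-half correlation between the two halves
print(round(r, 2))
```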

4.1. Composite and Weighted Composite Analysis

A composite score (CS) is calculated by taking the sum of each component score. The formula can be represented as in Equation (1)
CS = \sum_{i=1}^{n} \text{Component Score}_i
where n is the number of components and Component Score_i is the score of the i-th component. A weighted composite score (WCS) is calculated by taking the sum of each component score multiplied by its respective weight. The formula can be written as in Equation (2)
WCS = \sum_{i=1}^{n} \left( \text{Component Score}_i \times \text{Weight}_i \right)
where Weight_i is the weight assigned to the i-th component.
The doubly weighted composite score (DWCS) involves adding an extra level of weighting to the WCS. It is essentially the weighted sum of the WCS. The formula for the DWCS can be written as Equation (3):
DWCS = \sum_{i=1}^{n} \left( \text{WCS}_i \times \text{Secondary Weight}_i \right).
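As a minimal sketch of Equations (1)-(3), the code below computes the composite, weighted composite, and doubly weighted composite scores for one respondent. The component scores and the weight values are illustrative assumptions only; the actual score mapping and weights used in the study are given in Tables 5 and 6.

```python
from typing import Sequence

def composite_score(components: Sequence[float]) -> float:
    """Equation (1): sum of the component scores."""
    return sum(components)

def weighted_composite_score(components: Sequence[float],
                             weights: Sequence[float]) -> float:
    """Equation (2): sum of each component score multiplied by its weight."""
    return sum(c * w for c, w in zip(components, weights))

def doubly_weighted_composite_score(wcs_values: Sequence[float],
                                    secondary_weights: Sequence[float]) -> float:
    """Equation (3): weighted sum of weighted composite scores."""
    return sum(v * w for v, w in zip(wcs_values, secondary_weights))

# Illustrative values only: five item scores already mapped to the [-2, 2] range,
# with an assumed familiarity (primary) weight and experience (secondary) weight.
components = [2, 1, 0, 2, -1]
primary_weight = 0.75
secondary_weight = 1.0

cs = composite_score(components)
wcs = weighted_composite_score(components, [primary_weight] * len(components))
dwcs = doubly_weighted_composite_score([wcs], [secondary_weight])
print(cs, wcs, dwcs)   # 4 3.0 3.0
```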

4.2. Normalised Z-Score Analysis

The Z-score, also known as the standard score, is a statistical measure that describes the position of a raw score in terms of its distance from the mean, measured in units of the standard deviation. The Z-score helps in detecting any outliers in the data and allows the comparison of different groups of data even if they are on different scales. The Z-score is calculated as in Equation (4):
Z = \frac{X - \mu}{\sigma}
where X is the composite score or weighted composite score, μ is the mean of all the composite scores or weighted composite scores (Equation (5)), σ is the standard deviation of the composite score or weighted composite score (Equation (6)) and N is the number of respondents.
\mu = \frac{1}{N} \sum_{i=1}^{N} X_i
\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( X_i - \mu \right)^2}
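A minimal sketch (Python, using NumPy) of the Z-score normalisation in Equations (4)-(6), applied to a small set of synthetic composite scores:

```python
import numpy as np

def z_normalise(scores):
    """Standardise scores using the population mean and standard deviation (Equations (4)-(6))."""
    x = np.asarray(scores, dtype=float)
    mu = x.mean()            # Equation (5): mean of the composite scores
    sigma = x.std()          # Equation (6): population standard deviation (divides by N)
    return (x - mu) / sigma  # Equation (4)

print(z_normalise([5, -2, 7, 10, 3]).round(2))   # synthetic composite scores
```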
We also obtained the Kernel Density Estimate (KDE) of all the samples received. We used it to estimate a continuous probability density function for the sample responses received. The KDE works by placing a Gaussian kernel on each point in the data set; adding up all these individual kernels results in a smooth estimate of the overall distribution:
\hat{f}(x) = \frac{1}{n} \sum_{i=1}^{n} K_h(x - X_i)
K_h(x - X_i) = \frac{1}{\sqrt{2 \pi h^2}} \exp\left( -\frac{(x - X_i)^2}{2 h^2} \right)
where:
\hat{f}(x) is the estimate of the probability density function at point x;
K_h(\cdot) is the (Gaussian) kernel function;
n is the number of data points;
X_i represents the individual data points;
h is the bandwidth (a parameter that controls the width of the kernel).
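The sketch below evaluates the Gaussian KDE defined above on a grid of points. The synthetic scores and the bandwidth h are illustrative assumptions; in practice a rule-of-thumb or library-selected bandwidth would be used.

```python
import numpy as np

def gaussian_kde(samples, grid, h=1.0):
    """Estimate f(x) on a grid by averaging Gaussian kernels centred on each sample."""
    samples = np.asarray(samples, dtype=float)[:, None]    # shape (n, 1)
    grid = np.asarray(grid, dtype=float)[None, :]          # shape (1, m)
    kernels = np.exp(-((grid - samples) ** 2) / (2 * h ** 2)) / np.sqrt(2 * np.pi * h ** 2)
    return kernels.mean(axis=0)                            # (1/n) * sum of the n kernels

scores = [5, -2, 7, 10, 3, 4, 6]            # synthetic composite scores
grid = np.linspace(-15, 15, 301)
density = gaussian_kde(scores, grid, h=1.5)
print(round(float(density.sum() * (grid[1] - grid[0])), 3))   # ~1.0, as expected for a density
```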

5. Results and Analysis

This section outlines the results and analysis of the survey, i.e., step 6 of the survey and analysis methodology (Figure 3). The survey questionnaire was designed to assess academics’ perception of three key aspects of the framework: the clarity of the framework’s functionality and purpose; the effectiveness of, and challenges in implementing, the monitoring process used in level 2; and detecting and mitigating contract cheating using the proposed framework. Table 2 illustrates the mapping of the question numbers in the questionnaire (Appendix A) to the primary areas of investigation, indicating their respective usage in the analysis.
In this study, the split-half reliability process yielded a correlation of 0.86, indicating a strong level of internal consistency. This high reliability suggests that the survey items consistently measure the same construct across both halves. Such consistency underscores the robustness of the survey design, ensuring reliable and stable measurement throughout.

5.1. Respondent Demographics

This section outlines the demographics of the academic respondents in terms of position, academic experience, and working campus; details are shown in Table 3. The academics’ positions vary from Tutor to Professor, and the majority of respondents, i.e., 59.4%, are Lecturers. Similarly, 75% of the academics have more than six years of academic experience. This suggests that the respondents have considerable experience in academia; they predominantly responded from the Melbourne campus. The sessional staff working at our institute bring valuable experience and insight from teaching at other universities, including on contract cheating matters.

5.2. Awareness

Prior to providing feedback on the proposed framework, respondents were asked about their familiarity with the concept of contract cheating, their comprehension of its implications, and any recent training they may have received on contract cheating; the details are provided in Table 4. A notable 56.3% of respondents have engaged in training or research on contract cheating within the last year. This indicates an ongoing effort to stay updated on this issue, although 21% have never participated in such training or research, which may affect the awareness level of those academics.
Figure 5 and Figure 6 demonstrate how the academics’ level of familiarity with contract cheating varies with their training and experience, providing insights into their depth of understanding of the concepts involved.
As shown in Figure 5, 6% of the academics who reported being expert practitioners and 59% of those who reported being quite familiar or familiar had been in the field for more than 6 years. In total, 65% of them had more than 5 years of experience in the field, and they are also familiar with the concepts needed to provide feedback about the proposed framework.
Similarly, Figure 6 illustrates that 3% of the academics who identified themselves as expert practitioners, 49% of those who described themselves as quite familiar, and 15% of those who described themselves as familiar had received training in the previous year. In total, 67% of them had undergone recent training on contract cheating, so it can be confidently stated that they possess the necessary familiarity with the concepts required to provide feedback about the proposed framework. These values indicate that around 65% of the surveyed academics had a high level of awareness of these issues.
Figure 7 depicts the extent to which these academics agree with the statement “Contract cheating has a negative impact on student learning”. Six per cent of expert practitioners, 53% of those who describe themselves as quite familiar, and 22% of those who are familiar agreed that contract cheating has a negative impact on student learning. The overwhelming majority, 81% of the respondents, express agreement regarding the detrimental effects of contract cheating. Therefore, there is a pressing need for a clear and implementable framework to mitigate and detect contract cheating, safeguarding both institutional reputation and student learning.

5.3. Academic Perception of Framework

In this section, we analyse academics’ perception of the proposed framework in three areas: clarity, monitoring, and implementation of the framework. The component scores of these questions (refer to Table 2) are converted into the [−2, 2] range according to Table 5. Primary (familiarity) and secondary (experience) weights in the [0.25, 1] range are used according to Table 6 in the weighted composite analysis. The composite analysis, doubly weighted composite analysis, and Z-score analysis are performed using Equation (1), Equation (3), and Equation (4), respectively.

5.3.1. Clarity of Framework

The responses from five questions (refer to Table 2) are used for the analysis of the clarity of the framework in both the composite and weighted composite scenarios. These scores are calculated using Equations (1) and (3), respectively, and lie in the range [−10, 10]. Figure 8 presents the histogram and Kernel Density Estimation (KDE) of the composite scores for the clarity of the framework. Two of the respondents (out of thirty-two) have an overall negative sentiment about the clarity of the framework, with scores of −6 and −2, respectively. Six respondents perceived the framework’s clarity very positively, with the maximum possible composite score of 10. The mean composite score is 5.13, indicating a good positive sentiment towards the framework’s clarity.
Figure 9 presents the histogram and KDE of weighted composite scores for clarity of the framework. For this plot, the user responses are weighted (refer to Table 6) by their respective familiarity and experience with contract cheating. Unlike the previous case, the minimum weighted composite score is −4.5, for only one respondent. That means this one respondent is concerned about the clarity and usability of the proposed framework; however, the other familiar and experienced respondents are either neutral or positive about the framework. The maximum score of 7 (out of the highest possible score of 10) indicates the strong agreement of these respondents with the framework. The mean weighted composite score is 3.31, indicating a good positive sentiment toward the framework’s clarity, which is significantly above the midpoint of the range. Table 7 summarises the statistical parameters of composite and weighted composite responses. The increase/decrease in minimum/maximum weighted score indicates that highly experienced and familiar respondents have generally positive sentiments about the framework.
We further compared the Z-normalised scores to gain comparative insights into the sentiments of the respondents. Figure 10 presents the distribution of Z-score-normalised composite scores for the clarity of the framework for both scenarios. The approximate bell shape of the two curves and the overlap of the KDEs indicate the sufficiency of the number of data points as per the Central Limit Theorem [53]. The overlap, with a slight shift in the peak of the weighted curve toward the right, indicates that experienced and familiar respondents find the framework clear and agree with the proposed approach.
With the analysis of all three curves, we can conclude with a sufficient level of confidence that there is a consensus on the clarity of the framework presented, while individual perceptions may vary. The overall sentiment of the framework is positive and not sensitive to the familiarity and experience of the respondents.

5.3.2. Monitoring Process in Framework

The responses from two questions (refer to Table 2) are used for the analysis of the monitoring process (level 2) of the framework. Therefore, both the composite and weighted composite scores are in the range [−4, 4]. Figure 11 presents the histogram and KDE of the composite scores for the monitoring process of the framework. One of the respondents (out of thirty-two) has an overall negative sentiment about the monitoring process used in the framework, with a score of −3, and there are two neutral respondents. Seven respondents perceived the framework’s monitoring process very positively, with the maximum possible composite score of 4. The mean composite score is 2.13, indicating good positive sentiment toward the framework’s monitoring process.
Figure 12 presents the histogram and KDE of the weighted composite scores for the monitoring process of the framework. Unlike in the previous case, the minimum weighted composite score is −0.75, for a single respondent. That means this respondent is concerned about the monitoring process used in the framework; however, the other familiar and experienced respondents are neutral or positive about the framework. The maximum score of 3 (out of the highest possible score of 4) indicates the strong agreement of these respondents with the framework. The mean weighted composite score is 1.43, indicating good positive sentiment toward the framework’s monitoring process, which is significantly above the midpoint of the range. Table 8 summarises the statistical parameters of composite and weighted composite responses. The increase/decrease in the minimum/maximum of the weighted scores indicates that highly experienced and familiar respondents have generally positive sentiments about the framework. This sentiment is further established through the quartile values (Q1, Q2, and Q3), which indicate the scores below which 25%, 50%, and 75% of respondents fall, respectively.
We further compared the Z-normalised scores to gain comparative insights into the sentiments of the respondents. Figure 13 presents the distribution of Z-score-normalised composite scores for the monitoring process of the framework for both scenarios. The overlap in the peaks of the composite and weighted curves indicates that there is no clear difference when the weighting factors of experience and familiarity are used.
With the analysis of all three curves, we can conclude with a sufficient level of confidence that there is a consensus on the monitoring process used in the framework presented, while individual perceptions may vary. The overall sentiment of the framework is positive and not sensitive to the familiarity and experience of the respondents.
Through thematic analysis of the responses to the feedback questions, it is evident that respondents consider staff and student awareness to be a crucial step in mitigating contract cheating. Therefore, the framework should include provisions for training both students and staff.

5.3.3. Implementation Challenges in Framework

The responses from three questions (refer to Table 2) are used for the analysis of the implementation of the framework; therefore, both the composite and weighted composite scores are in the range [−6, 6]. Figure 14 presents the histogram and KDE of the composite scores for the implementation of the framework. One of the respondents (out of 32) has an overall negative sentiment about the implementation of the framework, with a score of −3, and three respondents scored −1. Only one respondent perceived the framework’s implementation very positively, with the maximum possible composite score of 6, while the other respondents were positive or neutral. The mean composite score is 1.81, indicating a positive sentiment toward the framework’s implementation. However, there is some reservation among respondents regarding implementation.
Figure 15 presents the histogram and KDE of weighted composite scores for the implementation of the framework. Unlike the previous cases, the minimum weighted composite score is −2.25, for only one respondent, and there is a negative response for another three respondents. That means four respondents are concerned about the implementation of the framework; however, the other familiar and experienced respondents are either neutral or positive about the framework. The maximum score of 4.5 (out of the highest possible score of 6) indicates the strong agreement of one respondent with the framework. The mean weighted composite score is 1.15, indicating a positive sentiment towards the framework’s implementation, which is above the midpoint of the range. Table 9 summarises the statistical parameters of composite and weighted composite responses. The increase/decrease in the minimum/maximum weighted scores indicates that highly experienced and familiar respondents have generally positive sentiments about the framework.
We further compared the Z-normalised scores to gain comparative insights into the sentiments of the respondents. Figure 16 presents the distribution of Z-score-normalised composite scores for the implementation of the framework for both scenarios. The overlap, with a slight shift in the peak of the weighted curve toward the left, indicates that experienced and familiar respondents find some challenging issues in the implementation of the framework.
With the analysis of all three curves, we can conclude with a sufficient level of confidence that there is a consensus on the implementation of the framework presented, while individual perceptions may vary. The overall sentiment on the implementation of the framework is positive, but divided opinions were observed. Additionally, it is sensitive to the familiarity and experience of the respondents.
“Implementing this framework is a substantial undertaking”: this is the general sentiment echoed by the respondents. The respondents show concern about the timelines and the workload increase due to the new process involved. Through thematic analysis of the responses to the feedback questions, we identified three key points. Some respondents highlighted the need for clear instructions and guidelines to implement the framework, which are essential for smooth integration of the framework into the existing system. Furthermore, they emphasised the importance of automated monitoring at level 2 to ensure its effectiveness. The respondents also indicated that implementing this framework will require proper training for both staff and students to adopt the changes. These points are reflected in the recommendations listed below.

5.4. Recommendations

After a thorough analysis of the results, the following precautionary measures are recommended for the institute to consider during the adoption of the framework:
  • Create clear guidelines and instructions outlining the implementation steps and processes for smooth integration of the framework into the existing system;
  • Implement process automation for monitoring at level 2 within the learning management system and academic management system to streamline operations and reduce the workload of academic staff;
  • Develop a dashboard to facilitate data analysis across all three databases, simplify workflows, and reduce the workload;
  • Conduct a pilot implementation in selected subjects before full deployment of the framework across the entire institution, allowing for refinement of processes and procedures used in the framework.

6. Conclusions and Limitations

The survey analysis provides a comprehensive evaluation of the proposed framework in three critical areas: clarity, the monitoring process, and implementation. The findings of the survey analysis indicate a consensus on the clarity and monitoring process of the framework, with respondents expressing a high level of confidence in these components. Overall, the sentiment toward the framework is positive and largely unaffected by the respondents’ familiarity and experience. Although the overall sentiment of the respondents about the implementation of the framework is positive, some respondents, particularly those with more experience, expressed concerns about certain aspects of the implementation. The respondents show concerns about the timelines and additional workload due to the new process involved in the framework. Furthermore, respondents highlighted the need for precise instructions and guidelines. Therefore, to ensure the successful adoption of the framework, several key actions are recommended. First, the institution should design and implement clear instructions and guidelines to support its effective integration. Second, an instructor dashboard should be developed to automate the monitoring process; this tool will help reduce administrative burdens and staff workload, addressing one of the concerns raised by the respondents. Finally, an incremental approach with pilot implementations should be adopted to address potential challenges and refine processes. Additionally, the framework’s monitoring phase aims to discourage students from relying on AI-enabled tools to generate answers, while the instructor dashboard provides a snapshot of student progress and identifies students who need educational interventions to prevent contract cheating.
There are some limitations to this study. First, the framework was evaluated only at a single academic institute with a limited number of academic staff. Second, the response rate is low, although the authors made every effort to encourage academic staff to participate in the survey. Therefore, more research is required to explore its applicability across a broader range of higher education contexts. Another limitation is that the findings are based on descriptive rather than inferential statistics, as the study aims only to identify patterns and trends. Lastly, level 3 of the framework was not assessed in this study, as the survey participants were exclusively academic staff. The next phase of research will involve gathering insights from academic managers, committee members, and administrative staff to comprehensively evaluate the framework’s implementation phase.

Author Contributions

Conceptualisation: D.B.G. and R.K.; Survey questionnaire preparation, survey analysis, writing—original draft, and writing—review and editing: D.B.G., R.K., S.S. and A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received funding from the MIT Research Fund.

Institutional Review Board Statement

The study was conducted after approval from the Melbourne Institute of Technology Ethics Committee.

Informed Consent Statement

The survey study protocol was approved by the Ethics Committee of MELBOURNE INSTITUTE OF TECHNOLOGY on 7 September 2023.

Data Availability Statement

Data supporting this research are available from the authors on reasonable request.

Acknowledgments

We would like to thank the Melbourne Institute of Technology (MIT) administration for providing working space and financial and other administrative support.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Questionnaires

Dear Respondent,
The objective of this survey is to study the awareness, perception, and expectation of the proposed framework designed for combating contract cheating at MIT. Here is a brief description.
You are requested to go through the Framework description and video before you start the survey. Answer the questions based on your understanding of the document and your experience. Your responses will be used in aggregate form only, without revealing or inferring your identity. Your answers will be kept confidential. You may leave the survey at any time. Please provide the information to the best of your knowledge. Instructions on how to answer the questions are included in the questions. Your opinion is extremely valuable to us.
  • Current Position—Use a higher one if you have many
    (a) Tutor (b) Lecturer (c) Senior Lecturer (d) Associate Professor (e) Professor
  • Academic Experience—All of your experience including at MIT
    (a) Under 1 year (b) 1–2 years (c) 3–4 years (d) 5–6 years (e) More than 6 years
  • Working Campus at MIT
    (a) Melbourne (b) Sydney
  • I have received training/conducted research on contract cheating recently.
    (a) Never (b) More than 3 years ago (c) 1-3 years ago (d) 3 months to 1 year ago (e) Within the last three months
  • I am familiar with the concept of contract cheating in higher education settings.
    (a) Not Familiar (b) Somewhat Familiar (c) Familiar (d) Quite Familiar (e) Expert Practitioner
  • To what extent do you agree or disagree that contract cheating has a negative impact on student learning?
    (a) Strongly disagree (b) Disagree (c) Neutral (d) Agree (e) Strongly agree
  • To what extent do you agree or disagree that the contract cheating has a negative impact on the Institution’s reputation?
    (a) Strongly disagree (b) Disagree (c) Neutral (d) Agree (e) Strongly agree
  • Proposed framework is clearly described and easy to understand.
    (a) Strongly disagree (b) Disagree (c) Neutral (d) Agree (e) Strongly agree
  • To what extent do you think awareness activities (in level 1) of the framework will enable students and staff to be aware of the consequences and risks of contract cheating?
    (a) Strongly disagree (b) Disagree (c) Neutral (d) Agree (e) Strongly agree
  • To what extent do you agree or disagree that the proposed framework will provide sufficient tools and platforms to discourage or identify contract cheating cases compared to the strategies we currently use?
    (a) Strongly disagree (b) Disagree (c) Neutral (d) Agree (e) Strongly agree
  • To what extent do you agree or disagree that the proposed framework ensures transparency and fairness in dealing with contract cheating?
    (a) Strongly disagree (b) Disagree (c) Neutral (d) Agree (e) Strongly agree
  • To what extent do you agree or disagree with the effectiveness of the proposed framework in detecting and mitigating contract cheating cases compared to the existing system?
    (a) Strongly disagree (b) Disagree (c) Neutral (d) Agree (e) Strongly agree
  • To what extent do you agree or disagree that the use of the Assignment 2 monitoring process (in level 2) in the proposed framework will discourage contract cheating when compared to the existing techniques?
    (a) Strongly disagree (b) Disagree (c) Neutral (d) Agree (e) Strongly agree
  • To what extent do you agree or disagree that the use of the Assignment 2 monitoring process (level 2) in the proposed framework will encourage student learning?
    (a) Strongly disagree (b) Disagree (c) Neutral (d) Agree (e) Strongly agree
  • To what extent are you comfortable implementing the Assignment 2 monitoring process (level 2) in the proposed framework during the lab/tutorial session?
    (a) Strongly disagree (b) Disagree (c) Neutral (d) Agree (e) Strongly agree
  • To what extent do you agree or disagree with the ease of performing the tasks assigned to you within the proposed framework?
    (a) Strongly disagree (b) Disagree (c) Neutral (d) Agree (e) Strongly agree
  • To what extent do you agree or disagree that the Academic Integrity Officer (AIO) in this framework plays a significant role in detecting contract cheating?
    (a) Strongly disagree (b) Disagree (c) Neutral (d) Agree (e) Strongly agree
  • Would you like to see any other tools to detect and discourage contract cheating in the proposed framework?
    ..........................................................................................
  • What areas of improvement do you believe the proposed framework should focus on?
    .............................................................................................

References

  1. Xu, Y.; Li, W. The causes and prevention of commercial contract cheating in the era of digital education: A systematic & critical review. J. Acad. Ethics 2023, 21, 303–321. [Google Scholar]
  2. Newton, P.M. How common is commercial contract cheating in higher education and is it increasing? A systematic review. In Frontiers in Education; Frontiers Media: Lausanne, Switzerland, 2018; p. 67. [Google Scholar]
  3. Lancaster, T. Commercial contract cheating provision through micro-outsourcing web sites. Int. J. Educ. Integr. 2020, 16, 1–14. [Google Scholar] [CrossRef]
  4. Curtis, G.J.; McNeill, M.; Slade, C.; Tremayne, K.; Harper, R.; Rundle, K.; Greenaway, R. Moving beyond self-reports to estimate the prevalence of commercial contract cheating: An Australian study. Stud. High. Educ. 2021, 47, 1–13. [Google Scholar] [CrossRef]
  5. Curtis, G.J. Contract Cheating: Introduction. In Second Handbook of Academic Integrity; Springer: Berlin/Heidelberg, Germany, 2024; pp. 647–662. [Google Scholar]
  6. Curtin University. Contract Cheating. Available online: https://www.curtin.edu.au/students/essentials/rights/academic-integrity/contract-cheating/ (accessed on 1 November 2024).
  7. The University of New Castle. Genative Artificial Intelligence and Academic Integrity. Available online: https://www.newcastle.edu.au/__data/assets/pdf_file/0010/985843/Gen-Ai-and-AI-for-CC_flowchart_November-2024-V3.pdf (accessed on 1 November 2024).
  8. Manoharan, S.; Speidel, U.; Ward, A.E.; Ye, X. Contract Cheating–Dead or Reborn? In Proceedings of the 2023 32nd Annual Conference of the European Association for Education in Electrical and Information Engineering (EAEEIE), Eindhoven, The Netherlands, 14–16 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–5. [Google Scholar]
  9. Baird, M.; Clare, J. Removing the opportunity for contract cheating in business capstones: A crime prevention case study. Int. J. Educ. Integr. 2017, 13, 1–15. [Google Scholar] [CrossRef]
  10. Amigud, A.; Lancaster, T. 246 reasons to cheat: An analysis of students’ reasons for seeking to outsource academic work. Comput. Educ. 2019, 134, 98–107. [Google Scholar] [CrossRef]
  11. Rogerson, A.M. Detecting contract cheating in essay and report submissions: Process, patterns, clues and conversations. Int. J. Educ. Integr. 2017, 13, 1–17. [Google Scholar] [CrossRef]
  12. Awdry, R.; Ives, B. International predictors of contract cheating in higher education. J. Acad. Ethics 2023, 21, 193–212. [Google Scholar] [CrossRef]
  13. Wang, Y.; Xu, Z. Statistical Analysis for Contract Cheating in Chinese Universities. Mathematics 2021, 9, 1684. [Google Scholar] [CrossRef]
  14. Rahimi, R.; Jones, J.; Bailey, C. Exploring contract cheating in further education: Student engagement and academic integrity challenges. Ethics Educ. 2024, 19, 38–58. [Google Scholar] [CrossRef]
  15. Gamage, K.A.; Dehideniya, S.C.; Xu, Z.; Tang, X. Contract cheating in higher education: Impacts on academic standards and quality. J. Appl. Learn. Teach. 2023, 6, 1–13. [Google Scholar]
  16. Lancaster, T. Contract Cheating: Practical Considerations. In Second Handbook of Academic Integrity; Springer: Berlin/Heidelberg, Germany, 2024; pp. 799–811. [Google Scholar]
  17. Sweeney, S. Who wrote this? Essay mills and assessment–Considerations regarding contract cheating and AI in higher education. Int. J. Manag. Educ. 2023, 21, 100818. [Google Scholar] [CrossRef]
  18. Hill, G.; Mason, J.; Dunn, A. Contract cheating: An increasing challenge for global academic community arising from COVID-19. Res. Pract. Technol. Enhanc. Learn. 2021, 16, 1–20. [Google Scholar]
  19. Khan, Z.R.; Hemnani, P.; Raheja, S.; Joshy, J. Raising awareness on contract cheating–Lessons learned from running campus-wide campaigns. J. Acad. Ethics 2020, 18, 175–191. [Google Scholar] [CrossRef]
  20. Eaton, S.E.; Crossman, K.; Behjat, L.; Yates, R.M.; Fear, E.; Trifkovic, M. An institutional self-study of text-matching software in a Canadian graduate-level engineering program. J. Acad. Ethics 2020, 18, 263–282. [Google Scholar] [CrossRef]
  21. Ison, D.C. Detection of Online Contract Cheating through Stylometry: A Pilot Study. Online Learn. 2020, 24, 142–165. [Google Scholar] [CrossRef]
  22. Conlan, K.; Baggili, I.; Breitinger, F. Anti-forensics: Furthering digital forensic science through a new extended, granular taxonomy. Digit. Investig. 2016, 18, S66–S75. [Google Scholar] [CrossRef]
  23. Curtis, G.J.; Clare, J. Prevalence, Incidence, and Rates of Contract Cheating. In Second Handbook of Academic Integrity; Springer: Berlin/Heidelberg, Germany, 2024; pp. 681–696. [Google Scholar]
  24. Aljanahi, M.H.; Aljanahi, M.H.; Mahmoud, E.Y. “I’m not guarding the dungeon”: Faculty members’ perspectives on contract cheating in the UAE. Int. J. Educ. Integr. 2024, 20, 9. [Google Scholar] [CrossRef]
  25. Guruge, D.B.; Kadel, R. Towards an Holistic Framework to Mitigate and Detect Contract Cheating within an Academic Institute—A Proposal. Educ. Sci. 2023, 13, 148. [Google Scholar] [CrossRef]
  26. Eaton, S.E.; Turner, K.L. Exploring academic integrity and mental health during COVID-19: Rapid review. J. Contemp. Educ. Theory Res. (JCETR) 2020, 4, 35–41. [Google Scholar]
  27. Erguvan, I.D. The rise of contract cheating during the COVID-19 pandemic: A qualitative study through the eyes of academics in Kuwait. Lang. Test. Asia 2021, 11, 1–21. [Google Scholar] [CrossRef]
  28. Ahsan, K.; Akbar, S.; Kam, B. Contract cheating in higher education: A systematic literature review and future research agenda. Assess. Eval. High. Educ. 2022, 47, 523–539. [Google Scholar] [CrossRef]
  29. Australian Academic Integrity Network (AAIN). Institutional Responses to the Use of Generative Artificial Intelligence. 2023. Available online: https://wordpress-ms.deakin.edu.au/academicintegrity/wp-content/uploads/sites/290/2023/06/AAIN-Institutional-Responses-to-the-use-of-Generative-Artificial-Intelligence-V1.1.pdf (accessed on 10 December 2024).
  30. Miao, J.; Thongprayoon, C.; Suppadungsuk, S.; Garcia Valencia, O.A.; Qureshi, F.; Cheungpasitporn, W. Ethical dilemmas in using AI for academic writing and an example framework for peer review in nephrology academia: A narrative review. Clin. Pract. 2023, 14, 89–105. [Google Scholar] [CrossRef] [PubMed]
  31. Ellis, C.; Murdoch, K. The educational integrity enforcement pyramid: A new framework for challenging and responding to student cheating. Assess. Eval. High. Educ. 2024, 1–11. [Google Scholar] [CrossRef]
  32. Pellegrino, J.W.; DiBello, L.V.; Goldman, S.R. A framework for conceptualizing and evaluating the validity of instructionally relevant assessments. Educ. Psychol. 2016, 51, 59–81. [Google Scholar] [CrossRef]
  33. Brauns, S.; Abels, S. Validation and Revision of the Framework for Inclusive Science Education. In Inclusive Science Education; Working Paper, 1/2021; 2021; pp. 1–31. Available online: https://www.leuphana.de/fileadmin/user_upload/Forschungseinrichtungen/insc/professuren/didaktik-der-naturwissenschaften/files/Brauns_Abels_2021_Validation_and_Revision_of_the_Framework_for_Inclusive_Science_Education.pdf (accessed on 10 December 2024).
  34. Inglis, A. Approaches to the validation of quality frameworks for e-learning. Qual. Assur. Educ. 2008, 16, 347–362. [Google Scholar] [CrossRef]
  35. Matulevicius, R. Validating an evaluation framework for requirements engineering tools. In Information Modeling Methods and Methodologies: Advanced Topics in Database Research; IGI Global: Hershey, PA, USA, 2005; pp. 148–174. [Google Scholar]
  36. Tigelaar, D.E.; Dolmans, D.H.; Wolfhagen, I.H.; Van Der Vleuten, C.P. The development and validation of a framework for teaching competencies in higher education. High. Educ. 2004, 48, 253–268. [Google Scholar] [CrossRef]
  37. Cook, D.A.; Hatala, R. Validation of educational assessments: A primer for simulation and beyond. Adv. Simul. 2016, 1, 1–12. [Google Scholar] [CrossRef] [PubMed]
  38. Eaton, S.E. A Comprehensive Academic Integrity (CAI) Framework: An Overview. 2023. Available online: https://prism.ucalgary.ca/handle/1880/116060 (accessed on 10 December 2024).
  39. Chan, C.K.Y. A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. High. Educ. 2023, 20, 38. [Google Scholar] [CrossRef]
  40. Birks, D.; Clare, J. Linking artificial intelligence facilitated academic misconduct to existing prevention frameworks. Int. J. Educ. Integr. 2023, 19, 20. [Google Scholar] [CrossRef]
  41. Moriarty, C.; Lang, C.; Usdansky, M.; Kanani, M.; Jamieson, M.; Gallant, T.B.; George, V. Institutional Toolkit to Combat Contract Cheating; International Center for Academic Integrity, University of Alabama: Tuscaloosa, AL, USA, 2016. [Google Scholar]
  42. Morris, E.J. Academic integrity matters: Five considerations for addressing contract cheating. Int. J. Educ. Integr. 2018, 14, 1–12. [Google Scholar] [CrossRef]
  43. Harper, R.; Bretag, T.; Rundle, K. Detecting contract cheating: Examining the role of assessment type. High. Educ. Res. Dev. 2021, 40, 263–278. [Google Scholar] [CrossRef]
  44. Shailendra, S.; Kadel, R.; Sharma, A. Framework for Adoption of Generative Artificial Intelligence (GenAI) in Education. IEEE Trans. Educ. 2024, 67, 777–785. [Google Scholar] [CrossRef]
  45. Carmichael, J.J. Predicting Threats to Academic Integrity: A Text-Mining and Scenario Modeling Framework. Ph.D. Thesis, Carleton University, Ottawa, ON, Canada, 2024. [Google Scholar]
  46. Smolansky, A.; Cram, A.; Raduescu, C.; Zeivots, S.; Huber, E.; Kizilcec, R.F. Educator and student perspectives on the impact of generative AI on assessments in higher education. In Proceedings of the Tenth ACM Conference on Learning@ Scale, Copenhagen, Denmark, 20–22 July 2023; pp. 378–382. [Google Scholar]
  47. Kadel, R.; Mishra, B.K.; Shailendra, S.; Abid, S.; Rani, M.; Mahato, S.P. Crafting Tomorrow’s Evaluations: Assessment Design Strategies in the Era of Generative AI. In Proceedings of the 2024 International Symposium on Educational Technology (ISET), Macao, China, 29 July–1 August 2024; pp. 13–17. [Google Scholar] [CrossRef]
  48. Pitt, P.; Dullaghan, K.; Sutherland-Smith, W. ‘Mess, stress and trauma’: Students’ experiences of formal contract cheating processes. Assess. Eval. High. Educ. 2021, 46, 659–672. [Google Scholar] [CrossRef]
  49. Eaton, S.E. Contract cheating in Canada: A comprehensive overview. In Academic Integrity in Canada: An Enduring and Essential Challenge; Springer: Berlin/Heidelberg, Germany, 2022; pp. 165–187. [Google Scholar]
  50. Frary, R.B. A Brief Guide to Questionnaire Development; Virginia Polytechnic Institute & State University: Blacksburg, VA, USA, 2003; Volume 9, p. 2003. [Google Scholar]
  51. Kishore, K.; Jaswal, V.; Kulkarni, V.; De, D. Practical guidelines to develop and evaluate a questionnaire. Indian Dermatol. Online J. 2021, 12, 266–275. [Google Scholar] [CrossRef]
  52. Braun, V.; Clarke, V. Thematic Analysis: A Practical Guide; Sage: Newcastle upon Tyne, UK, 2022. [Google Scholar]
  53. Islam, M.R. Sample size and its role in Central Limit Theorem (CLT). Comput. Appl. Math. J. 2018, 4, 1–7. [Google Scholar]
Figure 1. Framework high-level diagram.
Figure 2. Three-Tier Framework (TTF) for combatting contract cheating.
Figure 3. Survey and analysis methodology.
Figure 4. Current role.
Figure 5. Familiarity with contract cheating by all participants and by experience.
Figure 6. Familiarity with contract cheating by all participants and by recent training.
Figure 7. Respondent view on the impact of contract cheating on student learning.
Figure 8. Distribution of composite scores for clarity of framework.
Figure 9. Distribution of doubly weighted composite scores for clarity of framework.
Figure 10. Distribution of Z-score-normalised composite scores for clarity of framework.
Figure 11. Distribution of composite scores for monitoring of framework.
Figure 12. Distribution of doubly weighted composite scores for monitoring of framework.
Figure 13. Distribution of Z-score-normalised composite scores for monitoring of framework.
Figure 14. Distribution of composite scores for implementation of framework.
Figure 15. Distribution of doubly weighted composite scores for implementation of framework.
Figure 16. Distribution of Z-score-normalised composite scores for implementation of framework.
Table 1. A sample Pre-Designed Template (PDT).
Marks | Assessor Input | Progress—A2 | Marks | Assessor Input
FAI | CT | OIS1 | Comment | A2 | A2 | A2 Group | A2 | OIS2 | Comment
Table 2. Mapping between the questions from the questionnaires and their usage in the analysis.
Questions | Used for | Questions | Used for
1–3 | Demography and weight | 13–14 | Monitoring
4–7 | Awareness | 15–17 | Implementation
8–12 | Clarity | 18–19 | Specific feedback
Table 3. Demographic characteristics of respondent academics.
Variables | Categories | Percentage (%)
Position | Tutor | 12.5
 | Lecturer | 59.4
 | Senior Lecturer | 25
 | Associate Professor | 0
 | Professor | 3.1
Academic Experience | Under one year | 9.4
 | 1–2 years | 6.2
 | 3–4 years | 0
 | 5–6 years | 9.4
 | More than 6 years | 75
Campus | Melbourne | 87.5
 | Sydney | 12.5
Table 4. Familiarity with the concept of contract cheating and training.
Variables | Categories | Percentage (%)
Familiarity | Expert Practitioner | 9.4
 | Quite Familiar | 62.5
 | Familiar | 25
 | Somewhat Familiar | 3.1
 | Not Familiar | 0
Recent training or research | Within the last 3 months | 34.4
 | 3 months–1 year | 21.9
 | 1–3 years | 9.4
 | More than 3 years | 12.5
 | Never | 21.9
Table 5. Component score used in the analysis.
Opinion | Weight | Opinion | Weight
Strongly disagree | −2 | Disagree | −1
Strongly agree | +2 | Agree | +1
Neutral | 0 | |
Table 6. Weights used for experience and familiarity in the weighted composite analysis.
Experience | Weight | Familiarity | Weight
Under one year | 0.25 | Somewhat Familiar | 0.25
1–2 years | 0.50 | Familiar | 0.50
5–6 years | 0.75 | Quite Familiar | 0.75
More than 6 years | 1.00 | Expert Practitioner | 1.00
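Tables 5 and 6 together define how individual Likert responses and respondent characteristics feed into the composite analysis. The sketch below is a minimal, illustrative implementation only: it assumes survey responses are held in a pandas DataFrame, uses hypothetical column names (q8–q12 for the clarity items per Table 2, plus experience and familiarity), and combines the two respondent weights by simple multiplication, which may differ from the exact combination used in the study.

```python
import pandas as pd

# Likert-to-score mapping from Table 5.
LIKERT = {"Strongly disagree": -2, "Disagree": -1, "Neutral": 0,
          "Agree": 1, "Strongly agree": 2}

# Respondent weights from Table 6 (experience and familiarity).
EXPERIENCE_W = {"Under one year": 0.25, "1–2 years": 0.50,
                "5–6 years": 0.75, "More than 6 years": 1.00}
FAMILIARITY_W = {"Somewhat Familiar": 0.25, "Familiar": 0.50,
                 "Quite Familiar": 0.75, "Expert Practitioner": 1.00}

def composite_scores(df: pd.DataFrame, item_cols: list[str]) -> pd.Series:
    """Sum the mapped Likert scores for the items of one theme
    (e.g., questions 8-12 for clarity, per Table 2)."""
    return df[item_cols].apply(lambda col: col.map(LIKERT)).sum(axis=1)

def doubly_weighted(df: pd.DataFrame, composite: pd.Series) -> pd.Series:
    """Scale each respondent's composite by their experience and familiarity
    weights (illustrative combination: the product of the two weights)."""
    weight = df["experience"].map(EXPERIENCE_W) * df["familiarity"].map(FAMILIARITY_W)
    return composite * weight

if __name__ == "__main__":
    # Hypothetical responses for two academics.
    survey = pd.DataFrame({
        "q8": ["Agree", "Strongly agree"],
        "q9": ["Neutral", "Agree"],
        "q10": ["Agree", "Agree"],
        "q11": ["Disagree", "Strongly agree"],
        "q12": ["Agree", "Agree"],
        "experience": ["More than 6 years", "1–2 years"],
        "familiarity": ["Quite Familiar", "Familiar"],
    })
    clarity = composite_scores(survey, ["q8", "q9", "q10", "q11", "q12"])
    print(clarity)                            # raw composite per respondent
    print(doubly_weighted(survey, clarity))   # weighted composite per respondent
```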
Table 7. The statistical parameters of composite and weighted composite responses.
Parameter | Composite | Weighted Composite
Mean | 5.13 | 3.31
Std | 3.41 | 2.62
Min | −6.00 | −4.5
Max | 10.00 | 7.5
25% (Q1) | 4.00 | 1.50
50% (Q2) | 5.00 | 3.75
75% (Q3) | 7.00 | 5.00
Table 8. The statistical parameters of composite and weighted composite responses for monitoring.
Parameter | Composite | Weighted Composite
Mean | 2.13 | 1.43
Std | 1.50 | 1.04
Min | −3.00 | −0.75
Max | 4.00 | 3.00
25% (Q1) | 1.75 | 0.75
50% (Q2) | 2.25 | 1.50
75% (Q3) | 3.00 | 3.00
Table 9. The statistical parameters of composite and weighted composite responses for implementation.
Parameter | Composite | Weighted Composite
Mean | 1.63 | 1.15
Std | 1.81 | 1.34
Min | −3.00 | −2.25
Max | 6.00 | 4.5
25% (Q1) | 1.00 | 1.13
50% (Q2) | 2.00 | 2.00
75% (Q3) | 3.00 | 4.50
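Figures 10, 13 and 16 report Z-score-normalised composite scores, and Tables 7–9 summarise the composite distributions. The following minimal sketch assumes standard (x − mean)/std scaling and uses pandas' describe() to reproduce the same summary statistics; the composite values shown are hypothetical and for illustration only.

```python
import pandas as pd

def z_score_normalise(scores: pd.Series) -> pd.Series:
    """Standardise composite scores to zero mean and unit variance,
    as assumed for the Z-score-normalised distributions."""
    return (scores - scores.mean()) / scores.std()

def summarise(scores: pd.Series) -> pd.Series:
    """Report the statistics tabulated in Tables 7-9:
    mean, standard deviation, min, quartiles, and max."""
    return scores.describe()[["mean", "std", "min", "25%", "50%", "75%", "max"]]

# Hypothetical composite scores for one theme.
clarity_composite = pd.Series([5, 7, -6, 10, 4, 6, 5, 3])
print(summarise(clarity_composite))
print(z_score_normalise(clarity_composite))
```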
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
