Article

Assessment and Feedback for Large Classes in Transnational Engineering Education: Student–Staff Partnership-Based Innovative Approach

School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK
* Author to whom correspondence should be addressed.
Educ. Sci. 2019, 9(3), 221; https://doi.org/10.3390/educsci9030221
Submission received: 1 June 2019 / Revised: 5 August 2019 / Accepted: 8 August 2019 / Published: 23 August 2019

Abstract

Assessment and feedback (A&F) are two major components of any educational program and must be properly in place to ensure student learning and quality of experience. However, these components face severe challenges in meeting student expectations in the large-class context. When the program is delivered in a transnational education (TNE) scenario, additional constraints on staff–student physical interaction, regional time differences and gaps in cultural background make it even more challenging to conduct proper assessments and to provide timely, constructive feedback to the students. In this paper, the authors propose a novel assessment and feedback framework which exploits the large student number as a positive factor by introducing staff–student partnership to implement efficient assessment and feedback strategies. The authors propose involving students in peer review, assessment design, evaluation rubric design and tutorial-based feedback. The students also take part in preparing feedback clusters, based on which the instructor provides pseudo-personalised video feedback. Through feedback clusters, the authors introduce a trade-off between individual feedback and generic feedback. The results of the study are particularly promising in terms of student satisfaction and learning enhancement.

1. Introduction

Assessment and feedback (A&F) play a key role in students’ learning, and assessment-based direct learning approaches are now becoming a common part of the higher education sector. Equally, A&F can enhance student satisfaction and engagement with future learning, and student satisfaction can significantly influence university rankings in league tables. Students are also likely to transfer knowledge to other students when they see the value of doing so through various assessment tasks, predominantly group assessment tasks, which are common in engineering streams. The UK Higher Education Academy (HEA) also recognises the importance of A&F through the UK Professional Standards Framework (UKPSF), which, for example, highlights “assess and give feedback to learners” as a key area of activity (UKPSF A3) [1]. Providing feedback for both formative and summative assessments is also a key recommendation of all higher education accreditation bodies, which they scrutinise through periodic accreditation visits. A&F activities not only establish learner achievement, but also provide evidence for staff to set standards and to improve courses as well as curricula.
There are many reasons to assess students during a course: To guide students’ improvement, to help and direct students with their choices about elective courses, to help students learn from mistakes and to allow students to understand how well they are developing as learners [2]. Most importantly, assessment sets standards to determine students’ fitness for the course and, ultimately, for the programme. It is therefore also important to understand the assessments of other courses in the same programme, to ensure that similar learning outcomes are not assessed several times unnecessarily.
Selecting the correct form of assessment for each individual course is key to providing useful feedback to students in large classes and hence to enhancing their learning. Once an appropriate assessment form is selected, the lecturer then has to consider the values and principles associated with that particular form of assessment when designing it. For example, the assessment tasks should be valid, reliable, transparent and authentic [2,3]. Concurrently, they should stimulate deep learning and inspire students to learn [4]. The assessment also needs to be fair to the students’ skill sets, and they should be able to complete it in a timely manner. Further, it should be capable of discriminating between academically stronger and weaker students.
Assessment can take various forms: Examinations, laboratory work, reports/essays, project work, group work, portfolios (presentations, posters, oral examinations, dissertations) and several others [2,5]. Each of these forms has advantages and disadvantages. For example, examinations are an efficient way of assessing students’ learning of a specific subject matter. However, there is no evidence in the literature that examinations fundamentally enhance students’ aspiration to learn a particular subject. Similarly, project work enables students to apply theory to practice, but such work generally absorbs a large proportion of students’ academic time, as well as staff time for marking. It is clear that each form of assessment has its own limitations, and setting assessments and providing feedback become even more challenging for large classes.
It is generally difficult to find an effective approach to assessment and feedback for large classes. Further, assessment deadlines need to be agreed in consultation with the lecturers who run parallel courses (to prevent overburdening the students at the same time), and all of this becomes increasingly complicated as the student number increases, and even more so when the programme is conducted as a transnational education (TNE) programme. Finding a better approach to assessment and feedback is vital for the sustainability of such TNE programmes. TNE is currently growing rapidly; in the case of the UK, more students are enrolled overseas (over 0.7 million students) than international students are enrolled in higher education programmes in the UK [6].
In view of such challenges, the large student number can be turned from a problem into a solution when students act as partners. This partnership can take several shapes when it comes to assessment and feedback. For instance, Reference [7] discusses peer assessment as a learning and teaching enhancement tool, presenting the reflections of three facilitators of a first-year undergraduate critical skills module. The facilitators helped and guided the students through a peer-review process for one of the assessment exercises on the course. While the authors report a successful experience of improving student learning, they emphasise that the peer-review exercise should be run for more iterations for better outcomes.
The author of Reference [8] argues that assessment in partnership with the learners has a huge impact on student learning, and that assessment design and marking schemes should involve student participation to ensure student engagement and to manage student expectations. However, it is recommended that a proper feedback mechanism be in place for the students to improve their learning skills, and that the feedback mechanism be aligned with the assessment marking rubrics.
Recently, some researchers [9] have evaluated the effectiveness of an oral peer-review exercise conducted with postgraduate students. The impact of the peer-review process was evaluated by comparing exam scores of students who took part in peer review with those of students who did not participate, by the perceived impact of the peer review, and by student satisfaction with being involved in the assessment process. The authors report that the peer-review exercise resulted in significantly higher exam scores for the students who took part, as well as improved student learning. This clearly underlines the importance of involving students in assessment procedures.
Reference [10] provides a study on the variability in students’ evaluating processes in peer assessment. The authors examine peer-review outcomes using Calibrated Peer Review, an online system that facilitates the peer assessment of writing. It was noted that the student peer reviews closely matched the expert assessments; however, inconsistencies and variability were found from individual to individual. The authors emphasise that results improve when instructors properly guide the students through the peer-review process.
With regard to feedback, Reference [11] discusses how automated feedback can help address the requirements and challenges of student feedback. The authors provide a simple framework for providing automated feedback to deal with feedback inconsistencies, handwriting issues, etc. However, we find that the framework is still not precise enough to give learners a sense of individual feedback.
In this paper, we propose a novel A&F methodology in the context of large class sizes in TNE provision. The key objective is to involve students as partners in designing the assessments as well as in the peer-review process. Further, a novel feedback approach is proposed to overcome the deficiencies of generic feedback and the impracticalities of individual feedback. The proposed feedback methodology could be not only efficient in terms of staff time, but also effective in terms of student learning. The proposed procedure was implemented for a first-year class of 202 students in the Electronics and Electrical Engineering (EEE) degree programme.

2. Objectives of the Study

Given the primary motivation to explore the effectiveness of student involvement in improving the A&F for larger classes in TNE provision, this paper proposes a novel A&F framework with the following key objectives:
  • To involve the students in the design of assessments and their evaluation criteria.
  • To engage students in the peer-review process to enhance their understanding of the A&F process.
  • To propose and exercise a feedback strategy which works efficiently and effectively for large classes.
  • To recommend means of preparing pseudo-personalised feedback.

3. Proposed Framework

In the proposed framework, there are two-tier assessment and two-tier feedback stages, which involve collaboration between the students and the instructor in the A&F process, as shown in Figure 1. The students are categorised as the current students enrolled on the course and the ex-students who took the same course the previous year.
The details of the framework stages are given below.

3.1. Framework Stages

3.1.1. Assessment Tier I

In this phase, the instructor meets the ex-students and discusses the nature of the assessment, including its top-level contents and the evaluation criteria. The student perspective on designing the assessment and its rubric benefits from their previous experience of the same course. Based on what could have been improved in the previous year’s A&F process, an assessment and its grading rubric are developed. Once completed, the current students are given the assessment. This assessment can be formative as well as summative.

3.1.2. Assessment Tier II

In this phase, the current students review the assessment performance of their peers based on the evaluation rubric designed by the ex-students. They are required to determine the rubric category to which the reviewed work belongs. This helps the students understand the instructor’s way of thinking when their work is being graded. Subsequently, the ex-students moderate the peer-review performance by comparing the peer-review markings with their own markings. This also provides insight into whether the current students were able to understand the grading scheme and judge their peers’ work properly.

3.1.3. Feedback Tier I

In the first feedback stage, the ex-students provide detailed feedback on the work of all the students who participated in the assessment phase. This feedback is provided in a tutorial session, where the ex-students inform the current students about the main strengths and weaknesses found in their work. They also divide the student work into clusters, such that students whose performances are highly correlated are placed together in one feedback cluster. While providing detailed feedback, the ex-students note the important points related to each feedback cluster. These points may include the common strengths and weaknesses related to the understanding and the completion of the assessment tasks.

3.1.4. Feedback Tier II

In the last part of the process, the instructor reviews the feedback pointers prepared by the ex-students at the end of the Feedback Tier I stage. The instructor then prepares a set of feedback videos, one addressing each feedback cluster. Each video discusses the criteria for the students who fell into that category and the most common strengths observed during the assessment process. It also highlights the most common mistakes, the potential reasons for those mistakes and, subsequently, tips to avoid such mistakes in future assessments.
It can be seen that clustered feedback is a compromise between individual feedback and generic feedback. Clustering the students with overlapping performance and then providing feedback to each student cluster helps deliver targeted and detailed feedback in an efficient manner. Due to the high correlation among student performances within a cluster, cluster feedback appears as pseudo-personalised feedback to the students.
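To make the clustering step concrete, a minimal sketch is given below that groups anonymised students by the rubric category (1 = Poor to 5 = Excellent) obtained on each question, which is how the feedback clusters were aligned in this study; the data, variable names and the use of Python are illustrative assumptions rather than part of the study’s tooling.

```python
# A minimal sketch, assuming rubric scores per question are already recorded.
# All names and data below are hypothetical stand-ins.
from collections import defaultdict

# Anonymised student -> rubric category (1 = Poor ... 5 = Excellent) per question.
rubric_scores = {
    "S001": {"Q1": 4, "Q2": 3, "Q3": 5},
    "S002": {"Q1": 4, "Q2": 3, "Q3": 4},
    "S003": {"Q1": 2, "Q2": 2, "Q3": 1},
}

# One feedback cluster per (question, rubric category) pair, so that targeted
# comments can be prepared for the common strengths/weaknesses of each group.
clusters = defaultdict(list)
for student, scores in rubric_scores.items():
    for question, category in scores.items():
        clusters[(question, category)].append(student)

for (question, category), members in sorted(clusters.items()):
    print(f"{question}, rubric category {category}: {members}")
```

Because every student in a cluster shares the same rubric outcome for that question, one set of comments per cluster already addresses the most relevant strengths and weaknesses of each of its members.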

3.2. Implementation of the Proposed Two-Tier A&F Process

This study was conducted with 202 students registered on the Microelectronics Systems course, which is generally offered in Semester 2 (Spring Semester) of Year 1 of the Electronics and Electrical Engineering programme. This course introduces basic concepts associated with embedded microelectronic systems, illustrated by circuits that include a small, modern microcontroller. Both theoretical and practical aspects of the design of microcontroller systems were covered. These aspects were illustrated by investigating the role of a modern microcontroller in an electronic system, followed by the design and analysis of hardware and software in a microcontroller system, including the circuitry required to interface the microcontroller with external circuitry.
A total of three ex-students who had taken the Microelectronics Systems course in the previous year contributed as the student partner team (SPT) on the project. These ex-students were selected on the basis of their academic competency and ability. The requirements and expectations of the role were advertised to the top 10% of the students who had taken the same course the previous year. The SPT was chosen from the interested students on the basis of an outstanding academic record and excellent communication skills. These students were paid on an hourly basis as teaching assistants for the time they spent on this study. The whole process was developed and executed in collaboration with the SPT.
After the completion of all the tiers of the proposed framework, students were asked to complete a reflective feedback questionnaire covering all the components of the framework: Assessment, feedback, peer review and overall experience. The questionnaire covered a number of aspects of the project, which were rated on a 5-point Likert scale to quantify the student experience with regard to each component of the study. There was also a group of open-ended questions asking the students to comment on the strengths and weaknesses of the investigation and to make recommendations. Focus group interviews were held with the ex-students to obtain their views about their experience of working on the project. The findings regarding the different aspects of the project, including the student survey and focus group interviews, are discussed in the subsequent sections of the paper.

3.3. Procedural Developments

3.3.1. Assessment Design

A team of ex-students participated in designing the assessment under the guidance of the course instructor. The instructor and the ex-students discussed the nature of the assessment and concluded that it should be formative and must evaluate all the main aspects of the course that the students should have learnt during the semester. Three main question categories were decided upon: Basic understanding, numerical calculations and computer coding to interface the microcontroller with discrete components and external peripherals. The ex-students prepared the assessment, comprising three parts with each part addressing one category, which was then moderated by the instructor.

3.3.2. Assessment Rubric Design

The ex-students then prepared an assessment rubric for the evaluation of each part (and sub-part) of the assessment, as shown in Figure 2. The ex-students divided each question part into five categories ranging from Excellent to Poor and accordingly produced the statements required to assess the student work in a fair manner, while ensuring that each rubric category was sufficiently distinguished from its neighbouring categories within each assessment task. This comprehensive rubric was designed for all types of assessment task, i.e., essay type (Q1), numerical problems (Q2) and computer programming (Q3). The instructor provided feedback to improve the assessment rubric in order to assess the student work fairly and to sharpen the distinctions among the rubric categories. Care was taken that the language used in the rubrics was plain and easily understood by the markers. The students who participated in the peer review were given detailed instructions, supported by examples, on the use of the assessment rubrics to facilitate the peer-review process.

3.3.3. Formative Assessment

The assessment designed by the ex-students was taken by the current students as a mock midterm exam and was declared a formative assessment. The assessment was conducted under normal exam conditions, and the ex-students acted as exam invigilators. The exam duration was 60 min, with three compulsory questions, one each for assessing basic knowledge, numerical competence and programming skills. After the exam, the ex-students marked all the exam sheets by noting each student’s performance on the assessment rubric. As an outcome, each student had a rubric sheet with circles on the rubric categories that each part of their work most closely matched.

3.3.4. Anonymous Peer Review

The current students participated in an anonymous peer review. All the student sheets were copied and given fictitious numbering while the original student IDs were masked. The record of the fictitious numbers and actual student IDs was kept in a secure Excel file. Although this coding step was cumbersome and time-consuming, the students were informed about the benefits of anonymous peer review for student privacy (data protection) and unbiased marking. The current students marked on the rubric sheets, highlighting the rubric category of each part of the assessment they reviewed.
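A minimal sketch of this anonymisation step is given below, assuming the student IDs are available as a simple list; the study kept the mapping in a secure Excel file, whereas the IDs, the CSV file and the use of Python here are hypothetical stand-ins.

```python
# A minimal sketch of assigning fictitious numbers to student IDs.
# The IDs, file name and use of CSV are hypothetical stand-ins.
import csv
import random

student_ids = ["2401234H", "2405678G", "2409876A"]  # hypothetical student IDs

fictitious_numbers = list(range(1, len(student_ids) + 1))
random.shuffle(fictitious_numbers)  # break any ordering between real and fictitious IDs

mapping = dict(zip(student_ids, fictitious_numbers))

# Only the fictitious numbers appear on the copied scripts;
# the key written below must be stored securely and separately.
with open("id_mapping.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["student_id", "fictitious_number"])
    writer.writerows(mapping.items())
```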

3.3.5. Peer Review Moderation

The ex-students then collected the peer-review marked sheets and compared them with their own scoring. They recorded the differences between the peer-review results and the original results. Later, after the assessment solution had been provided, the peer-review exercise was repeated. The purpose of this activity was to see whether peer review would be affected when the reviewers (in this case, the current students) had knowledge of the assessment solution. As can be seen from Figure 3, the comparison is provided for one task only; similar data were collected for all the assessment tasks. In Figure 3, the x-axis represents the rubric categories numbered from 1 to 5, where 1 represents Poor and 5 represents Excellent, and the y-axis gives the number of students falling into the corresponding rubric category. As can be seen, there is a considerable difference between the ex-students’ grading and the peer review before the actual answers were provided to the students. However, this gap was reduced once the solution was provided. This makes sense, as most students could not properly review their peers’ work while they did not know the actual solution. Therefore, we emphasise that in order to conduct the peer review properly, the students should be briefed and informed in detail prior to the peer review.
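The tally behind a comparison of the kind shown in Figure 3 can be reproduced in a few lines; the sketch below counts how many scripts each marker placed in each rubric category, with hypothetical score lists standing in for the recorded markings.

```python
# A minimal sketch of the moderation comparison: counting scripts per rubric
# category (1 = Poor ... 5 = Excellent) for each set of markings.
# The score lists below are hypothetical stand-ins for the recorded data.
from collections import Counter

ex_student_marks = [3, 4, 2, 5, 3, 4, 1, 3]   # ex-students' grading
peer_marks_before = [4, 4, 3, 5, 4, 5, 2, 4]  # peer review before the solution was shared
peer_marks_after = [3, 4, 2, 5, 3, 4, 2, 3]   # peer review repeated after the solution

for label, marks in [("Ex-students", ex_student_marks),
                     ("Peers (before solution)", peer_marks_before),
                     ("Peers (after solution)", peer_marks_after)]:
    counts = Counter(marks)
    # Histogram over categories 1..5, comparable across the three sets of markings.
    print(label, [counts.get(category, 0) for category in range(1, 6)])
```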
During the moderation process, the team of ex-students noted the strengths and weaknesses in each of the question categories against all the rubric categories. They then prepared tables covering all the rubric categories of all the questions and highlighted the main pros and cons of the student performance in those tasks. In this way, five feedback clusters were prepared for each question, aligned with the five rubric categories.

3.3.6. Tutorial-Based Feedback

The ex-students arranged a face-to-face tutorial to discuss the overall assessment. They not only provided the solutions for all the questions and their parts, but also shared the pros and cons tables for each assessment task. This tutorial provided the current students with the opportunity to interact face-to-face with the evaluating team and clarify any doubts they may have had. The results of the peer review were also discussed, and the students were informed about their performance in the peer-review exercise. The students were given the opportunity to ask questions, debate and provide feedback about the review and feedback sessions.

3.3.7. Video Feedback

The instructor provided their feedback, informed by experience, in the form of videos. These videos were prepared with the support of the feedback pointers that the ex-students had prepared for each feedback cluster. Through this set of videos, the instructor shared tips on improving the performance of the students falling into each of the feedback clusters. The videos were shared through Moodle, and each student watched only the feedback video for the cluster into which they fell. Hence, all the students had the opportunity to receive feedback through video, watching and learning from it at their own convenience.
As discussed earlier, the intent of the clustered video feedback is to provide pseudo-personalised feedback to each student so that they have a chance to rectify their mistakes and misconceptions. It is nearly impossible to provide one-to-one personalised feedback on individual work for a large class, i.e., 200 to 240 students. On the other hand, generic feedback is not effective, as students do not value it. Hence, the proposed approach overcomes the deficiencies of generic feedback and the impracticalities of individual feedback. The proposed feedback methodology could therefore be not only efficient in terms of staff time, but also effective in terms of student learning.

4. Results

Out of 202 students, 173 took part in completing the feedback questionnaire. This participation rate was achieved by providing the link to the feedback survey in the lab sessions, where student attendance is mandatory. The feedback questionnaire was divided into four categories: Assessment, feedback, peer review and overall commentary. The students responded on a 5-point Likert scale corresponding to strongly agree, agree, neutral, disagree and strongly disagree.
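As an illustration of how responses of this kind can be summarised into the percentages reported in Figure 4, a minimal sketch is given below; the response data, question label and use of Python are assumptions for illustration and not part of the study’s tooling.

```python
# A minimal sketch of summarising 5-point Likert responses as percentages per question.
# The responses below are hypothetical stand-ins for the 173 collected answers.
from collections import Counter

LIKERT_LEVELS = ["strongly agree", "agree", "neutral", "disagree", "strongly disagree"]

responses = {
    "Q6": ["agree", "strongly agree", "agree", "neutral", "agree"],  # truncated example
}

for question, answers in responses.items():
    counts = Counter(answers)
    total = len(answers)
    summary = {level: round(100 * counts.get(level, 0) / total, 1) for level in LIKERT_LEVELS}
    print(question, summary)
```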

4.1. Assessment Category

The students were asked the following questions about the assessment and its quality:
  • The mock exam topics were covered in the lectures/labs by the instructor (Q1).
  • The length of the exam was appropriate (Q2).
  • The difficulty level of the exam was appropriate for me (Q3).
As shown in Figure 4, the overwhelming majority of the students agreed or strongly agreed that the topics in the assessment had been covered by the instructor during lectures and/or labs. However, a small percentage of students were neutral; these were probably students who had not attended the classes and did not know how to respond to this question. A small percentage of students disagreed that the length of the exam was appropriate. The assessment difficulty, though considered appropriate by most students, was not considered appropriate by some; this is expected, as there is always a certain portion of students who find assessments too difficult or too easy. It is clear that the assessment design by the student partners was a successful test case.

4.2. Feedback Category

With regard to the feedback and its quality, students were asked the following questions:
  • The feedback was provided in time for me to use it (Q4).
  • The feedback was clearly understandable (Q5).
  • The feedback was useful to me (Q6).
  • The feedback was relevant to my assessment (Q7).
  • The feedback was more useful for me than providing only the solution (Q8).
  • This feedback process is a good alternative to individual feedback (Q9).
  • The video feedback provided additional useful information to me (Q10).
As seen in Figure 4, most of the students either strongly agreed or agreed with the different statements related to the feedback they were provided. Around 88% of the students considered the feedback useful to them. There was some disagreement on the timely availability of the feedback and on the added value of the video feedback. Although the feedback process was completed within three weeks, some students were still not satisfied with the timeliness of the feedback return. This strengthens the authors’ initial argument that, if instructors had to provide individual feedback to large classes, feedback delivery could take even longer, causing further student dissatisfaction. Secondly, with regard to the added value of the video feedback, the video content needs to be further improved to ensure the videos effectively provide additional support rather than merely repeating the tutorial feedback.

4.3. Peer Review Category

The following questions were asked about the peer-review process:
  • The peer review process was helpful for my learning (Q11).
  • The assessment rubrics were clear to me during peer-review process (Q12).
  • The writing of the peer reviews for my peers helped me to understand the examiner’s thinking process (Q13).
Although most of the peer-review survey results were extremely positive, a percentage of students did not think that the assessment rubrics were clear enough for them during the peer-review process. Some did not consider that taking part in peer review helped them gain insight into the assessment’s purpose. This indicates that further work is required when designing rubrics to ensure better understanding by the students. Also, the peer-review process should be preceded by a detailed presentation relating peer review to its implications for performing better in assessments.

4.4. Overall Feedback

The following questions were asked about the overall process:
  • The overall quality of assessment I took part in was exceptional (Q14).
  • The overall quality of the peer review process was extraordinary (Q15).
  • The overall quality of the feedback that I received was extremely helpful (Q16).
  • The whole process will be useful in my learning (Q17).
  • The whole process will help me in my exams (Q18).
The majority of the students agreed or strongly agreed with the different aspects of the project. In particular, a large percentage of students were happy with the usefulness of the process, and only around 5% disagreed about the effectiveness of the peer-review experiment. It is understandable that peer review is not a common practice for students, especially first-year students, and therefore more effort would be required to enhance students’ engagement. It is also important to ensure that students understand that, with the help of standard marking criteria and anonymity, the factor of biased grading is significantly reduced. The peer-review process can also help them better understand their own strengths and weaknesses when they are reflecting on and judging their peers’ work.
The authors also compared the students’ performance on the exam with that of the previous year’s cohort in the same subject. It was found that the average score for the cohort that participated in this project was around 9% higher than that of the previous year’s cohort, which did not have the opportunity to undergo such an exercise.

4.5. Ex-Student Feedback

This section briefly presents some of the feedback the authors received from the ex-students through focus groups. The ex-students enjoyed the tutorial feedback session that they held independently and considered it a confidence-boosting experience. They also shared that participating in the project would help them in the later stages of their degree programme. They gained a better understanding of how instructors grade their work based on assessment rubrics and how they could effectively employ their instructors’ feedback to improve their learning. It was also noted that the ex-students were able to create a good working relationship with the current students, and they enjoyed being mentors and helpers to their junior students. This was also evidenced by partner student feedback, for example: “…my knowledge of student assessment and feedback was greatly improved. Having experienced being a student, a lecturer and examiner, I learnt to think in all three ways, which helped me understand all of them better. Being in such a project also enhanced my time management and communication skills.”; and also, “overall, I think it is an excellent project which benefits both junior students and us. And it also makes some improvements on the traditional assessment & feedback session”.

5. Conclusions and Recommendations

In this paper, we have proposed and implemented a novel A&F mechanism to support large classes in the TNE context. We have proposed a two-tier A&F strategy that employs students as partners for planning, preparing and executing the A&F process. We involved current students in the peer-review process, while a selected team of ex-students, under the guidance of their instructors, prepared a formative assessment and its marking rubrics. The ex-students then supervised the peer-review process and helped moderate the peer reviews. Later, the ex-students helped prepare feedback clusters and deliver tutorial-based feedback. Based on the feedback clusters, the instructors prepared a set of videos addressing the students falling into each cluster, thereby providing pseudo-personalised feedback. The study was evaluated through a student survey and focus groups. The outcomes of the project are extremely promising for addressing the challenges of A&F for large classes, particularly in the TNE context; however, the approach can be implemented in any large-class scenario.
Finally, we would like to make some recommendations:
  • Peer review should be preceded by an introduction session, and the students should be guided about the purpose of peer review.
  • Student presentation skills can be improved if the written exam assessment is complemented/replaced by a poster presentation.
  • Video feedback should provide additional information to tutorial-based feedback in order to add value to the feedback process.
  • Student experience with peer reviewing should be kept in mind before engaging them in the peer-review process. Particularly, first-year students have very little experience of the university system of assessments and evaluations, and therefore they should be trained accordingly.
  • The student partners should be motivated and inspired by someone who has strong commitment and expertise in the A&F process and could lead the students properly.

Author Contributions

Conceptualization, S.H. and W.A.; methodology, S.H. and M.A.I.; software, K.A.A.G.; validation, M.A.I., K.A.A.G.; formal analysis, S.H.; investigation, S.H.; resources, K.A.A.G.; data curation, S.H.; writing—original draft preparation, K.A.A.G.; writing—review and editing, S.H.; visualization, S.H.; supervision, S.H.; project administration, S.H.; funding acquisition, S.H., W.A., M.A.I.

Funding

This research was funded by the Learning and Teaching Development Fund, University of Glasgow.

Acknowledgments

The authors would like to thank the student partners, Zimo Zhao, Di Zhang and Ruoye Wang, for their support in conducting the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Higher Education Academy. Assessment and Feedback; Higher Education Academy: Heslington, UK, 2019. Available online: https://www.heacademy.ac.uk/system/files/resources/assessment_and_feedback.pdf (accessed on 13 May 2019).
  2. Race, P. Designing Assessment to Improve Physical Sciences Learning; Higher Education Academy: Heslington, UK, 2009; ISBN 978-1-903815-22-9.
  3. Race, P.; Brown, S.; Smith, B. 500 Tips on Assessment, 2nd ed.; Routledge: London, UK, 2005.
  4. Hounsell, D. The trouble with feedback: New challenges, emerging strategies. Interchange 2008, 2, 1–10.
  5. Race, P. The Lecturer’s Toolkit—A Resource for Developing Assessment, Learning and Teaching, 3rd ed.; Routledge: London, UK, 2007.
  6. Universities UK. About Our TNE Work. 2008. Available online: https://www.universitiesuk.ac.uk/International/heglobal/Pages/about-tne-work.aspx (accessed on 13 May 2019).
  7. Tighe-Mooney, S.; Bracken, M.; Dignam, B. Peer Assessment as a Teaching and Learning Process: The Observations and Reflections of Three Facilitators on a First-Year Undergraduate Critical Skills Module. AISHE J. 2016, 8, 283.
  8. Stefani, L. Assessment in Partnership with Learners. Assess. Eval. High. Educ. 1998, 23, 339–350.
  9. Dickson, H.; Harvey, J.; Blackwood, N. Feedback, feedforward: Evaluating the effectiveness of an oral peer review exercise amongst postgraduate students. Assess. Eval. High. Educ. 2019, 44, 692–704.
  10. Russell, J.; Van Horne, S.; Ward, A.S.; Bettis, E.A.; Gikonyo, J. Variability in students’ evaluating processes in peer assessment with calibrated peer review. J. Comput. Assist. Learn. 2017, 33, 178–190.
  11. Biggam, J. Using Automated Assessment Feedback to Enhance the Quality of Student Learning in Universities: A Case Study. In Technology Enhanced Learning: Quality of Teaching and Educational Reform; Springer: Berlin, Germany, 2010.
Figure 1. Proposed two-tier assessment and feedback (A&F) process.
Figure 2. Marking rubric developed by ex-students.
Figure 3. Moderation of the peer-review process.
Figure 4. Survey results.
