Review

Quality of Feedback in Higher Education: A Review of Literature

Kathryn Haughney, Shawnee Wakeman and Laura Hart
1 Department of Elementary and Special Education, Georgia Southern University, Statesboro, GA 30458, USA
2 Department of Special Education, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
3 College of Education, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2020, 10(3), 60; https://doi.org/10.3390/educsci10030060
Submission received: 20 January 2020 / Revised: 26 February 2020 / Accepted: 26 February 2020 / Published: 5 March 2020

Abstract

In raising the standards for professional educators, higher educators must be prepared to provide the highest quality feedback on student performance and work products toward improved outcomes. This review of the literature examined the major findings of 70 quantitative, mixed methods, or qualitative studies found in higher education journals across a range of disciplines. Multiple recommendations and results for feedback emerged which fall into the categories described by Susan Brookhart. This review found research for each of Brookhart’s categories, with results indicating differences between the perceptions of adherence to sound feedback practices versus the reality of implementation, the potential for innovative tool use, and a disagreement about the effectiveness of peers for providing effective feedback. Indicators for quality within the research confirmed the importance of commonly accepted standards such as positivity, specificity, timeliness, and encouraging active student participation. Additionally, trends and themes indicated a need for the consistent implementation of the feedback exchange process and flexibility to account for student input/preferences. Greater consistency in the application of these quality indicators is needed when determining the quality of higher education feedback for preservice teachers prior to their summative licensure assessments.

1. Introduction

As with public schools, institutions of higher education have faced increased accountability pressures in recent years to assess student performance [1]. For educator preparation programs (EPPs; i.e., institutions of higher education designed to prepare candidates for teaching across the U.S.), this pressure is particularly acute [2,3]. Nationally, summative assessments such as performance-based assessments (e.g., validated tasks including demonstrations of teacher-candidate knowledge and skills such as edTPA or PPAT) and licensure exams are being used more frequently to ascertain teacher-candidate effectiveness prior to a licensure recommendation [4,5]. The high-stakes nature of these assessments has caused many EPPs to create formative activities in coursework that are aligned with these summative assessments, allowing candidates to build their capacity toward the successful completion of these assessments throughout a program of study [6,7]. These formative activities (such as planning differentiated lessons or analyzing student data) are embedded into course assignments and provide faculty opportunities to give candidates extensive feedback on their performance with enough time to adjust instruction based on the results. Given that passing scores may be required on the summative assessments to meet licensure requirements, the quality of faculty feedback given to candidates on formative assignments can be critical in assisting candidates with improving their own performance—particularly on a performance-based assessment like edTPA [8]. Examining the quality of faculty feedback to candidates on these assessments, then, would be worthy of further study.
Reviewing the quality of faculty feedback to candidates, particularly when multiple faculty are providing feedback on a common assessment, can be tricky. This kind of review means considering the variability of feedback given to students by different faculty. As Evans [9] noted, there is wide disagreement among university faculty on the definition of appropriate assessment feedback. The amount, context, and quality of what constitutes “appropriate” feedback may vary greatly. Faculty may consider any attempt to streamline this process as a threat to their autonomy [7]. In addition, research has shown that when feedback is provided to candidates in a negative context (i.e., “what you are doing wrong”), candidates are less likely to take it up as a means to improvement [9,10]. The kind of feedback given to candidates and how that feedback is shared are critical in determining whether the feedback is effective in increasing candidate performance.
There are several conditions or moderators to consider which can influence the impact of feedback. Gibbs and Simpson [11] outline seven conditions that facilitate a positive influence of feedback, including the frequency, timeliness, relevance, and detail of the feedback; a focus of feedback on student performance; and feedback that is understandable and actionable. Brookhart [12] outlined the features of feedback considered most impactful, addressing strategies such as the timing, amount, and mode of the feedback as well as the content of the feedback, which included an examination of its function, valence, clarity, specificity, and tone. The authors seek to further examine learning-oriented assessment [13] best practices as they relate to higher education edTPA training and assessment. To chart a path toward the most efficient and effective feedback interactions, scholars need to examine how feedback is currently operating in higher education.
The focus of this study was to examine the literature for (a) features of feedback or (b) the conditions under which feedback is provided in higher education that are deemed to impact the quality or utilization of the feedback. A literature review was conducted to answer the following questions:
  • Are features of quality feedback verified in the current higher education literature?
  • What are the perceptions of faculty and students in higher education regarding what is considered quality feedback?
The Methods section outlines the parameters for the review, the inclusion and exclusion criteria, and descriptors of the codes. The Results are organized by research design and codes. The Discussion highlights the overall major findings of the review.

2. Methods

The parameters for conducting this literature review included a targeted review of materials that fell within a specific time frame. The current review considered literature on feedback published between 2000 and 2018, a period that included updated frameworks for feedback development and higher standards for empirical evidence. Readily available electronic databases (considered standards in the U.S.) were searched as the main source for identifying appropriate information (e.g., Academic Search Complete, ProQuest, Sage Complete, JSTOR, WorldCat.org, EBSCOhost, Ed Research Complete, and Google Search/Google Scholar for web resources). Search terms generally included the words higher education in combination with the words feedback, assessment, and evaluation, along with various descriptors like quality, timely, task focused, analytic, and descriptive. Literature was also identified by manual searches of higher education-focused journals (e.g., Assessment and Evaluation in Higher Education) and tree searches of the reference sections of previously identified literature.
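To illustrate how these terms were combined, a full search string might take the following form (a hypothetical reconstruction; the review reports the term families rather than the exact queries used):
“higher education” AND (feedback OR assessment OR evaluation) AND (quality OR timely OR “task focused” OR analytic OR descriptive)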
Articles were reviewed for alignment with the inclusion and exclusion criteria listed below. Encompassing theoretical, conceptual, and data-based works (both qualitative and quantitative), the collection of literature was sorted into three categories for separate analyses: conceptual foundations, quantitative, and qualitative. All work included in the conceptual foundations was used to inform the literature search terms and categories. Only literature that reported a methodological study was included in the results of the literature review. The literature review materials included journal articles, conference papers, and dissertations. Other materials such as books, book chapters, and websites were also considered in the development of the quality feedback coding measure as well as in the development of the search term list for the literature review.
Inclusion criteria:
  • Articles from 2000 to 2018.
  • Included feedback provided to students in higher education (e.g., a university or community college).
  • Addressed either the content of the feedback provided or the format of feedback (delivery mechanisms).
  • Geographic Location: all accepted.
  • Types of literature: conceptual/definition, methodological, empirical studies, reviews/syntheses.
Exclusion criteria:
  • Articles prior to 2000.
  • Studies including participants from grade school or post-school employment.
  • Articles that failed to clearly describe the focus of the study as the content and format of feedback.
Using the existing literature about the features of quality feedback, a coding form was created to identify the categories within each article (see Table 1). Brookhart’s [12] framework was written for practicing classroom teachers, not necessarily faculty in higher education. The content, however, was deemed to be applicable for both young children and adult learners. Therefore, the content was adapted through the lens of higher education. Not all codes outlined within Brookhart [12] were included in the study. For example, within Function, the code of “judgement” was not included in this review, as the research literature typically did not measure “judgement” as a purpose of providing feedback within a higher education setting. Considerations of student interactions with feedback (e.g., Carless and Boud [14]) are taken into account through the code of Tone. All articles were coded using the same codes regardless of the methodology across studies. A number of articles (n = 7, 10%), with representation from each included methodology category, were double coded by the first two authors to establish the reliability of coding. Operational definitions were created to increase the reliability of coding across articles (see Table 1). Additionally, qualitative studies included a narrative findings summary for themes by category, captured in an Excel file. The percentage of agreement for the categories ranged from 91% to 100%.
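As a concrete illustration of this reliability check, the sketch below computes a simple percentage of agreement between two coders for one code category. It is a minimal sketch with hypothetical data; the review reports only the resulting agreement range (91%–100%), not the computation itself.

```python
# Minimal sketch of a percent-agreement check between two coders.
# The labels below are hypothetical; the actual coding used the Table 1 form.

def percent_agreement(coder_a, coder_b):
    """Return the percentage of items on which two coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must rate the same set of articles.")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

# Example: valence codes for seven double-coded articles (n = 7, 10%).
coder_1 = ["positive", "negative", "positive", "n/a", "positive", "negative", "n/a"]
coder_2 = ["positive", "negative", "positive", "n/a", "negative", "negative", "n/a"]

print(f"Agreement: {percent_agreement(coder_1, coder_2):.0f}%")  # Agreement: 86%
```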

3. Results

The initial review resulted in 17 quantitative, 11 mixed methods, and 50 qualitative studies (see Supplementary Materials). After a closer examination of these studies, 10 did not meet the inclusion criteria. Thus, the final analysis included 15 (21.4%) quantitative, 11 (15.7%) mixed methods, and 44 (62.9%) qualitative studies, for a total of 70 studies. Of these accepted studies, the majority were conducted or published internationally (outside of the United States). These studies were coded for their characteristic features (e.g., participants, design, results), data collection descriptions (e.g., timing, mode, audience), and explicit outcomes from the analyses. For quantitative studies, coding was conducted to examine the presence/absence or type of various feedback qualities including the feedback’s focus, means of comparison, function, valence, and quality (clarity, specificity, and tone). For the qualitative studies, a findings summary was completed and a data reduction process was conducted. Mixed methods studies were included in both processes for their relevant data.

3.1. Characteristics of Current Literature

To begin the analyses, the authors noted descriptive information for each study (i.e., who provided the feedback, the mode in which that feedback was provided, and the methodological design). Of the included works, the majority of the research focused on instructor-to-student feedback (61%), but some literature also focused on peer-to-student feedback (18%), with two comparisons between instructors and peers. Various modes of feedback were noted, including evaluations of innovative technology applications (e.g., audio feedback by Lunt and Curran [15]; Macgregor, Spiers, and Taylor [16]), while most focused on written feedback (e.g., Kisnanto [17]). Blog, vlog, and virtual feedback were also represented, although viewed as part of a new wave of applications for technology in education (e.g., Williams and Jacobs [18]; Xie, Ke, and Sharma [19]). Tutor-to-student feedback was also present (17%), as was automated feedback (4%). The quantitative designs varied between experimental (e.g., group designs, 22.3%) and non-experimental (e.g., surveys, 12%), with few examples relying on other means (e.g., action research, n = 1). Every study examined at least one of either the identified strategies or the content for effective feedback. Across all qualitative studies examined, none included an explicit description of all feedback features, but each study included at least one of the characteristic features.

3.1.1. Focus of the Studied Feedback

Using the levels of feedback focus outlined by Hattie and Timperley [20], the included studies were sorted into those which focused on the task (defined here as the completion of work tasks or the content of the work), the process of the task (defined here as the process for producing the work), the self-regulation of student activities (defined here as student behaviors associated with the work), or the students themselves (defined here as directly addressing student characteristics). While many included studies did not describe the feedback in sufficient detail to be coded for this feature (n = 53), the majority of those that could be coded focused on the work or the content of the work (19.4%, n = 13). None of the included studies focused solely on the process, although an additional four studies focused on both content and process (e.g., Van den Berg, Admiraal, and Pilot [21]). Few studies had a focus on student behavior (e.g., Minehart, Rudolph, Pian-Smith, and Raemer [22]), and only one study was found which mentioned feedback that included characteristics and descriptors of the students themselves (e.g., “Your willingness to change your tone has improved!”, Hebert and Vorauer [23]).

3.1.2. Means of Comparison and Feedback Function

A description of the means for comparison was rarely included (9%), but when it was, the feedback was criterion-referenced as opposed to norm-referenced. Although the feedback function was typically not explicitly stated, descriptions of the feedback were sufficient to identify whether the feedback was descriptive (providing advice without implications for or a connection to a grade) or evaluative (with implications for or a connection to a grade). The majority of the qualitative studies examined evaluative feedback (19.4% of all included studies), while descriptive feedback was included in seven others, six of which provided it in conjunction with evaluative feedback.

3.1.3. Valence and Tone

The valence of the feedback was coded to indicate the frequency of positive versus negative feedback in the literature. While the majority of the qualitative studies did not mention or provide a sufficient description of the valence (noted for only 28.8%), the studies that did either compared positive and negative feedback directly (e.g., Plakht, Shiyovich, Nusbaum, and Raizer [24]) or noted valence in student opinions of feedback effectiveness (e.g., McCarthy [25]). Tone was also coded by identifying feedback designed for active participation versus passive participation by the learner. The vast majority of the included studies did not include the relevant information for this code (with only 3% identified), but those that did described the detriments of passive learning feedback from computer-generated feedback (e.g., Hebert and Vorauer [23]) and peer-provided feedback (e.g., Wilkins, Shin, and Ainsworth [26]).

3.1.4. Understandable and Actionable

Similarly, only six studies evaluated evidence for understandable or actionable qualities (e.g., Hebert and Vorauer [23]; Lunt and Curran [15]). All studies that mentioned this characteristic described the presence of vague feedback that was less helpful for applying suggestions and making an impact on future student work.

3.1.5. Specificity

Only five studies evaluated evidence for specificity (e.g., Lunt and Curran [15]), with two examining the difference between vague and specific comments that promote the next steps for students (e.g., Wilkins, Shin, and Ainsworth [26]). In the current literature, there is a lack of evidence to sufficiently describe how specific feedback in higher education has a causal relationship with positive results. The limited evidence that was found, however, consistently suggests that specificity is required for quality so that the student is able to understand how to proceed. Unfortunately, the feedback for higher education students included in the studies that examined this tended to be vague, and thus less impactful and effective.

3.2. Qualitative Themes in Current Literature

A thematic analysis of the qualitative literature supported the quantitative summary based on various aspects of effective (and ineffective) feedback and feedback practices. Within the 41 qualitative studies, evaluations of feedback delivery (timeliness, tone, valence; n = 23), feedback forms/mediums/modes (n = 25), feedback content (n = 14), and feedback functions or purposes (n = 9) were the most frequent foci. Additionally, many of the qualitative studies of feedback within higher education examined various failings or barriers to quality feedback (n = 22) as well as the technology used or required to support quality feedback (n = 18). Of these initial codes, several themes (described below) emerged from the findings.

3.2.1. Feedback Delivery

Feedback delivery examinations described influences on how the message was received (e.g., Huxham [27]). This included factors like feedback timeliness, tone, and valence. Timeliness was a particularly prominent theme across many studies (e.g., Lynam and Cachia [28]), with participants often noting that timely feedback was more likely to be received and acted upon. Additionally, feedback was evaluated based on the individual providing it (instructor, tutor, or peer, with instructor feedback being preferred) as well as the tone and language chosen to convey the message (which participants noted as having an impact on usability and reception). Peer feedback was examined (e.g., Wilkins, Shin, and Ainsworth [26]) and found to increase reflective thinking and collaboration, but most conclusions suggest that peer feedback cannot replace instructor feedback, only supplement it.

3.2.2. Feedback Forms

In the literature, the various forms by which feedback was provided were evaluated frequently, ranging from descriptive accounts (e.g., Mathisen [29]) to comparisons across perceptions (e.g., Gould and Day [30]). A focus for both students and instructors within the qualitative studies, feedback forms were noted to impact the effectiveness of the feedback’s message as well as its timeliness and actionability in application to future work. Hennessy and Forrester [31] conducted a comparison of forms (audio and written) and concluded that more than one mode may increase access to the positive effects of feedback, confirming the findings of prior inquiry (e.g., Miller [32]). Several innovations for use within typical forms were found (e.g., Bloxham and Campbell [33] examined the use of cover sheets to spur productive dialogue between tutors and students), and outcomes in these types of studies indicated the potential for impact on overall feedback quality and its use (e.g., Sopina and McNeill [34]).

3.2.3. Feedback Content

Characteristics of the feedback content were also present in the literature, providing descriptions of included elements. Preferences or beliefs about what should be said within the feedback were discovered related to valence and actionable content (e.g., Poulos and Mahony [35]) and, most recently, considerations of affect and personalization to student growth [36]. Other research emphasized the need to create a connection between the feedback and assessment criteria as a general standard [37].

3.2.4. Feedback Functions and Purposes

Dawson and colleagues [36] provide one example from the literature that examined perceptions of the functions of feedback. Students and instructors indicated that improvement of work was defined by increased understanding of the content, but also by self-reflection and critical thinking indicators.

4. Discussion

This literature review sought to extend current understanding by using existing research to identify the features of quality feedback in higher education as well as the perceptions of faculty and students in higher education regarding what is considered quality feedback. The outcomes confirmed the findings of previous reviews that there is a wide range of foci in the research literature. Despite this, clear themes emerged which indicate that standards for quality feedback in higher education can be established and applied through a quality framework.
The identification of critical features and functions of feedback is of utmost importance if feedback is to be determined to be effective. O’Donovan, Rust, and Price [38] noted a disconnect between theory and practice regarding the effectiveness of feedback. This literature review contributes by identifying indicators which promote quality, a necessary step prior to feedback being effective. While this review utilized feedback characteristics originally outlined for classroom teachers, the adapted framework proved applicable and appropriate given the research literature surrounding feedback in higher education. It is necessary for institutions of higher education (IHEs) to determine if faculty (or the structures they utilize, such as peer feedback opportunities) are providing feedback that has quality elements not only for content, but for the strategies or processes as well. It is possible to develop a tool for use in IHEs which can measure the implementation of essential feedback characteristics and provide feedback to faculty regarding the number of quality indicators found in their current practice. Providing faculty with feedback can lead to a variety of preferred outcomes, such as the reduction or even elimination of the perceived mismatch between instructor practices and learning goals. This type of feedback could also promote active relationships between instructors and students surrounding the feedback and improvement exchanges. Additionally, faculty may be more thoughtful and willing to use innovative tools and techniques to increase the effectiveness of higher education feedback. The quantitative analysis within this review strengthens this call for action on the part of educators in EPPs to reform practice.

4.1. Major Findings for Quality Feedback Across the Literature

4.1.1. Consistent Definitions of Features

For the coded samples associated with feedback content, there appeared to be relatively consistent standards to follow across disciplines. Most of the characteristics described here (including timing, focus, means of comparison, valence, and quality for clarity, specificity, and tone) are repeated throughout the literature with a consistent definition of their quality. For example, timing is generally considered to be high quality when it falls within close proximity to the initial learning and production of work products, from both instructors’ perspectives [37] and students’ perspectives [35].

4.1.2. Feature Implementation Inconsistency

The implementation of high quality feedback features in the literature was inconsistent. Substantial mismatches were noted in some cases (e.g., Mulliner and Tucker [37]), indicating that even when instructors understand the elements of high-quality feedback, the effective implementation of those quality elements can remain elusive. The use of effective feedback strategies is also inconsistent. There is a need for implementation standards for new applications of feedback practices, such as audio feedback (e.g., Hennessy and Forrester [31]).

4.1.3. Perceptions

Overall, perceptions emerged as a theme. Dissonance between various conceptualizations of feedback and feedback practices was apparent. Two branches emerged within this theme, both of which have the potential to influence the quality of feedback in higher education.

Instructor Perception vs. Implementation

This theme was associated with multiple examples of instructor feedback practices that did not match the instructors’ espoused beliefs about high quality feedback practices (e.g., Orrell [39]). Student action was one area in which perception did not match practice. Some articles demonstrated a lack of expectations for students to act on the provided feedback (e.g., Stern and Solomon [40]; described as the steps of preliminary guidance, ongoing clarification, supplementary support, and feeding forward by Hounsell, McCune, Hounsell, and Litjens [41]). Another mismatch was the delivery of feedback on formative or summative assessments, which favored summative assessment (describing the end result of student learning) only (e.g., Orrell [39]). The literature highlighted that feedback practices should reflect the evidence-based methods designed to achieve primary learning goals, which indicates a cyclical approach to applying feedback. For example, Hyland [42] interviewed faculty to examine disciplinary writing feedback and found that their ability to shape student writing toward disciplinary approved standards was considered a primary outcome.

Instructor Perceptions vs. Student Perceptions

The mismatch theme was also noted in articles which addressed instructor and student perceptions of the effectiveness and usefulness of provided feedback (e.g., Weaver [43]). There is also evidence that differing perceptions affect the assessment process as well (e.g., Carless [44]). Certain trends in the quality feedback literature are also at odds with student perceptions, such as demonstrated student preferences for written print feedback over electronic means [45]. Dawson et al. [36] found that while some characteristics of feedback would match between students and faculty, perspectives differ on the finer goals and the ultimate point of feedback improvement. These mismatches are cause for a reevaluation of quality feedback priorities that includes the flexibility to account for student preferences.

Impact of Student Perceptions

Also of note were the instances of student perceptions which have the potential to impact the outcomes of feedback utilization and application to future work. For example, students perceive individual feedback as higher quality than group feedback (e.g., Mulliner and Tucker [37]). The previously noted dissonance between instructor and student ideas of the purpose, most effective form/mode, or valence of the feedback is influential in students’ receptiveness to constructive criticism. Studies are emerging on the ways that students interact with feedback (e.g., Carless and Boud [14]), and calls for continued research investigating the ways in which students appreciate, judge, and manage their affect before taking action are echoed by the current review. Student perceptions of valence in the literature indicated support for constructive criticism in some cases (e.g., Brandt [46]) and a sensitivity to any feedback associated with judgement that could be discouraging to students (e.g., Zhou and Deneen [47]). Indications that including student preferences in the feedback process may increase students’ active participation were present [41]. Active student participation is a primary goal for high-quality feedback and requires student and staff buy-in for effective implementation (e.g., Hart and Wakeman [7]).

4.1.4. Potential of Peers

Multiple articles led to the development of a theme on the potential for peers to provide effective feedback to students in higher education classrooms. This potential is highly disputed, with indications that staff are likely to reject peer feedback models due to a lack of reliability, a lack of expertise, power struggle concerns, and a lack of time [48]. While some of the literature described the limitations of using peers (e.g., unexplained incongruences and inconsistent expectations [46]), others described their great potential to promote advanced cognitive strategies in their peers [49]. Wilkins, Shin, and Ainsworth [26] found that peer feedback provided more opportunities for student reflection and collaboration but was strictly supplemental to the feedback provided by university staff. Ertmer et al. [50] described instructor feedback as more preferred by students (who did not see the value in peer feedback), again demonstrating the potential for student perceptions to impact the feedback process.

4.1.5. Innovative Tool Use

Prominent descriptions, comparisons, and new applications of various supporting technologies were noted across the higher education feedback literature. From video feedback [29] to the effective development and acceptance of automatic computer-generated feedback [51], the literature describes promising results for new technology applications. Computer-generated content feedback was also emerging in the literature (e.g., Eddington and Foxworth [52]), with indications that automating some forms of feedback could save time for higher educators and save money for educational institutions [53]. Because many of these feedback-supporting technologies are at the forefront of educational innovation, further study would provide a stronger research base and evidence for untested tools and strategies.

4.1.6. Timing, Mode, and Audience Findings

Timing was included in 24% of the identified studies, which consistently emphasized timeliness as a virtue of quality feedback. Thirty-two percent of the studies examined written feedback (either paper or electronic); however, several studies compared different modes of feedback delivery (e.g., Macgregor, Spiers, and Taylor [16]). Future studies could consider the strengths and weaknesses of each possible mode, which may influence the feedback exchange. It also may be of interest to further study the impact of providing a variety of modes [10]. In describing the audience for the feedback, indirect or vague descriptions were provided for 52% of the studies. Two studies compared group versus individual feedback [37,54], with the evidence clearly favoring individual student feedback.
Consideration of the audience was largely unexamined in the experimental research despite its inclusion in high quality feedback frameworks (e.g., Brookhart [12]). In addition to the tailoring of feedback to address individual student goals, audience considerations include factors about the students that may impact the utilization of the feedback, such as level of study (undergraduate versus master’s level students; e.g., Poulos and Mahony [35]) or previous preparation (initial licensure/lateral entry students versus students who have previous lesson plan writing experience). This aspect of high-quality feedback seems largely absent in the literature; however, evaluations of the dynamic dimensions of feedback (cognitive, social-affective, and structural) suggest that the conceptualization of feedback as dialogue includes the audience as an active participant [55]. Future examinations should include the recent literature themes of the audience and the impact of recipient participation in the feedback discussion, moving beyond feedback type, provider, and mode as basic descriptors for quality (e.g., Wu [56]).

5. Conclusions

This study contributes to the field by identifying quality feedback practice trends that have the greatest impact on success for students in higher education. These include both widely accepted quality standards (e.g., feedback should be positive, specific, timely, and encourage active student engagement) as well as lesser-known features with emerging evidence in the literature (e.g., the impact of peer feedback practices, novel tool use, and novel feedback forms). To prepare preservice educators for high stakes teacher preparation summative licensure assessments, EPPs will need to become more consistent regarding the content of the feedback provided to students and the process by which their programs provide such feedback [7]. The included research in this review confirms that effective feedback should be actionable throughout a student’s learning process, rather than occurring once on a summative assessment with no chance of application or improvement by the student.
As the measurement of quality feedback and its delivery by higher education faculty is not a typical occurrence other than for the characteristics of timing and tone, it is challenging to require any mode, form, or quality characteristic. This review, however, helps to outline best practices: practical implications for what individual faculty can do to maximize the quality and impact of the feedback provided to students. By utilizing the quality feedback practices outlined here that are responsive to student preferences and needs, the time faculty spend on feedback can directly influence student learning. Moreover, the use of these practices by higher education faculty would address student preference and, subsequently, student satisfaction.
This review found research for each of the categories regarding feedback strategies and content, with results indicating differences between the perceptions of adherence to sound feedback practices versus the reality of implementation, the potential of innovative tool use, and a disagreement about the effectiveness of peers for providing effective feedback. Quality feedback requires consideration of strategies, processes, and content, but also a means of addressing potential mismatches between student and faculty perceptions within higher education of what is effective. Beyond the recommendations of Gibbs and Simpson [11] as well as Brookhart [12], additional factors described in recent literature (e.g., features available in distance learning [56]) are likely to impact the quality of the feedback. Future work in the evaluation of feedback, particularly on tasks related to high stakes assessment, is necessary to be sure students in higher education are receiving feedback that prompts the adaptations and modifications necessary for student learning and development, as well as the lasting impacts demanded by the rigors of the teaching profession.

Supplementary Materials

A table of studies included in this analysis is available upon request from the corresponding author. Literature that was reviewed but not referenced above is included in the reference list [57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112].

Author Contributions

The authors on this review worked as a team, each contributing to the review and analysis of the included literature. Conceptualization, S.W., K.H., and L.H.; methodology, S.W.; software, K.H.; validation, K.H., S.W. and L.H.; formal analysis, K.H.; investigation, K.H.; resources, K.H.; data curation, K.H.; writing—original draft preparation, K.H.; writing—review and editing, S.W.; visualization, K.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

Support from the University of North Carolina at Charlotte staff was appreciated in the development of this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Worthen, M. The Misguided Drive to Measure ‘Learning Outcomes’; New York Times: New York, NY, USA, 2018; p. SR 1. Available online: https://www.nytimes.com/2018/02/23/opinion/sunday/colleges-measure-learning-outcomes.html (accessed on 15 December 2019).
  2. Knight, S.L.; Lloyd, G.M.; Arbaugh, F.; Gamson, D.; McDonald, S.P.; Nolan, J., Jr.; Whitney, A.E. Performance assessment of teaching: Implications for teacher education. J. Teach. Educ. 2017, 65, 372–374. [Google Scholar] [CrossRef] [Green Version]
  3. Price, T.A. Teacher education under audit: Value-added measures, TVAAS, edTPA and evidence-based theory. Citizsh. Soc. Econ. Educ. 2014, 13, 211–225. [Google Scholar] [CrossRef]
  4. Cochran-Smith, M.; Piazza, P.; Power, C. The politics of accountability: Assessing teacher education in the United States. Educ. Forum 2013, 77, 6–27. [Google Scholar] [CrossRef]
  5. Sato, M. What is the underlying conception of teaching of the edTPA? J. Teach. Educ. 2014, 65, 421–434. [Google Scholar] [CrossRef] [Green Version]
  6. Burns, B.; Henry, J.; Lindauer, J. Working together to foster candidate success on the edTPA. J. Inq. Action Educ. 2015, 6, 18–37. [Google Scholar]
  7. Hart, L.C.; Wakeman, S. Creating faculty buy-in for edTPA and other performance-based assessments. In Evaluating Teacher Education Programs through Performance-Based Assessments; Polly, D., Ed.; IGI Global: Hershey, PA, USA, 2016; pp. 80–92. [Google Scholar]
  8. Davis, T.S.; Mountjoy, K.J.; Palmer, E.L. Creating an instructional framework to prepare teacher education candidates for success on a performance-based assessment. J. Res. Bus. Educ. 2016, 57, 1–13. [Google Scholar]
  9. Evans, C. Making sense of assessment feedback in higher education. Rev. Educ. Res. 2013, 83, 70–120. [Google Scholar] [CrossRef] [Green Version]
  10. McCarthy, J. Evaluating written, audio and video feedback in higher education summative assessment tasks. Issues Educ. Res. 2015, 25, 153–169. [Google Scholar]
  11. Gibbs, G.; Simpson, C. Conditions under which assessment supports students’ learning. Learn. Teach. High. Educ. 2005, 1, 3–31. [Google Scholar]
  12. Brookhart, S.M. How to Give Effective Feedback to Your Students, 2nd ed.; ASCD: Alexandria, VA, USA, 2017. [Google Scholar]
  13. Carless, D. Exploring learning-oriented assessment processes. High. Educ. 2015, 69, 963–976. [Google Scholar] [CrossRef] [Green Version]
  14. Carless, D.; Boud, D. The development of student feedback literacy: Enabling uptake of feedback. Assess. Eval. High. Educ. 2018, 43, 1315–1325. [Google Scholar] [CrossRef] [Green Version]
  15. Lunt, T.; Curran, J. ‘Are you listening please?’ The advantages of electronic audio feedback compared to written feedback. Assess. Eval. High. Educ. 2010, 35, 759–769. [Google Scholar] [CrossRef]
  16. Macgregor, G.; Spiers, A.; Taylor, C. Exploratory evaluation of audio email technology in formative assessment feedback. Res. Learn. Technol. 2011, 19, 39–59. [Google Scholar] [CrossRef]
  17. Kisnanto, Y.P. The effect of written corrective feedback on higher education students’ writing accuracy. J. Pendidik. Bhs. Dan Sastra 2016, 16, 121–131. [Google Scholar] [CrossRef] [Green Version]
  18. Williams, J.B.; Jacobs, J.S. Exploring the use of blogs as learning spaces in the higher education sector. Australas. J. Educ. Technol. 2004, 20, 232–247. [Google Scholar] [CrossRef] [Green Version]
  19. Xie, Y.; Ke, F.; Sharma, P. The effect of peer feedback for blogging on college students’ reflective learning processes. Internet High. Educ. 2008, 11, 18–25. [Google Scholar] [CrossRef]
  20. Hattie, J.; Timperley, H. The power of feedback. Rev. Educ. Res. 2007, 77, 81–112. [Google Scholar] [CrossRef]
  21. Van den Berg, I.; Admiraal, W.; Pilot, A. Designing student peer assessment in higher education: Analysis of written and oral peer feedback. Teach. High. Educ. 2006, 11, 135–147. [Google Scholar] [CrossRef]
  22. Minehart, R.D.; Rudolph, J.; Pian-Smith, M.C.; Raemer, D.B. Improving faculty feedback to resident trainees during a simulated case: A randomized, controlled trial of an educational intervention. Anesthesiology 2014, 120, 160–171. [Google Scholar]
  23. Hebert, B.G.; Vorauer, J.D. Seeing through the screen: Is evaluative feedback communicated more effectively in face-to-face or computer-mediated exchanges? Comput. Hum. Behav. 2003, 19, 25–38. [Google Scholar] [CrossRef]
  24. Plakht, Y.; Shiyovich, A.; Nusbaum, L.; Raizer, H. The association of positive and negative feedback with clinical performance, self-evaluation and practice contribution of nursing students. Nurse Educ. Today 2013, 33, 1264–1268. [Google Scholar] [CrossRef] [PubMed]
  25. McCarthy, J. Enhancing feedback in higher education: Students’ attitudes towards online and in-class formative assessment feedback models. Act. Learn. High. Educ. 2017, 18, 127–141. [Google Scholar] [CrossRef]
  26. Wilkins, E.A.; Shin, E.K.; Ainsworth, J. The effects of peer feedback practices with elementary education teacher candidates. Teach. Educ. Q. 2009, 36, 79–93. [Google Scholar]
  27. Huxham, M. Fast and effective feedback: Are model answers the answer? Assess. Eval. High. Educ. 2007, 32, 601–611. [Google Scholar] [CrossRef]
  28. Lynam, S.; Cachia, M. Students’ perceptions of the role of assessments at higher education. Assess. Eval. High. Educ. 2018, 43, 223–234. [Google Scholar] [CrossRef]
  29. Mathisen, P. Video feedback in higher education—A contribution to improving the quality of written feedback. Nord. J. Digit. Lit. 2012, 7, 97–113. [Google Scholar]
  30. Gould, J.; Day, P. Hearing you loud and clear: Student perspectives of audio feedback in higher education. Assess. Eval. High. Educ. 2013, 38, 554–566. [Google Scholar] [CrossRef]
  31. Hennessy, C.; Forrester, G. Developing a framework for effective audio feedback: A case study. Assess. Eval. High. Educ. 2013, 39, 777–789. [Google Scholar] [CrossRef] [Green Version]
  32. Miller, T. Formative computer-based assessment in higher education: The effectiveness of feedback in supporting student learning. Assess. Eval. High. Educ. 2009, 34, 181–192. [Google Scholar] [CrossRef]
  33. Bloxham, S.; Campbell, L. Generating dialogue in assessment feedback: Exploring the use of interactive cover sheets. Assess. Eval. High. Educ. 2010, 35, 291. [Google Scholar] [CrossRef]
  34. Sopina, E.; McNeill, R. Investigating the relationship between quality, format and delivery of feedback for written assignments in higher education. Assess. Eval. High. Educ. 2015, 40, 666. [Google Scholar] [CrossRef]
  35. Poulos, A.; Mahony, M.J. Effectiveness of feedback: The students’ perspective. Assess. Eval. High. Educ. 2008, 33, 143–154. [Google Scholar] [CrossRef]
  36. Dawson, P.; Henderson, M.; Mahoney, P.; Phillips, M.; Ryan, T.; Boud, D.; Molloy, E. What makes for effective feedback: Staff and student perspectives. Assess. Eval. High. Educ. 2018, 1–12. [Google Scholar] [CrossRef]
  37. Mulliner, E.; Tucker, M. Feedback on feedback practice: Perceptions of students and academics. Assess. Eval. High. Educ. 2017, 42, 266–288. [Google Scholar] [CrossRef]
  38. O’Donovan, B.; Rust, C.; Price, M. A scholarly approach to solving the feedback dilemma in practice. Assess. Eval. High. Educ. 2016, 41, 938–949. [Google Scholar] [CrossRef] [Green Version]
  39. Orrell, J. Feedback on learning achievement: Rhetoric and reality. Teach. High. Educ. 2006, 11, 441–456. [Google Scholar] [CrossRef]
  40. Stern, L.A.; Solomon, A. Effective faculty feedback: The road less traveled. Assess. Writ. 2006, 11, 22–41. [Google Scholar] [CrossRef]
  41. Hounsell, D.; McCune, V.; Hounsell, J.; Litjens, J. The quality of guidance and feedback to students. High. Educ. Res. Dev. 2008, 27, 55–67. [Google Scholar] [CrossRef] [Green Version]
  42. Hyland, K. Faculty feedback: Perceptions and practices in L2 disciplinary writing. J. Second. Lang. Writ. 2013, 22, 240–253. [Google Scholar] [CrossRef] [Green Version]
  43. Weaver, M.R. Do students value feedback? Student perceptions of tutors’ written responses. Assess. Eval. High. Educ. 2006, 31, 379–394. [Google Scholar] [CrossRef]
  44. Carless, D. Differing perceptions in the feedback process. Stud. High. Educ. 2006, 31, 219–233. [Google Scholar] [CrossRef] [Green Version]
  45. Ferguson, P. Student perceptions of quality feedback in teacher education. Assess. Eval. High. Educ. 2011, 36, 51–62. [Google Scholar] [CrossRef]
  46. Brandt, C. Integrating feedback and reflection in teacher preparation. Elt. J. 2008, 62, 37–46. [Google Scholar] [CrossRef]
  47. Zhou, J.; Deneen, C.C. Chinese award-winning tutors’ perceptions and practices of classroom-based assessment. Assess. Eval. High. Educ. 2016, 41, 1144–1158. [Google Scholar] [CrossRef]
  48. Liu, N.F.; Carless, D. Peer feedback: The learning element of peer assessment. Teach. High. Educ. 2006, 11, 279–290. [Google Scholar] [CrossRef] [Green Version]
  49. Liu, E.Z.F.; Lin, S.S. Relationship between peer feedback, cognitive and metacognitive strategies and achievement in networked peer assessment. Br. J. Educ. Technol. 2007, 38, 1122–1125. [Google Scholar] [CrossRef]
  50. Ertmer, P.A.; Richardson, J.C.; Belland, B.; Camin, D.; Connolly, P.; Coulthard, G.; Mong, C. Using peer feedback to enhance the quality of student online postings: An exploratory study. J. Comput. Mediat. Commun. 2007, 12, 412–433. [Google Scholar] [CrossRef] [Green Version]
  51. Bayerlein, L. Students’ feedback preferences: How do students react to timely and automatically generated assessment feedback? Assess. Eval. High. Educ. 2014, 39, 916–931. [Google Scholar] [CrossRef]
  52. Eddington, K.; Foxworth, T. Dysphoria and self-focused attention: Effects of feedback on task strategy and goal adjustment. J. Soc. Clin. Psychol. 2012, 31, 933–951. [Google Scholar] [CrossRef]
  53. Debuse, J.C.; Lawley, M.; Shibl, R. Educators’ perceptions of automated feedback systems. Australas. J. Educ. Technol. 2008, 24, 374–386. [Google Scholar] [CrossRef] [Green Version]
  54. Han, J.; Finkelstein, A. Understanding the effects of professors’ pedagogical development with clicker assessment and feedback technologies and the impact on students’ engagement and learning in higher education. Comput. Educ. 2013, 65, 64–76. [Google Scholar] [CrossRef]
  55. Ajjawi, R.; Boud, D. Examining the nature and effects of feedback dialogue. Assess. Eval. High. Educ. 2018, 43, 1106–1119. [Google Scholar] [CrossRef]
  56. Wu, R. Feedback in Distance Education: A Content Analysis of Distance Education. Doctoral Dissertation, ProQuest Dissertations & Theses Global. Available online: https://librarylink.uncc.edu/login?url=https://search-proquest-com.librarylink.uncc.edu/docview/1905845256?accountid=14605 (accessed on 1 August 2018).
  57. Adcroft, A. The mythology of feedback. High. Educ. Res. Dev. 2011, 30, 405–419. [Google Scholar] [CrossRef] [Green Version]
  58. Bailey, R.; Garner, M. Is the feedback in higher education assessment worth the paper it is written on? Teachers’ reflections on their practices. Teach. High. Educ. 2010, 15, 187–198. [Google Scholar] [CrossRef]
  59. Barker, M.; Pinard, M. Closing the feedback loop? Iterative feedback between tutor and student in coursework assessments. Assess. Eval. High. Educ. 2014, 39, 899. [Google Scholar] [CrossRef]
  60. Beaumont, C.; O’Doherty, M.; Shannon, L. Reconceptualizing assessment feedback: A key to improving student learning? Stud. High. Educ. 2011, 36, 671–687. [Google Scholar] [CrossRef]
  61. Beck, D.E. Performance-Based Assessment: Using pre-established criteria and continuous feedback to enhance a student’s ability to perform practice tasks. J. Pharm. Pract. 2000, 13, 347–364. [Google Scholar] [CrossRef]
  62. Boronat-Navarro, M.; Forés, B.; Puig-Denia, A. Assessment feedback in higher education: Preliminary results in a course of strategic management. Ed. Univ. Politècnica De València 2015. [Google Scholar] [CrossRef] [Green Version]
  63. Broadbent, J.; Panadero, E.; Boud, D. Implementing summative assessment with a formative flavour: A case study in a large class. Assess. Eval. High. Educ. 2018, 43, 307–322. [Google Scholar] [CrossRef] [Green Version]
  64. Callahan, T.J.; Strandholm, K.; Dziekan, J. Developing an undergraduate assessment test: A mechanism for faculty feedback about retention. J. Educ. Bus. 2009, 85, 45–49. [Google Scholar] [CrossRef]
  65. Chen, X.; Breslow, L.; Deboer, J. Analyzing productive learning behaviors for students using immediate corrective feedback in a blended learning environment. Comput. Educ. 2018, 117, 59–74. [Google Scholar] [CrossRef]
  66. Cole, D. Constructive criticism: The role of student-faculty interactions on African American and Hispanic students’ educational gains. J. Coll. Stud. Dev. 2008, 49, 587–605. [Google Scholar] [CrossRef]
  67. Cooper, N.J. Facilitating learning from formative feedback in level 3 assessment. Assess. Eval. High. Educ. 2000, 25, 279–291. [Google Scholar] [CrossRef]
  68. Carruthers, C.; McCarron, B.; Bolan, P.; Devine, A.; McMahon-Beattie, U.; Burns, A. ‘I like the sound of that’—An evaluation of providing audio feedback via the virtual learning environment for summative assessment. Assess. Eval. High. Educ. 2015, 40, 352. [Google Scholar] [CrossRef]
  69. Dowden, T.; Pittaway, S.; Yost, H.; McCarthy, R. Students’ perceptions of written feedback in teacher education: Ideally feedback is a continuing two-way communication that encourages progress. Assess. Eval. High. Educ. 2013, 38, 349. [Google Scholar] [CrossRef]
  70. Esterhazy, R. What matters for productive feedback? Disciplinary practices and their relational dynamics. Assess. Eval. High. Educ. 2018, 43, 1302–1314. [Google Scholar] [CrossRef]
  71. Fernández-Toro, M.; Truman, M.; Walker, M. Are the principles of effective feedback transferable across disciplines? A comparative study of written assignment feedback in languages and technology. Assess. Eval. High. Educ. 2013, 38, 816. [Google Scholar] [CrossRef]
  72. Garrison, D.R.; Anderson, T.; Archer, W. Critical inquiry in a text-based environment: Computer conferencing in higher education. Internet High. Educ. 2000, 2, 87–105. [Google Scholar] [CrossRef] [Green Version]
  73. Gleaves, A.; Walker, C. Richness, redundancy or relational salience? A comparison of the effect of textual and aural feedback modes on knowledge elaboration in higher education students’ work. Comput. Educ. 2013, 62, 249–261. [Google Scholar] [CrossRef]
  74. Glover, C.; Brown, E. Written feedback for students: Too much, too detailed or too incomprehensible to be effective? Biosci. Educ. 2006, 7, 1–16. [Google Scholar] [CrossRef]
  75. Hamer, J.; Purchase, H.; Luxton-Reilly, A.; Denny, P. A comparison of peer and tutor feedback. Assess. Eval. High. Educ. 2015, 40, 151–164. [Google Scholar] [CrossRef]
  76. Heckman-Stone, C. Trainee preferences for feedback and evaluation in clinical supervision. Clin. Superv. 2004, 22, 21–33. [Google Scholar] [CrossRef]
  77. Higgins, R.; Hartley, P.; Skelton, A. Getting the message across: The problem of communicating assessment feedback. Teach. High. Educ. 2001, 6, 269–274. [Google Scholar] [CrossRef]
  78. Higgins, R.; Hartley, P.; Skelton, A. The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Stud. High. Educ. 2002, 27, 53–64. [Google Scholar] [CrossRef]
  79. Huisman, B.; Saab, N.; van Driel, J.; van den Broek, P. Peer feedback on academic writing: Undergraduate students’ peer feedback role, peer feedback perceptions and essay performance. Assess. Eval. High. Educ. 2018, 43, 955–968. [Google Scholar] [CrossRef]
  80. Ibabe, I.; Jauregizar, J. Online self-assessment with feedback and metacognitive knowledge. High. Educ. 2010, 59, 243–258. [Google Scholar] [CrossRef]
  81. Ice, P.; Curtis, R.; Phillips, P.; Wells, J. Using asynchronous audio feedback to enhance teaching presence and students’ sense of community. J. Asynchronous Learn. Netw. 2007, 11, 3–25. [Google Scholar] [CrossRef]
  82. Jessop, T.; El Hakim, Y.; Gibbs, G. The whole is greater than the sum of its parts: A large-scale study of students’ learning in response to different programme assessment patterns. Assess. Eval. High. Educ. 2014, 39, 73. [Google Scholar] [CrossRef] [Green Version]
  83. Lin, S.S.; Liu, E.Z.F.; Yuan, S.M. Web-based peer assessment: Feedback for students with various thinking-styles. J. Comput. Assist. Learn. 2001, 17, 420–432. [Google Scholar] [CrossRef]
  84. Lizzio, A.; Wilson, K. Feedback on assessment: Students’ perceptions of quality and effectiveness. Assess. Eval. High. Educ. 2008, 33, 263–275. [Google Scholar] [CrossRef]
  85. Lundberg, C.A.; Schreiner, L.A. Quality and frequency of faculty-student interaction as predictors of learning: An analysis by student race/ethnicity. J. Coll. Stud. Dev. 2004, 45, 549–565. [Google Scholar] [CrossRef]
  86. Mirador, J.F. A move analysis of written feedback in higher education. RELC J. 2000, 31, 45–60. [Google Scholar] [CrossRef]
  87. Murillo-Zamorano, L.; Montanero, M. Oral presentations in higher education: A comparison of the impact of peer and teacher feedback. Assess. Eval. High. Educ. 2018, 43, 138–150. [Google Scholar] [CrossRef]
  88. Nicol, D.; Thomson, A.; Breslin, C. Rethinking feedback practices in higher education: A peer review perspective. Assess. Eval. High. Educ. 2014, 39, 102–122. [Google Scholar] [CrossRef]
  89. Nordrum, L.; Evans, K.; Gustafsson, M. Comparing student learning experiences of in-text commentary and rubric-articulated feedback: Strategies for formative assessment. Assess. Eval. High. Educ. 2013, 38, 919–940. [Google Scholar] [CrossRef]
  90. Orsmond, P.; Merry, S. The importance of self-assessment in students’ use of tutors’ feedback: A qualitative study of high and non-high achieving biology undergraduates. Assess. Eval. High. Educ. 2013, 38, 737. [Google Scholar] [CrossRef]
  91. Parboteeah, S.; Anwar, M. Thematic analysis of written assignment feedback: Implications for nurse education. Nurse Educ. Today 2009, 29, 753–757. [Google Scholar] [CrossRef] [PubMed]
  92. Parkes, M.; Fletcher, P. A longitudinal, quantitative study of student attitudes towards audio feedback for assessment. Assess. Eval. High. Educ. 2017, 42, 1046–1053. [Google Scholar] [CrossRef]
  93. Parkin, H.J.; Hepplestone, S.; Holden, G.; Irwin, B.; Thorpe, L. A role for technology in enhancing students’ engagement with feedback. Assess. Eval. High. Educ. 2012, 37, 963. [Google Scholar] [CrossRef] [Green Version]
  94. Pelgrim, E.A.M.; Kramer, A.W.M.; Mokkink, H.G.A.; Van der Vleuten, C.P.M. Reflection as a component of formative assessment appears to be instrumental in promoting the use of feedback; an observational study. Med Teach. 2013, 35, 772–778. [Google Scholar] [CrossRef]
  95. Perera, J.; Lee, N.; Win, K.; Perera, J.; Wijesuriya, L. Formative feedback to students: The mismatch between faculty perceptions and student expectations. Med Teach. 2008, 30, 395–399. [Google Scholar] [CrossRef] [PubMed]
  96. Pitt, E.; Norton, L. ‘Now that’s the feedback I want!’ Students’ reactions to feedback on graded work and what they do with it. Assess. Eval. High. Educ. 2017, 42, 499–516. [Google Scholar] [CrossRef]
  97. Price, M.; Handley, K.; Millar, J.; O’Donovan, B. Feedback: All that effort, but what is the effect? Assess. Eval. High. Educ. 2010, 35, 277–289. [Google Scholar] [CrossRef]
  98. Quinton, S.; Smallbone, T. Feeding forward: Using feedback to promote student reflection and learning—A teaching model. Innov. Educ. Teach. Int. 2010, 47, 125–135. [Google Scholar] [CrossRef]
  99. Robinson, S.; Pope, D.; Holyoak, L. Can we meet their expectations? Experiences and perceptions of feedback in first year undergraduate students. Assess. Eval. High. Educ. 2013, 38, 260–272. [Google Scholar] [CrossRef]
  100. Tang, S.Y.F.; Chow, A.W.K. Communicating feedback in teaching practice supervision in a learning-oriented field experience assessment framework. Teach. Teach. Educ. 2007, 23, 1066–1085. [Google Scholar] [CrossRef]
  101. Tang, J.; Harrison, C. Investigating university tutor perceptions of assessment feedback: Three types of tutor beliefs. Assess. Eval. High. Educ. 2011, 36, 583–604. [Google Scholar] [CrossRef] [Green Version]
  102. Timmers, C.F.; Braber-Van Den Broek, J.; Van Den Berg, S.M. Motivational beliefs, student effort, and feedback behaviour in computer-based formative assessment. Comput. Educ. 2013, 60, 25–31. [Google Scholar] [CrossRef]
  103. Usher, M.; Barak, M. Peer assessment in a project-based engineering course: Comparing between on-campus and online learning environments. Assess. Eval. High. Educ. 2018, 43, 745–759. [Google Scholar] [CrossRef]
  104. Van Steendam, E.; Rijlaarsdam, G.; Sercu, L.; Van den Bergh, H. The effect of instruction type and dyadic or individual emulation on the quality of higher-order peer feedback in EFL. Learn. Instr. 2010, 20, 316–327. [Google Scholar] [CrossRef] [Green Version]
  105. Van der Pol, J.; Van den Berg, B.A.M.; Admiraal, W.F.; Simons, P.R.J. The nature, reception, and use of online peer feedback in higher education. Comput. Educ. 2008, 51, 1804–1817. [Google Scholar] [CrossRef] [Green Version]
  106. Van Ginkel, S.; Gulikers, J.; Biemans, H.; Mulder, M. Fostering oral presentation performance: Does the quality of feedback differ when provided by the teacher, peers or peers guided by tutor? Assess. Eval. High. Educ. 2017, 42, 953–966. [Google Scholar] [CrossRef] [Green Version]
  107. Vickerman, P. Student perspectives on formative peer assessment: An attempt to deepen learning? Assess. Eval. High. Educ. 2009, 34, 221–230. [Google Scholar] [CrossRef]
  108. Walker, M. An investigation into written comments on assignments: Do students find them usable? Assess. Eval. High. Educ. 2009, 34, 67. [Google Scholar] [CrossRef] [Green Version]
  109. Walker, M. The quality of written peer feedback on undergraduates’ draft answers to an assignment, and the use made of the feedback. Assess. Eval. High. Educ. 2015, 40, 23. [Google Scholar] [CrossRef]
  110. Wei, W.; Yanmei, X. University teachers’ reflections on the reasons behind their changing feedback practice. Assess. Eval. High. Educ. 2017, 43, 867–869. [Google Scholar] [CrossRef]
  111. Wingate, U. The impact of formative feedback on the development of academic writing. Assess. Eval. High. Educ. 2010, 35, 519. [Google Scholar] [CrossRef]
  112. Zhang, L.; Zheng, Y. Feedback as an assessment for learning tool: How useful can it be? Assess. Eval. High. Educ. 2018, 43, 1120–1143. [Google Scholar] [CrossRef] [Green Version]
Table 1. Coding Form including Operational Definitions.
Code Category | Code | Operational Definition Questions
STRATEGIES
Timing | +/− | Was there immediacy?
Amount | +/− | Did it focus on 3–5 main points? Did it focus on learning targets? Did it focus on a balance of strengths and weaknesses?
Mode | Type: Written / Oral / Recorded Oral / Automated |
Audience | Considered/Not Considered | Does the feedback account for individual student needs? Is group feedback considered when the problem occurs across several students and may require re-teaching?
CONTENT
Focus | Task | Did it focus on error correction, depth, or quality of work?
Focus | Process | Did it focus on how the student approached the task or connect what they did with the result that they got?
Focus | Self-regulation | Did it focus on self-monitoring strategies?
Focus | The Self | Did it comment on characteristics of the person who produced the work?
Comparison | Norm-referencing | Does the feedback compare student performance to other student performances?
Comparison | Criterion-referencing | Comparing student performance to a standard or rubric.
Function | Descriptive | Feedback not connected to a grade.
Function | Evaluative | Feedback connected to a grade.
Valence | Positive | Describes how the strengths of student work match the criteria/norm, pointing out needs and providing guidance on how to address that need.
Valence | Negative | Includes punishments, focuses on criticism.
Specific | Specific/Vague | Does it use nouns and descriptive adjectives, describe concepts or criteria, or describe useful learning strategies?
Clear | Clear/Not Clear | Simple vocabulary and sentence structure, checking for understanding.
Tone | Addresses the student as Active/Passive | Does it assume that the student is an active learner? Does it ask questions to activate thinking?
Adapted from Brookhart [12].
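To make the form concrete, the hypothetical record below shows how one article’s codes might be captured (e.g., in the Excel file mentioned in the Methods). The field names follow Table 1, but the example values are illustrative, not the review’s actual coding of any study.

```python
# Hypothetical record of one article coded with the Table 1 form.
# Category and code names mirror Table 1; the assigned values are invented.
article_codes = {
    "citation": "[15] Lunt & Curran (2010)",
    "strategies": {
        "timing": "+",                      # immediacy present
        "amount": "+",                      # focused on 3-5 main points / learning targets
        "mode": "Recorded Oral",            # Written / Oral / Recorded Oral / Automated
        "audience": "Considered",           # accounts for individual student needs
    },
    "content": {
        "focus": "Task",                    # Task / Process / Self-regulation / The Self
        "comparison": "Criterion-referencing",
        "function": "Descriptive",          # not connected to a grade
        "valence": "Positive",
        "specific": "Specific",
        "clear": "Clear",
        "tone": "Active",                   # addresses the student as an active learner
    },
}
```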
