Article

Educator Feedback Skill Assessment: An Educational Survey Design Study

1 Department of Rehabilitation Medicine, NYU Grossman School of Medicine, New York, NY 10016, USA
2 New York Medical College, Valhalla, NY 10595, USA
3 Rehabilitation Medicine Service, Robley Rex VA Hospital, Louisville, KY 40206, USA
4 Kristin Carmody Emergency Medicine, New York, NY 10016, USA
* Author to whom correspondence should be addressed.
Int. Med. Educ. 2022, 1(2), 97-105; https://doi.org/10.3390/ime1020012
Submission received: 28 November 2022 / Accepted: 2 December 2022 / Published: 9 December 2022

Abstract

Background: Delivering impactful feedback is a skill that is difficult to measure. To date, there is no generalizable assessment instrument that measures the quality of feedback in medical education. The purpose of the present study was to create an instrument for measuring educator feedback skills. Methods: Building on pilot work, we refined an assessment instrument and addressed content and construct validity using expert validation (qualitative and quantitative). This was followed by cognitive interviews of faculty from several clinical departments, which were transcribed and analyzed using ATLAS.ti qualitative software. A research team revised and improved the assessment instrument. Results: Expert validation and cognitive interviews resulted in the Educator Feedback Skills Assessment, a scale with 10 items and three response options for each. Conclusions: Building on the contemporary medical education literature and empirical pilot work, we created and refined an assessment instrument for measuring educator feedback skills. We also began the validity argument by addressing the content aspect of validity.

1. Introduction

1.1. Conceptual Framework

The ultimate goal of assessment practices in professional health education is improved healthcare. High-quality and credible feedback is necessary to provide a meaningful mechanism through which physicians can be expected to grow [1]. Feedback is fundamental for everything we do—it is an essential part of every framework, every curriculum, and every teaching interaction.
Despite the importance of feedback, residents and faculty alike have reported that provider feedback skills are not sufficiently developed [2,3]. Similarly, faculty from both university and community-based programs described having minimal training and a lack of understanding of the best practices for delivering feedback [4], despite the availability of excellent practical guides [5,6,7]. This does not appear to be merely a perception issue: a qualitative study of simulated feedback encounters suggested that faculty skills do not match recommended practice in a number of areas [8].
There is growing evidence that teacher-centered models of feedback are not sufficient to improve the quality of feedback [9,10,11,12,13,14]. Characteristics of feedback providers form one of the three clusters seen when viewing feedback through the lens of the sociocultural model [15], and improving feedback provider skills may in fact improve outcomes. For example, Sargeant and colleagues have shown that training coaches to conduct a reflective feedback conversation can improve the acceptance and uptake of feedback [16]. Similarly, supportive coaching has been associated with both perceived coach competence and satisfaction in the sports realm [17].

1.2. Related Research

In order to explore the intended meaning and breadth of the feedback construct, we completed the following steps in a pilot study [18]. We started by conducting a literature review that aligned the feedback construct with prior research and identified existing feedback scales. We then explored how feedback participants conceptualize and describe feedback. We asked feedback recipients (resident physicians) to select, script, and enact six faculty-resident feedback vignettes.
We then conducted seven faculty focus groups that included 23 feedback providers. We asked the faculty, who watched each vignette video as a group, to comment on elements that were successful and on areas for improvement. Synthesizing the literature review and focus group findings ensured that our conceptualization of the feedback construct made theoretical sense to scholars in the field and used language that feedback providers understood. It allowed us to draft a list of 51 items that we grouped under 10 proposed dimensions of feedback and to create an early assessment scale, initially named Feedback Rating Scale (Appendix A Table A1).
Although several feedback delivery frameworks have been described, they apply only to narrow areas within medical education. Several assessments were developed within specific contexts, including written feedback [19], simulation debriefing [20], direct observation of clinical skills [21], communication skills feedback [22], feedback by residents [23], and feedback assessed by medical students [24,25]; however, these instruments are not generalizable to other types of feedback. The major research gap in this domain is therefore the absence of a reliable measurement instrument that can be applied to multiple facets of medical education.
The purpose of the present study was to (a) define dimensions that best represent the construct of feedback in medical education, and to (b) create and refine a generalizable assessment instrument for measuring educator feedback skills.

2. Materials and Methods

2.1. Research Model

This is an educational survey design study. We adopted Messick’s construct validity framework [26]. We selected Messick’s framework because, in contrast to earlier validity frameworks that focused on “types” of validity (e.g., content or criterion), it favors a unified framework in which construct validity (the only type) is supported by evidence derived from multiple sources [27]. We envisioned our study findings serving as one such source with which to begin the “validity argument”.
For additional guidance in the study design, we selected a systematic and practical approach for creating high-quality survey scales that synthesized multiple survey design techniques into a cohesive mixed-methods process [28]. Building on our pilot work, we addressed the content, construct, and response process aspects of validity.

2.2. Participants

  • To explore the content aspect of construct validity using expert validation, we recruited an international panel of methodologists, researchers, and subject-matter experts.
  • To conduct cognitive interviews, we recruited experienced feedback providers from four clinical departments (Emergency Medicine, Medicine, Orthopedic Surgery, Physical Medicine and Rehabilitation) at a single academic health system.

2.3. Data Collection Tools

1. The experts were asked to comment on each item’s representativeness, clarity, relevance, and distribution using an anonymous online form.
2. Experts rated each item as essential, useful but not essential, or not necessary using an anonymous online form.

2.4. Data Collection Process

To assess how clear and relevant the items were with respect to the construct of interest, the international experts were asked to comment on each item’s representativeness, clarity, relevance, and distribution using an anonymous online form. We also asked the experts to review the labels used for the response categories (qualitative review: content aspect of construct validity using expert validation). We then asked the same group of experts to review the individual items in the modified assessment instrument. Experts rated each item as essential, useful but not essential, or not necessary using an anonymous online form (quantitative review: content aspect of construct validity using expert validation).
To ensure that respondents interpreted items as we intended (response process validity), we asked experienced feedback providers to use the assessment instrument, as modified in the steps above, to rate videotaped feedback encounters that we had developed as part of the pilot study [18]. We then conducted structured individual cognitive interviews using the technique of concurrent probing [29]. In this technique, the interviewer asks about the respondent’s thought process while they are completing the questionnaire, which strikes a reasonable balance between limiting the demand on the respondent and minimizing recall bias [28].

2.5. Data Analysis

During the qualitative reviews, we used expert responses and comments to modify and revise the assessment instrument. During the quantitative expert reviews, we used both a predetermined content validity ratio cut-point (McKenzie et al. recommend a minimum of 0.62 for statistical significance at p < 0.05 with a 10-member panel) and the experts’ narrative comments to make inclusion and exclusion decisions for individual items [30].
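For orientation, the content validity ratio (CVR) referenced above is conventionally computed with Lawshe’s formula, which underlies the McKenzie et al. cut-points; the worked example below uses illustrative numbers rather than item-level data from this study.

\[
\mathrm{CVR} = \frac{n_e - N/2}{N/2}
\]

where \(n_e\) is the number of panelists rating an item “essential” and \(N\) is the number of panelists who voted. For instance, if 7 of \(N = 8\) voting experts rate an item essential, \(\mathrm{CVR} = (7 - 4)/4 = 0.75\), which clears the 0.62 cut-point; 6 of 8 gives \((6 - 4)/4 = 0.50\), which does not.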
Audio files of the recorded cognitive interviews were transcribed, coded, and analyzed qualitatively using ATLAS.ti software (Scientific Software Development GmbH, 2019) in order to modify and improve the overall assessment instrument and the individual survey items. The research team used a consensus method to decide whether to proceed with each revision suggested by interviewees; suggestions that received at least three of the four research team votes were implemented.

3. Results

The majority of interviews were conducted face to face; the last two were conducted virtually because of COVID-19 pandemic-related restrictions. The assessment instrument (final version, Appendix A Table A2) was revised eight times during the research study (Table 1). The instrument name was changed from the Feedback Rating Scale to the Educator Feedback Skill Assessment (EFSA).
Qualitative review. Twelve experts agreed to participate (see Acknowledgements section). Ten of the twelve submitted narrative comments online. In addition to individual item revisions, the number of items was increased from 31 to 32 (one item was split to avoid “double barreling”).
Quantitative review. Eight of the twelve experts submitted “inclusion/exclusion” votes online. Ten of the thirty-two items had content validity ratios of >0.62 and were included in the final version of the assessment instrument.
Cognitive interviews. Twelve cognitive interviews were conducted, ten face to face and two online via Zoom. Participants included four teaching faculty in Emergency Medicine, four in Physical Medicine and Rehabilitation, three in Internal Medicine, and one in Orthopedic Surgery. Qualitative analysis of the interview transcripts yielded twenty-three recommendations. Seven of the suggestions received at least three out of four research team votes and were implemented in the final version of EFSA. To arrive at the final version of the assessment instrument (Appendix A Table A2), the PI made several additional changes to improve readability, reduce wordiness, and improve item format consistency.

4. Discussion

We believe a rigorous instrument that builds on existing theory and empirical evidence is necessary to measure the quality of feedback in medical education. Our study takes the first step in creating and validating such an instrument. Our results may also impact assessment in medical education in several ways.
Firstly, our findings may deepen the theoretical understanding of the dimensions of feedback necessary for making it meaningful and impactful, with potential benefits for both medical education researchers and practitioners. Secondly, defining performance expectations for feedback providers in the form of a practical rubric can enhance reliable scoring of feedback performance assessments. Finally, although rubrics may not facilitate valid judgment of feedback assessments per se, they have the potential to promote learning and improve the instruction of feedback providers by making expectations and criteria explicit, thereby facilitating feedback and self-assessment [31].
Our work started from de novo observations of feedback in a pilot project. While our findings were undoubtedly colored by the work of others and by existing frameworks for feedback, we expect to further validate current methods of assessment and to explore and define novel dimensions of delivering feedback. Our work also built on an emerging area of feedback research supported by recent work of others and by our pilot work: specificity. Roze des Ordons and colleagues identified variability in feedback recipients (four ‘resident challenges’) and suggested adjusting feedback provider approaches accordingly [8]. Our own pilot [18], on the other hand, was based on scenarios that were selected, scripted, and enacted by learners (resident physicians), and the resultant data suggested additional variability in feedback providers. Using more than one perspective in developing the items and dimensions of the assessment instrument may allow us to highlight multiple facets of the feedback construct and understand it more fully.
We think that the collaborative nature of this study is also a strength. Several prominent scholars with unique knowledge in assessment and feedback agreed to participate in expert validation (see Acknowledgements section). Within our own institution, we included faculty from four diverse departments, spanning both “cognitive” and “procedural” specialties, in the cognitive interviewing, which supports the generalizability of the resultant instrument.
We addressed only one (content) of the four aspects (structural, content, generalizability, and substantive) of validity described by Messick [26], and this is undoubtedly the greatest weakness of this work. However, we feel strongly that the sooner our new instrument is available to the medical education research community, the sooner this shortcoming can be addressed, by ourselves and by others. Additionally, early use of the instrument by medical educators in the field is likely to provide feedback that will allow us to further refine and polish the EFSA. Future studies will need to explore multiple facets of the feedback construct while varying the types of feedback providers and feedback recipients. Another area of interest involves the study of different relationship stages, for example, one-time feedback vs. ongoing coaching, or feedback ‘on the fly’ vs. feedback scheduled at the end of a clinical rotation.
To continue collecting validity evidence, future studies should delve into the psychometric properties of the EFSA, focusing on the structural aspect as well as on convergent and discriminant validity (external aspect). Future studies should also explore the relationship between the EFSA and additional external measures such as motivation to use feedback, feedback-seeking frequency, and satisfaction with feedback, using existing survey items [32]. Changes in physicians’ behavior and performance, and how these affect patient outcomes, are also areas of future interest. Additional studies across different specialty areas and demographic variables should be conducted to further explore the generalizability aspect of construct validity.

5. Conclusions

Building on the contemporary medical education literature and empirical pilot work, we created and refined an assessment instrument for measuring educator feedback skills. We also began the validity argument by addressing the content aspect of validity. Future studies should address the structural, generalizability, and substantive aspects of validity and test the new instrument in a variety of settings and contexts.

Author Contributions

A.M. contributed to study planning, data collection and analysis, manuscript writing. J.S. contributed to data collection, manuscript writing. F.L. contributed to study planning, data analysis, manuscript writing. C.R. contributed to study planning, data analysis, manuscript writing. K.C. contributed to study planning, data analysis, manuscript writing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Berenstein Foundation and Program for Medical Education Innovations and Research (PrMEIR) at NYU Grossman School of Medicine.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of NYU Grossman School of Medicine.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank our jurors for their time and dedication—Anthony Artino, Eric Warm, Heather Armson, Lisa Altschuler, Eric Holmboe, Pim Teunissen, Amanda Lee Roze des Ordons, Sondra Zabar, Adina Kalet, Jeremy Branzetti, Donna Phillips, David Stern.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Pilot Feedback Rating Scale (FP = Feedback-Provider, FR = Feedback-Recipient).
Items are grouped by feedback dimension; each item is rated on a six-point scale: Strongly Disagree, Disagree, Somewhat Disagree, Somewhat Agree, Agree, Strongly Agree.

Preparation, Engagement, Investment
  • FP dedicated adequate time to the feedback conversation
  • FP was honest about not enough time or not enough facts
  • FP ensured quiet, private, appropriate environment
  • FP minimized disruptions
  • FP was prepared, present, engaged and paying attention
  • FP was making eye contact and leaning forward
  • FP was not ‘just going through the motions’
  • FP was organized and completed the encounter
Defining Expectations
  • FP defined expectations for performance
Encouraging Self-Assessment
  • FP encouraged the FR to self-assess
Beneficence, Encouragement, Respect
  • FP was warm, approachable, supportive, encouraging, & reassuring
  • FP was positive and used positive language
  • FP was polite and respectful
  • FP was constructive without offending
Exploration, Reaction, Dialogue
  • FP listened
  • FP facilitated a dialogue
  • FP reacted to FR self-assessment and other comments
  • FP probed deeper and asked for elaboration
Using Facts and Observations
  • Feedback was based on observed performance by FR
Specificity, Use of Examples
  • FP described specific examples of specific FR behaviors
Confidence, Direction, Correction
  • FP remained calm, composed, and non-confrontational
  • FP redirected and disarmed
  • FP confronted wrong resident perceptions
  • FP confronted inappropriate FR behaviors
Individualizing Conversation
  • FP adapted the feedback conversation and their approach based on FR comments and behaviors during the feedback encounter
Next Steps
  • Feedback conversation included specific areas for improvement
  • Feedback conversation included measurable goals
  • Feedback conversation included realistic action plan
  • Feedback conversation included discussion of a timely follow-up
Table A2. Educator Feedback Skills Assessment.
Each item is rated against three behaviorally anchored response options (listed from lowest to highest), with space for comments.

  • Educator appeared engaged: Distracted / Inconsistently engaged / Consistently engaged
  • Educator was prepared for the feedback session: Unprepared for the feedback session, did not know Learner or his/her performance / Prepared for the feedback session, knew some things about Learner and his/her performance / Prepared for the feedback session, knew Learner and his/her performance in detail
  • Self-assessment encouraged and incorporated in conversation: Self-assessment neither encouraged nor incorporated in conversation / Self-assessment encouraged OR incorporated in conversation / Self-assessment encouraged AND incorporated in conversation
  • Educator was respectful: Disrespectful / Inconsistently respectful / Consistently respectful
  • Educator was constructive: Not constructive / Inconsistently constructive / Consistently constructive
  • Educator facilitated dialogue: Did not ask questions, did not allow time for or dismissed Learner comments / Asked some questions, reacted to Learner comments / Asked many questions, allowed time for responses, encouraged Learner comments
  • Educator probed deeper and asked for elaboration: Did not ask for clarification or elaboration / Inconsistently asked for clarification or elaboration / Consistently asked for clarification or elaboration
  • Educator provided specific examples to Learner: Educator provided no examples / Educator provided at least one specific example / Educator provided many specific examples
  • Conversation included specific areas for improvement (WHAT to improve): Conversation did not include areas for improvement / Conversation included at least one area for improvement / Conversation included many areas for improvement
  • Conversation included an action plan (HOW to improve): Action plan was not discussed / Action plan was discussed in general terms / A specific action plan was discussed

GENERAL COMMENTS/ADVICE: Please include any suggestions for this Educator.

References

  1. Eva, K.W.; Armson, H.; Holmboe, E.; Lockyer, J.; Loney, E.; Mann, K.; Sargeant, J. Factors influencing responsiveness to feedback: On the interplay between fear, confidence, and reasoning processes. Adv. Health Sci. Educ. 2011, 17, 15–26.
  2. Carmody, K.; Walia, I.; Coneybeare, D.; Kalet, A. Can a Leopard Change Its Spots? A Mixed Methods Study Exploring Emergency Medicine Faculty Perceptions of Feedback, Strategies for Coping and Barriers to Change. Master’s Thesis, Maastricht University School of Health Education, Maastricht, The Netherlands, 2017.
  3. Moroz, A.; Horlick, M.; Mandalaywala, N.; Stern, D.T. Faculty feedback that begins with resident self-assessment: Motivation is the key to success. Med. Educ. 2018, 52, 314–323.
  4. Kogan, J.R.; Conforti, L.N.; Bernabeo, E.C.; Durning, S.J.; Hauer, K.E.; Holmboe, E.S. Faculty staff perceptions of feedback to residents after direct observation of clinical skills. Med. Educ. 2012, 46, 201–215.
  5. Lefroy, J.; Watling, C.; Teunissen, P.; Brand, P.L.P. Guidelines: The do’s, don’ts and don’t knows of feedback for clinical education. Perspect. Med. Educ. 2015, 4, 284–299.
  6. Roze des Ordons, A.L.; Gaudet, J.; Grant, V.; Harrison, A.; Millar, K.; Lord, J. Clinical feedback and coaching—BE-SMART. Clin. Teach. 2019, 17, 255–260.
  7. Sargeant, J.; Lockyer, J.M.; Mann, K.; Armson, H.; Warren, A.; Zetkulic, M.; Soklaridis, S.; Könings, K.D.; Ross, K.; Silver, I.; et al. The R2C2 model in residency education: How does it foster coaching and promote feedback use? Acad. Med. 2018, 93, 1055–1063.
  8. Roze des Ordons, A.L.; Cheng, A.; Gaudet, J.E.; Downar, J.; Lockyer, J.M. Exploring Faculty Approaches to Feedback in the Simulated Setting. Simul. Health J. Soc. Simul. Healthc. 2018, 13, 195–200.
  9. Bing-You, R.; Hayes, V.; Varaklis, K.; Trowbridge, R.; Kemp, H.; McKelvy, D. Feedback for Learners in Medical Education: What is Known? A Scoping Review. Acad. Med. 2017, 92, 1346–1354.
  10. Bing-You, R.; Ramani, S.; Ramesh, S.; Hayes, V.; Varaklis, K.; Ward, D.; Blanco, M. The interplay between residency program culture and feedback culture: A cross-sectional study exploring perceptions of residents at three institutions. Med. Educ. Online 2019, 24, 1611296.
  11. Bing-You, R.G.; Trowbridge, R.L. Why Medical Educators May Be Failing at Feedback. JAMA 2009, 302, 1330–1331.
  12. Kraut, A.; Yarris, L.M.; Sargeant, J. Feedback: Cultivating a Positive Culture. J. Grad. Med. Educ. 2015, 7, 262–264.
  13. Molloy, E.; Ajjawi, R.; Bearman, M.; Noble, C.; Rudland, J.; Ryan, A. Challenging feedback myths: Values, learner involvement and promoting effects beyond the immediate task. Med. Educ. 2019, 54, 33–39.
  14. Telio, S.; Ajjawi, R.; Regehr, G. The “Educational Alliance” as a Framework for Reconceptualizing Feedback in Medical Education. Acad. Med. 2015, 90, 609–614.
  15. Ramani, S.; Könings, K.D.; Ginsburg, S.; van der Vleuten, C.P. Feedback Redefined: Principles and Practice. J. Gen. Intern. Med. 2019, 34, 744–749.
  16. Sargeant, J.; Lockyer, J.; Mann, K.; Holmboe, E.; Silver, I.; Armson, H.; Driessen, E.; MacLeod, T.; Yen, W.; Ross, K.; et al. Facilitated reflective performance feedback: Developing an evidence- and theory-based model that builds relationship, explores reactions and content, and coaches for performance change (R2C2). Acad. Med. 2015, 90, 1698–1706.
  17. Pulido, J.J.; García-Calvo, T.; Leo, F.M.; Figueiredo, A.J.; Sarmento, H.; Sánchez-Oliva, D. Perceived coach interpersonal style and basic psychological needs as antecedents of athlete-perceived coaching competency and satisfaction with the coach: A multi-level analysis. Sport Exerc. Perform. Psychol. 2020, 9, 16–28.
  18. Moroz, A.; King, A.; Kim, B.; Fusco, H.; Carmody, K. Constructing a Shared Mental Model for Feedback Conversations: Faculty Workshop Using Video Vignettes Developed by Residents. MedEdPORTAL 2019, 15, 10821.
  19. Warm, E.; Kelleher, M.; Kinnear, B.; Sall, D. Feedback on Feedback as a Faculty Development Tool. J. Grad. Med. Educ. 2018, 10, 354–355.
  20. Minehart, R.D.; Rudolph, J.; Pian-Smith, M.C.M.; Raemer, D.B. Improving Faculty Feedback to Resident Trainees during a Simulated Case. Anesthesiology 2014, 120, 160–171.
  21. Halman, S.; Dudek, N.; Wood, T.; Pugh, D.; Touchie, C.; McAleer, S.; Humphrey-Murto, S. Direct Observation of Clinical Skills Feedback Scale: Development and Validity Evidence. Teach. Learn. Med. 2016, 28, 385–394.
  22. Perron, N.J.; Nendaz, M.; Louis-Simonet, M.; Sommer, J.; Gut, A.; Baroffio, A.; Dolmans, D.; van der Vleuten, C. Effectiveness of a training program in supervisors’ ability to provide feedback on residents’ communication skills. Adv. Health Sci. Educ. 2012, 18, 901–915.
  23. Bashir, K.; Elmoheen, A.; Seif, M.; Anjum, S.; Farook, S.; Thomas, S. In Pursuit of the Most Effective Method of Teaching Feedback Skills to Emergency Medicine Residents in Qatar: A Mixed Design. Cureus 2020, 12, e8155.
  24. Bing-You, R.; Ramesh, S.; Hayes, V.; Varaklis, K.; Ward, D.; Blanco, M. Trainees’ Perceptions of Feedback: Validity Evidence for Two FEEDME (Feedback in Medical Education) Instruments. Teach. Learn. Med. 2018, 30, 162–172.
  25. Richard-Lepouriel, H.; Bajwa, N.; De Grasset, J.; Audétat, M.; Dao, M.D.; Jastrow, N.; Nendaz, M.; Perron, N.J. Medical students as feedback assessors in a faculty development program: Implications for the future. Med. Teach. 2020, 42, 536–542.
  26. Messick, S. Validity of Psychological Assessment. Am. Psychol. 1995, 50, 741–749.
  27. Cook, D.A.; Brydges, R.; Ginsburg, S.; Hatala, R. A contemporary approach to validity arguments: A practical guide to Kane’s framework. Med. Educ. 2015, 49, 560–575.
  28. Artino, A.R., Jr.; La Rochelle, J.S.; DeZee, K.J.; Gehlbach, H. Developing questionnaires for educational research: AMEE Guide No. 87. Med. Teach. 2014, 36, 463–474.
  29. Watt, T.; Rasmussen, Å.K.; Groenvold, M.; Bjorner, J.B.; Watt, S.H.; Bonnema, S.J.; Hegedüs, L.; Feldt-Rasmussen, U. Improving a newly developed patient-reported outcome for thyroid patients, using cognitive interviewing. Qual. Life Res. 2008, 17, 1009–1017.
  30. McKenzie, J.; Wood, M.; Kotecki, J.; Clark, J.; Brey, R. Establishing content validity: Using qualitative and quantitative steps. Am. J. Health Behav. 1999, 23, 311–318.
  31. Jonsson, A.; Svingby, G. The use of scoring rubrics: Reliability, validity and educational consequences. Educ. Res. Rev. 2007, 2, 130–144.
  32. Steelman, L.; Levy, P.E.; Snell, A.F. The Feedback Environment Scale: Construct Definition, Measurement, and Validation. Educ. Psychol. Meas. 2004, 64, 165–184.
Table 1. Tabulated Results.
Expert Qualitative Reviews
  • Experts Recruited: 12
  • Expert Comments: 10
  • Item Number Changes: Increased by 1 (31 to 32)
Expert Quantitative Reviews
  • Experts Recruited: 12
  • Expert Votes: 8
  • Item Number Changes: Decreased by 22 (32 to 10)
Cognitive Interviews
  • Total Participants Recruited: 12
  • Participants in Emergency Medicine: 4
  • Participants in Physical Medicine: 4
  • Participants in Internal Medicine: 3
  • Participants in Orthopedic Surgery: 1
  • Total Recommendations: 23
  • Incorporated Recommendations (>75% Votes): 7
Instrument Revisions: 8
Instrument Name Changes: 1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Moroz, A.; Stone, J.; Lopez, F.; Racine, C.; Carmody, K. Educator Feedback Skill Assessment: An Educational Survey Design Study. Int. Med. Educ. 2022, 1, 97-105. https://doi.org/10.3390/ime1020012
