Systematic Review

Evaluating the Clinical Reasoning of Student Health Professionals in Placement and Simulation Settings: A Systematic Review

1 Work Integrated Learning, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia
2 Physiotherapy, School of Health Sciences, University of Southampton, Southampton SO17 1BJ, UK
* Author to whom correspondence should be addressed.
Int. J. Environ. Res. Public Health 2022, 19(2), 936; https://doi.org/10.3390/ijerph19020936
Submission received: 30 November 2021 / Revised: 21 December 2021 / Accepted: 22 December 2021 / Published: 14 January 2022

Abstract

(1) Background: Clinical reasoning is essential to the effective practice of autonomous health professionals and is, therefore, a capability that must be developed during student training. This review aimed to systematically identify the tools available to health professional educators to evaluate students’ attainment of clinical reasoning capabilities in clinical placement and simulation settings. (2) Methods: A systematic review of seven databases was undertaken. Peer-reviewed, English-language publications reporting studies that developed or tested relevant tools were included. Searches included multiple terms related to clinical reasoning and health disciplines. Data regarding each tool’s conceptual basis and evaluated constructs were systematically extracted and analysed. (3) Results: Most of the 61 included papers evaluated students in medical and nursing disciplines, and over half reported on the Script Concordance Test or Lasater Clinical Judgement Rubric. A number of conceptual frameworks were referenced, though many papers did not reference any framework. (4) Conclusions: Overall, key outcomes highlighted an emphasis on diagnostic reasoning, as opposed to management reasoning. Tools were predominantly aligned with individual health disciplines, with limited cross-referencing within the field. Future research into clinical reasoning evaluation tools should build on and refer to existing approaches and consider contributions across professional disciplinary divides.

1. Introduction

Systemic changes in healthcare are requiring graduates to be better prepared to work in diverse settings and in teams, and to address increasingly chronic and complex healthcare needs [1]. To this end, graduates require competence in clinical reasoning: the process of ‘gathering and synthesising information; generating hypotheses; and formulating a clinical impression, diagnosis, prognosis, treatment, care, and/or management plan’ [2]. It is clinical reasoning that integrates the ‘cognitive, psychomotor and affective skills’ required to be ‘adaptive, iterative, and collaborative’ [3]. Therefore, the more autonomous and responsible the health professional, and the more dynamic and complex the situation (including technological advancements), the greater the need for clinical reasoning [4]. Accordingly, the development and evaluation of clinical reasoning is an increasing focus in health professional education [5,6,7]. It is also included as an essential graduate attribute in many health professional programs and as a competency in many health professional frameworks internationally [7].
Despite the agreed importance of clinical reasoning, there is a lack of consensus on how it is conceptualised and on the definitions of related terminology [2,3,4,6,7]. The term clinical reasoning (CR) is often used synonymously with terms such as decision making, critical thinking, problem solving, clinical judgement, and diagnostic reasoning [4,7]. It is also used as a ‘shorthand’ for a broad concept [8], with variation in what comprises clinical reasoning between health professional disciplines and also within disciplines [3]. This lack of consensus, along with the varied use of terminology, limits the advancement of education that will prepare graduates for multi-disciplinary teamwork in the dynamic and complex situations inherent to healthcare.
There have been many attempts to define clinical reasoning across the professions and to find methods to teach and assess all the constructs related to the concept. Furthermore, there is growing interest in establishing when health professional students develop clinical reasoning, and several recent systematic reviews have examined how this development is evaluated over time. Each of these reviews has taken a different focus. Specifically, Carter, Creedy, and Sidebotham [9] reviewed the tools used to measure ‘critical thinking’ development in undergraduate nursing and midwifery education. They identified tools by examining papers that applied an experimental design and measured critical thinking on multiple occasions in undergraduate nursing and midwifery education [9]. Similarly, Macauley et al. [10] systematically reviewed evaluations of clinical reasoning used as outcome measures, this time in examining simulation programs in any health profession. They also included broad outcome measures that did not necessarily focus on clinical reasoning (e.g., the physical therapy entry-level competency assessment, the Assessment of Physiotherapy Practice [10]). In a recent scoping review, Daniel et al. [5] expanded the scope to investigate approaches to clinical reasoning evaluation used in a range of settings (workplace-based, simulation-based, and non-workplace-based), rather than only outcomes-based research, but restricted their review to the evaluation of the clinical reasoning of medical students, residents, and physicians. Between these reviews [5,9,10], a wide range of measures has been identified, but only one review has extended beyond medicine and nursing [10]. The focus of these reviews has also generally been on research outcome measures [5,9]. Yet there are myriad approaches to the evaluation of students that are suited to application in different educational contexts but not necessarily suited to use as research outcome measures [10].
Considering the need of education providers to identify their student health professionals’ proficient development of clinical reasoning in preparation for complex and uncertain work, there remains a need to establish the means by which this may be evaluated. Dominant theories exist regarding the development of expertise in clinical reasoning. Script Theory [11,12] describes the restructuring of knowledge as reasoning is practiced and reinforced with developing expertise: novices use knowledge networks to progress through detailed reasoning in a cognitively demanding process, whereas experts use ‘illness scripts’ to efficiently target information gathering and checking and arrive at a total solution. Alternatively, clinical reasoning has been viewed as a skill to which Dreyfus and Dreyfus’ Model of Skill Development [13] applies, positing that novices rely on rules learned from others and that skill progresses through stages of increasing capability to recognise patterns and handle uncertainty, through to expertise in which solutions are recognised intuitively. These models have informed the conceptualisation of clinical reasoning development in health professions education.
As clinical reasoning is a consequential capability (38), characterised by the integration of ‘cognitive, psychomotor and affective skills’ [3], and inherently requires flexible application, there is great interest in its development in experiential simulated and workplace-based settings, where it can be evaluated as an applied skill [14] or capability. It is, therefore, not a skill well suited to evaluation in a classroom setting, nor to measures of purely cognitive capacity or knowledge [15]. However, the evaluation of reasoning in these settings has not been researched across disciplines. Further, to date, no research has synthesised the evidence on what the student evaluation tools addressing ‘clinical reasoning’ purport to evaluate, even in medicine and nursing. Such a synthesis may provide a basis for furthering understanding of the constructs used and the relationships between them. Without this information, it is possible, perhaps likely, that a multitude of different aspects of clinical reasoning and associated constructs are being evaluated without cognisance of the boundaries and connections between them. Given these clear gaps, the aim of this review was to systematically identify the tools used to evaluate clinical reasoning and to determine the constructs the tools intend to assess.
The following research questions guided this review:
  • What tools have been developed or investigated for evaluating students’ clinical reasoning as applied in clinical education placement and simulation settings of health professional education?
  • What constructs or aspects of clinical reasoning are those tools designed to assess?

2. Materials and Methods

In a systematic approach, potentially eligible studies were identified by searching the databases CINAHL (via EBSCO), ERIC, EMBASE, Medline and pre-Medline, PsycINFO (via Ovid), and ProQuest Nursing and Allied Health. The search strategy (Supplemental Table S1) was inclusive of a wide range of allied health disciplines and medicine: audiology, dietetics, exercise physiology, medicine, nursing, occupational therapy, paramedicine, pharmacy, physiotherapy/physical therapy, podiatry, psychology, social work, and speech pathology/speech and language therapy. Students were learners in their primary professional training (i.e., not post-professional continuing development or specialisation). The term reasoning and the variants critical thinking, judgement, problem solving, and decision making were included, mapped to database subject headings as relevant. Likewise, the search included a range of terms pertaining to evaluation and measurement (e.g., assessment, inventory, test, scale, measure, index, survey, rubric), adjusted to database subject headings as relevant. Searches were limited to publication dates of 2000–2018, to English language only, and, where possible, to peer-reviewed sources.
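To illustrate the block structure of this strategy, the short Python sketch below assembles abbreviated versions of the three concept blocks (disciplines, reasoning variants, and evaluation terms) into a single Boolean string. This is illustrative only: the term lists are abbreviated, the full strategy is reported in Supplemental Table S1, and the subject-heading mapping and the 2000–2018, English-language, and peer-reviewed limits were applied through each database's own interface.

```python
# Illustrative sketch of the three-block Boolean search structure described
# above; the actual strategy, including database subject headings, is in
# Supplemental Table S1.
disciplines = ["audiology", "dietetics", "medicine", "nursing",
               "occupational therapy", "pharmacy", "physiotherapy",
               "physical therapy", "speech pathology"]
reasoning_terms = ["clinical reasoning", "critical thinking",
                   "clinical judgement", "problem solving", "decision making"]
evaluation_terms = ["assessment", "inventory", "test", "scale",
                    "measure", "index", "survey", "rubric"]

def or_block(terms):
    """Join terms into a parenthesised OR block of quoted phrases."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Combine the three concept blocks with AND.
query = " AND ".join(or_block(block) for block in
                     (disciplines, reasoning_terms, evaluation_terms))
print(query)
```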
Citations were imported into Covidence (Melbourne, VIC, Australia) for management. Abstracts and, where necessary, full-text papers were each screened by one author according to the inclusion and exclusion criteria set out in Table 1. To ensure consistency within the team, at each stage papers were screened until several included studies were identified, and then the team met to discuss the criteria for inclusion and exclusion until consensus and consistency were reached. Screening then continued, with even minor uncertainties discussed between the authors in regular meetings throughout the screening.
This review focussed on evaluations used in clinical placement or simulation settings. However, uncertainties arose during the screening process due to the incomplete or unclear reporting of relevant information within the papers. To resolve these uncertainties, a conservative approach was taken, whereby if the setting in which the student was evaluated was unclear in the paper, but the evaluation was reported as connected to student learning in clinical or simulation settings (i.e., eligible settings), the paper was included. Consensus for the other criteria was readily attained given the consistent reporting of these factors (e.g., focus on clinical reasoning or related concept, or outcomes) by authors.
Data extraction was completed by all authors and included authorship and year of publication, country or countries in which the study was undertaken, the tool or tools that were studied, aims, methods, the disciplines and levels of students and any other participants, the theoretical underpinning of the tool (as stated by the authors), and the construct evaluated (as stated by the authors). In cases in which papers included multiple evaluation tools, data were extracted for each relevant tool.
To ensure consistency, for each data extraction item, examples were discussed among the team and noted at the top of the data collection table prior to data extraction. Initially, data were extracted from two papers by different team members, and these were discussed among the team to reach consensus and clarity. Data from each eligible paper were then extracted by one team member per paper, working into a common data extraction table where the others’ work was visible for consistency. Uncertainties were discussed at regular meetings, with a second team member allocated to extract data from individual papers into a new table line in preparation for these meetings as required. Finally, all data for this report were reviewed against each paper by the first author.

3. Results

The 7882 records identified in database searches were narrowed to 61 included papers (Figure 1). Of the 196 papers that appeared to meet the criteria or were unclear from the abstract screening, 135 were excluded at full-text review, predominantly because the evaluation tools were used in settings such as university classes rather than in clinical- or simulation-based settings (n = 46), or because the paper did not report on the development or testing of an evaluation tool (n = 41).

3.1. Overview of Included Studies

The majority of the 61 included papers described studies within medicine and nursing, with 28 and 25 papers, respectively, plus one paper including participants from both of those professions. The remaining papers were in midwifery (n = 3), physical therapy (n = 2), occupational therapy (n = 1), and pharmacy (n = 1). Over half of the papers addressed the development and testing of the Script Concordance Test (SCT; n = 19; [16]) or the Lasater Clinical Judgement Rubric (LCJR; n = 13; [17]) and variants. The SCT presents predominantly diagnostic medical scenarios, and the examinee’s answers are scored based on the level of agreement with responses provided by a panel of experts [16]. The included studies each examine different case vignettes and expert decisions. The LCJR is a detailed developmental rubric that describes performance expectations and provides language for the feedback and assessment of (predominantly nursing) students’ clinical judgement development [17]. Several variants have been developed and tested. An overview of papers by evaluation tool is presented in Supplementary Table S2.
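To make the agreement-based scoring of the SCT concrete, the sketch below implements the aggregate scoring principle commonly reported for the test, in which the credit for an answer equals the number of panellists who chose it divided by the count of the modal panel answer. This is a minimal illustration of that principle, not the scoring code of any included study, and the example panel data are hypothetical.

```python
from collections import Counter

def sct_item_score(student_answer, panel_answers):
    """Aggregate score for one SCT item: credit equals the number of
    panellists choosing the student's answer, divided by the count of
    the modal (most frequently chosen) panel answer."""
    counts = Counter(panel_answers)
    modal_count = max(counts.values())
    return counts.get(student_answer, 0) / modal_count

def sct_total_score(student_answers, panel_answer_sets):
    """Sum item scores across a test; totals are often rescaled to 100."""
    return sum(sct_item_score(s, panel)
               for s, panel in zip(student_answers, panel_answer_sets))

# Hypothetical item: a 5-point judgement scale (-2 to +2), 10 panellists.
panel = [1, 1, 1, 1, 1, 0, 0, 2, 2, -1]   # modal answer +1 (5 votes)
print(sct_item_score(1, panel))           # 1.0: agrees with the mode
print(sct_item_score(0, panel))           # 0.4: partial credit (2/5)
print(sct_item_score(-2, panel))          # 0.0: no panellist chose it
```

Because each instance of the SCT is calibrated against its own expert panel, scores are tied to the specific case vignettes and panel used, which is consistent with the observation above that the included studies each examine different vignettes and expert decisions.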
The papers were dominated by studies undertaken in the United States of America (n = 24), including 5 studies of the SCT and 7 studies of the LCJR. Other studies were undertaken in Canada (n = 8, including 5 on the SCT), Australia (n = 6), France (n = 5), Korea (n = 5, including 3 on LCJR variants), and a range of other countries. The distribution of publication years illustrates a trend of increasing publications on this topic over time, particularly since 2015, considering that further papers published in 2018 may not yet have been indexed in the databases at the time of the search.

3.2. Conceptual Foundations of the Clinical Reasoning Evaluation Tools

The included papers drew from a variety of conceptual frameworks (Table 2). Representing the topics identified in this review, the included papers were grouped into those stating they evaluate clinical decision making, clinical judgement, clinical reasoning, critical thinking, and situation awareness. The final category included papers that do not state a specific construct being measured, though they report measuring clinical reasoning and related constructs in general. Within each group, papers were arranged by the theoretical underpinning that was named in the paper. Evaluation tools for which there were no theoretical underpinnings identified appear at the end of each group.
Even within disciplines, there was a clear lack of agreement regarding critical thinking, with two different consensus statements in nursing [18,19]. These gave rise to several evaluation tools, including the Carter Assessment of Critical Thinking in Midwifery [20,21,22,23,24,25], that examine students’ critical thinking skills, a construct almost exclusively evaluated for nursing students. Another evaluation of critical thinking skills, through a clinical viva, was used for nursing students [26] and was derived by adapting a nursing competence assessment [27], while another competence assessment (the Physical Therapy Clinical Performance Instrument [28]) was indirectly adapted to create an evaluation of physical therapy interns’ clinical decision making [29]. The one evaluation of critical thinking skills used for medical students [30] was based on a problem-solving process [31] and piloted in high-fidelity simulation.
Two evaluations utilised context-specific reasoning frameworks, both drawing on mnemonic devices to both guide and evaluate students. An occupational therapy clinical reasoning example [32] applied the A SECRET approach (Attention, Sensation, Emotion Regulation, Culture, Relationships, Environment, and Task [33]). In an example evaluating medical students’ clinical documentation [34], the IDEA framework (Interpretive summary, Differential diagnosis, Explanation of reasoning, and Alternative diagnosis with explanation) was combined with RIME descriptions (Reporter, Interpreter, Manager, Educator [35]).
In contrast to these context-specific frameworks, other tools drew on broad cognitive frameworks and abilities. Script Theory [11,12], which posits that expert clinical reasoning largely draws upon patterns, was used as the foundation of the Script Concordance Test [16] and variants, including a multiple-choice examination [36] and a written ‘think aloud’ test [37]. Situation awareness [38] was evaluated for nursing students [39]. Drawing upon an even more general foundation, a critical thinking skills evaluation [40] used for nursing students was based on Benner’s levels of nursing expertise, in which students are considered to move through five levels of increasing proficiency (novice, advanced beginner, competent, proficient, and expert) [41], and Bloom’s Taxonomy of Educational Objectives, in which hierarchical models are used to classify educational learning objectives by levels of complexity and specificity [42]. Similarly, a clinical reasoning evaluation used for physical therapy students [43] was based on multiple sources of knowledge regarding clinical reasoning, with evaluations based on the Revised Bloom’s Taxonomy of Educational Objectives [44] and the Dreyfus Model of Skill Acquisition [13], which describes skill development through instruction and experience across five developmental stages from novice to mastery. In another instance, authors reported the complementary use of a clinical reasoning model [45] and social cognitive theory [46,47], which considers that an individual’s thoughts and feelings, as well as the social environment, affect their behaviour, to derive an evaluation of anxiety and confidence in clinical decision making [48].
Finally, across all the included papers, published models of clinical judgement [49], clinical decision making, and clinical reasoning processes [45,50,51] were cited as directly or indirectly underpinning the Lasater Clinical Judgement Rubric [17] and variants [52,53,54,55,56], as well as several evaluations of clinical decision making [48] and clinical reasoning [57,58]. These represented the most direct link between conceptual models and student evaluations.
Collectively, the identified frameworks represent a broad spectrum of the constructs deemed important and necessary for the development of clinical reasoning. However, there was clearly no agreement on the conceptual framework underpinning clinical reasoning. Further, for 11 evaluation tools adopted for medical and nursing students, no framework was specified [59,60,61,62,63,64,65,66,67,68,69]. Many of these evaluation tools were described as rubrics, examinations, and objective structured clinical examinations (OSCEs), and the authors did not clearly define the construct being assessed as one of critical thinking, clinical judgement, clinical decision making, or clinical reasoning. It would appear that where a conceptual framework for clinical reasoning was used, the specific constructs were identified and built into the assessment tool, whereas tools without a conceptual framework often identified their constructs only in general terms. Such tools may, for example, purport to represent ‘clinical reasoning’ generically by way of a proxy activity such as clinical documentation.

4. Discussion

This review identified numerous tools used to evaluate clinical reasoning and related constructs in placements and simulation in health professional education. Of these tools, the Script Concordance Test [16] and Lasater Clinical Judgement Rubric [17] were prominent in the literature. The diversity of additional tools identified from searches using a range of terms provides educators with a variety of options for student evaluation in these situations. These tools encompass a spread of approaches along the ‘continuum of authenticity’ [5], given our inclusion of evaluation tools described as being conducted during, or associated with, clinical placement or simulation settings, even if not explicitly of workplace performance. However, there is a lack of cross-referencing between tools and constructs identified in this review, and evidence of continued development is almost exclusively within discipline boundaries. From identifying these tools and their conceptual foundations, we present four key implications for further discussion.

4.1. There Remains Inconsistent Use of Terminology around Clinical Reasoning

Unsurprisingly, given previous reports [2,3,4,6,7], there is a lack of consistency in the terminology applied to name the constructs being assessed. Different terminology appears to be preferred in different discipline groups: ‘critical thinking skills’ and ‘clinical judgement’ are a focus in nursing and midwifery, whereas ‘clinical reasoning’ is in more widespread use and often appears as a more general term [7]. The use of conceptual frameworks to explicate constructs is limited, and different frameworks are used to define the same construct, as in ‘critical thinking’ [18,19], with varied evaluation tools arising even from the same frameworks. The tools used to evaluate ‘clinical reasoning’ included evaluations of the application of specific frameworks for ‘reasoning’ that were not themselves models of clinical reasoning [32,34], of diagnostic processes, as in the Script Concordance Test and others [34,77], and of therapeutic processes [57]. Finally, there was a group of papers that examined diverse evaluation tools of relevance to the topic but without clearly setting out the origin of the construct of interest, sometimes without clearly naming a construct at all.
To advance both practice and research, there remains a need to clarify both terminology and constructs, and to achieve this in ways that will enable a better understanding of how each profession contributes similarly and differently to clinical reasoning in the practice of both diagnosis and management [3,7,94]. Communicating clearly about, and reconciling, conceptual and theoretical frameworks will be required to advance this cause [6], in the literature more broadly and in reference to the evaluation of clinical reasoning and related constructs specifically.

4.2. Each Evaluation Tool Has Limited Evidence

This review also highlighted that the evidence to support the use of evaluation tools or make choices between them is, in most cases, limited for these learning contexts. The Script Concordance Test [16] and Lasater Clinical Judgement Rubric [17] clearly dominated the published work, but there were many more tools reported on with very limited interconnections made between them or even cross-referencing of research.
It is striking that there was no overlap with the measures of critical thinking identified in a previous systematic review of studies with experimental designs [9], and limited overlap with those identified in a systematic review of a broad range of measures used to assess simulation outcomes [10]. In part, some of the tools in this review were developed in response to the need for appropriate outcome measures that prior reviews in specific circumstances have identified, but overwhelmingly, the evaluation tools in this review have been developed and tested in isolation from each other and with limited subsequent application. The few papers available per tool offer limited evidence for the evaluation of health professional students in clinical and simulated placements.

4.3. There Is Minimal Evidence for Allied Health or Multi-Disciplinary Crossover

Somewhat surprisingly, there were only 4 papers (of the 61 included) that considered tools for the evaluation of clinical reasoning in disciplines beyond nursing and medicine (i.e., allied health), which have been the focus of prior reviews [5,9]. Clinical reasoning is similarly critical in allied health, with many allied health disciplines being primary care practitioners. Clinical reasoning is also important for patient management decisions in all these professions and for working in multidisciplinary teams. Given the lack of research, it is unclear whether findings on existing tools are applicable to allied health student development.
No avenues have yet been identified for considering how students from differing disciplines engage in clinical reasoning regarding the same patient scenario, with implications for the teaching and evaluation of constructs [7], and also for understanding how healthcare teams may work together. A few studies have used the Script Concordance Test beyond medical specialties, with one study including pharmacy students and two including nursing students. However, the case vignettes for each instance of the Script Concordance Test were developed and calibrated against content-expert decisions entirely independently. Even single studies in which participants were drawn from multiple disciplines or medical specialties [89,91] used multiple instances of the test rather than a direct comparison. Again, this lack of multi-disciplinary crossover reflects the usage of terminology and reinforces the need to make intended meanings explicit [7]. If health professionals can speak the same language when considering clinical reasoning pedagogically and clinically, the impacts may be seen in student learning, collaborative decision making, and patient care.

4.4. Evaluation Tools Reported in the Literature Represent Two Contrasting Objectives

The two differing objectives of clinical reasoning measurement represented by the tools in this review reflect contrasting emphases on diagnostic versus management considerations in reasoning [94]. When the tools are compared side by side, it is apparent that evaluating students’ clinical reasoning is complex. The tools identified do not, individually or even collectively, represent the full complexity of the construct of reasoning set out in the literature [4,5,94]. Some approaches to student evaluation emphasise cognitive functioning, and perhaps specificity and objectivity, as do the standardised psychometric measures of critical thinking abilities that have been used in non-clinical settings of health professional education [10]. Other approaches favour comprehensive coverage in practice situations. Each takes a different approach to considering how students manage inherently dynamic healthcare situations. Given that none is complete, educators must be mindful that applying different constructs when evaluating students’ learning can lead to different interpretations.

4.4.1. Tools to Assess the Development of Diagnostic Reasoning

A key objective of assessment in a subset of the included papers is to examine the congruence between students’ reasoning outcomes and those of experts. This is evident predominantly in instances of the Script Concordance Test. This objective necessitates the identification of a clear point of comparison, which can restrict the aspects and applications of clinical reasoning that can be evaluated. In contrast to definitions of clinical reasoning that incorporate the whole therapeutic process [2,4], the focus of this objective is typically on diagnosis and treatment decisions where agreement can be objectified. The expectation is that students start out using limited network approaches to knowledge organisation and build refined knowledge scripts with experience [95]. This can be a useful way to consider the development of expertise, and evaluations using this approach, while usually paper-based and closed-response, are able to track shifts in respondents’ thinking as new data informing clinical reasoning are provided. However, the domains covered need to be married with complementary approaches or comprehensive evaluation to encapsulate all components of clinical reasoning [5].

4.4.2. Tools to Judge the Quality of Performance as a Reflection of Reasoning Processes

The alternate objective of assessment in the included papers is to judge the quality of performance of steps that apply throughout a process of therapeutic patient management. Such reasoning processes are clearly set out in models such as the Clinical Judgement Model [49] and the Clinical Reasoning Process [51]. This gives rise to the labelling of this construct as ‘clinical judgement’ in tools such as the Lasater Clinical Judgement Rubric [17], while others [57,58] label a very similar construct as clinical reasoning. These instruments address multiple clinical reasoning components beyond diagnosis or scenario planning, which is more consistent with broad definitions of clinical reasoning [2,4,5]. They typically make use of the authentic and dynamic situations in simulation and clinical placements, relying on extended and preferably multiple observations of student reasoning performance with assessor judgement [5]. However, this approach may introduce the risk of misjudgement by assessors when observable performance outputs (behaviours) are used to infer cognitive and affective elements of reasoning. It may also be difficult to distinguish the cognitive, psychomotor, and affective skills in reasoning from broader therapeutic or technical skills. For example, the Lasater Clinical Judgement Rubric includes ‘calm, confident manner’, ‘clear communication’, and ‘being skilful’ among the elements of clinical judgement [17]. Evaluations using this approach might, therefore, be more holistic than specifically sensitive to clinical reasoning, and thereby to student development.

5. Limitations

This systematic review sought to identify the range of tools reported in the literature for educators to consider in evaluating the clinical reasoning or related abilities of allied health and medical students in simulated or placement settings. It represents only the formally published, English-language literature on the topic, subject to publication bias and limited international representation, particularly as grey literature searching and secondary search strategies were not used (e.g., reference list and citation tracking, or contacting authors). Author reports were used rather than independent analysis of the constructs and theoretical frameworks represented in the evaluation tools, given that the full content of many tools was not available; thus, variation in the use of constructs and terms was visible but not resolved in this study. Nonetheless, this review presents a broad overview with respect to the disciplines of the students included, while focussing specifically on the evaluation of clinical reasoning and related constructs.

6. Conclusions

This study identified a significant number of tools used to evaluate clinical reasoning and related constructs in placement and simulation settings in health professional education. There is a lack of cross-referencing between the tools and constructs identified in this review, and evidence of continued development is observed only within discipline boundaries. Unfortunately, if disciplines do not share a common understanding of the conceptual framework or constructs of clinical reasoning, interprofessional learning and collaborative clinical judgements in multidisciplinary teams may be limited.
Future research into clinical reasoning evaluation tools should build on and reference existing approaches and consider contributions across professional disciplinary divides. Research is needed to develop, test, and incorporate student evaluations that are applicable to outcome measurement in research studies in order to understand students’ performance of this essential capability and how to support its development. A larger evidence base than was identified for most tools in this review is required for that purpose, with attention to research quality. Repeated measures and longitudinal perspectives capturing students’ reasoning development are specifically required, as are workplace-based approaches [14]. By connecting and expanding this body of work, it will be possible to more clearly identify contributors to students’ learning and their attainment of threshold skills. Clearly, more research is required to sequence the development of clinical reasoning by standardising the use of terminology and constructs and by considering tool designs that can monitor the developmental progression of clinical reasoning with applicability across health professions.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/ijerph19020936/s1, Table S1: Search strategy, Table S2: Tools for evaluating students’ clinical reasoning.

Author Contributions

All authors contributed to the conceptualisation, methodological design, screening, and data extraction as described in the paper, data analysis and conduct of the research, and the drafting and review of the publication. Preliminary findings of this review have previously been presented at the 8th International Clinical Skills Conference, May 2019, Prato, Italy. All authors have read and agreed to the published version of the manuscript.

Funding

The first and third authors have been supported in part by the Special Studies Program of The University of Sydney. The second author has been supported in part by a Global Research Initiator Award from Southampton University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The included papers are referenced within, and data extraction tables are included within and in Supplementary Materials. A full list of identified references including all those excluded from the study is available from the corresponding author on request.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. McLaughlin, J.E.; Wolcott, M.D.; Hubbard, D.; Umstead, K.; Rider, T.R. A qualitative review of the design thinking framework in health professions education. BMC Med. Educ. 2019, 19, 98. [Google Scholar] [CrossRef] [Green Version]
  2. Young, M.; Thomas, A.; Lubarsky, S.; Ballard, T.; Gordon, D.; Gruppen, L.D.; Holmboe, E.; Ratcliffe, T.; Rencic, J.; Schuwirth, L.; et al. Drawing boundaries: The difficulty in defining clinical reasoning. Acad. Med. 2018, 93, 990–995. [Google Scholar] [CrossRef]
  3. Huhn, K.; Gilliland, S.J.; Black, L.L.; Wainwright, S.F.; Christensen, N. Clinical reasoning in physical therapy: A concept analysis. Phys. Ther. 2019, 99, 440–456. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Simmons, B. Clinical reasoning: Concept analysis. J. Adv. Nurs. 2010, 66, 1151–1158. [Google Scholar] [CrossRef]
  5. Daniel, M.; Rencic, J.; Durning, S.J.; Holmboe, E.; Santen, S.A.; Lang, V.; Ratcliffe, T.; Gordon, D.; Heist, B.; Lubarsky, S.; et al. Clinical reasoning assessment methods: A scoping review and practical guidance. Acad. Med. 2019, 94, 902–912. [Google Scholar] [CrossRef] [PubMed]
  6. Durning, S.J.; Artino, A.R., Jr.; Schuwirth, L.; van der Vleuten, C. Clarifying assumptions to enhance our understanding and assessment of clinical reasoning. Acad. Med. 2013, 88, 442–448. [Google Scholar] [CrossRef] [PubMed]
  7. Young, M.; Thomas, A.; Gordon, D.; Gruppen, L.D.; Lubarsky, S.; Rencic, J.; Ballard, T.; Holmboe, E.; Da Silva, A.; Ratcliffe, T.; et al. The terminology of clinical reasoning in health professions education: Implications and considerations. Med. Teach. 2019, 41, 1277–1284. [Google Scholar] [CrossRef] [PubMed]
  8. Young, M.E. Crystallizations of constructs: Lessons learned from a literature review. Perspect. Med. Educ. 2018, 7, 21–23. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Carter, A.G.; Creedy, D.K.; Sidebotham, M. Evaluation of tools used to measure critical thinking development in nursing and midwifery undergraduate students: A systematic review. Nurse Educ. Today 2015, 35, 864–874. [Google Scholar] [CrossRef] [Green Version]
  10. Macauley, K.; Brudvig, T.J.; Kadakia, M.; Bonneville, M. Systematic review of assessments that evaluate clinical decision making, clinical reasoning, and critical thinking changes after simulation participation. J. Phys. Ther. Educ. 2017, 31, 64–75. [Google Scholar] [CrossRef]
  11. Charlin, B.; Tardif, J.; Boshuizen, H.P.A. Scripts and medical diagnostic knowledge: Theory and applications for clinical reasoning instruction and research. Acad. Med. 2000, 75, 182–190. [Google Scholar] [CrossRef]
  12. Schmidt, H.G.; Norman, G.R.; Boshuizen, H.P.A. A cognitive perspective on medical expertise: Theory and implications. Acad. Med. 1990, 65, 611–621. [Google Scholar] [CrossRef]
  13. Dreyfus, S.E.; Dreyfus, H.L. A Five-Stage Model of the Mental Activities Involved in Directed Skill Acquisition; Operations Research Center, University of California: Berkeley, CA, USA, 1980. [Google Scholar]
  14. Kononowicz, A.A.; Hege, I.; Edelbring, S.; Sobocan, M.; Huwendiek, S.; Durning, S.J. The need for longitudinal clinical reasoning teaching and assessment: Results of an international survey. Med. Teach. 2020, 42, 457–462. [Google Scholar] [CrossRef]
  15. Rencic, J.; Schuwirth, L.; Gruppen, L.D.; Durning, S.J. Clinical reasoning performance assessment: Using situated cognition theory as a conceptual framework. Diagnosis 2020, 7, 241–249. [Google Scholar] [CrossRef] [PubMed]
  16. Charlin, B.; Roy, L.; Brailovsky, C.; Goulet, F.; van der Vleuten, C. The Script Concordance Test: A tool to assess the reflective clinician. Teach. Learn. Med. 2000, 12, 189–195. [Google Scholar] [CrossRef]
  17. Lasater, K. Clinical judgment development: Using simulation to create an assessment rubric. J. Nurs. Educ. 2007, 46, 496–503. [Google Scholar]
  18. Scheffer, B.K.; Rubenfeld, M.G. A consensus statement on critical thinking in nursing. J. Nurs. Educ. 2000, 39, 352–359. [Google Scholar] [CrossRef] [PubMed]
  19. American Philosophical Association. Critical Thinking: A Statement of Expert Consensus for Purposes of Educational Assessment and Instruction; California Academic Press: Millbrae, CA, USA, 1990. [Google Scholar]
  20. Allen, G.D.; Rubenfeld, M.G.; Scheffer, B.K. Reliability of assessment of critical thinking. J. Prof. Nurs. 2004, 20, 15–22. [Google Scholar] [CrossRef] [PubMed]
  21. Carter, A.G.; Creedy, D.K.; Sidebotham, M. Development and psychometric testing of the carter assessment of critical thinking in midwifery (preceptor/mentor version). Midwifery 2016, 34, 141–149. [Google Scholar] [CrossRef] [Green Version]
  22. Carter, A.G.; Creedy, D.K.; Sidebotham, M. Critical thinking skills in midwifery practice: Development of a self-assessment tool for students. Midwifery 2017, 50, 184–192. [Google Scholar] [CrossRef] [Green Version]
  23. Carter, A.G.; Creedy, D.K.; Sidebotham, M. Measuring critical thinking in pre-registration midwifery students: A multi-method approach. Nurse Educ. Today 2018, 61, 169–174. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Cise, J.S.; Wilson, C.S.; Thie, M.J. A qualitative tool for critical thinking skill development. Nurse Educ. 2004, 29, 147–151. [Google Scholar] [CrossRef] [PubMed]
  25. Shin, H.; Park, C.G.; Kim, H. Validation of Yoon’s critical thinking disposition instrument. Asian Nurs. Res. 2015, 9, 342–348. [Google Scholar] [CrossRef] [Green Version]
  26. Roberts, D. The clinical viva: An assessment of clinical thinking. Nurse Educ. Today 2013, 33, 402–406. [Google Scholar] [CrossRef] [PubMed]
  27. Levett-Jones, T.; Gersbach, J.; Arthur, C.; Roche, J. Implementing a clinical competency assessment model that promotes critical reflection and ensures nursing graduates’ readiness for practice. Nurse Educ. Pract. 2011, 11, 64–69. [Google Scholar] [CrossRef] [PubMed]
  28. Roach, K.E.; Frost, J.S.; Francis, N.J.; Giles, S.; Nordrum, J.T.; Delitto, A. Validation of the revised physical therapist clinical performance instrument (PT CPI): Version 2006. Phys. Ther. 2012, 92, 416–428. [Google Scholar] [CrossRef] [PubMed]
  29. Brudvig, T.J.; Macauley, K.; Segal, N. Measuring clinical decision-making and clinical skills in DPT students across a curriculum: Validating a new survey tool. J. Allied Health 2017, 46, 21–26. [Google Scholar]
  30. Nguyen, K.; Ben Khallouq, B.; Schuster, A.; Beevers, C.; Dil, N.; Kay, D.; Kibble, J.D.; Harris, D.M. Developing a tool for observing group critical thinking skills in first-year medical students: A pilot study using physiology-based, high-fidelity patient simulations. Adv. Physiol. Educ. 2017, 41, 604–611. [Google Scholar] [CrossRef] [PubMed]
  31. Facione, P.A. Critical Thinking: What It Is and Why It Counts; Measured Reasons: Hermosa Beach, CA, USA, 2015. [Google Scholar]
  32. Gee, B.M.; Thompson, K.; Strickland, J.; Miller, L.J. The development of a measurement tool evaluating knowledge related to sensory processing among graduate occupational therapy students: A process description. Occup. Ther. Int. 2017, 2017, 6713012. [Google Scholar] [CrossRef]
  33. Bialer, D.S.; Miller, L.J. No Longer a Secret: Unique Common Sense Strategies for Children with Sensory or Motor Challenges; Sensory World: Arlington, TX, USA, 2011. [Google Scholar]
  34. Baker, E.A.; Ledford, C.H.; Fogg, L.; Way, D.P.; Park, Y.S. The IDEA assessment tool: Assessing the reporting, diagnostic reasoning, and decision-making skills demonstrated in medical students’ hospital admission notes. Teach. Learn. Med. 2015, 27, 163–173. [Google Scholar] [CrossRef]
  35. Pangaro, L. A new vocabulary and other innovations for improving descriptive in-training evaluations. Acad. Med. 1999, 74, 1203–1207. [Google Scholar] [CrossRef] [PubMed]
  36. Kelly, W.; Durning, S.; Denton, G. Comparing a script concordance examination to a multiple-choice examination on a core internal medicine clerkship. Teach. Learn. Med. 2012, 24, 187–193. [Google Scholar] [CrossRef] [PubMed]
  37. Power, A.; Lemay, J.F.; Cooke, S. Justify your answer: The role of written think aloud in script concordance testing. Teach. Learn. Med. 2017, 29, 59–67. [Google Scholar] [CrossRef] [PubMed]
  38. Endsley, M.R. Theoretical Underpinning of Situation Awareness: A Critical Review, in Situation Awareness Analysis and Measurement; Endsley, M.R., Garland, D.J., Eds.; Lawrence Erlbaum: Mahwah, NJ, USA, 2000; pp. 3–32. [Google Scholar]
  39. Lavoie, P.; Cossette, S.; Pepin, J. Testing nursing students’ clinical judgment in a patient deterioration simulation scenario: Development of a situation awareness instrument. Nurse Educ. Today 2016, 38, 61–67. [Google Scholar] [CrossRef]
  40. Gantt, L.T. Using the clark simulation evaluation rubric with associate degree and baccalaureate nursing students. Nurs. Educ. Perspect. 2010, 31, 101–105. [Google Scholar]
  41. Benner, P. From Novice to Expert: Excellence and Power in Clinical Nursing; Addison-Wesley: Menlo Park, CA, USA, 1984. [Google Scholar]
  42. Bloom, B.S.; Englehart, M.D.; Furst, E.J.; Hill, W.H.; Krathwohl, D.R. Taxonomy of Educational Objectives: The Classification of Educational Goals; David McKay: New York, NY, USA, 1956. [Google Scholar]
  43. Furze, J.; Gale, J.R.; Black, L.; Cochran, T.M.; Jensen, G.M. Clinical reasoning: Development of a grading rubric for student assessment. J. Phys. Ther. Educ. 2015, 29, 34–45. [Google Scholar] [CrossRef]
  44. Krathwohl, D.R. A revision of Bloom’s Taxonomy: An overview. Theory Pract. 2002, 41, 212–218. [Google Scholar] [CrossRef]
  45. O’Neill, E.S.; Dluhy, N.M.; Chin, E. Modelling novice clinical reasoning for a computerized decision support system. J. Adv. Nurs. 2005, 49, 68–77. [Google Scholar] [CrossRef]
  46. Bandura, A. Self-efficacy: Toward a unifying theory of behavioral change. Psychol. Rev. 1977, 84, 191–215. [Google Scholar] [CrossRef] [PubMed]
  47. Bandura, A. Social Learning Theory; Prentice Hall: Englewood Cliffs, NJ, USA, 1977. [Google Scholar]
  48. White, K.A. Development and validation of a tool to measure self-confidence and anxiety in nursing students during clinical decision making. J. Nurs. Educ. 2014, 53, 14–22. [Google Scholar] [CrossRef]
  49. Tanner, C.A. Thinking like a nurse: A research-based model of clinical judgement in nursing. J. Nurs. Educ. 2006, 45, 204–211. [Google Scholar] [PubMed]
  50. Pesut, D.J.; Herman, J. Clinical Reasoning: The Art and Science of Critical and Creative Thinking; Delmar: Albany, NY, USA, 1999. [Google Scholar]
  51. Levett-Jones, T.; Hoffman, K.; Dempsey, J.; Jeong, S.Y.-S.; Noble, D.; Norton, C.A.; Roche, J.; Hickey, N. The ‘five rights’ of clinical reasoning: An educational model to enhance nursing students’ ability to identify and manage clinically ‘at risk’ patients. Nurse Educ. Today 2010, 30, 515–520. [Google Scholar] [CrossRef]
  52. Shin, H.; Park, C.G.; Shim, K. The Korean version of the Lasater Clinical Judgment Rubric: A validation study. Nurse Educ. Today 2015, 35, 68–72. [Google Scholar] [CrossRef] [PubMed]
  53. Vreugdenhil, J.; Spek, B. Development and validation of Dutch version of Lasater Clinical Judgment Rubric in hospital practice: An instrument design study. Nurse Educ. Today 2018, 62, 43–51. [Google Scholar] [CrossRef] [PubMed]
  54. Georg, C.; Karlgren, K.; Ulfvarson, J.; Jirwe, M.; Welin, E. A rubric to assess students’ clinical reasoning when encountering virtual patients. J. Nurs. Educ. 2018, 57, 408–415. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  55. Shin, H.; Shim, K.; Lee, Y.; Quinn, L. Validation of a new assessment tool for a pediatric nursing simulation module. J. Nurs. Educ. 2014, 53, 623–629. [Google Scholar] [CrossRef] [Green Version]
  56. Kim, S.J.; Kim, S.; Kang, K.A.; Oh, J.; Lee, M.N. Development of a simulation evaluation tool for assessing nursing students’ clinical judgment in caring for children with dehydration. Nurse Educ. Today 2016, 37, 45–52. [Google Scholar] [CrossRef]
  57. Liaw, S.Y.; Rashasegaran, A.; Wong, L.F.; Deneen, C.C.; Cooper, S.; Levett-Jones, T.; Goh, H.S.; Ignacio, J. Development and psychometric testing of a Clinical Reasoning Evaluation Simulation Tool (CREST) for assessing nursing students’ abilities to recognize and respond to clinical deterioration. Nurse Educ. Today 2018, 62, 74–79. [Google Scholar] [CrossRef]
  58. Liou, S.R.; Liu, H.C.; Tsai, H.M.; Tsai, Y.H.; Lin, Y.C.; Chang, C.H.; Cheng, C.-Y. The development and psychometric testing of a theory-based instrument to evaluate nurses’ perception of clinical reasoning competence. J. Adv. Nurs. 2016, 72, 707–717. [Google Scholar] [CrossRef] [PubMed]
  59. Chatterjee, S.; Ng, J.; Kwan, K.; Matsumoto, E.D. Assessing the surgical decision making abilities of novice and proficient urologists. J. Urol. 2009, 181, 2251–2256. [Google Scholar] [CrossRef]
  60. Derakhshandeh, Z.; Amini, M.; Kojuri, J.; Dehbozorgian, M. Psychometric characteristics of Clinical Reasoning Problems (CRPs) and its correlation with routine multiple choice question (MCQ) in Cardiology department. J. Adv. Med. Educ. Prof. 2018, 6, 37–42. [Google Scholar]
  61. Im, S.; Kim, D.K.; Kong, H.H.; Roh, H.R.; Oh, Y.R.; Seo, J.H. Assessing clinical reasoning abilities of medical students using clinical performance examination. Korean J. Med. Educ. 2016, 28, 35–47. [Google Scholar] [CrossRef]
  62. Huwendiek, S.; Reichert, F.; Duncker, C.; de Leng, B.A.; van der Vleuten, C.; Muijtjens, A.M.; Bosse, H.M.; Haag, M.; Hoffmann, G.F.; Tonshoff, B.; et al. Electronic assessment of clinical reasoning in clerkships: A mixed-methods comparison of long-menu key-feature problems with context-rich single best answer questions. Med. Teach. 2017, 39, 476–485. [Google Scholar] [CrossRef]
  63. Fida, M.; Kassab, S.E. Do medical students’ scores using different assessment instruments predict their scores in clinical reasoning using a computer-based simulation? Adv. Med. Educ. Pract. 2015, 6, 135–141. [Google Scholar] [CrossRef] [Green Version]
  64. Beullens, J.; Struyf, E.; Van Damme, B. Do extended matching multiple-choice questions measure clinical reasoning? Med. Educ. 2005, 39, 410–417. [Google Scholar] [CrossRef] [PubMed]
  65. Courteille, O.; Bergin, R.; Stockeld, D.; Ponzer, S.; Fors, U. The use of a virtual patient case in an OSCE-based exam—A pilot study. Med. Teach. 2008, 30, e66–e76. [Google Scholar] [CrossRef] [PubMed]
  66. Berger, A.J.; Gillespie, C.C.; Tewksbury, L.R.; Overstreet, I.M.; Tsai, M.C.; Kalet, A.L.; Ogilvie, J.B. Assessment of medical student clinical reasoning by “lay” vs. physician raters: Inter-rater reliability using a scoring guide in a multidisciplinary objective structured clinical examination. Am. J. Surg. 2012, 203, 81–86. [Google Scholar] [CrossRef] [PubMed]
  67. Tutticci, N.; Lewis, P.A.; Coyer, F. Measuring third year undergraduate nursing students’ reflective thinking skills and critical reflection self-efficacy following high fidelity simulation: A pilot study. Nurse Educ. Pract. 2016, 18, 52–59. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  68. Smith, S.; Kogan, J.R.; Berman, N.B.; Dell, M.S.; Brock, D.M.; Robins, L.S. The development and preliminary validation of a rubric to assess medical students’ written summary statements in virtual patient cases. Acad. Med. 2016, 91, 94–100. [Google Scholar] [CrossRef]
  69. Fleiszer, D.; Hoover, M.L.; Posel, N.; Razek, T.; Bergman, S. Development and validation of a tool to evaluate the evolution of clinical reasoning in trauma using virtual patients. J. Surg. Educ. 2018, 75, 779–786. [Google Scholar] [CrossRef]
  70. Adamson, K.A. Rater bias in simulation performance assessment: Examining the effect of participant race/ethnicity. Nurs. Educ. Perspect. 2016, 37, 78–82. [Google Scholar] [PubMed]
  71. Adamson, K.A.; Kardong-Edgren, S. A method and resources for assessing the reliability of simulation evaluation instruments. Nurs. Educ. Perspect. 2012, 33, 334–339. [Google Scholar] [CrossRef] [PubMed]
  72. Ashcraft, A.S.; Opton, L.; Bridges, R.A.; Caballero, S.; Veesart, A.; Weaver, C. Simulation evaluation using a modified Lasater Clinical Judgment Rubric. Nurs. Educ. Perspect. 2013, 34, 122–126. [Google Scholar] [PubMed]
  73. Bussard, M.E. Evaluation of clinical judgment in prelicensure nursing students. Nurse Educ. 2018, 43, 106–108. [Google Scholar] [CrossRef]
  74. Manetti, W. Evaluating the clinical judgment of prelicensure nursing students in the clinical setting. Nurse Educ. 2018, 43, 272–276. [Google Scholar] [CrossRef]
  75. Strickland, H.P.; Cheshire, M.H.; March, A.L. Clinical judgment during simulation: A comparison of student and faculty scores. Nurs. Educ. Perspect. 2017, 38, 85–86. [Google Scholar] [CrossRef]
  76. Roman-Cereto, M.; Garcia-Mayor, S.; Kaknani-Uttumchandani, S.; Garcia-Gamez, M.; Leon-Campos, A.; Fernandez-Ordonez, E.; Ruiz-Garcia, M.L.; Marti-Garcia, C.; Lopez-Leiva, I.; Lasater, K.; et al. Cultural adaptation and validation of the Lasater Clinical Judgment Rubric in nursing students in Spain. Nurse Educ. Today 2018, 64, 71–78. [Google Scholar] [CrossRef]
  77. Kautz, D.; Kuiper, R.; Bartlett, R.; Buck, R.; Williams, R.; Knight-Brown, P. Building evidence for the development of clinical reasoning using a rating tool with the Outcome-Present State-Test (OPT) Model. South. Online J. Nurs. Res. 2009, 9, 8. [Google Scholar]
  78. Amini, M.; Shahabi, A.; Moghadami, M.; Shams, M.; Anooshirvani, A.; Rostamipour, H.; Kojuri, J.; Dehbozorgian, M.; Nabeiei, P.; Jafari, M.; et al. Psychometric characteristics of script concordance test (SCT) and its correlation with routine multiple choice question (MCQ) in internal medicine department. Biomed. Res. 2017, 28, 8397–8401.
  79. Boulouffe, C.; Doucet, B.; Muschart, X.; Charlin, B.; Vanpee, D. Assessing clinical reasoning using a script concordance test with electrocardiogram in an emergency medicine clerkship rotation. Emerg. Med. J. 2014, 31, 313–316.
  80. Goos, M.; Schubach, F.; Seifert, G.; Boeker, M. Validation of undergraduate medical student script concordance test (SCT) scores on the clinical assessment of the acute abdomen. BMC Surg. 2016, 16, 57.
  81. Humbert, A.J.; Besinger, B.; Miech, E.J. Assessing clinical reasoning skills in scenarios of uncertainty: Convergent validity for a script concordance test in an emergency medicine clerkship and residency. Acad. Emerg. Med. 2011, 18, 627–634.
  82. Kania, R.E.; Verillaud, B.; Tran, H.; Gagnon, R.; Kazitani, D.; Tran Ba Huy, P.; Herman, P.; Charlin, B. Online script concordance test for clinical reasoning assessment in otorhinolaryngology: The association between performance and clinical experience. Arch. Otolaryngol.-Head Neck Surg. 2011, 137, 751–755.
  83. Kazour, F.; Richa, S.; Zoghbi, M.; El-Hage, W.; Haddad, F.G. Using the Script Concordance Test to evaluate clinical reasoning skills in psychiatry. Acad. Psychiatry 2017, 41, 86–90.
  84. Lambert, C.; Gagnon, R.; Nguyen, D.; Charlin, B. The Script Concordance Test in radiation oncology: Validation study of a new tool to assess clinical reasoning. Radiat. Oncol. 2009, 4, 7.
  85. Ruiz, J.G.; Tunuguntla, R.; Charlin, B.; Ouslander, J.G.; Symes, S.N.; Gagnon, R.; Phancao, F.; Roos, B.A. The Script Concordance Test as a measure of clinical reasoning skills in geriatric urinary incontinence. J. Am. Geriatr. Soc. 2010, 58, 2178–2184.
  86. Sibert, L.; Charlin, B.; Corcos, J.; Gagnon, R.; Lechevallier, J.; Grise, P. Assessment of clinical reasoning competence in urology with the Script Concordance Test: An exploratory study across two sites from different countries. Eur. Urol. 2002, 41, 227–233.
  87. Sibert, L.; Darmoni, S.J.; Dahamna, B.; Hellot, M.F.; Weber, J.; Charlin, B. On line clinical reasoning assessment with Script Concordance Test in urology: Results of a French pilot study. BMC Med. Educ. 2006, 6, 45.
  88. Subra, J.; Chicoulaa, B.; Stillmunkes, A.; Mesthe, P.; Oustric, S.; Rouge Bugat, M.E. Reliability and validity of the Script Concordance Test for postgraduate students of general practice. Eur. J. Gen. Pract. 2017, 23, 208–213.
  89. Wan, M.S.; Tor, E.; Hudson, J.N. Improving the validity of script concordance testing by optimising and balancing items. Med. Educ. 2018, 52, 336–346.
  90. Groves, M.; Dick, M.L.; McColl, G.; Bilszta, J. Analysing clinical reasoning characteristics using a combined methods approach. BMC Med. Educ. 2013, 13, 144.
  91. Gagnon, R.; Charlin, B.; Lambert, C.; Carriere, B.; van der Vleuten, C. Script concordance testing: More cases or more questions? Adv. Health Sci. Educ. 2009, 14, 367–375.
  92. Dawson, T.; Comer, L.; Kossick, M.A.; Neubrander, J. Can script concordance testing be used in nursing education to accurately assess clinical reasoning skills? J. Nurs. Educ. 2014, 53, 281–286.
  93. Funk, K.A.; Kolar, C.; Schweiss, S.K.; Tingen, J.M.; Janke, K.K. Experience with the script concordance test to develop clinical reasoning skills in pharmacy students. Curr. Pharm. Teach. Learn. 2017, 9, 1031–1041.
  94. Cook, D.A.; Durning, S.J.; Sherbino, J.; Gruppen, L.D. Management reasoning: Implications for health professions educators and a research agenda. Acad. Med. 2019, 94, 1310–1316.
  95. Boshuizen, H.P.A.; Schmidt, H.G. The development of clinical reasoning expertise. In Clinical Reasoning in the Health Professions; Higgs, J., Jones, M.A., Loftus, S., Christensen, N., Eds.; Elsevier: Edinburgh, UK, 2019; pp. 57–66.
Figure 1. Flow diagram of search.
Table 1. Inclusion and exclusion criteria.

Inclusion criteria (studies meeting ALL the following):
  • Peer reviewed
  • Published 2000–2018
  • Published in English
  • Accessible to these authors
  • Any health profession
  • Pre-registration student education
  • Clinical reasoning or related concepts
  • Primary outcome is to develop or test an evaluation tool

Exclusion criteria:
  • Used but did not develop or test an evaluation tool
  • Developed or tested an evaluation of a whole range of clinical skills or competencies, even if these included a component of clinical reasoning
  • Evaluation tools only used outside of clinical placement or simulation settings
Table 2. Constructs measured in evaluations of students’ clinical reasoning.

Clinical Decision Making
  Theoretical underpinning: None stated (an adaptation of the Physical Therapist Clinical Performance Instrument) [28]
    • Clinical Decision Making Survey Tool (Physical Therapy) [29]
  Theoretical underpinning: Not stated
    • Surgical Decision Making Rating Scale (Medicine) [59]

Clinical Judgement
  Theoretical underpinning: Clinical Judgement Model [49]
    • Lasater Clinical Judgement Rubric (Nursing) [17,70,71,72,73,74,75,76]
    • Lasater Clinical Judgement Rubric—Korean version (Nursing) [52]
    • Lasater Clinical Judgement Rubric—Dutch version (Nursing) [53]
    • Virtual Patient Lasater Clinical Judgement Rubric (Nursing) [54]
    • Scenario-specific Assessment Tool for Febrile Infant Care Simulation (adaptation of the Lasater Clinical Judgement Rubric; Nursing) [55]
    • Simulation Evaluation Tool (an adaptation of the Lasater Clinical Judgement Rubric) (Nursing) [56]

Clinical Reasoning
  Theoretical underpinning: “A SECRET” reasoning approach [33]
    • A SECRET Assessment (Occupational Therapy) [32]
  Theoretical underpinning: Clinical Reasoning Process model [51]
    • Clinical Reasoning Evaluation Simulation Tool (CREST) (Nursing) [57]
    • Nurses Clinical Reasoning Scale (Nursing) [58]
  Theoretical underpinning: IDEA Framework, structural semantics, and RIME [35]
    • IDEAs Assessment Tool (Medicine) [34]
  Theoretical underpinning: Novice Clinical Reasoning Model [45] and social cognitive theory [46,47]
    • Nursing Anxiety and Self-Confidence with Clinical Decision Making (NASC-CDM; Nursing) [48]
  Theoretical underpinning: Outcome Present State Test Model [50]
    • Outcome Present State Test (OPT; Nursing) [77]
  Theoretical underpinning: Revised Bloom’s Taxonomy [44] and Dreyfus Model [13]
    • Clinical Reasoning Grading Rubric (Physical Therapy) [43]
  Theoretical underpinning: Script theories [11,12]
    • Multiple Choice Question Exam (Medicine) [36] *
    • Script Concordance Test (Medicine) [36] *, [78,79,80,81,82,83,84,85,86,87,88,89], [90] *, [91] *; (Nursing) [92], [91] *; (Pharmacy) [93]
    • Script Concordance Test with Think Aloud (Medicine) [37], [90] *
  Theoretical underpinning: Not stated
    • Clinical Reasoning Problems Test (Medicine) [60], [90] *

Critical Thinking
  Theoretical underpinning: Benner’s [41] levels of nursing experience and Bloom’s [42] cognitive domains
    • Clark Simulation Evaluation Rubric (Nursing) [40]
  Theoretical underpinning: Consensus dimensions of critical thinking in nursing [18]
    • Carter Assessment of Critical Thinking in Midwifery (Preceptor/Mentor Version) (Midwifery) [21], [23] *
    • Carter Assessment of Critical Thinking in Midwifery (Student Self-Rating Version) (Midwifery) [22], [23] *
    • Carter Assessment of Critical Thinking in Midwifery (Reflective Writing) (Midwifery) [23] *
    • Rubric for assessing critical thinking dimensions (Nursing) [20]
  Theoretical underpinning: Expert Consensus on Critical Thinking [19]
    • Critical Thinking Self-Reflection Tool (Nursing) [24]
    • Yoon’s Critical Thinking Tool (Nursing) [25]
  Theoretical underpinning: IDEAS five-step critical thinking problem-solving process [31]
    • Critical Thinking Skills Rating Instrument (CTSRI; Medicine) [30]
  Theoretical underpinning: Structured Observation of and Assessment of Practice [27]
    • Clinical Viva (Nursing) [26]

Situation Awareness
  Theoretical underpinning: Situation Awareness [38]
    • Situation Awareness Global Assessment Technique (SAGAT; Nursing) [39]

Not Specified
  Theoretical underpinning: Not stated
    • Clinical Performance Examination (CPX; Medicine) [61]
    • Computer-based Case Simulation (CCS; DxR Clinician Software; Medicine) [63]
    • Exam formats: Context-rich single best answer versus key feature problems (Medicine) [62]
    • Exam formats: Extended matching questions, with think aloud (Medicine) [64]
    • Interactive Simulation of Patients Objective Structured Clinical Examination (OSCE) Station (Medicine) [65]
    • Objective Structured Clinical Examination (OSCE) Note Writing Station (Medicine) [66]
    • Reflective Thinking Instrument (Nursing) [67]
    • Virtual Patient Case Patient Summary Statement Rubric (Medicine) [68]
    • Virtual Patient Case Procedural Rubric and Semantic Rubric (Medicine) [69]
Note: Papers marked with * reported more than one assessment and therefore appear more than once in the table.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
