Abstract
Student perspectives on their final year clinical placements in biomedical sciences at Qatar University are assessed using the clinical practicum assessment tool (CPAT), which was developed in-house following accreditation body requirements. The tool, which we call the CPAT-Qatar University (CPAT-QU), covers the three clinical practicum domains: practicum content, preceptors, and competencies. Here, we validate this tool. The CPAT-QU has 27 Likert-scale questions and free-text open questions. CPAT-QU readability was calculated using the Flesch–Kincaid Reading Ease (FKRE) instrument. Content validity was assessed using the average and universal agreement scale-level content validity indices (S-CVI/Average and S-CVI/UA). For construct validity, 50 employed graduates who had completed the practicum consented to participate in the study, and validity was assessed by a principal component analysis (PCA). Reliability was analyzed by Cronbach’s alpha. The S-CVI/Average and S-CVI/UA were 0.90 and 0.59, respectively, indicating that an adequate proportion of the content was relevant. The PCA extracted two core components, which explained 63% of the variance in the CPAT-QU. Cronbach’s alpha values for the items were within the acceptable range of 0.60–1.00, indicating a good level of internal consistency. The CPAT-QU appears to be a useful tool for assessing student perspectives on their clinical placements; however, its construct validity needs continuous improvement.
1. Introduction
The goal of clinical education in medical laboratory science (MLS) is to prepare graduates with professional competencies crucial for delivering clinical laboratory services. These competencies require the integration of knowledge, skills and attitudes to appropriately handle complex laboratory procedures and ensure that staff are effective contributors to rapidly changing healthcare systems [1,2,3]. Laboratory complexity is increasing due to constant and rapid scientific advances and technological innovations. These changes influence the educational requirements and qualifications needed for medical laboratory professionals [4], and high-quality laboratory diagnostic services must be underpinned by a high standard of education [5]. Therefore, it is important to simultaneously enhance educational outcomes as well as clinical field training for MLS graduates and other allied health personnel. There is currently a worldwide shortage in the diagnostic workforce. For example, a recent study reported a shortfall of 840,000 diagnostic staff in the UK workforce [6]. Therefore, competency-based education is needed to ensure expansion of the diagnostic workforce with highly skilled and appropriately trained individuals.
The National Accrediting Agency for Clinical Laboratory Sciences (NAACLS), the American Society for Clinical Pathology (ASCP), and the American Society of Clinical Laboratory Sciences (ASCLS) proposed a set of standards to control the quality of MLS education and the profession [1,7]. Effective learning in the clinical setting depends on adequate supervision, mentoring, feedback, and assessment. Curriculum mapping and assessment matrices that align courses with desired program goals and outcomes are useful tools to monitor the quality of education delivered by MLS programs. Direct assessment methods, including exams, essays, presentations, classroom assignments, and capstone courses, can be used to assess students’ knowledge and skills, whereas indirect assessment methods, such as surveys and interviews, assess students’ reflections, opinions, and feelings about learning and the learning environment [8,9]. MLS programs commonly use three types of surveys: clinical experience surveys, graduate surveys, and employer surveys, which help to plan continuous quality improvement activities in collaboration with the main medical laboratory testing stakeholders.
The Department of Biomedical Sciences at Qatar University provides an NAACLS-accredited Biomedical Science bachelor’s degree program (http://www.qu.edu.qa/chs/biomedical-sciences (accessed on 1 January 2022)). In the final year of the program, students are required to complete educational practicum courses at the two main hospitals in Qatar, Hamad Medical Corporation (HMC) and Sidra Medicine. To assess their clinical practicum experience, we developed an in-house tool, termed the Clinical Practicum Assessment Tool-Qatar University (CPAT-QU), based on NAACLS, ASCP, and ASCLS guidelines. The tool covers three aspects of training: the practicum content, the preceptors (trainers, instructors, and supervisors), and competencies. Although surveys are commonly used to evaluate MLS clinical practicum training, we are not aware of a tool that has undergone comprehensive validation, even though fully validated tools would be more useful for continuous improvement and accreditation purposes. Hence, here, we evaluated the validity and reliability of the CPAT-QU survey tool used in the MLS-accredited program at Qatar University. Box 1 summarizes the key messages from this analysis.
Box 1. Key Messages.
- What is already known on this topic: Tools are available to assess the quality of biomedical sciences practical courses in terms of content, preceptor quality, and competencies, because this information is essential for continuous quality improvement over time.
- What this study adds: This clinical practicum assessment tool is the first tool to be validated for assessing student perspectives on their clinical placements in biomedical sciences.
- How this study might affect research, practice, or policy: The clinical practicum assessment tool has value for continuous improvement and accreditation purposes and could help inform MLS program directors about ways to improve their clinical practicum based on high-quality evidence.
2. Materials and Methods
2.1. CPAT-QU Development and Description
The CPAT-QU was developed based on the NAACLS, ASCLS, and ASCP educational frameworks and guidelines [2,10,11,12]. In addition, we consulted training managers at the HMC and Sidra Medicine MLS clinical training sites. The tool has 27 Likert-scale questions as well as open-ended comment questions about students’ learning needs. It consists of three main domains (Table S1): practicum content, preceptors, and competencies. This tool is currently used by the Department of Biomedical Sciences at QU to assess program graduate satisfaction with the clinical practicum. Entry-level competencies were categorized according to the three domains of learning (cognitive, psychomotor, and affective) by the research team and expert colleagues, as described by the ASCLS and ASCP (Table S2) [12,13]. Figure 1 shows a flow chart depicting the validation process.
Figure 1.
A flow chart depicting the process used for validating the CPAT-QU for the MLS clinical practicum.
A bipolar five-level Likert scale was used for domains 1 and 2: “very unsatisfied”, “unsatisfied”, “neutral/undecided”, “satisfied”, and “very satisfied”, coded as “−2”, “−1”, “0”, “1”, and “2”, respectively. A unipolar five-level Likert scale was used for the competencies and skills required in workplaces (domain 3): “not at all”, “little”, “to some extent”, “well”, and “very well”, coded as “0”, “1”, “2”, “3”, and “4”, respectively [14]. Qualitative responses were coded with integers for all items to make them equivalent to the quantitative responses for a unified statistical analysis. Domain scores were calculated by summing the scores for all items in the domain. The individual index was calculated as:
Σ(item responses)/Σ(highest values in the items)
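As a minimal sketch, the domain score and individual index defined above can be computed as follows; the item responses and maximum values below are illustrative only, not study data.

```python
# Individual index = sum of item responses / sum of the highest possible
# item values, as defined in the text.
def individual_index(responses, max_values):
    """Ratio of the summed responses to the summed maximum item values."""
    return sum(responses) / sum(max_values)

# e.g. three domain-3 items, each scored on the unipolar 0-4 scale
responses = [3, 4, 2]
max_values = [4, 4, 4]

domain_score = sum(responses)                     # sum of item scores: 9
index = individual_index(responses, max_values)   # 9 / 12 = 0.75
```

Coding responses as integers first (as described above) is what makes this summation well defined across the bipolar and unipolar scales.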
2.2. CPAT-QU Validation
2.2.1. Readability
The Flesch–Kincaid Reading Ease (FKRE) and the Flesch–Kincaid grade-level (FKRA) tests, among the oldest and most widely used tests of readability in English [15], were used to calculate the understandability of the CPAT-QU. The FKRE formula is RE = 206.835 − (1.015 × ASL) − (84.6 × ASW), where RE = reading ease, ASL = average sentence length, and ASW = average number of syllables per word. The FKRA formula is FKRA = (0.39 × ASL) + (11.8 × ASW) − 15.59, where FKRA = Flesch–Kincaid reading age, ASL = average sentence length, and ASW = average number of syllables per word.
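The two formulas above can be sketched directly in code; this assumes ASL and ASW have already been computed (a full implementation would also need sentence splitting and a syllable counter).

```python
# Flesch-Kincaid Reading Ease: higher = easier to read.
def fkre(asl, asw):
    """asl: average sentence length (words/sentence);
    asw: average syllables per word."""
    return 206.835 - (1.015 * asl) - (84.6 * asw)

# Flesch-Kincaid grade-level (reading age) score: approximate US school
# grade needed to understand the text.
def fkra(asl, asw):
    return (0.39 * asl) + (11.8 * asw) - 15.59

# e.g. text averaging 10 words per sentence and 1.5 syllables per word
ease = fkre(10, 1.5)    # 206.835 - 10.15 - 126.9 = 69.785
grade = fkra(10, 1.5)   # 3.9 + 17.7 - 15.59 = 6.01
```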
2.2.2. Content Validity
Six experts were selected based on their experience in clinical teaching, training, and MLS program accreditation standards to review the relevance of the CPAT-QU. All experts had at least 15 years of experience in the field and had completed the College of American Pathologists accreditation process for clinical sites in the State of Qatar. They had also completed self-directed learning on NAACLS accreditation. Three MLS experts were ASCP certified and actively maintained their certification. The experts were asked to comment on whether the tool’s content was appropriate for its purpose and for assessing the clinical practicum from the graduates’ perspective. The definitions of the three main domains and items were presented to the experts, who rated each item independently on a 4-point Likert scale: “not relevant = 1”, “somewhat relevant = 2”, “quite relevant = 3”, and “highly relevant = 4” (Table S2). These ratings were used to calculate the content validity index (CVI), the scale-level content validity index using the average method (S-CVI/Average), and the scale-level content validity index using the universal agreement method (S-CVI/UA) based on the judgment of the six experts. These indices were chosen because they are easy to compute, easy to interpret, and focused on expert agreement. Data analysis of content validity was performed using Microsoft Excel (Redmond, WA, USA).
2.2.3. Construct Validity
A principal component analysis (PCA) was performed to calculate the factor loadings of the three domains (practicum content, preceptors, and competencies). A PCA clusters items into different factors [16]. Components with eigenvalues greater than 1 were retained for factor loading. Cumulative percentages were used to explain the variance of the latent variable. A varimax rotation was applied to maximize the variance of the squared loadings, and the squared correlations between variables and factors were calculated; the rotated sums of squared loadings were taken as the variance attributable to each factor after rotation. A scree plot, which shows the number of factors on the x-axis and their eigenvalues on the y-axis, was used to determine how many factors to retain based on where the curve flattens. Kaiser–Meyer–Olkin (KMO) values were used to measure sampling adequacy for the PCA.
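The eigenvalue extraction and Kaiser criterion behind the PCA can be sketched as below on a synthetic respondents-by-items matrix (not the study data); a full analysis would also compute loadings, rotation, and KMO, which SPSS handles internally.

```python
import numpy as np

# Synthetic Likert responses: 50 respondents x 6 items, scored 0-4
rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(50, 6)).astype(float)

# PCA on the item correlation matrix (equivalent to PCA on standardized data)
corr = np.corrcoef(X, rowvar=False)

# Eigenvalues in descending order; their sum equals the number of items
eigenvalues = np.linalg.eigvalsh(corr)[::-1]

# Kaiser criterion: retain components whose eigenvalue exceeds 1
n_retain = int((eigenvalues > 1).sum())

# Cumulative % of variance explained by the retained components
cum_pct = 100 * eigenvalues[:n_retain].sum() / eigenvalues.sum()
```

Plotting `eigenvalues` against component number 1..6 gives the scree plot described above; the retention cut-off is read off where the curve flattens.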
2.2.4. Reliability
Reliability was investigated by asking the same set of participants to answer the questionnaire twice, with a 15–19-day gap between sessions. The expectation was that there would be no substantial difference between the two outcomes [17].
2.3. Participants
The QU Institutional Review Board granted ethical approval (QU-IRB 1360-EA/20). The study targeted graduates of the Biomedical Science program between 2015 and 2019 who were employed as Medical Laboratory Scientists/Technologists. A total of 150 students graduated and completed their clinical practicum during the specified period. The graduates were invited to participate by email and signed a consent form electronically via a Google link. Fifty (33%) employed graduates agreed to participate in this study. We initially planned face-to-face interviews with the students; however, due to the COVID-19 pandemic, telephone interviews were performed instead. After obtaining informed consent, two telephone interviews were conducted in English, no more than 19 days apart, by a trained researcher. Interviews were not audio recorded. Data were collected over a two-month period (September–October 2020), and the outcomes were entered into an Excel spreadsheet.
2.4. Statistical Analysis
A PCA was used for factor analysis using the principal factor method with a varimax rotation to test the hypothesized domain structure. KMO values ≥ 0.8 were considered to indicate an adequate sample size for the analysis. The Kaiser criterion was used to select factors with eigenvalues ≥ 1, and scree plots were used to illustrate the descending variances for factor extraction. A Cronbach’s α coefficient of ≥0.7 was considered internally consistent. A two-tailed p-value of ≤0.05 was considered significant. The IBM SPSS 26.0 statistical package (IBM Statistics, Armonk, NY, USA) was used for the analysis.
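Cronbach's α, used here as the internal consistency criterion, can be sketched as below; the respondents-by-items matrix is illustrative, not the study data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a 2-D array: rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                               # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# e.g. five respondents answering three items of one domain
scores = [[4, 3, 4], [2, 2, 3], [3, 3, 3], [4, 4, 4], [1, 2, 2]]
alpha = cronbach_alpha(scores)   # ~0.93 for these toy data
```

Values of α at or above the 0.7 threshold stated above would indicate that the domain's items measure the construct consistently.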
3. Results
3.1. Readability and Understandability
To determine whether the questionnaire was understandable by people of different educational levels, readability statistics were calculated. The FKRE score for the questionnaire was 30.9, which falls in the range considered best understood by college graduates. The FKRA was 11.6, suggesting that the survey would be understandable to anyone with approximately a grade-11-to-12-level education.
3.2. Content Validity
The survey’s content validity was assessed based on expert evaluations of item relevance. A few items in the survey were modified according to the experts’ suggestions to create the final version used in the study. The CVI, S-CVI/Average, and S-CVI/UA were chosen for their ease of computation, interpretability, and focus on expert agreement, and together they measure the comprehensiveness and representativeness of the content. The S-CVI/Average was 0.90 and the S-CVI/UA was 0.59, with universal agreement on 16 of the 27 items, indicating adequate survey validity (Table 1).
Table 1.
CPAT-QU items with inadequate universal agreement among the experts.
3.3. Construct Validity
A PCA of the 27 questionnaire items showed that 63% of the variation in the latent variable was explained by two components (Table 2). All variables in Table 3 loaded on the first component, whereas only variables 4, 6, and 8 loaded on the second component. The results were further illustrated by the scree plot: visual inspection shows that the point of inflexion occurred at the third factor, indicating that two factors should be retained (Figure 2).
Table 2.
Factor analysis for construct validity.
Table 3.
Principal component analysis of the questionnaire variables matrix. All variables were positive in the first component, and only 4, 6 and 8 were positive in the second component.
Figure 2.
Scree plot for the principal component analysis (PCA) for the CPAT-QU construct validity. The descending tendency became weak from the third point.
3.4. Reliability
Reliability refers to the extent to which the various items measuring the different constructs of a test deliver consistent results. Consistency of the items was measured by Cronbach’s α. The values for the domain indices were 0.80, 0.76, 0.89, 0.81, 0.85, 0.86, and 0.92, all falling within the acceptable range of 0.60–1.00 and indicating good internal consistency between the different variables (Table 4).
Table 4.
Internal consistency of the questionnaire domain indices.
4. Discussion
The purpose of this study was to validate the CPAT-QU tool used for MLS clinical practicum assessment from the graduate perspective. The CPAT-QU was developed based on the NAACLS, ASCLS, and ASCP educational framework and the three categories of MLS entry-level competencies expected to prepare graduates for MLS practice. Our results indicate that the CPAT-QU has acceptable content validity and relevance, as the S-CVI/Average was 0.90. The PCA extracted two core components with eigenvalues greater than one, which together captured 63% of the variance. The tool has a good level of internal consistency and reliability, as Cronbach’s α scores fell within the acceptable range of 0.60–1.00.
The expert analysis showed that preceptors play an important role in the practicum experience. Preceptor attitude, their ability to convey knowledge, and their interest in clinical teaching were significant factors contributing to the overall satisfaction of MLS graduates with the clinical practicum. Supervision level, feedback, and formal competency assessments may vary considerably across students’ clinical experiences, as reported by others [18,19]. Preceptors with expertise in teaching, training, and adult learning styles are essential to maintain the overall quality of the clinical practicum. Having sufficient, efficient, and well-qualified MLS preceptors is often a major challenge for MLS program directors [20,21]. In a systematic review, Griffiths et al. found that interventions targeting the structure and content of preceptorship lacked rigor in outcome measurement [22]. This highlights the importance of measuring the impact of educational interventions on broader outcomes, such as quality of client care. It is particularly essential to promote professional socialization and to develop the professional identity of newly graduated MLS students during their transition from the classroom into the medical laboratory workforce [23]. Other studies have shown that clinical rotation sites, work environment, and employment benefits influence MLS graduates’ satisfaction levels and attitudes toward their MLS clinical training, recruitment, and even future job selection [24,25,26].
We took every precaution to construct the CPAT-QU tool based on the cognitive, psychomotor, and affective domains addressed by the NAACLS, ASCLS, and ASCP, and it was able to capture 63% of the variance with two core components. Indeed, many committees and task forces from the NAACLS, ASCLS, and ASCP have provided road maps to help MLS programs develop well-constructed assessment plans for entry-level competencies and their effective contribution toward graduate employability. It is also well recognized that the quality of clinical education and training reflected in such assessment plans should be based on surveys [4,12,13]. Such surveys, which are widely used by MLS programs, might serve the goal of complying with accreditation bodies, but they might need revision to ensure their continuing validity. Therefore, we recommend investigating this tool in other institutional settings to further study its validity. To our knowledge, this is the first study to assess tool validity for the MLS clinical practicum, a major core of these educational programs. We believe that this study has significant implications for MLS program leaders and could help to inform MLS program directors about ways to improve their clinical practicum. Our data suggest that unperceived aspects influence the quality of the MLS clinical practicum. Exchange of knowledge, experience, and tools within the MLS program community would be beneficial in this regard. Information and knowledge about the procedures of the practicum and detailed explanations about activities are also important for students. A survey among nursing graduates found that providing concrete guidance about the training helped them to develop the required skills [27].
Our study has some limitations. First, all the participants were selected from a single educational institution, Qatar University, which may have resulted in selection bias. However, there is no other NAACLS-accredited program in Qatar. Second, no “gold standard” tool for assessing the clinical practicum is available. Therefore, we could not compare our results with other tools or perform cross-cultural validation, and the results may not be generalizable. Third, this assessment covered only the student perspective and not preceptor/trainer experience, knowledge, and attitudes. Lastly, the KMO measure of sampling adequacy was 0.61, so more subjects are needed for questionnaire validation, and a confirmatory factor analysis should be conducted to confirm that the items accurately reflect the underlying constructs.
5. Conclusions
The CPAT-QU is a reliable tool that captures 63% of the variance in the MLS clinical practicum from the graduates’ perspective. Future work will focus on improving the construct validity of the survey, and we recommend investigating this tool in other institutional settings to further strengthen its validity and reliability.
Supplementary Materials
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijerph19116651/s1. Table S1: CPAT-QU Questionnaire; Table S2: Description of CPAT-QU tool Domains/constructs.
Author Contributions
T.A. and M.A.-M. conceived the study. T.A. collected the data. R.S. and A.M.A. conducted the analysis. All authors contributed to subsequent analyses. T.A. drafted the paper, and A.M.A. critically revised the manuscript. M.A.-M. is the guarantor. All authors have read and agreed to the published version of the manuscript.
Funding
This study was supported by Qatar University, internal grant No. (QUST-1-CHS-2021-7). The findings achieved herein are solely the responsibility of the authors.
Institutional Review Board Statement
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Qatar University (QU-IRB 1360-EA/20).
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
Not applicable.
Acknowledgments
We thank Emma Stokes for her valuable comments on the manuscript. We thank the Department of Laboratory Medicine and Pathology at Hamad Medical Corporation and Sidra Medicine for facilitating the study.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Walz, S.E. Education & Training in Laboratory Medicine in the United States. EJIFCC 2013, 24, 1–3. [Google Scholar]
- Scanlan, P.M. A Review of Bachelor’s Degree Medical Laboratory Scientist Education and Entry Level Practice in the United States. EJIFCC 2013, 24, 5–13. [Google Scholar] [PubMed]
- Afrashtehfar, K.I.; Maatouk, R.M.; McCullagh, A.P.G. Flipped Classroom Questions. Br. Dent. J. 2022, 232, 285. [Google Scholar] [CrossRef]
- Bennett, A.; Garcia, E.; Schulze, M.; Bailey, M.; Doyle, K.; Finn, W.; Glenn, D.; Holladay, E.B.; Jacobs, J.; Kroft, S.; et al. Building a Laboratory Workforce to Meet the Future: ASCP Task Force on the Laboratory Professionals Workforce. Am. J. Clin. Pathol. 2014, 141, 154–167. [Google Scholar] [CrossRef] [PubMed]
- Sayed, S.; Cherniak, W.; Lawler, M.; Tan, S.Y.; El Sadr, W.; Wolf, N.; Silkensen, S.; Brand, N.; Looi, L.M.; Pai, S.A.; et al. Improving Pathology and Laboratory Medicine in Low-Income and Middle-Income Countries: Roadmap to Solutions. Lancet 2018, 391, 1939–1952. [Google Scholar] [CrossRef]
- Fleming, K.A.; Horton, S.; Wilson, M.L.; Atun, R.; DeStigter, K.; Flanigan, J.; Sayed, S.; Adam, P.; Aguilar, B.; Andronikou, S.; et al. The Lancet Commission on Diagnostics: Transforming Access to Diagnostics. Lancet 2021, 398, 1997–2050. [Google Scholar] [CrossRef]
- Wilson, M.L.; Fleming, K.A.; Kuti, M.A.; Looi, L.M.; Lago, N.; Ru, K. Access to Pathology and Laboratory Medicine Services: A Crucial Gap. Lancet 2018, 391, 1927–1938. [Google Scholar] [CrossRef]
- McDonald, B. Improving Learning through Meta Assessment. Act. Learn. High. Educ. 2010, 11, 119–129. [Google Scholar] [CrossRef]
- Afrashtehfar, K.I.; Assery, M.K.A.; Bryant, S.R. Patient Satisfaction in Medicine and Dentistry. Int. J. Dent. 2020, 2020, 6621848. [Google Scholar] [CrossRef]
- NAACLS—National Accrediting Agency for Clinical Laboratory Science—Starting a NAACLS Accredited or Approved Program. Available online: https://www.naacls.org/Program-Directors/Fees/Procedures-for-Review-Initial-and-Continuing-Accre.aspx (accessed on 13 January 2022).
- ASCLS Members. Available online: https://members.ascls.org/store_category.asp?id=69 (accessed on 14 January 2022).
- Beck, S.J.; Doig, K. CLS Competencies Expected at Entry-Level and Beyond. Clin. Lab. Sci. 2002, 15, 220–228. [Google Scholar]
- Beck, S.; Moon, T.C. An Algorithm for Curriculum Decisions in Medical Laboratory Science Education. Am. Soc. Clin. Lab. Sci. 2017, 30, 105–111. [Google Scholar] [CrossRef]
- Alwin, D.F.; Baumgartner, E.M.; Beattie, B.A. Number of Response Categories and Reliability in Attitude Measurement†. J. Surv. Stat. Methodol. 2018, 6, 212–239. [Google Scholar] [CrossRef]
- Jindal, P.; MacDermid, J. Assessing Reading Levels of Health Information: Uses and Limitations of Flesch Formula. Educ. Health 2017, 30, 84. [Google Scholar] [CrossRef] [PubMed]
- Singh, R.; Agarwal, T.M.; Al-Thani, H.; Al Maslamani, Y.; El-Menyar, A. Validation of a Survey Questionnaire on Organ Donation: An Arabic World Scenario. J. Transplant. 2018, 2018, 9309486. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- DeVon, H.A.; Block, M.E.; Moyle-Wright, P.; Ernst, D.M.; Hayden, S.J.; Lazzara, D.J.; Savoy, S.M.; Kostas-Polston, E. A Psychometric Toolbox for Testing Validity and Reliability. J Nurs. Scholarsh. 2007, 39, 155–164. [Google Scholar] [CrossRef] [PubMed]
- Daelmans, H.E.M.; Hoogenboom, R.J.I.; Donker, A.J.M.; Scherpbier, A.J.J.A.; Stehouwer, C.D.A.; van der Vleuten, C.P.M. Effectiveness of Clinical Rotations as a Learning Environment for Achieving Competences. Med. Teach. 2004, 26, 305–312. [Google Scholar] [CrossRef] [PubMed]
- Scott, C.S.; Irby, D.M.; Gilliland, B.C.; Hunt, D.D. Evaluating Clinical Skills in an Undergraduate Medical Education Curriculum. Teach. Learn. Med. 1993, 5, 49–53. [Google Scholar] [CrossRef]
- Mortazavi, D.R. Critical Look at Challenges in The Medical Laboratory Science Training in The Workplace. Int. J. Med. Sci. Educ. 2020, 7, 1–4. [Google Scholar]
- Isabel, J.M. Clinical Education: MLS Student Perceptions. Clin. Lab. Sci. 2016, 29, 66–71. [Google Scholar] [CrossRef]
- Griffiths, M.; Creedy, D.; Carter, A.; Donnellan-Fernandez, R. Systematic Review of Interventions to Enhance Preceptors’ Role in Undergraduate Health Student Clinical Learning. Nurse Educ. Pract. 2022, 62, 103349. [Google Scholar] [CrossRef]
- Schill, J.M. The Professional Socialization of Early Career Medical Laboratory Scientists. Clin. Lab. Sci. 2017, 30, 15–22. [Google Scholar] [CrossRef]
- Stuart, J.M.; Fenn, J.P. Job Selection Criteria and the Influence of Clinical Rotation Sites for Senior Medical Laboratory Science Students. Lab. Med. 2004, 35, 76–78. [Google Scholar] [CrossRef]
- Al-Enezi, N.; Shah, M.A.; Chowdhury, R.I.; Ahmad, A. Medical Laboratory Sciences Graduates: Are They Satisfied at Work? Educ. Health 2008, 21, 100. [Google Scholar]
- Bashawri, L.A.M.; Ahmed, M.A.; Bahnassy, A.A.L.; Al-Salim, J.A. Attitudes of Medical Laboratory Technology Graduates towards the Internship Training Period at King Faisal University. J. Fam. Community Med. 2006, 13, 89–93. [Google Scholar]
- Koo, H.Y.; Lee, B.R. Development of a Protocol for Guidance in the Pediatric Nursing Practicum in South Korea: A Methodology Study. Child Health Nurs. Res. 2022, 28, 51–61. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

