Article

Generative AI in Healthcare: Insights from Health Professions Educators and Students

Chaoyan Dong, Derrick Chen Wee Aw, Deanna Wai Ching Lee, Siew Ching Low and Clement C. Yan
1 Education Office, Sengkang General Hospital, Singapore Health Services, Singapore 554886, Singapore
2 Department of General Medicine, Sengkang General Hospital, Singapore Health Services, Singapore 554886, Singapore
3 Department of Internal Medicine, Sengkang General Hospital, Singapore Health Services, Singapore 554886, Singapore
4 Nursing Education and Development, Sengkang General Hospital, Singapore Health Services, Singapore 554886, Singapore
5 Department of Physiotherapy, Sengkang General Hospital, Singapore Health Services, Singapore 554886, Singapore
* Author to whom correspondence should be addressed.
Int. Med. Educ. 2025, 4(2), 11; https://doi.org/10.3390/ime4020011
Submission received: 6 March 2025 / Revised: 14 April 2025 / Accepted: 16 April 2025 / Published: 18 April 2025

Abstract

The integration of Generative Artificial Intelligence (GenAI) into health professions education (HPE) is rapidly transforming learning environments, raising questions about its impact on teaching and learning. This mixed methods study explores clinical educators’ and undergraduate students’ perceptions and attitudes about using GenAI tools in HPE at a tertiary hospital in Singapore. Using the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) as theoretical frameworks, we designed and administered a survey and conducted interviews to assess participants’ perceived usefulness, ease of use, and concerns related to GenAI adoption. Quantitative survey data were analyzed for frequencies and percentages, while qualitative responses underwent thematic analysis. Results showed that students demonstrated higher GenAI adoption rates (68.7%) compared to educators (38.5%), with GenAI perceived as valuable for efficiency, research, and personalized learning. However, concerns included over-reliance on GenAI, diminished critical thinking, and ethical implications. Educators emphasized the need for institutional guidelines and training to support responsible GenAI integration. Our findings suggest that while GenAI holds great potential for enhancing education, structured institutional policies and ethical oversight are crucial for its effective use. These insights contribute to the ongoing discourse on GenAI adoption in HPE.

1. Introduction

Generative artificial intelligence (GenAI) leverages large language models (LLMs) and deep neural networks with extensive parameter spaces to analyze and generate data from various modalities, including text, images, audio, and video. Among the most widely used GenAI tools is OpenAI’s ChatGPT (OpenAI, San Francisco, CA, USA), which shows a strong ability to answer complex medical questions and pass USMLE Step 1 [1]. The USMLE tests a medical doctor’s knowledge and skills for safe practice [2]. Beyond the USMLE, GenAI has also performed well on the Member of the Royal College of Physicians (MRCP) examinations in the UK, which evaluate medical knowledge and decision-making skills. Research has shown that ChatGPT-4 achieved an accuracy rate of 86.3% on the MRCP Part 1 and 70.3% on the MRCP Part 2, demonstrating its potential for health professions education (HPE) and assessment [3]. Similarly, another study reported that ChatGPT-4 scored 84.8% on an MRCP Part 1 sample paper, reinforcing its ability to handle complex clinical reasoning [4].
HPE requires learners to acquire and retain vast healthcare knowledge, develop clinical reasoning and problem-solving skills, and uphold professional and ethical standards. As GenAI tools such as ChatGPT and other generative models become more integrated into educational settings, they have the potential to transform traditional learning paradigms. A recent scoping review identified a rapidly growing body of research in this area, noting that while GenAI is most frequently used in knowledge acquisition and assessment contexts [5], GenAI-assisted learning could also support personalized education, adaptive assessments, and automated feedback [6]. However, concerns have been raised about over-reliance on GenAI, potential erosion of critical thinking skills, and ethical considerations related to AI-generated content [7,8]. These evolving capabilities and challenges underscore the importance of thoughtful implementation and ongoing evaluation of GenAI in health professions education.

Theoretical Framework

The adoption of GenAI in HPE follows a trend similar to that observed with other emerging technologies in education. This study used the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) frameworks to understand clinical educators’ and students’ GenAI acceptance.
The Technology Acceptance Model (TAM) [9] is widely used to study attitudes toward and behaviors related to new technology adoption. According to the TAM, technology acceptance is a three-stage process: external factors (e.g., system design features) trigger cognitive responses (perceptions of ease of use and usefulness), which then shape an affective response (attitude toward technology use), ultimately influencing actual usage behavior [10,11]. The model emphasizes two key factors affecting an individual’s decision to adopt new technology:
  • Perceived usefulness: the degree to which a person believes using the technology will enhance their performance.
  • Perceived ease of use: the extent to which a person believes using the technology will be free of effort.
The Unified Theory of Acceptance and Use of Technology (UTAUT) [12] expands on the TAM by incorporating additional factors influencing technology adoption. The model posits that the actual use of technology is determined by behavioral intention, which is shaped by four primary constructs:
  • Performance expectancy: the degree to which the user believes technology will help them achieve better outcomes.
  • Effort expectancy: the perceived ease of use of the technology.
  • Social influence: the extent to which colleagues, peers, or faculty support technology use.
  • Facilitating conditions: the availability of necessary resources and support to enable technology adoption.
The effects of these predictors are further moderated by age, gender, experience, and voluntariness of use [12].
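To make these constructs concrete, the sketch below shows one way Likert-scale survey items could, in principle, be mapped to UTAUT constructs and averaged into construct-level scores. It is purely illustrative: the item identifiers are hypothetical, and the present study analyzed its survey descriptively (frequencies and percentages) rather than by construct scoring.

```python
# Illustrative only: a simple mapping of hypothetical 1-5 Likert items to UTAUT
# constructs, averaged into construct-level scores per respondent. The study's
# own survey was analyzed descriptively, not construct-scored.
from statistics import mean

CONSTRUCT_ITEMS = {
    "performance_expectancy": ["q_usefulness", "q_better_outcomes"],      # hypothetical item IDs
    "effort_expectancy": ["q_ease_of_use"],
    "social_influence": ["q_peer_support", "q_faculty_support"],
    "facilitating_conditions": ["q_resources", "q_training_available"],
}

def construct_scores(likert_responses: dict[str, int]) -> dict[str, float]:
    """Average one respondent's 1-5 Likert answers within each UTAUT construct."""
    return {
        construct: mean(likert_responses[item] for item in items)
        for construct, items in CONSTRUCT_ITEMS.items()
    }

# Example: one hypothetical respondent
print(construct_scores({
    "q_usefulness": 4, "q_better_outcomes": 5, "q_ease_of_use": 3,
    "q_peer_support": 4, "q_faculty_support": 3,
    "q_resources": 2, "q_training_available": 2,
}))
# e.g. {'performance_expectancy': 4.5, 'effort_expectancy': 3, 'social_influence': 3.5, 'facilitating_conditions': 2}
```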
The rapid evolution of GenAI necessitates a clear understanding of its impact on HPE. This study aims to investigate the perceptions, attitudes, and usage patterns among clinical educators and students, identifying opportunities and challenges in integrating GenAI into HPE. Specifically, we seek to answer the following research questions:
  • How do clinical educators perceive GenAI’s impact on teaching, assessment, and faculty responsibilities?
  • How do undergraduate health professions students use and perceive GenAI tools?
By addressing these questions, this study identifies key opportunities and challenges in integrating GenAI into HPE, informed by the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT).

2. Materials and Methods

2.1. Study Setting and Participants

This study was conducted at Sengkang General Hospital (SKH), a 1000-bed public hospital under SingHealth, an academic medical center in Singapore. SKH serves as a clinical training site for students from three medical schools, four nursing schools, and two higher education institutions training allied health professionals (AHPs). SKH clinicians are involved in undergraduate, postgraduate, and continuing education and take on teaching roles such as curriculum design, classroom and clinical teaching, assessment, educator development, coaching and mentoring, leadership, and scholarly activities. Participants included SKH educators (medical, allied health, and nursing) and students rotating through SKH, all of whom provided informed consent.

2.2. Research Design

This study used a sequential explanatory mixed methods design, beginning with a survey to assess clinical educators’ and students’ perceptions of GenAI in HPE, followed by qualitative interviews for deeper exploration. This approach identified trends through survey data and contextualized them with qualitative insights [13].
Phase 1 involved a structured cross-sectional survey (Appendix A), guided by the TAM and UTAUT frameworks, evaluating perceived usefulness, ease of use, and concerns. The survey instrument was developed specifically for this study.
In Phase 2, semi-structured interviews (Appendix B) were conducted with educators to explore themes from the survey, providing deeper insights into experiences, motivations, and concerns regarding GenAI integration in HPE [14].

2.3. Data Collection

A convenience sampling strategy was used. The survey was administered through a secure online platform, FormSG, from March to May 2024. It was disseminated via email to SKH clinicians who teach and supervise undergraduate students and to undergraduate students rotating through SKH during the study period.
The educators’ survey (Appendix A.1) included items on (1) Perceived usefulness, assessed through Questions 1 and 2 (quantitative) and Questions 3, 4, 5, and 6 (open-ended); (2) Actual use, measured by Questions 7, 8, 10, and 11 (quantitative) and Question 9 (open-ended); and (3) Future use, evaluated through Questions 12, 13, and 14.
The students’ survey (Appendix A.2) included the items on (1) Perceived usefulness, covered in Questions 5 and 6 (quantitative) and Questions 7, 9, 10, and 13 (open-ended); (2) Actual use, assessed by Questions 11, 12, 14, 15 (quantitative) and Question 8 (open-ended); and (3) Future use, assessed by Questions 16 and 17 (quantitative).
The semi-structured interviews were conducted from November 2024 to January 2025. At the end of the educator survey, participants were invited to indicate their interest in a follow-up interview by providing their email addresses, and interview invitations were subsequently sent to those who expressed interest. The interviews were carried out by CD, an education scientist experienced in qualitative research who has no direct reporting relationship with the interviewees, to mitigate the influence of power relations. Semi-structured interview questions (Appendix B) guided the data collection and analysis. Each interview lasted 20 to 30 min and was conducted, recorded, and transcribed verbatim via Microsoft (MS) Teams. CD cross-checked the transcripts, and member checking of transcripts was performed when deemed necessary. The transcripts were anonymized before analysis.

2.4. Data Analysis

Survey data: The survey data consisted of responses to quantitative and open-ended questions, capturing educators’ and students’ perspectives on using GenAI tools in education. Quantitative data were analyzed in Microsoft Excel to determine response frequencies and percentages. Open-ended responses underwent thematic analysis, with OpenAI’s ChatGPT-4o (OpenAI, San Francisco, CA, USA) used for preliminary coding to identify key themes and insights, followed by manual verification by the research team to ensure accuracy and alignment with participant responses [15].
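As an illustration of the frequency-and-percentage tabulation described above, the following minimal Python sketch produces the same kind of summary; the file and column names are hypothetical, and the actual analysis was carried out in Microsoft Excel.

```python
# Minimal sketch (hypothetical file/column names): tabulating response frequencies
# and percentages for a single-choice survey item, mirroring the Excel analysis.
import pandas as pd

# Each row is one respondent; 'actual_use' holds that respondent's single-choice answer.
responses = pd.read_csv("educator_survey.csv")  # hypothetical export from FormSG

counts = responses["actual_use"].value_counts()          # frequency per response option
percentages = (counts / len(responses) * 100).round(2)   # percentage of all respondents

summary = pd.DataFrame({"n": counts, "%": percentages})
print(summary)
# Example output shape (educator figures from Table 3):
#       n      %
# No   16  61.54
# Yes  10  38.46
```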
Interview data: We uploaded the 16 anonymized interview transcripts to ChatGPT-4o for preliminary coding and thematic analysis. The instructions provided to ChatGPT specified that the codes and themes should follow five categories: (1) experience, (2) motivation, (3) GenAI tool preferences, (4) perceptions of students’ use, and (5) future perspectives. These five categories arose from the five interview questions.
Using the preliminary codes and themes generated by ChatGPT-4o as a reference, members of the research team were assigned to code the interviews and cross-check their coding against ChatGPT-4o’s preliminary output (DCWA: three medical transcripts; DWCL: three medical transcripts; CCY: four AHP transcripts; CD: three AHP transcripts; LSC: three nursing transcripts). CD then reviewed the researchers’ coding and compared it with the ChatGPT-4o analysis. Any discrepancies were discussed with the relevant researchers, and joint decisions were made to adjust the codes and themes so that they accurately reflected participants’ words and meanings. CD combined the data from the three professional groups to produce the final codes, frequencies, and themes.
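The sketch below illustrates, under stated assumptions, how an anonymized transcript could be sent to a GenAI model for preliminary coding against the five predefined categories before human verification. It assumes the OpenAI Python SDK and an API key; the model name, prompt wording, and file name are illustrative and are not the study’s exact instructions.

```python
# Minimal sketch (not the study's exact pipeline): request preliminary codes and
# candidate themes for one anonymized transcript, organized by the five categories,
# with outputs treated as provisional and cross-checked by human coders.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY in the environment.
from openai import OpenAI

CATEGORIES = [
    "experience",
    "motivation",
    "GenAI tool preferences",
    "perceptions of students' use",
    "future perspectives",
]

def preliminary_coding(transcript_text: str) -> str:
    """Ask the model for candidate codes and themes, organized by the five categories."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    instructions = (
        "You are assisting with qualitative thematic analysis of an anonymized educator "
        "interview about Generative AI in health professions education. Propose preliminary "
        "codes and candidate themes, organized strictly under these categories: "
        + ", ".join(CATEGORIES)
        + ". Quote short supporting excerpts for each code."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": transcript_text},
        ],
        temperature=0,  # favor reproducible coding suggestions
    )
    return response.choices[0].message.content

# Example usage (hypothetical file name):
# print(preliminary_coding(open("transcript_AHP001_anonymized.txt").read()))
```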

3. Results

A total of 109 participants completed the survey: 26 educators (out of 321; 8.1% response rate) and 83 students (out of 156; 53.2% response rate). Sixteen participants (6 medical doctors, 7 allied health professionals, and 3 nurses) took part in the interviews. The demographics of the survey participants are listed in Table 1; the demographics of the interview participants are in Table 2.
Table 3 summarizes the survey findings, highlighting educators’ and students’ perceptions, adoption, and concerns regarding GenAI tools in HPE.
Table 4 presents the qualitative themes identified from the interviews, providing deeper insights into participants’ experiences and motivations.
The following sections integrate quantitative survey results (Table 3) with qualitative interview findings (Table 4) to provide a comprehensive understanding of GenAI adoption in HPE.

3.1. Perceived Impact of GenAI on HPE

The survey results indicate that most educators (84.6%) and students (57.8%) believed that GenAI would significantly impact training and education. Additionally, 80.8% of educators and 73.5% of students recognized GenAI’s potential role in research. Perceptions of its impact on patient care differed somewhat, with 72.3% of students versus 65.4% of educators believing that GenAI could influence clinical practice.
Interview findings provided deeper insights into these trends. Educators expressed concerns about institutional readiness and emphasized the need for structured implementation before GenAI could be effectively integrated into training. One educator noted the following:
“… The university needs assessment guidelines. For example, I think some universities say you can do it, but you need to cite, so that you give credit to the intelligence. And the students need to give credit where credit is due and not to do wholesale because that compromises integrity and challenges the whole principle of learning and ownership.”
(Participant, AHP 001)
These findings show that while GenAI is widely acknowledged as beneficial, its integration into education and patient care requires institutional support and safeguards to mitigate risks, such as potential threats to academic integrity, bias in AI-generated content, data privacy concerns, and over-reliance on AI in clinical decision making.

3.2. Adoption and Use of GenAI

The GenAI adoption rate among students (68.7%) was nearly double that among educators (38.5%). The most common uses reported by students were the following:
  • Learning and examination preparation (54.2%);
  • Research and information retrieval (73.5%);
  • Clinical scenario simulations (31%).
Educators, on the other hand, primarily used GenAI for the following:
  • Curriculum development (65.4%);
  • Assessment question generation (38.5%);
  • Virtual patient case simulations (30.7%).
Interview findings aligned with these patterns and highlighted barriers to adoption among educators. A recurring theme among educators was uncertainty about institutional policies and whether GenAI use was formally encouraged. One educator stated the following:
“I think if institutions can make some policies on this, then I think it will be a good way moving forward.”
(Participant, Medical 006)
Both survey and interview responses emphasized that institutional support is critical in shaping GenAI adoption. Educators noted the need for structured guidelines, faculty development, and ethical oversight to support safe and effective implementation. One educator shared the following:
“So for the most frequent use, is of course this ChatGPT, because the students actually or the learners they are actually using it. So in order to for me to know why and how they use it, I myself must know the tools.”
(Participant, Nursing 001)
These findings underscore the importance of institutional leadership in providing clear policies, practical training, and ethical frameworks. Such support is essential for building faculty confidence, ensuring responsible use, and addressing risks such as over-reliance, ethical lapses, and data privacy concerns.

3.3. Concerns About GenAI Integration

Shared concerns centered on GenAI’s accuracy and its impact on critical thinking, plagiarism, and ethical considerations. In addition, students were particularly worried about over-reliance on GenAI, while educators expressed concerns about losing the human element in teaching.
These concerns were echoed in interviews. One educator expressed worry over students submitting AI-generated work without critical engagement:
“I do find that some of them, when they submit things, it just looks like a textbook kind of. It doesn’t feel like they’ve talked to the patient.”
(Participant, Medical 005)
These concerns highlight the need for critical AI literacy training to ensure responsible GenAI use and institutional safeguards to prevent plagiarism and misinformation.

3.4. Future Perspectives

Looking ahead, both educators and students expressed optimism about the continued role of GenAI in health professions education (HPE). However, they also emphasized the importance of thoughtful integration that complements, rather than replaces, critical thinking and professional judgment.
Educators suggested that future institutional strategies should prioritize ethical use, continuous evaluation, and AI literacy development among faculty and students. This balanced approach will help ensure that GenAI enhances learning while preserving core educational values.

3.5. Likelihood of Future GenAI Use

Survey results indicated the following:
  • Students (69.9%) were more likely to continue using GenAI, citing its efficiency in learning and research.
  • Educators were divided, some expressing enthusiasm for GenAI’s potential while others remained cautious.
Interviews provided further clarity. Some educators saw value in GenAI but needed institutional support, while others remained skeptical. One educator noted the following:
“But I guess that in terms of policies, I would say to protect the patient’s privacy as well, if it is going to involve patient and I, I believe there should be a policy in place.”
(Participant, Nursing 002)
These findings show that while students are more open to GenAI, educators may require institutional assurance before fully embracing its potential.

4. Discussion

Integrating GenAI tools in health professions education (HPE) has elicited a spectrum of responses from educators and students. While there is enthusiasm about GenAI’s potential to enhance efficiency and personalize learning, concerns about its impact on critical thinking, ethical considerations, and institutional preparedness remain prevalent. The findings from this study highlight the nuanced perspectives of educators and students, underscoring the need for a balanced approach to GenAI integration. As Masters et al. (2024) highlighted, HPE institutions were initially caught off-guard by the sudden rise of GenAI, leading to reactionary policies to address immediate concerns such as plagiarism rather than proactively shaping AI integration into training [16].

4.1. Integration of Quantitative and Qualitative Findings

This mixed methods study allowed for the triangulation of survey and interview data, revealing areas of convergence and divergence. Survey data showed higher adoption rates among students (68.7%) than among educators (38.5%), a difference further explained by open-ended responses that revealed students’ openness to experimentation and digital fluency. Conversely, interviews with educators offered deeper insight into their hesitations, such as uncertainty about policy, pedagogy, and professionalism, which did not surface as clearly in the survey. This dual approach contextualized the adoption patterns and perceptions of GenAI tools more robustly.

4.2. Theoretical Implications

The findings of this study align with the key constructs of both the TAM and UTAUT, offering a dual theoretical lens to understand Generative AI (GenAI) adoption in health professions education (HPE). The higher adoption rate among students can be explained through the TAM’s constructs of perceived usefulness and perceived ease of use [10]. Students, who are generally more immersed in digital learning environments, found GenAI tools helpful for research, exam preparation, and clinical training, valuing their ability to provide quick, structured, and personalized access to information [17].
From a UTAUT perspective, student adoption was also influenced by social factors and enabling conditions [12]. Peer use, institutional exposure, and familiarity with digital tools created an environment conducive to experimentation and uptake. In contrast, educators demonstrated more cautious engagement with GenAI, which may reflect lower performance expectancy and less favorable facilitating conditions. Many cited a lack of institutional guidance and uncertainty about how to integrate GenAI meaningfully into curricula without undermining foundational learning. These concerns are consistent with broader apprehensions in the global health professions education community [16,17].
Both the TAM and UTAUT frameworks also help explain the perceived benefits across groups. Efficiency and personalized learning were recurring themes, with students appreciating streamlined study processes and educators recognizing potential uses in content generation and assessment design. These align with the TAM’s emphasis on perceived usefulness and the UTAUT’s performance expectancy [10,12]. However, educators remained cautious, warning that such benefits must not come at the expense of deep learning and critical thinking, echoing concerns in the literature about over-reliance on AI, potentially diluting essential cognitive skills [17]. Taken together, the TAM and UTAUT provide complementary insights into the behavioral drivers and barriers shaping GenAI adoption in clinical education settings.

4.3. Practical Implications

Both students and educators emphasized the importance of institutional support. Survey responses and interview data indicated a clear need for structured guidelines, faculty training programs, and GenAI literacy efforts. Educators requested capacity-building initiatives to help them understand GenAI’s pedagogical applications, while students expressed a desire for guidance on ethical and effective use. Institutions can play a crucial role in enabling responsible GenAI integration by developing clear policies and fostering a culture of critical, reflective technology use.

4.4. Concerns and Challenges

Despite recognizing the benefits, educators and students voiced concerns regarding the accuracy and reliability of AI-generated content. Several educators highlighted the risk of misinformation, especially when GenAI responses lacked domain-specific accuracy or failed to reflect current clinical guidelines. This is compounded by the fact that GenAI tools generate responses based on probability rather than true understanding, raising concerns about their suitability in clinical settings [17]. Students, on the other hand, were more worried about over-reliance on GenAI tools, fearing that excessive dependence could erode critical thinking skills. Additionally, ethical considerations, such as the potential for plagiarism and the difficulty of verifying AI-generated work, were prevalent concerns among both groups. The absence of clear institutional guidelines was consistently cited as a barrier to responsible adoption.

4.5. Institutional Support

Both educators and students emphasized the need for structured institutional support to ensure responsible GenAI integration in education. There was broad agreement that clear guidelines must be developed to govern GenAI’s role in teaching, learning, and assessment [16]. Such structured policies can enhance facilitating conditions and boost behavioral intention to adopt GenAI, as emphasized in the UTAUT [12]. Educators highlighted the importance of faculty training programs to build confidence in GenAI use and to establish best practices for leveraging GenAI tools effectively. Students also expressed interest in formal GenAI literacy initiatives to help them distinguish between appropriate and inappropriate GenAI applications in their studies. Moving forward, institutions must actively shape GenAI policies, ensuring that ethical and pedagogical considerations are embedded within GenAI adoption strategies [16].

5. Limitations and Future Directions

5.1. Limitations

While this study provides valuable insights into the adoption and perception of GenAI tools in health professions education (HPE), it has several limitations. First, the data were collected from a single institution. The sample size, though representative of the institution, may not capture the full diversity of perspectives across different healthcare settings and educational institutions. Second, the reliance on self-reported data introduces potential bias related to individual experiences, familiarity with the technology, and social desirability. Moreover, the survey instruments used were developed by the researchers and had not undergone formal validation, which may affect the reliability of the findings. Third, there was a notable imbalance in participant representation, with a higher response rate among students compared to educators in the survey and educator-only participation in the interviews. This discrepancy may have influenced the overall weighting and interpretation of perspectives in both the quantitative and qualitative components.
Fourth, although educator perspectives were explored in depth through interviews, student voices were only reflected in the survey responses. Incorporating qualitative insights from students in future research would provide a more comprehensive understanding of GenAI’s impact. Fifth, ChatGPT-4o was used for the preliminary stages of qualitative coding, but the research team subsequently verified and refined all themes. The research team recognizes that while the use of AI in qualitative research is promising, it remains an evolving methodology with ongoing concerns regarding interpretive depth and rigor. Lastly, although the interviews provided rich qualitative data and in-depth insights, the study would benefit from longitudinal research to track GenAI adoption trends and assess its long-term impact on student learning outcomes and clinical education practices.

5.2. Future Directions

Participants from both groups expressed interest in further integrating Generative AI (GenAI) into education, provided it is implemented in a structured and intentional manner. Future research should include comparative studies across institutions and professions to assess the generalizability of findings, and longitudinal designs to examine how perceptions, usage, and outcomes evolve over time.
To address current limitations, studies should incorporate both educator and student perspectives in qualitative analyses and aim for more balanced sampling. Validated instruments are also needed to improve the consistency and reliability of findings. As AI-assisted methods such as language model-based coding become more common, their impact on data interpretation and methodological rigor should be critically examined.

6. Conclusions

This study highlights the growing role of GenAI in HPE, unveiling its promise and challenges. While students are more inclined to adopt GenAI tools, educators remain more cautious, emphasizing the need for structured implementation, institutional oversight, and clear guidelines. Addressing concerns related to accuracy, critical thinking, and ethical considerations will be key to harnessing GenAI’s full potential while mitigating potential risks.
By incorporating both educator and student perspectives, the study provides a more holistic view of how GenAI influences clinical education. While students perceive GenAI as a helpful learning aid, educators stress the importance of responsible use, particularly in professional and clinical contexts. These contrasting perspectives underscore the need for inclusive policy development and stakeholder-informed training initiatives.
Key contributions of this study include identifying discipline-specific adoption patterns, uncovering practical barriers to implementation, and applying the TAM and UTAUT frameworks to explore behavioral drivers in a clinical education context. To ensure responsible integration of GenAI, educational institutions should establish clear guidelines, provide targeted faculty development, and embed GenAI literacy within curricula. As the technology evolves, continuous evaluation and research will be critical to ensuring its adoption supports—rather than undermines—educational quality and professional development in clinical training.

Author Contributions

Conceptualization, C.D., D.C.W.A., D.W.C.L., S.C.L. and C.C.Y.; methodology, C.D.; validation, C.D., D.C.W.A., D.W.C.L., S.C.L. and C.C.Y.; formal analysis, C.D., D.C.W.A., D.W.C.L., S.C.L. and C.C.Y.; data curation, C.D.; writing—original draft preparation, C.D.; writing—review and editing, C.D., D.W.C.L., S.C.L. and C.C.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of SingHealth (protocol code 2023/2621 and date of approval 17 February 2024).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study. Written informed consent has been obtained from the interview participants.

Data Availability Statement

Data sharing is not applicable to this study as the data contain sensitive participant information and are restricted by ethical guidelines.

Acknowledgments

We acknowledge the staff from the Education Office, Sengkang General Hospital, for disseminating the survey to the medical and allied health professions students who were doing clinical placement. We acknowledge Cynthia Allyssa Claire for disseminating the survey to nursing students. We acknowledge all the participants for the survey and the interviews. We acknowledge Olivia Lau Hui Yun for helping with the preliminary analysis of the survey results. We also acknowledge the use of OpenAI’s ChatGPT-4o for assistance in language refinement and preliminary thematic analysis of qualitative data. All intellectual contributions, interpretations, and conclusions remain our own.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Appendix A.1. Survey Questions for Educators

Appendix A.1.1. Demographic Info

  • Gender
  • Age
  • Department/Sub-Specialty
  • Role in teaching
  • Years in teaching

Appendix A.1.2. Perceived Usefulness

1. In what ways do you feel Generative AI tools might affect you? (Select all that apply)
  • Patient care
  • Training and education
  • Research
  • Administrative work
  • Others
2. What are the functions of AI tools in teaching? (Select all that apply)
  • Enhancing efficiency
  • Creativity
  • Personalized training
  • Curriculum development
  • Plagiarism detection
  • Professional development
  • Others
3. What teaching activities are suitable for using Generative AI tools? (open-ended)
4. Do you know if students use Generative AI tools? (open-ended)
5. What are your concerns about the impact of Generative AI tools on teaching and learning? (open-ended)
6. How do you think the institution should approach the use of Generative AI tools in healthcare? (open-ended)

Appendix A.1.3. Actual Use

7. Have you used Generative AI tools for teaching purposes? (Yes or No)
8. How often do you use Generative AI tools?
  • Daily
  • Weekly
  • Monthly
  • Rarely
  • Only once or twice
9. If answered No, why didn’t you use the AI tools? [List the reasons below] (open-ended)
10. If answered Yes, what teaching activities have you used Generative AI tools for? (Select all that apply)
  • Virtual patients
  • Clinical scenario simulation
  • Creating exam/assessment questions
  • Research and data analysis
  • Bioethics training
  • Others
11. How useful have you found the AI tool(s) to be?
  • Not at all useful
  • Somewhat useful
  • Neutral
  • Very useful
  • Extremely useful

Appendix A.1.4. Future

12. How likely are you to use Generative AI tools in the future? [1–5 Likert scale]
  • Extremely Unlikely
  • Unlikely
  • Neutral
  • Likely
  • Extremely Likely
13. How interested are you in learning about Generative AI tools? [1–5 Likert scale]
  • Not at All
  • Very little
  • Neutral
  • Somewhat
  • To a Great Extent
14. For future workshops on related topics, what aspects would you like to focus on? (Select all that apply)
  • Introduction of the tools
  • Sharing examples
  • Teaching guidelines
  • Others (Please explain)

Appendix A.2. Survey Questions for Students

Appendix A.2.1. Demographic Info

  • Gender
  • Age
  • Your field of study
  • Years of study

Appendix A.2.2. Survey

5. In what ways do you feel Generative AI tools might affect you? (Select all that apply)
  • Patient care
  • Learning
  • Exams
  • Research
  • Others
6. What are the functions of AI tools in study? (Select all that apply)
  • Enhancing efficiency
  • Creativity
  • Personalized learning
  • Plagiarism detection
  • Professional development
  • Others
7. What learning activities are suitable for using Generative AI tools? (open-ended)
8. Do you know if your teachers use Generative AI tools for teaching? (open-ended)
9. What are your concerns about the impact of Generative AI tools on teaching and learning? (open-ended)
10. How do you think the institution should approach the use of Generative AI tools in healthcare? (open-ended)
11. Have you used Generative AI tools for your study? (Yes or No)
12. How often do you use Generative AI tools?
  • Daily
  • Weekly
  • Monthly
  • Rarely
  • Only once or twice
13. If you answered No, why didn’t you use the AI tools? [List the reasons below] (open-ended)
14. If you answered Yes, what activities have you used Generative AI tools for? (Select all that apply)
  • Virtual patients
  • Clinical scenario simulation
  • Preparing for exams
  • Research and data analysis
  • Bioethics training
  • Others
15. How useful have you found these AI tool(s) to be?
  • Not at all useful
  • Somewhat useful
  • Neutral
  • Very useful
  • Extremely useful
16. How likely are you to use Generative AI tools in the future?
  • Extremely Unlikely
  • Unlikely
  • Neutral
  • Likely
  • Extremely Likely
17. How interested are you in learning about Generative AI tools?
  • Not at All
  • Very little
  • Neutral
  • Somewhat
  • To a Great Extent

Appendix B. Qualitative Data Collection Guide (For Educators)

  • Experience
    Can you describe your experience with Generative AI tools in your teaching practice?
    (Suggestions: Pre-clinical teaching, clinical teaching, creating resources, research, curriculum development)
    Are there specific examples where these tools have been particularly beneficial or challenging?
  • Motivation
    What initially motivated you to explore Generative AI tools?
    When and how did you first start using these tools?
    How did you learn to use them?
  • Tool preferences
    Which Generative AI tools do you use most frequently? Why?
    What are the primary advantages and disadvantages of these tools?
  • Perceptions of student use
    What are your thoughts on students using Generative AI tools in their learning?
    Have you observed changes in student behavior or learning outcomes as a result of AI usage?
  • Future perspectives
    Do you see yourself using these tools differently in the future?
    What changes (if any) would you like to see in institutional policies or training around Generative AI in education?

References

  1. Kung, T.H.; Cheatham, M.; Medenilla, A.; Sillos, C.; De Leon, L.; Elepaño, C.; Madriaga, M.; Aggabao, R.; Diaz-Candido, G.; Maningo, J.; et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLoS Digit. Health 2023, 2, e0000198. [Google Scholar] [CrossRef] [PubMed]
  2. United States Medical Licensing Examination (USMLE). Step 1 Content Description and General Information. 2024. Available online: https://www.usmle.org/exam-resources/step-1-materials/step-1-content-outline-and-specifications (accessed on 18 February 2024).
  3. Maitland, A.; Fowkes, R.; Maitland, S. Can ChatGPT pass the MRCP (UK) written examinations? Analysis of performance and errors using a clinical decision-reasoning framework. BMJ Open 2024, 14, e080558. [Google Scholar] [CrossRef] [PubMed]
  4. Vij, O.; Calver, H.; Myall, N.; Dey, M.; Kouranloo, K. Evaluating the competency of ChatGPT in MRCP Part 1 and a systematic literature review of its capabilities in postgraduate medical assessments. PLoS ONE 2024, 19, e0307372. [Google Scholar] [CrossRef] [PubMed]
  5. Gordon, M.; Daniel, M.; Ajiboye, A.; Uraiby, H.; Xu, N.Y.; Bartlett, R.; Hanson, J.; Haas, M.; Spadafore, M.; Grafton-Clarke, C.; et al. A scoping review of artificial intelligence in medical education: BEME Guide No. 84. Med. Teach. 2024, 46, 446–470. [Google Scholar] [CrossRef] [PubMed]
  6. Ponce, B.A.; Jennings, J.K.; Clay, T.B.; Mayfield, C.A. Artificial intelligence in medical education: A scoping review of ChatGPT’s potential and limitations. Med. Teach. 2023, 45, 234–243. [Google Scholar] [CrossRef]
  7. Davenport, T.; Kalakota, R. The potential for artificial intelligence in healthcare. Futur. Healthc. J. 2019, 6, 94–98. [Google Scholar] [CrossRef] [PubMed]
  8. Park, S.H.; Do, K.H.; Kim, S.; Park, J.H.; Lim, Y.S. What should medical students know about artificial intelligence in healthcare? J. Korean Med. Sci. 2023, 38, e80. [Google Scholar] [CrossRef]
  9. Davis, F.D. A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results. Doctoral Dissertation, Massachusetts Institute of Technology, Cambridge, MA, USA, 1986. [Google Scholar]
  10. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  11. Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. User acceptance of computer technology: A comparison of two theoretical models. Manag. Sci. 1989, 35, 982–1003. [Google Scholar] [CrossRef]
  12. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  13. Creswell, J.W.; Plano Clark, V.L. Designing and Conducting Mixed Methods Research, 3rd ed.; SAGE Publications: Thousand Oaks, CA, USA, 2018. [Google Scholar]
  14. Fetters, M.D.; Curry, L.A.; Creswell, J.W. Achieving Integration in Mixed Methods Designs—Principles and Practices. Health Serv. Res. 2013, 48, 2134–2156. [Google Scholar] [CrossRef] [PubMed]
  15. Bijker, R.; Merkouris, S.S.; A Dowling, N.; Rodda, S.N. ChatGPT for Automated Qualitative Research: Content Analysis. J. Med. Internet Res. 2024, 26, e59050. [Google Scholar] [CrossRef] [PubMed]
  16. Masters, K.; Herrmann-Werner, A.; Festl-Wietek, T.; Taylor, D. Preparing for Artificial General Intelligence (AGI) in Health Professions Education: AMEE Guide No. 172. Med. Teach. 2024, 46, 1258–1271. [Google Scholar] [CrossRef] [PubMed]
  17. Patino, G.A.; Amiel, J.M.; Brown, M.; Lypson, M.L.M.; Chan, T.M.M. The Promise and Perils of Artificial Intelligence in Health Professions Education Practice and Scholarship. Acad. Med. 2024, 99, 477–481. [Google Scholar] [CrossRef] [PubMed]
Table 1. Demographic information of the survey participants.
Educators’ Characteristics (n = 26)
  • Gender
    • Male: 16 (61.54%)
    • Female: 10 (38.46%)
  • Age (years)
    • 30–39: 10 (38.46%)
    • 40–49: 11 (42.31%)
    • 50–59: 4 (15.38%)
    • 60 and above: 1 (3.85%)
  • Roles in Teaching
    • Medical students, postgraduate, in-service: 20 (76.92%)
    • Undergraduate students, in-service teaching: 4 (15.38%)
    • In-service teaching: 2 (7.69%)
  • Years in Teaching
    • 1–10 years: 14 (53.85%)
    • 11–20 years: 8 (30.77%)
    • More than 20 years: 4 (15.38%)
  • Clinical Specialty
    • General Medicine: 8 (30.77%)
    • Allied Health Professional: 3 (11.54%)
    • Nursing: 3 (11.54%)
    • Emergency Medicine: 2 (7.69%)
    • Radiology: 2 (7.69%)
    • ICU: 2 (7.69%)
    • Psychiatry: 2 (7.69%)
    • Family Medicine: 1 (3.85%)
    • Orthopedic Surgery: 1 (3.85%)
    • Occupational Medicine: 1 (3.85%)
    • ENT: 1 (3.85%)
Students’ Characteristics (n = 83)
  • Gender
    • Male: 15 (18.07%)
    • Female: 68 (81.93%)
  • Age (years)
    • 18–20: 21 (25.29%)
    • 21–23: 38 (45.79%)
    • 24–26: 18 (21.68%)
    • 27–31: 6 (7.22%)
  • Field of Study
    • Nursing: 38 (45.78%)
    • Medical: 22 (26.51%)
    • Allied Health: 8 (9.64%)
    • Undefined: 15 (18.07%)
Table 2. Demographic information for the interview participants.
  • Allied health professional (n = 7): specialties Physiotherapy, Pharmacy, Psychology; mean age 38.43 years; mean years in practice 13.71; mean years in teaching 6.36
  • Medical (n = 6): specialties Dermatology, Family Medicine, Geriatrics Medicine, Psychiatry, Rehabilitation Medicine, Renal Medicine; mean age 38 years; mean years in practice 14; mean years in teaching 9
  • Nursing (n = 3): specialty Nursing Education; mean age 40.33 years; mean years in practice 16.67; mean years in teaching 8.33
Table 3. Summary of survey results.
Quantitative Data (values are n (%); educators n = 26, students n = 83)
Perceived Impact
  • Patient Care: Educators 17 (65.38%); Students 60 (72.29%)
  • Training and Education: Educators 22 (84.62%); Students 48 (57.83%)
  • Research: Educators 21 (80.77%); Students 61 (73.49%)
  • Administrative Work: Educators 17 (65.38%); Students 38 (45.78%)
  • Learning and Exams: Educators -; Students 45 (54.22%)
  • Others: Educators 3 (11.54%); Students 1 (1.20%)
Perceived Functions
  • Enhancing Efficiency: Educators 21 (80.77%); Students 73 (87.95%)
  • Creativity: Educators 19 (73.08%); Students 47 (56.63%)
  • Curriculum Development: Educators 17 (65.38%); Students 29 (34.94%)
  • Personalized Training: Educators 15 (57.69%); Students 56 (67.47%)
  • Professional Development: Educators 12 (46.15%); Students 46 (55.42%)
  • Plagiarism Detection: Educators 8 (30.77%); Students 47 (56.63%)
Perceived Usefulness
  • Not at all useful: Educators 2 (7.69%); Students 1 (1.20%)
  • Somewhat useful: Educators 1 (3.85%); Students 14 (16.87%)
  • Neutral: Educators 10 (38.46%); Students 21 (25.30%)
  • Very useful: Educators 10 (38.46%); Students 40 (48.19%)
  • Extremely useful: Educators 3 (16.67%); Students 7 (8.43%)
Actual Use
  • Yes: Educators 10 (38.46%); Students 57 (68.67%)
  • No: Educators 16 (61.54%); Students 24 (28.92%)
  • Not Sure: Educators -; Students 2 (2.41%)
Frequency of Use
  • Rarely: Educators 8 (30.77%); Students 32 (38.55%)
  • Only once or twice: Educators 5 (19.23%); Students 8 (9.64%)
  • Weekly: Educators 5 (19.23%); Students 23 (27.71%)
  • Monthly: Educators 4 (15.38%); Students 13 (15.66%)
  • Daily: Educators 4 (15.38%); Students 7 (8.43%)
Activities Supported
  • Clinical Scenario Simulation: Educators 4 (15.38%); Students 31 (37.35%)
  • Research and Data Analysis: Educators 2 (7.69%); Students 33 (39.76%)
  • Creating Exam Questions: Educators 4 (15.38%); Students 10 (12.05%)
  • Virtual Patients: Educators 3 (11.54%); Students 24 (28.92%)
  • Bioethics Training: Educators -; Students 7 (8.43%)
  • Preparing for Exams: Educators -; Students 17 (20.48%)
  • Other Uses: Educators -; Students 5 (6.02%)
Future Use
  • Extremely Unlikely: Educators 1 (3.85%); Students 0 (0.00%)
  • Unlikely: Educators 2 (7.69%); Students 2 (2.41%)
  • Neutral: Educators 7 (26.92%); Students 23 (27.71%)
  • Likely: Educators 7 (26.93%); Students 43 (51.81%)
  • Extremely Likely: Educators 9 (34.62%); Students 15 (18.07%)
Qualitative Data from Open-Ended Responses (values are frequencies)
Concerns
  • Accuracy and Validity: Educators 7; Students 10
  • Reduced Critical Thinking: Educators 6; Students 11
  • Plagiarism and Ethics: Educators 3; Students 19
  • Loss of Human Element: Educators 4; Students 6
  • Lack of Framework/Guidelines: Educators 3; Students 8
  • Over-reliance on AI: Educators -; Students 19
  • Other Concerns: Educators -; Students 12
Institutional Actions
  • Guidelines and Regulations: Educators 8; Students -
  • Training and Support: Educators 7; Students -
  • Cautious Implementation: Educators 8; Students -
  • Practical Applications: Educators 7; Students -
  • Data Protection and Ethics: Educators 6; Students -
  • Positive Attitude Towards AI: Educators 12; Students -
  • Uncertainty/Lack of Opinion: Educators 34; Students -
Table 4. Interview findings combining the data from the three professions.
Experience (frequency: 46)
  • Codes: (1) Moderate use of GenAI for research, teaching aids, communication (language refinement), and gamification of learning; (2) Challenges with clinical applicability
  • Theme: GenAI tools support basic educational tasks but face limits in clinical application.
Motivation (frequency: 33)
  • Codes: (1) Curiosity about AI’s potential; (2) Efficiency in handling teaching materials; (3) Colleague recommendations and encouragement; (4) Interest in technological innovation; (5) Institutional exposure (e.g., master’s programs)
  • Theme: Motivation for GenAI use includes curiosity, efficiency, and peer influence.
GenAI tool preferences (frequency: 29)
  • Codes: (1) ChatGPT preferred for versatility; (2) Institutional tools used for compliance, but quite limited; (3) Meta AI and Copilot occasionally used for specific tasks
  • Theme: ChatGPT is favored for its ease of use and quality outputs; institutional tools offer security and compliance.
Perceptions of students’ use (frequency: 46)
  • Codes: (1) Students use AI for quick answers, assignments, reports, and communication; (2) Concerns over over-reliance and critical thinking; (3) Boosts confidence but raises originality issues
  • Theme: GenAI tools support learning but require oversight to ensure ethical use and critical thinking.
Future perspectives (frequency: 27)
  • Codes: (1) Expectation of deeper AI integration; (2) Need for structured guidance to mitigate ethical risks; (3) Concerns over loss of critical thinking
  • Theme: Effective GenAI integration requires balanced institutional support and clear guidelines.
Loss of skillsets and jobs (frequency: 3)
  • Code: Over-reliance on technology
  • Theme: Concerns exist about overdependence on GenAI and loss of certain skills.
