Article

Artificial Intelligence in Religious Education: Ethical, Pedagogical, and Theological Perspectives

by
Christos Papakostas
Department of Social Theology & Religious Studies, National and Kapodistrian University of Athens, 15784 Athens, Greece
Religions 2025, 16(5), 563; https://doi.org/10.3390/rel16050563 (registering DOI)
Submission received: 3 April 2025 / Revised: 23 April 2025 / Accepted: 26 April 2025 / Published: 28 April 2025
(This article belongs to the Special Issue Religion and/of the Future)

Abstract:
This study investigates the integration of Artificial Intelligence (AI) in Religious Education (RE), a field traditionally rooted in spiritual formation and human interaction. Amid increasing digital transformation in education, theological institutions are exploring AI tools for teaching, assessment, and pastoral engagement. Using a critical literature review and analysis of institutional case studies, the paper examines the historical development of AI in education, current applications in general and theological contexts, and the ethical challenges it introduces, especially regarding decision making, data privacy, and bias as well as didactically grounded opportunities such as AI-mediated dialogic simulations. The study identifies both the pedagogical advantages of AI, such as personalization and administrative efficiency, and the risks of theological distortion, overreliance, and epistemic conformity. It presents a range of real-world implementations from institutions like Harvard Divinity School and the Oxford Centre for Digital Theology, highlighting best practices and cautionary approaches. The findings suggest that AI can enrich RE when deployed thoughtfully and ethically, but it must not replace the relational and formational aspects central to RE. The paper concludes by recommending policy development, ethical oversight, and interdisciplinary collaboration to guide responsible integration. This research contributes to the growing discourse on how AI can be aligned with the spiritual and intellectual goals of RE in a rapidly evolving digital age.

1. Introduction

Artificial Intelligence (AI) is among the most transformative forces of the 21st century, reshaping sectors such as healthcare, finance, and education (Papakostas et al. 2024). Within education, AI enables personalized learning experiences, instant feedback, and data-driven decision making, contributing significantly to educational advancement (Gonzalez 2023; Ragni 2020; Saxena et al. 2023). Yet while researchers (Ahmad et al. 2021; Chen et al. 2020; Hamal et al. 2022; Harry and Sayudin 2023) have extensively examined AI’s impact on general education, its adoption in Religious Education (RE), a discipline historically marked by entrenched traditions, spiritual formation, and community-oriented instruction, has received far less academic scrutiny. The issue is particularly pertinent, as religious communities and theological institutions come under increasing pressure to digitize while keeping faith-based education intact. The convergence of AI and RE thus forms a space that is both new and complex.
The integration of AI into RE poses profound questions about reconciling algorithmic structures with theological norms (Kurata et al. 2025; Tran and Nguyen 2021). When AI can generate doctrinal commentary, author sermon content independently, or perform virtual religious counseling, it evokes both optimism and apprehension. AI may increase accessibility, personalize religious learning, and enhance operational effectiveness in RE. On the other hand, the risks of doctrinal error, erosion of religious authenticity, and marginalization of pastoral care are inherent in introducing such technologies into sacred spaces of instruction. This complex state of affairs demands careful scrutiny, especially since RE goes beyond intellectual endeavor to encompass a formative process deeply linked to historical continuity, moral judgment, and communal bonds.
Pressure for digital transformation in RE stems from falling seminary enrollment, rising demand for distance learning, and students’ expectations of digital literacy (Hakim and Anggraini 2023; Ocampo and Gozum 2024). AI platforms appear to answer these challenges by tailoring content, automating assessment, and widening access to primary texts across regions and denominations. Their scalability offers rapid feedback and adaptive study paths, promising more responsive learning environments.
The infusion of AI into RE raises important questions about authority and interpretation. In religious contexts, understanding is inseparable from tradition and moral formation. When an algorithmic process offers a scriptural interpretation or doctrinal answer, questions arise about which theological position is being conveyed and how other perspectives are accounted for. Since most of these systems rely on probabilistic pattern-matching (Rodgers et al. 2022), they lack the historical consciousness and empathetic discernment essential to theological inquiry. Overreliance on these technologies therefore risks reducing complex religious narratives to oversimplified outputs and fostering dependence on machine-generated sermons, prayers, or liturgical expressions.
Ethical risks compound these pedagogical concerns. Large training corpora may reproduce doctrinal, cultural, or gender biases, reinforcing majority perspectives while marginalizing others (Balasubramanian 2023; Čartolovni et al. 2022; Kulaklıoğlu 2024). For example, a model trained mainly on Protestant sources will tend to under-represent Catholic, Orthodox, or Global South interpretations, limiting the inclusivity RE seeks to foster. Privacy is an additional fault line (Huang 2023; Jose 2024; El Mestari et al. 2023; Meurisch and Mühlhäuser 2021; Shofiyyah et al. 2024). AI platforms often collect sensitive data on belief, conscience, and pastoral struggles; commercial providers may process or share such data beyond educational purposes. RE institutions therefore require robust governance: transparent algorithms, diverse training sets, informed-consent protocols, and teacher-led oversight to ensure that AI serves formation rather than distorting it.
Given these anxieties, a complete rejection of AI’s role in RE would be unrealistic and counterproductive. Instead, what is needed is a thoughtful, context-specific methodology: an awareness of AI’s potential to enrich RE coupled with sensitivity to its limitations. AI should act as an adjunct tool, not a replacement for human-to-human educational interaction. Its usefulness lies in aiding instructors, streamlining administration, and providing additional tools that enhance the educational environment. Used properly, AI can free instructors to invest more time in mentorship, pastoral care, and spiritual formation, activities machines cannot match.
Several theological institutions have begun exploring AI through systematic and creative methods. Logos Bible Software,1 for example, uses AI technologies to support scriptural study and theological exegesis, allowing students to explore cross-references, examine terminology, and access historical commentaries with greater ease. Organizations such as Harvard Divinity School2 and the Centre for Digital Theology at Durham University3 are studying the ethical, philosophical, and educational implications of AI in religious studies. Together, these projects demonstrate a growing recognition of AI not merely as a technical tool but as a cultural and theological reality requiring systematic study by scholars, educators, and religious institutions collectively.
Drawing on three established RE paradigms, namely phenomenological, interpretive/dialogical, and shared praxis, this study seeks to present a critical and comprehensive overview of AI in RE and its pedagogical, ethical, and theological contexts, guided by the following research questions:
  • What role should AI play in doctrinal interpretation, pastoral care, and spiritual development within RE?
  • Can algorithmic systems align with the theological values and human-centered pedagogy that define RE?
  • What safeguards are necessary to ensure that AI supports rather than distorts the spiritual and educational goals of theological institutions?
Section 2 reviews the literature, identifying key benchmarks in AI-infused learning and noting discrepancies between RE adoption rates and wider trends. Section 3 presents the analytical framework, correlating AI applications with three pedagogical models to frame the subsequent analysis. Section 4 weighs opportunities against pedagogical and moral risks, setting positive examples alongside cautionary ones. Section 5 synthesizes institutional case studies to highlight best practice alongside persistent challenges. Finally, Section 6 draws out policy implications, details a research agenda for mixed-methods field studies, and offers concluding reflections on AI’s role in complementing, not controlling, RE’s core aims.

2. Literature Review

2.1. History and Development of AI in Education and RE

The historical development of AI in the educational sector illustrates a steady trajectory from early mechanized learning systems to modern adaptive, data-driven platforms. While RE has historically been slower to adopt technological innovation, AI-supported resources have been increasingly integrated in keeping with broader educational trends. Understanding this development is critical to evaluating current methodology and anticipating future directions.
The beginnings of AI in educational contexts date back to the mid-20th century. Turing’s groundbreaking work (Castelfranchi 2013; Guo 2015; Muggleton 2014) provided a solid foundation for AI research in this area. In the 1950s and 1960s, educators adopted behaviorist principles through tools designed to deliver structured teaching materials. One early breakthrough was Skinner’s Teaching Machine (Skinner 1954), which delivered programmed instruction sequentially and reinforced correct answers through repeated exposure. While this device does not meet modern definitions of intelligence, it laid the ground for AI’s emphasis on personalized feedback and segmented delivery of material.
Around this time, developments in computer science gave birth to the first computer-assisted instruction systems. One of these was the PLATO4 (Programmed Logic for Automated Teaching Operations) system, launched in the 1960s at the University of Illinois. PLATO included rudimentary forms of online interaction, including discussion forums and educational games, and was among the first systems aimed at large-scale digital education. Despite being limited by hardware constraints and rule-based programming, PLATO demonstrated an early convergence of computational technology and educational practice.
The 1980s were a foundational decade, marked by the arrival of Intelligent Tutoring Systems (ITSs). These systems used core AI methods to mimic aspects of human tutoring, tailoring educational material to a student’s academic performance data. Key examples include SCHOLAR (Carbonell 1970), developed to teach geography, and the Andes Physics Tutor (Schulze et al. 2000). By providing immediate feedback during problem solving, ITSs moved educational technology toward more individualized study, and interest in AI’s capacity to emulate expert teaching grew significantly over the decade.
Parallel to these advances, religious institutions launched digitization initiatives, though often with conservative restraint. Early attempts focused mainly on searchable theological databases, digitized concordances, and text-analysis tools. While these tools did not yet harness AI in its modern sense, they laid the infrastructure for future AI use in religious scholarship by granting systematic access to doctrinal material. Projects such as BibleWorks5 and, later, Logos Bible Software broke new ground in applying computational procedures to theological inquiry, adding features such as keyword search, intertextual mapping, and syntactical analysis.
The expansion of the internet throughout the 1990s and 2000s greatly enabled the integration of AI in learning settings. Web-based platforms such as Moodle6 incorporated early AI-powered features like automatic evaluation, activity tracking, and performance analysis. These evolved into sophisticated Learning Management Systems (LMSs) that improved e-learning and personalized education. In RE, this period was characterized by the development of e-seminary programs and virtual materials in theology programs. Institutions increasingly used LMS platforms to reach students across geographical locations and to cover a broad range of learning requirements.
The 2010s saw rapid development of AI in the education sector, driven in large part by advances in machine learning, natural language processing (NLP), and cloud computing. These advances enabled the emergence of adaptive learning platforms like Khan Academy7 and Coursera,8 which used AI to personalize content delivery and recommend individualized learning paths. At the same time, NLP-powered chatbots and virtual assistants began entering the education space, offering students instant support and guidance.
RE began integrating similar technologies, albeit at a slower rate. AI-enhanced tools for biblical exegesis emerged, leveraging natural language processing to analyze scriptural texts, identify recurring theological themes, and cross-reference passages. Logos Bible Software incorporated semantic search and interlinear text analysis, enhancing scholars’ and students’ engagement with primary sources. Additionally, AI-powered translation tools promoted interfaith dialogue and increased access to theological resources across linguistic boundaries.
The integration of AI into religious studies has become increasingly evident with the emergence of virtual theological assistants. Trained on comprehensive theological databases, these programs can answer doctrinal questions, provide background on scriptures, and aid in sermon preparation. Notable examples include Christian AI chatbots,9 QuranGPT,10 and “Ask Pastor AI”11 (Alan et al. 2024). While debates over theological accuracy persist, their launch reflects a growing demand for religious information in virtual environments.
In recent years, the COVID-19 pandemic greatly increased the focus on AI- and digital-technology-enabled education. The sudden shift to e-learning forced seminaries and religious institutions to adopt AI-based solutions to address gaps in teaching. Virtual classrooms, AI-based assessment tools, and AI-delivered feedback became key features of RE in this period. This environment also encouraged exploration of immersive technologies such as virtual reality (VR) and augmented reality (AR), particularly when coupled with AI, to create virtual pilgrimages, story-based encounters with scripture, and interactive theological simulations.
In the current decade, AI’s role in education is defined by personalization, scalability, and real-time analytics. AI now supports predictive modeling to identify at-risk students, recommends individualized learning strategies, and dynamically adjusts content complexity. These features are increasingly applied in RE to foster engagement, track spiritual formation metrics, and tailor theological curricula. AI-generated sermons, theological writing aids, and devotional planning tools illustrate how deeply automation has penetrated the field.
However, the historical trajectory also reveals that RE’s adoption of AI has consistently lagged behind general education due to concerns about doctrinal integrity, pastoral sensitivity, and the relational nature of religious learning. While general education emphasizes measurable outcomes, RE remains focused on intangible dimensions such as faith, community, and moral discernment. These priorities have necessitated a cautious approach to AI adoption, with many institutions emphasizing ethical reflection, theological oversight, and human-centered pedagogy.
Overall, AI’s evolution from programmed instruction to intelligent systems and, more recently, to fully adaptive digital environments has profoundly shaped education as a whole. Although historically conservative in its approach, RE has only recently begun engaging with AI in ways that enhance academic exploration and pastoral interaction. This historical review underscores the value of contextually informed integration, revealing both the innovative potential and the inherent challenges AI poses to theological training.

2.2. Current Applications in General and RE

The integration of AI into modern educational paradigms extends beyond speculative theorization; it has matured into an inherent aspect of educational institutions at all levels. From traditional schools and universities to specialist theological seminaries, AI technologies significantly shape pedagogy, improve access, and enable individualized educational opportunities. Applications span from routine administrative operations to advanced resources for theological learning and individualized spiritual growth.
One major application of AI in general education is to facilitate individualized instruction and adaptive learning. Programs like Carnegie Learning,12 Squirrel AI,13 and Smart Sparrow14 use machine-learning methods to continually analyze student performance, identify areas of difficulty, and adjust teaching accordingly. Rather than following a one-size-fits-all approach, these programs adapt the tempo, difficulty of material, and type of feedback to each student’s needs. This personalization enables instructors to move beyond standardized teaching methods and provide individualized assistance that improves understanding and builds deeper human relationships.
Automated grading and assessment tools are common deployments of AI. Tools such as Gradescope15 and Turnitin16 use natural language processing and pattern recognition to grade essays, detect plagiarism, and analyze short-answer questions. They reduce the time instructors spend on administration and improve consistency in grading. More advanced systems deliver individualized feedback by detecting common errors or misapprehensions and recommending areas for improvement. In RE, this function assists instructors in grading reflective essays, expositions of scripture, and doctrinal writings while maintaining academic honesty.
Virtual teaching assistants and chatbots have become increasingly common in schools and virtual classrooms. AI assistants such as Jill Watson,17 created by Georgia Tech, answer common student inquiries, provide reminders, and manage classroom administration. In religious contexts, chatbots like Ask Pastor AI, QuranGPT, and Christian doctrinal bots build on this functionality by delivering quick, theologically informed answers to questions on scripture, ethics, and liturgical obligations. They provide round-the-clock student assistance beyond classroom hours and enable independent inquiry; however, their outputs must be audited regularly to maintain doctrinal consistency.
Language acquisition, fundamental to both general and theological studies, has been significantly enhanced by AI. Language-learning programs such as Duolingo,18 Babbel,19 and Rosetta Stone20 incorporate AI speech recognition and feedback tools to support learners. These are especially valuable in theological study, where mastery of languages such as Greek, Hebrew, or Arabic is often required. AI allows students to practice pronunciation, translation, and comprehension in a systematic and engaging form while respecting each student’s pace.
LMSs are perhaps the most pervasive integration point for AI in education. Modern LMS platforms such as Moodle, Canvas,21 and Blackboard22 incorporate predictive analytics, recommendation engines, and automated feedback systems. These platforms track learner behavior, attendance, participation, and assignment submissions and use these data to forecast academic success or risk of dropout. In RE, AI-enhanced LMS platforms23 assist educators in monitoring students’ spiritual development through reflective journals, participation in online theological discussions, and sermon preparation tasks.
AI has also proved highly effective in special education as a tool for inclusivity. Students with disabilities can use assistive technologies such as speech-to-text programs, predictive text software, and eye-tracking tools. Programs like Dragon NaturallySpeaking24 help students with dyslexia, while QTrobot25 supports autistic children in developing communication skills and emotional awareness. Some theological schools are adopting these tools to assist neurodivergent students, promoting greater inclusivity in faith schools.
AI is increasingly central to STEM education, intersecting with theological studies in fields such as digital theology and the ethics of technology. Tools such as Photomath26 and Wolfram Alpha27 help students grasp mathematical concepts through detailed, step-by-step explanations. AI-powered coding platforms like Codio28 and CodeSignal29 provide immediate feedback and personalized study paths, supporting theological scholars undertaking interdisciplinary research that integrates theology with the digital humanities or investigates the moral implications of technology.
The application of AI in theological studies increasingly supports doctrinal study, scriptural analysis, and spiritual growth. NLP-based programs such as Accordance30 and Logos Bible Software offer advanced features for text search, annotation, and cross-referencing. They allow scholars to compare scripture translations, explore grammatical structures, and evaluate theological commentaries, significantly reducing the time spent on scholarly exegesis. Some programs also generate sermon plans based on particular liturgical cycles or theological themes.
AI’s use in content production is increasingly common. Generative tools now assist in creating sermons, devotionals, and catechetical materials. While useful as brainstorming aids, their deployment requires careful planning so as not to oversimplify or misinterpret theological matters. Despite drawing on large doctrinal datasets, these systems lack the nuance and contextual awareness human theologians bring to their work.
VR and AR, when combined with AI, provide immersive theological learning experiences. Platforms like Google Arts and Culture have already enabled virtual tours of religious sites, while specialized applications now simulate biblical environments, historical events, and liturgical practices. These technologies are particularly effective in facilitating experiential learning for students who may not have access to physical pilgrimage or sacred spaces. In theological training, such simulations offer visual and contextual reinforcement of doctrinal teachings and historical narratives.
Finally, AI is being applied in the fields of interfaith and comparative religious studies. Websites like Kialo31 and Patheos,32 supported by AI-powered debate structuring and argument visualization, allow students to engage in structured and respectful debates on controversial religious issues. These tools promote critical thinking and foster interreligious understanding. IBM Watson’s natural language processing33 powers have been used to compare theological positions across different religious traditions, thus enabling scholarly research in the fields of religious pluralism and ethics.
Overall, AI’s application in general and RE settings represents a shift toward individualized learning, operational effectiveness, and greater scalability. While its ultimate aim remains enhancing teaching and learning, AI also opens paths toward inquiry, theological reflection, and greater accessibility to education. These advances must be guided by ethical governance, religious scholarship, and pedagogical awareness to ensure that RE’s intellectual and spiritual components stay rooted in human values and received religious traditions.

3. Educational Theories Undergirding AI in RE

This section grounds our research in three broad pedagogical traditions, namely phenomenological, interpretive/dialogical, and shared praxis, and shows how each can advance the ethical use of AI tools in RE.

3.1. Phenomenological Framework

Following Smart’s foundational work (Barnes 2000; O’Grady 2005), the phenomenological approach invites people to explore the experiential dimensions of religion, such as myth, ritual, ethics, experience, and social structures, before making any judgments. AI opens up new possibilities for such engagements.
VR and AR create immersive environments that allow students to “travel” to sacred sites or interact with historical settings that may otherwise be inaccessible (Kurata et al. 2025). Adaptive software adjusts the level of support and contextual cues to each learner’s pace. As for multisensory input, AI-generated images and text can supply visual depictions of scriptural accounts and artifacts, supporting diverse learning styles. However, overreliance on generated media can flatten complexity, so precautions are needed: educators should provide guiding questions that promote comparison with primary sources to maintain phenomenological bracketing.

3.2. Interpretive/Dialogical Framework

Jackson’s approach (Jackson 1997) highlights dialogue, empathy, and an appreciation that every representation of religion is incomplete. AI can strengthen this approach when used as a dialogue partner rather than an authoritative figure.
Structured debate forums and argument-mapping tools such as Kialo, underpinned by NLP technology, enable students to assess conflicting truth claims and track patterns of logical argumentation. Chatbots can also engage in interfaith dialogue, exposing students to diverse perspectives in real time. As reflective analytics, AI can examine conversation logs to identify where misunderstandings or biases arise, giving teachers timely openings for clarification. Teachers, as the pedagogical authorities, create the prompts, manage tone, and ensure doctrinal accuracy, guarding against algorithmic dominance (Pontifical Academy for Life 2020).

3.3. Shared Praxis Framework

Groome’s Shared Christian Praxis (Groome 2006) describes a circular process with five discrete movements: present action, story and vision, critical reflection, decision, and response. AI can assist every stage of this cycle while respecting human autonomy.
The adaptive reflection prompts in learning analytics dashboards suggest questions aligned with students’ expressed concerns, helping them connect faith tradition to lived experience (Ocampo and Gozum 2024).
Simulation games for virtue formation and rule-based AI environments model ethical dilemmas; learners test responses and immediately see the consequences. Mobile AI coaching apps record service projects or spiritual disciplines, offering reflective insights that shape the next cycle of praxis.
Table 1 operationalizes the preceding theory by naming a real learning sequence, tagging the AI’s role, and showing the teacher’s supervisory moves in every phase. This approach preserves the educator’s guiding role through metadiscourse, keeping human supervision intact. AI broadens access to multiple viewpoints and prompt feedback but unambiguously retains human agency over interpretation at both the group (debriefing) and individual (feedback on journals) levels, thus preserving pastoral duty and maintaining doctrinal integrity.
In the aforementioned paradigms, AI works best as a complementary factor that accentuates human perception and social values. This analytical framework guides the following exploration of potential risks and opportunities and informs the recommendations for practice and policy.

4. Opportunities and Risks

4.1. Ethical Considerations: Decision Making, Privacy, and Bias

The integration of AI into educational and theological systems poses significant ethical challenges. These challenges are far from trivial; they are endemic to the creation, deployment, and consequences of AI systems in education. Because RE extends beyond intellectual growth to encompass moral maturation and religious guidance, its ethical challenges take on an added dimension. The major concerns fall broadly into three areas: control over decision making, data security and privacy, and algorithmic bias. All three reveal inherent complexities in RE’s deployment of AI and point to the need for rigorously regulated, theologically guided measures (Braun et al. 2020; Brendel et al. 2021; Demaree-Cotton et al. 2022; Dignum 2018; Etzioni and Etzioni 2017; Huang et al. 2023; Kazim and Koshiyama 2020; Lo Piano 2020; Lysaght et al. 2019; Mittelstadt 2019; Woodgate and Ajmeri 2022).
To begin with, AI-driven decision making challenges deeply established ideas about human agency and authority, especially in religion and education. AI technologies now perform a myriad of tasks including assessment, recommendation, and interpretation. In general education, they select study materials appropriate to students, predict academic performance, and generate automated feedback. In RE, these functions extend to scriptural guidance, interpretive suggestions, and even the generation of sermons or liturgical prayers.
Delegating such tasks to AI systems raises serious concerns about losing human oversight. Theological analysis, by its very nature, depends on consideration of historical contexts, ethical nuances, and religious perspectives. The growing assertiveness of AI systems, which provide definitive answers to complex religious questions, threatens the traditional roles of educators and religious authorities. The interpretive authority delegated to AI is also frequently opaque; users are often unaware of the training data, linguistic paradigms, or doctrinal tendencies underlying a given output. This lack of transparency creates a risk that users will mistake AI-generated conclusions for objectivity or impartiality when they are in fact shaped by training data and deeply ingrained presumptions.
The incorporation of AI into decision-making structures raises profound theological questions about the nature of discernment, wisdom, and divine guidance. While AI can detect patterns across large datasets, it cannot replicate pastoral discernment or empathy. Where decisions arise in pastoral counseling or religious guidance, entrusting them to AI creates increasingly serious ethical issues. The risks are no longer limited to technical flaws but include ethical errors, because AI cannot assess particular pastoral needs or render empathetic guidance grounded in religious knowledge.
A second major ethical issue concerns privacy and data security. AI tools rely on access to large datasets, which often contain sensitive personal data, particularly in educational institutions where student performance, behavior, and psychological assessments are analyzed. The sensitivity of these data is heightened further in RE, which involves religious reflections, moral dilemmas, and introspections grounded in faith.
The risk of such data falling into the wrong hands or being misused poses major ethical and legal problems. Many AI-infused educational platforms operate on business models in which users are commodified through the use or sale of their data to external providers. While this strategy is common in secular techno-environments, it poses major challenges for religious institutions committed to confidentiality, pastoral care, and ethical openness. For example, a seminary using a commercially licensed AI-enhanced LMS may inadvertently expose students’ religious records or theological positions to external parties.
Moreover, data breaches can destroy public trust in educational and religious institutions. Unauthorized access to or disclosure of confidential religious data, confessions, moral dilemmas, or religious uncertainties can have profound effects on both institutions and individuals. These vulnerabilities call for stringent data governance measures, including informed consent protocols, data minimization practices, secure data storage methods, and well-delineated data ownership frameworks. Institutions must also guarantee that AI technologies comply with international data privacy regulations, such as the General Data Protection Regulation (GDPR) adopted in the European Union.
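The data minimization and pseudonymization practices named above can be made concrete with a minimal sketch. Every field name, the salt value, and the `minimize_and_pseudonymize` helper below are illustrative assumptions, not part of any actual LMS:

```python
import hashlib

# Hypothetical student record as it might exist in a seminary's LMS;
# all field names and values are illustrative assumptions.
record = {
    "name": "Maria K.",
    "email": "maria@example.edu",
    "course": "Patristics 101",
    "reflection_text": "A personal reflection on faith...",
    "grades": [88, 92],
}

# Data minimization: release only the fields an AI tutor would actually need.
ALLOWED_FIELDS = {"course", "grades"}

def minimize_and_pseudonymize(rec, salt="institution-secret"):
    """Drop sensitive fields and replace identity with a one-way pseudonym."""
    pseudonym = hashlib.sha256((salt + rec["email"]).encode()).hexdigest()[:12]
    minimized = {k: v for k, v in rec.items() if k in ALLOWED_FIELDS}
    minimized["student_id"] = pseudonym
    return minimized

safe = minimize_and_pseudonymize(record)
assert "name" not in safe and "reflection_text" not in safe
```

The salted hash means no one downstream can recover the student’s identity from the released record, while the institution can still link pseudonyms back to students if pastoral follow-up is needed.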
A related concern is the lack of informed consent in many AI systems. Educators and students may be unaware of what data are collected, how they are processed, or who holds access to them. This asymmetry of information and authority contravenes the guiding precepts of educational openness and violates norms upheld by many religious institutions. Ensuring that consent is not only obtained but fully informed is crucial to maintaining ethical integrity when deploying AI technologies.
The third main ethical issue relates to algorithmic bias. AI algorithms learn from legacy datasets, which often reflect biases present in the societal and institutional milieus from which they are derived. In the general education sector, this can manifest as biased admissions algorithms, inaccurate grade predictions, or discriminatory recommendation systems. The implications for RE are equally profound, as biased AI tools can reinforce and amplify exclusions based on doctrine, culture, and gender.
For example, an AI system largely trained on Western, Protestant theological sources may minimize the value of Catholic, Orthodox, or non-Christian perspectives. In addition, it may overlook or misinterpret the theological contributions of scholars from the Global South, women, and other marginalized communities. Such asymmetries can easily become tools that reinforce reigning theological paradigms and hinder the advancement of diversity in RE. In addition, algorithmic biases can insidiously materialize in the very questions AI systems are designed to answer, promoting certain theological traditions while ignoring other possibilities.
Initiatives to correct bias must begin with representative and diverse training datasets. Achieving this requires technical changes as well as an institutional commitment to theological diversity and inclusivity. RE institutions deploying AI technologies must engage interdisciplinary groups of theologians, ethicists, educators, and technologists to examine and evaluate AI mechanisms for potential bias. Continuous monitoring of AI inputs and outputs is also needed so that discriminatory trends can be met with correctives.
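One simple starting point for the dataset audits described above is to measure how traditions are represented in a training corpus. The corpus entries and tradition labels below are invented for illustration; real audits would of course involve far richer metadata:

```python
from collections import Counter

# Illustrative corpus metadata; titles and tradition labels are assumptions.
corpus = [
    {"title": "Work A", "tradition": "Protestant"},
    {"title": "Work B", "tradition": "Protestant"},
    {"title": "Work C", "tradition": "Catholic"},
]

def representation_report(docs):
    """Return the share of documents per tradition, rounded to two decimals."""
    counts = Counter(d["tradition"] for d in docs)
    total = sum(counts.values())
    return {t: round(c / total, 2) for t, c in counts.items()}

report = representation_report(corpus)
assert report == {"Protestant": 0.67, "Catholic": 0.33}
```

A report like this cannot prove a model is unbiased, but it gives an interdisciplinary review board a concrete artifact to discuss: which traditions are absent, and whether the imbalance is defensible for the intended use.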
Another aspect of bias concerns the interpretation of cultural and linguistic terms. AI programs trained on religious texts in English may not fully capture vital concepts used in other languages or cultures. Cultural insensitivity or mistranslation can lead to misunderstandings, distorting theological meaning and excluding students from non-Western cultures. This challenge highlights the need to develop multilingual and cross-cultural AI in RE so that diverse voices and perspectives are represented accurately.
The ethical considerations of decision making, privacy, and bias are deeply interrelated. For example, an AI system delivering doctrinal counsel based on biased datasets with limited transparency violates theological integrity, individual rights, and institutional responsibility all at once. Addressing these issues therefore requires an inclusive stance that integrates ethics into every stage of AI development and deployment.
Some religious institutions have taken initial steps toward filling this gap by establishing ethical guidelines and fundamental values. One such example is the “Rome Call for AI Ethics”34 developed by the Pontifical Academy for Life (Pontifical Academy for Life 2020), which defines necessary values including transparency, responsibility, and inclusiveness. The document provides a guiding light for religious institutions working to resolve the ethical issues entailed by AI. Implementing these values, however, depends on inclusive policies, technical solutions, and ongoing institutional vigilance.
Concomitant with institutionalized responses, RE must incorporate critical literacy about AI into its program of study. Students and faculty alike should be prepared with the tools needed to understand AI systems’ operational mechanisms and the ethical challenges raised by them and to critically interact with their outputs. This kind of literacy will facilitate informed and responsible religious utilization of AI technologies. Additionally, theological reflection on what constitutes wisdom, discernment, justice, and personhood can add profound insights to the wider discussion of AI ethics, informed by diverse faith traditions.
In summary, the ethical deployment of AI in RE must be guided by a commitment to human dignity, theological integrity, and spiritual growth. Realizing this goal will require leadership marked by humility and cooperation across faith traditions. Although AI offers great potential for educational innovation, its creation and use must be predicated on profound respect for human sanctity and oriented toward transformative ends in RE. As institutions proceed into this new territory, they must prioritize ethical stewardship as both a theological mandate and a practical imperative.

4.2. Opportunities

A notable risk of moral simplification arises when algorithmic feedback systems reduce rich moral values to simplistic gamification-based scoring, encouraging homogeneity over genuine moral judgment. The corresponding opportunity is ethical growth through simulation. Carefully designed AI-supported simulation games immerse students in ethically complex situations, such as deciding how to allocate scarce aid resources or resolving neighborhood conflicts. Debriefing questions link in-game decisions to theological concepts such as prudence and justice, asking students to explain their reasoning and contrast it with biblical moral teachings. By making reflective participation necessary, these feedback mechanisms support advanced moral development rather than simplistic moral metrics.
Visual superficiality is another risk that can have a constructive use. High-fidelity VR pilgrimages and AI-generated biblical scenes risk turning sacred narratives into consumable spectacle, encouraging passive awe rather than critical engagement. There is the opportunity of deep visual learning. When paired with scaffolded inquiry, VR site visits to, say, first-century Jerusalem or contemporary multi-faith hubs enable students to interrogate spatial, historical, and cultural layers of a text. Embedded note-taking widgets capture questions in situ; afterward, students compare AI renderings with archaeological data, cultivating visual literacy and historical empathy.
Furthermore, the risk of echo-chamber reasoning can also be turned to constructive use. NLP engines that auto-suggest arguments may nudge learners toward majority views, reinforcing existing biases and dampening analytical rigor. Yet there is an opportunity to enhance critical thinking through argument mapping. AI-supported argument-mapping platforms visualize the connections between evidence and claims, requiring users to consult counterarguments before marking a debate tree as “complete”. Empirical research in classroom environments suggests significant gains in argumentative depth and perspective-taking when these tools are paired with teacher-led reflection. The requirement to locate and rebut opposing premises actively resists algorithmic echo chambers. Collectively, such pairs demonstrate that the very affordances which provoke pedagogical worries can, if purposefully designed, be harnessed to foster virtue, visualization, and reflective thought, all factors that form the basis for robust RE.
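The completeness rule behind such argument-mapping platforms can be sketched in a few lines. The `ArgumentMap` class, its stance labels, and the sample claims are all hypothetical; the point is only the pedagogical constraint that a map cannot be declared complete without at least one counterargument:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    stance: str  # "support" or "counter"

@dataclass
class ArgumentMap:
    claim: str
    nodes: list = field(default_factory=list)

    def add(self, text, stance):
        self.nodes.append(Node(text, stance))

    def is_complete(self):
        # The pedagogical rule described above: a debate tree may only be
        # marked "complete" once both supports and counters are present.
        stances = {n.stance for n in self.nodes}
        return {"support", "counter"} <= stances

m = ArgumentMap("Almsgiving is obligatory")
m.add("Patristic sources command generosity", "support")
assert not m.is_complete()   # one-sided map: cannot be closed yet
m.add("Compulsion may undermine voluntary charity", "counter")
assert m.is_complete()
```

The design choice matters pedagogically: the constraint lives in the data structure itself, so the platform cannot quietly reward a one-sided argument no matter how the interface is built on top of it.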

4.3. AI’s Pedagogical Role: Augmentation vs. Dependency

The growing integration of AI into educational contexts requires a critical exploration of its pedagogical effects. The central question goes beyond a mere assessment of its efficacy in conveying content; it concerns how AI affects the pedagogical relationship between teachers and students. In RE, with its inherently relational, dialogical, and formative pedagogy, this question takes on added import. Is AI a supplement to human teaching, or will it create an overdependence that threatens critical reflection, religious judgment, and communal learning? The following analysis examines the educational role of AI and the fine balance between augmentation and overdependence (Abbass 2019; Brynjolfsson 2022; Cukurova et al. 2019; Jarrahi et al. 2022; Lefevor et al. 2021; Petersen 2021; Song 2020; Wilkens 2020).
The educational potential of AI lies in its ability to deliver personalization and responsiveness within educational frameworks (Papakostas 2024; Papakostas et al. 2022). By analyzing user data, AI programs can suggest personalized resources, adjust material complexity, and identify knowledge gaps. Within RE, this capacity helps students tackle complex theological materials at their own pace, allows scriptural interpretation to follow individualized study paths, and encourages engagement with digital commentary and multimedia resources (Kurata et al. 2025) in classroom study.
While adaptive learning engines allow students to progress at their own pace through complex theological material, such modularized learning can fragment doctrines into disconnected micro-concepts. To ensure coherence, we suggest dual scaffolding. First, AI-powered concept-mapping dashboards visually display where each verse, historical event, or doctrinal statement sits within an architecture of related concepts in the theological narrative, enhancing learners’ perception of relationships as learning proceeds. Second, instructor-led synthesis sessions after every learning module draw on these dashboards: instructors highlight the most important nodes, point to broad trends, and stimulate class discussion, allowing multiple insights to combine into an integrated theological schema. Such human-aided integration ensures that individualized learning does not sacrifice the integrated integrity of sacred knowledge. For example, a student struggling with biblical Greek might be offered targeted language assignments, while another student with a personal interest in liturgical customs might be directed toward relevant image and text resources. This degree of personalization greatly enhances the relevance and effectiveness of educational experiences, particularly in varied classroom contexts.
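At its core, the concept-mapping dashboard described above is a graph of related concepts. The following minimal sketch, with invented node names and a hypothetical `related` helper, shows how such a dashboard might surface a node’s neighborhood for an instructor-led synthesis session:

```python
# Minimal sketch of a concept map as an adjacency list; node names are
# illustrative and not drawn from any actual curriculum.
concept_map = {
    "Incarnation": ["Christology", "John 1:14"],
    "Christology": ["Council of Chalcedon"],
    "John 1:14": [],
    "Council of Chalcedon": [],
}

def related(concept, graph, depth=1):
    """Collect neighbors up to `depth` hops, as a dashboard might surface them."""
    seen, frontier = set(), {concept}
    for _ in range(depth):
        frontier = {n for c in frontier for n in graph.get(c, [])} - seen
        seen |= frontier
    return seen

# One hop shows the verse and doctrine directly tied to "Incarnation";
# two hops pull in the wider historical context.
assert related("Incarnation", concept_map) == {"Christology", "John 1:14"}
```

An instructor preparing a synthesis session could call `related` at increasing depths to decide which nodes to highlight, moving from a verse outward to its doctrinal and historical neighborhood.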
Moreover, AI can work alongside instructors on routine tasks such as student assessment, attendance tracking, and curriculum revision. Through this cooperation, instructors can dedicate more of their time to complex educational activities such as mentorship, pastoral care, and theological discussion. For those working in RE, whose role often extends beyond academic teaching to spiritual growth, AI becomes a valuable partner in reducing administrative loads while keeping the essential goals of the educational mission intact.
AI also gives scholars access to an expansive range of theological tools. With platforms like Logos, scholars can engage with cross-referenced scripture, historical commentaries, and works in various languages without needing a large bibliographic background. AI thereby expands access to high-quality academic materials for students with limited access to physical libraries. Where faculty resources are limited, as in rural seminaries or schools with budget constraints, AI can act as a supplementary resource that facilitates theological questioning and reflection.
However, the pedagogical opportunities presented by AI are offset by intrinsic dangers. Chief among these is the risk of educational dependency: a reliance on AI tools that can eventually undermine students’ capacity to reason autonomously, exercise critical thinking, and engage in dialogical interaction. In RE, where deep engagement with tradition, text, and classmates is vital, reliance on machine-generated answers can promote shallow knowledge attainment and passive reception of information. A student who increasingly depends on an AI tool for biblical interpretation has less incentive to struggle with the text, engage with a variety of theological viewpoints, or collaborate with others in interpretive dialogue.
This risk is heightened by AI systems’ tendency to deliver information as though it carried inherent authority, often suppressing indications of ambiguity or alternative perspectives. Students unfamiliar with AI’s limitations may thereby perceive the answers provided as definitive or complete, failing to acknowledge the contested nature of theological discourse. In an arena where diversity of thought and interpretive nuance are paramount, this presents a major educational difficulty: it reduces theology to mere information retrieval, bypassing necessary engagement with the meanings, ethics, and contextual factors inherent to theological study.
Furthermore, dependency upon AI also challenges the traditional role of educators. As students grow more reliant on AI technologies, the educator can come to seem less necessary than the machine. This shift in educational dynamics not only devalues the standing of educators but also threatens the communal model of learning valued by many religious traditions. RE involves more than a mere transfer of knowledge; it is a relational and formational process. AI, however, cannot engage relationally: it cannot incarnate faith, show empathy, or participate in spiritual discernment. These capacities are intrinsic to the religious educator’s role and are not reducible to algorithmic efficiencies.
The danger of dependence also poses a difficulty for instructors. The convenience of AI-created syllabi, lectures, and testing tools may lead instructors to over-rely on them. While these tools can enhance productivity, their overuse may sacrifice educational goals and stifle both student creativity and the contextual awareness needed to teach well. The problem lies not with the tools but with their usage: if not closely monitored, AI can turn instructors into passive conveyors of machine-created materials rather than active designers of educational lessons.
Additionally, too great a reliance on AI risks imposing epistemic uniformity. Many AI programs are created according to dominant theological paradigms and therefore continue entrenched doctrinal positions. Without purposeful diversification of datasets and algorithms, there is a chance AI will perpetuate a limited spectrum of theological expression, risking the exclusion of vital, prophetic, or minority viewpoints. Educationally, this creates a monolithic learning environment suppressing dissent, eliminating nuance, and underpreparing students to cope with real-world complexities of actualized ministry or academic engagement.
A further concern is that AI promotes instrumentalist visions of education. Since AI systems are designed to value quantifiable outcomes such as test scores, completion rates, or content mastery, they might divert educational focus toward efficiency and away from the more expansive concerns of formation. In RE, the goal is more than the production of knowledgeable subjects; it is to cultivate wisdom, virtue, and a sense of responsibility to the community. These goals are inherently difficult to quantify and are often overlooked in algorithmic assessments. Educators must resist the temptation to reduce learning to simple metrics, instead insisting on pedagogical methods that value process, conversation, and personal development.
To mitigate these dangers, it is important to develop a pedagogical model that sees AI as a support and not a substitute. This involves intentional design approaches whereby AI is used to augment, not determine, learning trajectories. For example, an AI system can suggest relevant theological reading; yet, the final choice and the task of leading students through critical interaction with the content remain with the educator. Or, while AI can identify learning gaps through analytical processes, human tutors must provide the interpretive insights and pastoral support necessary to fill those gaps.
It is imperative that educators teach students AI literacy, covering not only the responsible use of AI tools but also awareness of their limitations, biases, and ethical considerations. Students should be encouraged to analyze AI-generated texts critically against traditional resources and to discuss them in group settings. These teaching methodologies offset overreliance on technology while framing AI use as a gateway to deeper theological reflection.
It is imperative, therefore, that theological institutions prioritize faculty development to enable effective integration of AI into instructional approaches. This involves not just technical training but also guidance in ethical decision making and pedagogy, along with encouragement to think through multiple uses of AI, consider its implications, and share exemplary practices with academic constituencies. Institutional structures should also foster a climate of critical engagement with technology, keeping in mind that while AI can augment educational programs, it cannot replace the inherently human activity of teaching.
Overall, RE in the age of AI holds great promise as well as great risk. Used well, AI can enhance quality, broaden access, and assist learners and educators alike. Used unwisely, it risks promoting dependency, reducing theological investigation to computational processes, and undercutting the relational and formational nature of education. The challenge is not to dismiss AI out of hand, but to integrate it wisely as a tool for human education guided by theological reflection and educational sensitivity.

5. Institutional Case Studies

The application of AI to RE extends beyond academic critique; it has also taken tangible form in institutional contexts. At seminaries, theological schools, and religious research centers, instructors are using AI technologies directly to enhance teaching methods, research, and spiritual formation. The activities undertaken by these institutions vary in scope and aim but collectively evidence a commitment to balancing technological progress with theological fundamentals. The following section presents a sampling of case studies representing the ways in which different institutions are using and adapting AI tools in religious studies. These illustrations highlight effective methods, reveal challenges, and offer useful lessons for other educators and decision makers navigating similar transitions.
One major example of AI integration into theological study is Logos Bible Software, a tool used extensively by divinity schools and seminaries. The platform uses NLP and machine learning to provide advanced exegetical analysis. Its tools allow students and professors to run semantic searches over a massive library of theological works, examine interlinear scripture translations, and obtain AI-generated suggestions for sermon outlines. Dallas Theological Seminary35 and Gordon-Conwell Theological Seminary36 are among the institutions that have incorporated Logos into their courses of study. The program acts as a virtual library of theological materials, a research tool, and an academic aid. By creating structured paths through difficult theological material, Logos allows students to engage more deeply with scripture, church history, and modern scholarly discussion.
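The semantic search such platforms offer can be illustrated at toy scale with simple bag-of-words cosine similarity. This sketch is not how Logos is implemented; the tiny corpus, its titles, and the ranking function are all assumptions made purely for illustration:

```python
import math
from collections import Counter

# Toy corpus of text snippets; titles and contents are illustrative only.
library = {
    "Augustine, Confessions": "grace memory restless heart rests in thee",
    "Aquinas, Summa": "grace perfects nature reason faith virtue",
    "Calvin, Institutes": "providence sovereignty scripture knowledge of god",
}

def vectorize(text):
    """Bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query, corpus):
    """Return the corpus entry most similar to the query."""
    q = vectorize(query)
    return max(corpus, key=lambda k: cosine(q, vectorize(corpus[k])))

assert search("grace and nature", library) == "Aquinas, Summa"
```

Production systems replace word counts with learned embeddings so that, for instance, “mercy” and “compassion” land near each other, but the ranking-by-similarity structure is the same, which is also why such searches inherit whatever biases their underlying corpora contain.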
However, instructors using Logos have recognized its limitations. While its automated analyses and resource suggestions are helpful, they require close monitoring. The faculty at Dallas Theological Seminary has created workshops and instructional materials that teach students how to evaluate AI-generated content, contrast it with traditional commentaries, and reflect on its theological implications. In this approach, AI is regarded not as a replacement for theological study but as a driver of deeper exploration.
Another prominent example is the Centre for Digital Theology at Durham University, which represents a research-driven model for examining AI in the context of RE, engaging with digital culture, ethical exploration, and theological debate. It has initiated several projects harnessing AI for theological exploration, including the use of machine learning to examine digital intercessory petitions in order to illuminate contemporary spirituality. The Centre also evaluates the impact of AI on religious authenticity in the virtual world. Through inter- and transdisciplinary dialogue between theologians, data scientists, and ethicists, the Centre promotes a reflective and critical exploration of AI, examining both its benefits and its pitfalls.
The Centre prioritizes public accountability and ethical formation. It has published directives on virtual theological research and has hosted symposiums on the accountable use of AI in pastoral settings. Its focus on facilitating conversation between technological innovation and theological tradition offers a template through which educational institutions can pursue both innovation and ethical responsibility. By integrating AI into holistic discussions of ecclesiology, ethics, and epistemology, the Centre moves beyond technological fascination to encourage meaningful engagement.
In the United States, Harvard Divinity School37 (HDS) offers a relevant example of AI incorporated into a pluralistic, interdisciplinary environment. While explicitly forbidding AI from being treated as a theological source of authority, HDS has incorporated AI into its interfaith studies, comparative religion courses, and digital humanities initiatives. For example, professors have used AI tools to analyze large collections of religious texts, enabling exploration of how diverse traditions define theological notions such as mercy, justice, and forgiveness. These tools allow students to identify patterns difficult to detect through human observation alone, enriching discussion of interpretation and meaning.
HDS has also played a pioneering role in expanding access to religious archives through AI. Working with the Harvard Library Innovation Lab,38 AI drives the digitization, organization, and annotation of religious texts from multiple traditions. Not only does this preserve sensitive material, but it also extends access so that students and scholars worldwide can engage with material previously unavailable to them. AI’s role in this archival process highlights its capacity to aid theological training by extending paths of inquiry and promoting diversity.
However, the HDS faculty maintains a measured stance toward AI’s epistemological constraints. Teachers stress that while AI excels at pattern recognition and linguistic analysis, it has no sense of meaning comparable to that of human interpreters. For this reason, AI remains positioned within a system in which human agency, theological awareness, and communal interpretation take priority. Learners are encouraged to see AI as an added tool that supplements, rather than substitutes for, the interpretive role.
A relevant example from the Catholic tradition is Rome’s Pontifical Academy for Life. While not a traditional seminary, this Vatican-affiliated organization has taken a leading role in shaping the ethical landscape of AI’s development and deployment, especially in areas touching religion and spirituality. In 2020, it joined Microsoft and IBM in issuing the “Rome Call for AI Ethics”, which calls for transparency, diversity, and responsibility in AI systems. While large in scope, its significance for communities of faith is clear: institutions must engage AI not as consumers but as moral co-creators.
Numerous Catholic institutions have sought to answer this mandate by creating courses on AI ethics. The Pontifical Gregorian University,39 for example, has added digital ethics courses to its theology curriculum, and its professors collaborate with technical specialists to study how AI affects catechesis and pastoral care. In these programs, AI is understood not as a neutral tool but as an ethical system shaped by human choices and institutional norms.
Another example, from a non-Western context, is the Asia Theological Association40 (ATA), which operates a large network of theological seminaries across Asia. Recognizing different levels of technological expertise among its constituent schools, ATA began a multi-step project to explore low-cost and locally appropriate AI tools. Partnering with regional universities, ATA has experimented with chatbots in local dialects to answer theological questions, automated marking tools for distance-learning courses, and mobile applications combining traditional devotional routines with AI-generated insights.
ATA’s approach emphasizes scalability as well as cultural relevance. Where internet penetration is limited or educational infrastructure poor, lightweight AI solutions provide an avenue for participation in digital theological inquiry. Moreover, ATA emphasizes context-based theological judgment, urging its instructors to examine AI not only for functional effectiveness but also for its correspondence to local religious statutes and church norms. This case shows the relevance of contextual factors in implementing AI and challenges the assumption that technological progress follows a single universal path.
Finally, we discuss FaithTech,41 a global movement and organization of Christian technologists, developers, and theologians. FaithTech operates in cities across North America, Europe, and Asia, running hackathons and innovation labs focused on digital ministry. Though not an academic organization, FaithTech works with seminaries and churches to create AI solutions for biblical engagement, crisis response, and discipleship. One notable effort created an AI chatbot to serve new converts, answering questions about faith and building relationships with local church groups.
FaithTech’s activities demonstrate how AI can emerge organically from religious communities rather than being imposed by corporations. They also highlight the relevance of co-creation, whereby theologians, coders, and religious leaders collaborate to produce tools reflective of shared values. In this model, AI is not merely adopted but developed in line with theological precepts and pastoral aspirations.
Collectively, these institutional case studies illustrate the varied and context-specific ways in which AI is being integrated into RE. Several recurring themes can be identified: an awareness of AI’s ability to enhance participation and accessibility; a commitment to ethical regulation and critical assessment; and an emphasis on cooperation between humans and AI rather than replacement. Overall, these examples highlight the necessity of thoughtful institutional engagement. When implemented intentionally and supplemented by training, reflection, and ethical guidelines, AI augments RE. When applied with little consideration, it threatens to misrepresent or compromise RE’s educational and faith-based goals.
As more schools of theology consider integrating AI, these case studies offer insightful observations. First, effective integration requires interdisciplinary cooperation and institutional support. Next, ethical formation needs to happen alongside technical education. Further, context matters: AI instruments need to be adapted to respond to local needs, theological traditions, and pedagogical capacities. Finally, the use of AI must always be centered on service, augmenting human learning, building community, and advancing the spiritual and intellectual goals of RE.

6. Conclusions

This study has examined the ethical, pedagogical, and theological implications of AI in RE, arguing that AI is not a neutral or purely technical innovation but a transformative force whose integration into RE demands critical reflection and institutional intentionality. As outlined in the preceding sections, AI offers significant pedagogical advantages, including personalized learning, increased accessibility, and administrative efficiency. It also enables new forms of theological inquiry through tools such as natural language processing and adaptive learning systems. At the same time, AI introduces ethical dilemmas regarding authority, privacy, bias, and educational dependency, all of which are particularly sensitive within the context of religious instruction.
The central thesis of this paper is that while AI can augment RE by enhancing the delivery and accessibility of theological content, it must be implemented in a manner that upholds spiritual integrity, human agency, and doctrinal diversity. Key findings support this position. Historically, RE has adapted slowly to technological innovation, primarily due to concerns about preserving tradition and relational pedagogy. However, the COVID-19 pandemic and the rise of digital-native students have accelerated the need for scalable and flexible solutions, prompting institutions to adopt AI tools ranging from language platforms to sermon generators and scriptural analytics. Despite these benefits, unchecked adoption risks reducing theological inquiry to mechanistic information retrieval and undermining the formative dimensions of spiritual education.
Case studies from Harvard Divinity School, the Oxford Centre for Digital Theology, the Pontifical Gregorian University, and the Asia Theological Association demonstrate contextually appropriate and ethically grounded approaches to incorporating AI. They provide paradigms for balancing technological progress with wise judgment. Their effectiveness rests on interdisciplinary cooperation, continuous evaluation, and a commitment to shared theological norms. Institutions that engage AI less critically risk exposing their approach to faith formation to commercial interests and algorithmic bias.
Returning to the questions raised in the introduction about the role of AI in doctrinal interpretation, pastoral care, and RE, it becomes apparent why AI's role must be carefully delimited. AI should not displace human judgment, pastoral guidance, or the communal discussion inherent in RE. The duty to deliver and explicate religious truth should not rest with systems lacking lived knowledge, compassion, or theological awareness. Nor can AI serve as a neutral arbiter of truth; its outputs depend on the data fed into it, its processing methodologies, and the institutions using it.
Theologically, AI's reach extends far beyond educational fields. These considerations engage fundamental inquiries into human nature, what it means to learn, and what constitutes wisdom, inquiries that find distinctive expression in Christian teaching. Learning in Christian doctrine transcends intellectual gain; it is about transformation toward relationship with God and with other human beings. This perspective resists reducing teaching to data streams or performance metrics. While AI may assist with the organization and dissemination of information, it cannot capture the nuance of human interaction, cultivate moral awareness, or express the work of the Holy Spirit in and through the educational process as a whole.
As such, educational institutions have a twofold responsibility. First, institutions are charged with helping students develop the competencies needed to engage with AI systems critically, grasping not just their technical operations but also their ethical implications. This endeavor involves developing what might be called "AI literacy", a fusion of technological expertise, ethical reflection, and theological discernment. Second, institutions need to make faculty development a priority by ensuring that teachers are sufficiently trained to interact thoughtfully with these tools and to teach their use. Instructors should be encouraged to reflect on how AI is changing their professional identity and to develop strategies that avoid overreliance while promoting authentic augmentation.
Policy imperatives are of similar importance. Institutional policies need to define clear parameters for data usage, algorithmic transparency, and theological oversight. Of particular importance are maintaining the confidentiality of students' spiritual lives and guaranteeing that doctrinal diversity is preserved rather than flattened by algorithmic standardization. Where AI is used to grade or counsel students, mechanisms of human monitoring and appeal must be instituted. Similarly, accrediting agencies and theological associations should take the first steps toward creating ethical guidelines and standards of best practice to govern how AI is used in their member institutions.
Considering future research paths, a number of promising lines of inquiry become apparent. First, more empirical studies are needed to ascertain the long-term effects of implementing AI in RE, especially with respect to vocational formation and comparative learning outcomes. To what extent does reliance on AI shape students' theological reasoning or their sense of vocational identity? How does it affect faculty morale or teaching innovation? Second, scholarship needs to explore the cross-cultural and interfaith contexts of AI integration. While much of the present discussion originates in the Global North, theological schools in Africa, Asia, and Latin America are creatively applying AI in ways appropriate to their contexts. Voices from these contexts are needed to provide necessary alternatives and insights beyond mainstream paradigms.
Finally, a deeper and more extensive theological exploration of AI is called for. This requires a re-examination of the doctrines of creation, personhood, and revelation in light of machine agency and automation. Can AI facilitate spiritual growth, or does it inevitably constitute a barrier between students and religious experiences? What are the implications of being created in the image of God in a reality marked by digital simulation? These questions are more than abstract speculation; they are of pastoral and pedagogical concern, addressing the very telos of RE.
Consequently, AI is deeply reshaping the educational landscape, and its impact on RE will continue to grow. Institutions must reckon with this reality without adopting it unreflectively. Through theological nuance, moral awareness, and creative teaching practices, RE has the potential not just to survive but to flourish in the era of AI. The goal is neither uncritical adoption of technology nor retreat into technological pessimism, but a thoughtful and informed discernment of what these technologies can and should enable in terms of faith, human growth, and social justice. As AI continues to advance, so too must the methods by which we understand and apply it, so that RE in a digital world continues to align with human dignity, religious depth, and communal wisdom.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
RE: Religious Education
PLATO: Programmed Logic for Automated Teaching Operations
ITS: Intelligent Tutoring Systems
LMS: Learning Management Systems
NLP: Natural Language Processing
AR: Augmented Reality
VR: Virtual Reality
GDPR: General Data Protection Regulation
HDS: Harvard Divinity School
ATA: Asia Theological Association


Table 1. Pedagogical design: Human-in-the-Loop dialogic model.

Stage 1. Dialogue Simulation
Learning Activity: Students enter a multi-party, AI-mediated conversation between differing faith perspectives.
AI Contribution: A natural-language engine generates balanced prompts and character responses, ensuring under-represented voices are surfaced.
Teacher Contribution: Selects the prompt set, sets ground rules, and monitors real-time output for doctrinal fidelity and respectful tone.

Stage 2. Moderated Debrief
Learning Activity: Whole-class or small-group discussion immediately after the simulation.
AI Contribution: Conversation analytics highlight contested concepts and emotional peaks to support focus.
Teacher Contribution: Leads the meta-dialogue, invites clarifications, and reconnects insights to curriculum goals and community tradition.

Stage 3. Self-Reflection Journal
Learning Activity: Individual learners record takeaways, questions, and future action points.
AI Contribution: Adaptive reflection prompts suggest scripture passages, ethics readings, or practical applications keyed to the dialogue themes.
Teacher Contribution: Reviews a sample of journals, offers personalized feedback, and plans follow-up activities that weave reflections into ongoing praxis.
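The control flow implied by Table 1 can be sketched in code. The following is a minimal illustrative Python sketch, not a description of any real system: every name in it (DialogueTurn, run_turn, teacher_approves, reflection_prompts) is a hypothetical stand-in, and the AI responder is stubbed with a plain function. The point it demonstrates is the human-in-the-loop gate: an AI-drafted turn enters the shared transcript only after teacher approval, and rejected drafts are retained for the Stage 2 debrief rather than silently discarded.

```python
# Illustrative sketch of the three-stage human-in-the-loop dialogic model.
# All names are hypothetical; generate_reply stands in for any language model.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DialogueTurn:
    speaker: str            # e.g., "Perspective A"
    text: str
    approved: bool = False  # set True only after teacher review

@dataclass
class Session:
    turns: List[DialogueTurn] = field(default_factory=list)
    flagged_drafts: List[str] = field(default_factory=list)  # raised in the debrief

def run_turn(session: Session, speaker: str,
             generate_reply: Callable[[str], str],
             teacher_approves: Callable[[str], bool]) -> bool:
    """Stage 1: the AI drafts a reply; it enters the transcript only if the
    teacher approves it (doctrinal fidelity, respectful tone)."""
    draft = generate_reply(speaker)
    if teacher_approves(draft):
        session.turns.append(DialogueTurn(speaker, draft, approved=True))
        return True
    session.flagged_drafts.append(draft)  # surfaced in the Stage 2 debrief
    return False

def reflection_prompts(session: Session) -> List[str]:
    """Stage 3: simple adaptive prompts keyed to the approved dialogue."""
    return [f"What did {t.speaker} say that challenged your view?"
            for t in session.turns if t.approved]
```

In use, the teacher's approval callback is the point where human judgment is structurally non-optional: no turn reaches students without passing through it, which is the design commitment the table's "Teacher Contribution" column encodes.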
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
