3.1. From Competition to Symbiosis: A Phenomenological Exploration of Natural and Artificial Intelligence
The dynamic development of AI systems raises fundamental questions about AI’s relationship with natural human intelligence. Adopting phenomenological vigilance, we first bracket common assumptions about AI before examining how these systems relate to natural human intelligence. This epoché—suspending both techno-optimism and alarmism—allows us to attend to the concrete phenomena of how artificial and natural intelligence actually interact in lived experience. The dichotomy of “threat or ally” simplifies a complex problem yet provides a useful conceptual framework for analysing this phenomenon. As Floridi (2019) emphasises in his analysis of the ethical aspects of AI, AI systems do not function in a vacuum; they are complex socio-technical entities whose impact on humans is deeply rooted in the social, cultural, and ethical contexts of their implementation.
AI primarily encompasses systems with so-called narrow specialisation, built on machine learning and deep neural networks. Unlike general human intelligence, current AI focuses on selected tasks—from text processing and generation and image recognition to logical analysis—achieving superhuman proficiency without a general understanding of the world. Machine learning allows such systems to learn patterns from huge data sets and improve their performance through trial and error, whereas humans often learn from far fewer examples, using their innate abilities to generalise and contextualise. For example, algorithms can now detect correlations in medical or financial data with remarkable precision; humans cannot match their speed of calculation, but they have common sense, intuition, and awareness of purpose. Despite impressive advances, digital intelligence still operates on different principles than biological intelligence; it is essentially a different “operating system” based on digital processing, without the consciousness and self-awareness of the human mind (Korteling et al. 2021).
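To make this contrast concrete, the following minimal sketch (in Python, with invented synthetic data) illustrates the “trial and error” loop at the heart of machine learning: a simple classifier repeatedly nudges its numerical weights to reduce prediction error over hundreds of labelled examples, whereas a human would typically grasp the underlying rule from a handful of cases. It is an illustration of the general technique, not of any particular deployed system.

```python
# Illustrative only: a logistic classifier learns by "trial and error",
# nudging weights to shrink its error over many labelled examples,
# with no grasp of what the numbers mean.
import math
import random

random.seed(0)

# Synthetic "data set": a point (x, y) is labelled 1 when y > x.
data = []
for _ in range(500):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    data.append((x, y, 1.0 if y > x else 0.0))

w1, w2, b = 0.0, 0.0, 0.0   # model parameters, initially "ignorant"
lr = 0.5                    # learning rate

for epoch in range(100):    # repeated passes over the data
    for x, y, label in data:
        p = 1.0 / (1.0 + math.exp(-(w1 * x + w2 * y + b)))  # prediction
        err = p - label     # how wrong the guess was
        w1 -= lr * err * x  # nudge each weight to reduce the error
        w2 -= lr * err * y
        b -= lr * err

correct = sum(
    (1.0 / (1.0 + math.exp(-(w1 * x + w2 * y + b))) > 0.5) == (label == 1.0)
    for x, y, label in data
)
print(f"accuracy after training: {correct / len(data):.0%}")
```

The classifier ends up highly accurate on this task, yet at no point does it represent the rule “above the line” as such; it only accumulates weights that happen to fit the data.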
However, it is worth noting that a defining trend in the latest development of artificial intelligence is multimodal systems, which integrate different modalities (text, image, sound, and video) within a unified computing architecture. Large language models (LLMs), such as GPT-4 and Claude, have evolved from pure text models towards multimodal capabilities, enabling them to interpret and generate content in various formats. This integration of modalities allows AI systems to create richer representations of knowledge, similar to the human way of processing information from multiple senses simultaneously. Models such as GPT-4.5 and Gemini can analyse images, understand their content, and answer questions about visual elements while maintaining their linguistic abilities. This enables them to perform tasks that require multimodal coordination, such as describing photos, solving mathematical problems presented graphically, or interpreting charts and diagrams. Despite these advanced capabilities, even multimodal LLMs still rely on statistical relationships between data rather than a deep understanding of reality comparable to that of humans.
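As an intuition pump only, and not a rendering of any vendor’s actual architecture, the sketch below shows the basic idea of a shared representation space: modality-specific “encoders” (here, random projections standing in for learned networks) map text and image features into one space, where a single downstream component can combine them.

```python
# Toy illustration of multimodal fusion: two modality-specific "encoders"
# project their inputs into one shared vector space. Real systems learn
# these projections end to end; here the weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(42)

TEXT_DIM, IMAGE_DIM, SHARED_DIM = 300, 1024, 64

# Frozen random projections standing in for learned encoder weights.
W_text = rng.normal(size=(SHARED_DIM, TEXT_DIM))
W_image = rng.normal(size=(SHARED_DIM, IMAGE_DIM))

def encode_text(features: np.ndarray) -> np.ndarray:
    return W_text @ features    # text features -> shared space

def encode_image(features: np.ndarray) -> np.ndarray:
    return W_image @ features   # image features -> shared space

text_vec = encode_text(rng.normal(size=TEXT_DIM))
image_vec = encode_image(rng.normal(size=IMAGE_DIM))

# Fusion: one joint representation the rest of the model consumes,
# regardless of which "sense" the information arrived through.
joint = np.concatenate([text_vec, image_vec])
print(joint.shape)              # (128,)
```

The point of the toy is structural: once both modalities live in one numerical space, a single statistical model can operate over them, which is what grounds the claim that multimodal systems remain pattern-matchers rather than understanders.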
A critical realist analysis exposes the hidden generative mechanisms beneath multimodal AI’s apparent sophistication. While surface capabilities impress, the underlying architecture remains fundamentally statistical, lacking the ontological depth of human meaning-making. These systems operate through what Bhaskar would call the “empirical” level of pattern matching, without access to the “real” structures of semantic understanding that emerge from embodied, cultural, and spiritual human existence.
Natural language processing (NLP) is an area where AI has come closer to human capabilities, although qualitative differences are still evident. Modern language models (such as GPT) can generate texts that resemble human style and fluency, and speech recognition systems enable natural interaction with machines. AI can handle machine translation, summarise articles, and answer questions in a fraction of a second—tasks that require years of language learning and reading comprehension for humans. However, despite their impressive linguistic fluency, these models do not “understand” the meaning of words in the same way as humans; they operate on statistical relationships in training data. Bender aptly described large language models as “stochastic parrots” that mimic human speech without true semantic awareness (Vinay 2024).
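The “stochastic parrot” point can be made concrete with a toy model. The sketch below, an illustrative bigram model over an invented micro-corpus, continues text purely from co-occurrence statistics; its output can look fluent while nothing in the system has any access to meaning.

```python
# A "stochastic parrot" in miniature: a bigram model that continues text
# purely from co-occurrence statistics in its training corpus, with no
# access to what any word means.
import random
from collections import defaultdict

corpus = ("in the beginning was the word and the word was with god "
          "and the word was god").split()

# Record which words follow which in the corpus.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

random.seed(1)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(following[word])  # statistically plausible successor
    output.append(word)
print(" ".join(output))  # fluent-looking, meaning-free continuation
```

Modern LLMs replace the counting table with billions of learned parameters, but the generative principle, next-token prediction from observed statistics, is the same.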
Applying Ricoeur’s hermeneutics of suspicion to this phenomenon, we must interrogate the corporate rhetoric surrounding “emotional AI”. When technology companies claim their systems can “understand” emotions, what ideological and commercial interests drive such assertions? The marketing language of “empathy” and “emotional intelligence” in AI systems masks a fundamental commodification of human affect, reducing complex emotional experiences to data patterns that can be monetised. This critical hermeneutical lens reveals how the discourse of AI “understanding” serves to normalise surveillance capitalism while obscuring the absence of genuine intersubjective encounter.
In practice, this means that AI can generate a convincing-sounding statement or response, but it does so without reference to actual experience or deep reflection; when the answer is correct, it is sometimes a “lucky guess” resulting from averaging patterns from the data (Vinay 2024). Humans, on the other hand, use language embedded in a cultural context, with an understanding of intent and pragmatics. As a result, AI’s understanding of language is superficial; for example, a system may not grasp sarcasm, ambiguity, or situational context that are obvious to humans. This illustrates a general principle: AI matches or surpasses us in narrowly defined tasks (e.g., grammatical correctness, translation speed) but lacks the full human linguistic competence rooted in experience and understanding of the world. However, this frontier is shifting rapidly and radically.
When it comes to emotions and emotional intelligence, the difference between AI and humans is particularly clear. Human thinking is intertwined with feelings; emotions influence decision-making, motivation, and creativity (according to hypotheses such as Damasio’s on the role of feelings in cognitive processes). Artificial intelligence, by its very nature, does not have emotional states or self-awareness, but it can simulate them to a certain extent. The field of affective computing, also known as emotion AI, is developing rapidly and aims to equip machines with the ability to recognise and respond to users’ emotions (Somers 2019). Algorithms are being developed that can assess a person’s mood based on facial expressions, tone of voice, or writing style and adjust the system’s response accordingly. For example, a voice assistant detects frustration in the user’s voice and apologises for the inconvenience, or a care robot imitates a caring tone towards an elderly person. Work is also underway on models capable of generating artificial emotions in virtual agents to make interactions with them more natural (Somers 2019). However, this “empathy” of artificial intelligence is purely simulated: a computer does not feel joy, fear, or empathy; it only imitates external manifestations of emotions based on recognised patterns. For example, a chatbot can be programmed to respond compassionately to a user’s post about feeling unwell, but, in reality, it has no internal emotional experience. This is a key difference: humans have a subjective sphere of experience. This phenomenal consciousness gives emotions authenticity and influences cognitive processes; machines remain at the level of advanced role-playing. Nevertheless, at a functional level, certain elements of emotional intelligence can be implemented in AI, which creates opportunities for cooperation. For example, AI systems can relieve humans of the burden of routine responses to emotional states (crisis helplines operated by bots, etc.). However, this support will always lack the “human factor”.
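To show how thin such simulated “empathy” can be, here is a deliberately transparent sketch: the system infers a mood from surface cues and selects a scripted reply. The keyword lists and responses are invented for illustration; real affective-computing systems use learned classifiers over voice, face, and text, but the logic remains pattern recognition followed by a canned behaviour.

```python
# A transparent sketch of "emotion AI": classify the user's mood from
# surface cues, then select a scripted response. Nothing here feels
# anything; the cue lists and replies are invented for illustration.
NEGATIVE_CUES = {"frustrated", "angry", "upset", "sad", "unwell"}
POSITIVE_CUES = {"happy", "glad", "great", "thankful"}

def detect_mood(message: str) -> str:
    words = set(message.lower().split())
    if words & NEGATIVE_CUES:
        return "negative"
    if words & POSITIVE_CUES:
        return "positive"
    return "neutral"

def respond(message: str) -> str:
    mood = detect_mood(message)
    if mood == "negative":
        return "I'm sorry to hear that. I'm here to help."  # scripted "empathy"
    if mood == "positive":
        return "That's wonderful to hear!"
    return "Tell me more."

print(respond("I feel sad and unwell today"))
```

The scripted reply may comfort a user, yet the gap between this behaviour and felt compassion is exactly the gap the text describes between functional and phenomenal emotion.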
Both the similarities and differences between AI and human intelligence create unique opportunities for synergy. Areas where machines excel—fast calculations, flawless memory, consistency of performance—can complement what we as humans are weaker at, and conversely, human abilities to cope with ambiguous circumstances and understand other people can compensate for the limitations of AI (Korteling et al. 2021). This idea is confirmed in practice by human–machine hybrid systems, exemplified by “centaurs” in chess, where human–computer pairs competed against both unaided humans and pure machines. It turned out that a well-coordinated human–AI duo was able to achieve better results than a grandmaster playing alone or the strongest computer (Cassidy 2014). The “centaur” combined human intuition and creativity with the computational power of the program, achieving a playing style that surpassed either of them individually (Cassidy 2014).
Through phenomenological vigilance, bracketing both technological triumphalism and scepticism, we can attend to the lived experience of human–AI collaboration. Players report a distinctive phenomenology: neither purely human nor machine cognition, but an emergent hybrid consciousness where intuitive leaps and computational precision interweave. This epoché reveals that the centaur experience transcends simple tool use, manifesting as a genuinely new mode of cognitive being-in-the-world.
This example illustrates a broader principle: properly designed collaboration with AI can enhance human effectiveness. In practice, we already see this today in many fields. For instance, doctors use AI systems to detect subtle changes in medical imaging (X-rays, ultrasounds, etc.), which increases the accuracy of diagnoses, and data analysts use algorithms to sift through large data sets so that they can focus on interpreting the results. Synergy is also evident in language interaction: translation tools suggest translations of sentences to humans, speeding up the translator’s work, although they still require human vigilance with regard to meaning and style. The boundaries between “natural” and “artificial” intelligence are becoming increasingly blurred as human and machine cognitive systems interact and complement each other, creating a more efficient whole. However, to fully exploit this potential, it is necessary to have a deep understanding of the strengths and weaknesses of both and of the consequences of transferring certain cognitive functions to machines.
The “threat or ally” dichotomy gives way to a more complex vision of co-evolution or symbiosis between natural and artificial intelligence.
Kelly (2010) proposes the concept of “symbiotechnogenesis”, a process of mutual influence and adaptation between humans and technology. In this view, artificial intelligence is neither an autonomous threat nor a neutral tool, but an active factor in a dynamic socio-technological system whose shape depends on conscious human decisions and values.
Rahwan et al. (2019) postulate the need to develop a new discipline—“machine behaviour”—to study human–AI interactions and their consequences for cognitive, social, and cultural processes. Understanding these complex interactions is a prerequisite for designing AI systems that support, rather than replace, natural human intelligence.
Thus, our phenomenological analysis reveals that the relationship between AI and human intelligence is neither one of replacement nor mere tool use but an emerging symbiosis where each form of intelligence can enhance the other within properly understood limits.
3.2. Cultivating Digital Wisdom: Hermeneutical Perspectives on AI in Christian Education
Christian education is based on philosophical and theological anthropology. Shifting to a hermeneutical lens, we now interpret how AI reshapes educational horizons within Christian pedagogy. This hermeneutical turn allows us to discern both the promises (hermeneutics of trust) and the perils (hermeneutics of suspicion) that AI brings to education grounded in philosophical and theological anthropology. This anthropology takes into account the ontological, ethical–axiological, sociocultural, and theological dimensions of the person. Christian pedagogy derives specific goals, principles, and methods of education from human nature and the message of the Gospel. Christian education aims to form mature individuals in the image of Christ, which requires introducing them to the mystery of salvation as it is realised in a specific religious culture. It is necessary to initiate them into the life of a religious and sacramental community, which helps to overcome obstacles to the full development of the person and facilitates the discovery of mature religiosity. Christian education takes into account the moral dimension of life, helps to form conscience, and teaches responsibility for guiding personal conduct, which is linked to religious motivation. It values the word in the process of education (Rynio 2010) and the role of the media in evangelisation, and encourages apostolate understood not as proselytism but as a joyful witness to the gifts of God received in Holy Baptism; it also presents social life as dialogue with everyone and the pursuit of the common good of humanity, which is to lead to true unity and peace in the world (Kiciński 2016, p. 1381).
Civilisational progress is the basis for changes in the life of every human being, regardless of their origin, skin colour, or cultural or religious affiliation. The development of information, telecommunications, and multimedia technologies represents a considerable advance in the history of human civilisation. The fourth industrial revolution (Schwab 2016) has rapidly moved us from an industrial society to an information society in which AI plays a leading role. Information is becoming the basis not only for the efficient functioning of all types of institutions but also for the education process. The implementation of AI in the education system, including religious education, is met with both enthusiasm and concern. On the one hand, the possibilities offered by AI in the education process seem almost limitless, promising, among other things, personalised teaching, optimisation of teaching processes, and support in diagnosing and developing students’ skills. On the other hand, questions arise about data security, ethics in the use of AI, and the potential replacement of teachers by “intelligent” machines. Undoubtedly, thanks to its ability to analyse large data resources, AI is able to adapt teaching materials to the individual needs and learning pace of each student. AI also offers opportunities to support teachers by automating time-consuming tasks, allowing them to focus on more valuable aspects of teaching, such as developing students’ creativity and critical thinking skills. However, there is a risk of over-reliance on technology at the expense of direct human interaction, which is crucial for the emotional and social development of students. It is vital that AI technology supports teachers rather than seeks to replace them. It is important to strike a balance between innovation and the humanistic dimension of education.
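As a deliberately simplified illustration of what “adapting materials to individual needs and learning pace” can mean in software, the hedged sketch below tracks a per-topic mastery estimate for a student and always serves the weakest topic at a matching difficulty. The topics, update rule, and thresholds are invented assumptions; real adaptive-learning systems use far richer student models.

```python
# A hedged sketch of AI-driven personalisation: keep a mastery estimate
# per topic and serve the weakest topic next, at a suitable difficulty.
# Topics, rates, and thresholds are illustrative assumptions only.
mastery = {"parables": 0.8, "psalms": 0.4, "prophets": 0.6}  # hypothetical topics

def update_mastery(topic: str, correct: bool, rate: float = 0.2) -> None:
    """Move the estimate toward 1 on a correct answer, toward 0 otherwise."""
    target = 1.0 if correct else 0.0
    mastery[topic] += rate * (target - mastery[topic])

def next_exercise() -> tuple[str, str]:
    topic = min(mastery, key=mastery.get)  # weakest topic first
    level = "introductory" if mastery[topic] < 0.5 else "intermediate"
    return topic, level

update_mastery("psalms", correct=False)
print(next_exercise())  # e.g. ('psalms', 'introductory')
```

Even this toy makes the pedagogical trade-off visible: the system optimises measurable performance per topic, while the formative, relational dimensions of education remain outside its model.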
One of the current challenges in Christian education is AI technology, which Pope Francis (2024) sees as a tool that is “both fascinating and dangerous”. Thus, when does the use of AI in education become an opportunity and when does it become a threat? Generally speaking, AI is an opportunity for Christian education when those who use it respect the basic principles, functions, goals, forms, and methods of education. They are aware that AI is a technology that offers enormous educational opportunities. However, they know that its development and application raise legal, ethical, and pedagogical dilemmas. They are convinced that many controversial issues require the creation of ethical and legal norms and the subsequent development of appropriate pedagogical criteria. Therefore, it should not be forgotten that research on AI should involve not only computer scientists and engineers, but also experts in law, ethics, philosophy, pedagogy, and catechetics.
The use of AI-based technologies in education requires teachers, especially religious education teachers, to consider who human beings are; why they are who they are; who they can become; who they should become; and how they can be helped to achieve this. During his speech at UNESCO headquarters in Paris on 2 June 1980, John Paul II said the following:
Education is essentially about becoming more human, about being more and more “being” rather than simply “having” more and more, and consequently, through everything one “has” and “possesses,” being able to “be” more fully human. To this end, man must know how to “be more” not only “with others” but also “for others”. Education is fundamental to the formation of interpersonal and social relationships.
Therefore, in addition to specialist pedagogical knowledge, teaching skills, the ability to create teaching aids, and proficiency with computer and internet tools, teachers should understand information and communication technologies, the mechanisms of development and perception, the information society, and the risks associated with the irresponsible use of artificial intelligence. Christian teachers and educators should be familiar with the basic criteria for adapting artificial intelligence to the educational process in general and to Christian education in particular.
Among the risks associated with the use of AI in education, risks that apply equally to Christian education, Charchuła (2024, pp. 79–87) lists concerns about student privacy first. In his opinion, “the personal data of minors is more susceptible to being used for purposes other than those declared, which may make them victims of various types of manipulation” (p. 84). According to Charchuła,
there are concerns that biases hidden in new AI applications will not help to ensure high-quality inclusive education for all. AI algorithms operate on data from specific individuals, which may lead to these systems applying biased or discriminatory criteria. As such, their use may replicate existing biases, maintaining or increasing gaps that already exist in education.
(p. 85)
In the context of using AI technology in education, problems related to educational equality within and between countries may also arise, caused by huge differences in the economic status of families and their children in rich and poor countries. Charchuła also states:
there are also challenges and dangers associated with the conditions of interaction that AI generates between students. These are primarily aspects that are significantly influenced by technology. Furthermore, the widespread perception of robots with human abilities, often publicised by the media, reinforces the belief that, as in other sectors of social life, machines can automate tasks that are the responsibility of teachers.
(p. 85)
However, given the specific nature of the role of teachers, especially in religious education, there is currently no possibility of them being completely replaced by AI systems.
Jeziorański (2024, pp. 141–53) is concerned not only with the effective but also the appropriate use of AI in the educational process. Both the identified pedagogical criteria for adapting AI and the conclusions drawn from them can be successfully applied to Christian education. Jeziorański, pointing to four main dimensions of education (descriptive, explanatory, optative, and normative), identifies the corresponding pedagogical criteria for adapting AI: (a) the criterion of humanistic irreducibility; (b) the criterion of teleological difference; (c) the criterion of the irreplaceability of the person in educational activity; and (d) the criterion of educational problematisation (pp. 143–44). Within these criteria, Jeziorański arrived at the following conclusions on the adaptation of AI to the educational process:
The description and explanation of the human phenomenon in pedagogy should not be limited to empirical data; the image of a human being generated solely on the basis of empirical data—in accordance with Popper’s assumptions—should be treated as “irrevocably hypothetical” and having only temporary value. (…) Educators may use AI in selecting educational goals, but the final decision rests with the educator; the selection of educational goals is linked to value judgements, specific provisions and socially confirmed ideals. (…) Education in the strict sense is an interpersonal activity; responsibility for the choice of educational means rests with the educator. (…) Confronting the pupil with problematic situations is desirable from an educational point of view; individualisation of problematic situations
(pp. 151–52)
Jeziorański’s theses mentioned above can also be applied to Christian education open to technological innovations. The author rightly recommends that the catalogue of criteria and conclusions should not be closed, but open to further exploration. Therefore, this catalogue should include specific competences of Christian education teachers. When using new AI technologies, which should be seen as an opportunity to make the teaching process more attractive, it is crucial for teachers to continuously acquire methodological competences, with particular emphasis on practical IT, multimedia, and social skills.
At the current stage of civilisation, it is important not to prohibit baptised people from using AI technologies rationally and safely but to teach them how to use them properly, eliminating the risks they entail (Francis 2023, 2024). According to a document of the Holy See, AI should not be seen as an artificial form of human intelligence but only as a product of it (AeN 2025, no. 3). Taking into account the praxeological dimension of Christian education, it should be remembered that it is the person who educates, not the tool they use. Therefore, when treating AI solely as a tool, one must remember the irreplaceability of the person in educational activities. This is confirmed by the experience of past generations who, without knowing AI, were raised in the Christian spirit and were usually able to make good choices even in extreme situations. Christian education, a natural expression of humanity, without violating what is natural in a human being or who a human being is, presents them with a living and non-utopian image of what they can become (Rynio 2004). This is possible when education is related to God and understood as “the shaping of the human person towards his ultimate goal, and at the same time for the good of the communities of which he is a member and in whose duties he will participate when he grows up” (Vatican Council II 1965, no. 1).
The hermeneutical examination of Christian education in the AI era thus confirms our thesis: rather than viewing AI as threat or saviour, we must cultivate a symbiotic relationship that preserves human dignity while embracing technological enhancement of pedagogical practice.
3.3. The Limits of Silicon Souls: A Critical Realist Analysis of AI in the Pastoral Activities of Churches
Through critical realist triangulation of empirical findings and theological reflection, we now examine the relationship between AI and pastoral ministry. This approach enables us to move beyond surface phenomena to identify the underlying generative mechanisms—technological, social, and spiritual—that shape how AI transforms pastoral care. To proceed systematically, we must first adopt specific definitions of these two realities. AI has already been defined in the introduction to this article. The concept of “pastoral ministry”, on the other hand, has been defined differently in various Christian traditions. According to the Lexikon für Theologie und Kirche, the term “pastoral” means the full range of ecclesial activity in connection with the mission entrusted to the Church by Christ to be a sign of salvation for the world, to make God present in the world, and to ensure that all people attain unity in Christ and that human society is transformed into the Kingdom of God (Müller 1998, p. 1434). Among Catholic theologians in Poland, it is accepted that “pastoral ministry is the organised activity of the Church which realises Christ’s saving work in the service of humanity through the proclamation of the Word of God, liturgy, pastoral ministry and the witness of Christian life” (Kamiński and Przygoda 2006, p. 201).
In Protestant traditions, the term “pastoral care” means the ministry of healing human souls, aimed at healing, sustaining, guiding, and reconciling people in difficult situations whose problems arise in the context of ultimate meanings and concerns (Clebsch and Jaekle 1983, p. 4). The North American definition contained in the 1990 Dictionary of Pastoral Care and Counselling is universal and encompasses more faith traditions than just Christianity:
Pastoral care is considered to be any form of personal ministry to individuals and family and community relationships by persons (ordained or lay) and by their communities of faith who understand and direct their caring efforts from a theological perspective rooted in the tradition of faith.
Pastoral care should be distinguished from professional pastoral counselling, which means “a specialised form of ministry characterised by a deliberate agreement between a pastoral caregiver and a person or family seeking help, usually involving a series of pre-arranged counselling sessions” (Hunter 1995).
According to Dreyer (2019, p. 2), we live in the age of homo digitalis, which challenges theologians to reimagine the Church and develop an ecclesiology that would help the Church to reintegrate itself into society at the beginning of the third millennium. This requires the development of a contextual and practical ecclesiology that is adequate to global cultural changes and capable of applying at least some of the achievements of the digital revolution. The main challenge facing practical theology and the Church is the reinterpretation of traditional categories.
Louw (2017, p. 8) proposes focusing on issues of everyday life, in what may be called an “operational ecclesiology”, instead of the traditional clerical paradigm, denominational divisions, and selective morality. He suggests that the way forward will be determined by the ability of theology and the Church to support “reflective spirituality”. This will be a huge challenge for traditional ministry and for being the Church. Theology in the 21st century must become fides quaerens vivendi—faith seeking a way to live authentically in the presence of God. In operational ecclesiology, the emphasis shifts from “the splendour and glory of the cathedral to the audience of the marketplace—public spaces as locus theologicus” (Louw 2017, p. 8). Louw is undoubtedly right about the direction of change in ecclesiology, but many details still need to be worked out. The dynamics of cultural change in the era of the fourth industrial revolution require adequate changes in ecclesiology. We have reached a point in human development where each generation of Christians must now develop its own existential ecclesiology through the creativity of theologians, likely supported by AI technology. However, for the Church’s mission in the world to be successful, this new theological theory must be translated into pastoral practice.
Research conducted on a small sample of Polish Catholic priests confirmed that there are both supporters and sceptics of the use of AI technology in pastoral care (Ignatowski et al. 2024). Supporters see opportunities for the use of AI primarily in the sphere of religious information and education. Sceptics, on the other hand, warn against religious misinformation found in online resources, which leads to many religious errors, as well as against the dehumanisation of interpersonal relationships caused by the abuse of digital tools. Even on the basis of this work alone, it is clear that AI technologies offer certain opportunities for use in pastoral care, but they also have clear limitations and may even pose certain threats to individuals and society. Let us begin by showing the positive ways in which AI can be used in pastoral care.
The first promising area of AI application in pastoral care is the collection and processing, by means of AI technology, of a global theological knowledge base accessible to millions of users around the world in natural languages. According to Romanian researchers (Necula and Dumulescu 2024, pp. 49–50), the use of digital technologies to collect theological knowledge resources can be successfully applied to pastoral diagnosis or to recognising biblical or post-patristic patterns. AI technology can help Christians see themselves better, communicate their religious needs more accurately, and improve communication systems in parishes or communities, ultimately translating into better pastoral care. The fundamental question is: What resources should be used to build a global theological knowledge base? It seems evident that such a database cannot be built from publicly available Internet resources; there is too much false information, too many false narratives, and too many malicious comments about religion, the Churches, and even God Himself. What, then, remains to be used? We already have well-verified databases of scientific theology. For example, the ATLA database (maintained by the American Theological Library Association) contains abstracts of theological monographs and articles from around the world, most of which are freely available. Another example of a peer-reviewed database is Scopus, which has an AI tool that uses only its own resources. Currently, these databases are mainly used by researchers, but in the near future, they may become a source of knowledge for religious education teachers and their students, as well as for seekers of truth about God and His plans for humanity. With the use of AI technology, this knowledge can be conveyed to people in various forms, e.g., through pastoral carebots.
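To make the proposal tangible, the sketch below shows, in deliberately simplified form, how such a system might answer only from a vetted corpus: candidate passages from a curated database are ranked against the question, and the best matches are returned. The sample passages and the word-overlap scoring are invented stand-ins; a production system would use licensed access to databases such as ATLA and learned semantic embeddings rather than open Internet resources.

```python
# Minimal sketch of querying a vetted theological knowledge base:
# rank trusted passages by word overlap with the question and return
# the best matches, rather than drawing on the open internet.
# Passages and scoring are illustrative stand-ins.
def tokens(text: str) -> set[str]:
    return set(text.lower().split())

vetted_corpus = [  # stand-ins for entries from a curated, reviewed database
    "Abstract: a study of pastoral care and the ministry of accompaniment.",
    "Abstract: patristic patterns of catechesis in the early Church.",
    "Abstract: liturgy, symbol, and community in sacramental theology.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    q = tokens(question)
    scored = sorted(vetted_corpus,
                    key=lambda doc: len(q & tokens(doc)),
                    reverse=True)
    return scored[:k]

for passage in retrieve("patterns of catechesis in the early church"):
    print(passage)
```

The design choice worth noting is the closed corpus: restricting retrieval to reviewed sources is precisely what distinguishes the envisioned knowledge base from a chatbot trained on unvetted online material.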
Similar to AI bots currently used in medical care, psychotherapy, and palliative care, it is conceivable that, before long, carebots could replace old, exhausted, and often ill pastors in at least some of their pastoral duties. The most far-reaching visions on this subject are put forward by Young (2022, p. 6), who researches the interaction between AI technology and pastoral care at the Department of Computer Science at the University of Texas at Austin. In his opinion, carebots are not yet capable of providing pastoral care comparable to that of a living human being. However, technological advances may ultimately force religious communities to face new questions about the potential for automation in pastoral care.
According to Young (2022, pp. 8–9), telepresence is an obvious alternative to traditional face-to-face interactions between service providers and clients in contemporary pastoral care. Generally speaking, telepresence can mean telephony, text messaging, and video conferencing. The latest additions to the AI toolkit include virtual, augmented, and mixed reality. These are related but distinct technologies. Virtual reality (VR) immerses users in a completely artificial digital environment, usually through a head-mounted display system. Augmented reality (AR) overlays virtual objects onto the real environment via mobile devices such as smartphones and tablets, computer or TV screens, or head-mounted devices or glasses. Finally, mixed reality (MR) extends a step beyond AR, allowing the viewer to interact with virtual objects.
Currently, there are no known pastoral care providers using VR, AR, or MR, although these are used in traditional mental health facilities in the US. The most futuristic aspect of pastoral care seems to be telepresence technology, such as holography and holoportation.
Young (2022) describes them as follows:
In this case, photorealistic 3D images of distant people and objects are captured, compressed, transmitted over a network, decompressed and finally displayed using lasers in the user’s field of vision, along with real-time audio communication, rivalling physical presence. The other person appears in the user’s presence as a living hologram. Imagine sitting in a room ‘with’ your pastor, even if he is thousands of kilometres away. You see the facial expressions, gestures, posture and affect of your conversation partner as if he were physically present. The discussion proceeds completely naturally because the interaction takes place in real time.
(p. 9)
Young (2022, p. 11) mentions another fascinating technology that may be used in pastoral care in the future: video capture, a technology already used in some museums, such as the Illinois Holocaust Museum and Education Center in Skokie. This technology allows the image and voice of a person (living or deceased) to be used to hold a conversation resembling one with a living person. This is possible thanks to large amounts of information processed by AI and transmitted in 2D or 3D technology.
The technologies mentioned by Young, such as VR, AR, MR, holography, and holoportation, seem difficult to apply in pastoral care. However, these technologies should not be seen as partners in building personal relationships, but as important resources and tools for pastoral care. The world of liturgy and rituals—not only Christian ones—is rich in symbols and signs. Symbolic meaning is hidden in “visible things” but becomes understandable to those in the know. An important feature of religious symbols is respect for human freedom. A symbol is a form of human expression, enriching the inner world of the spirit and allowing people to experience the sacred in community while respecting individual identity. The hermeneutics of religious signs and symbols is one of the main tasks of pastors. It seems that distinguishing between real and symbolic presence would provide a useful framework for the use of these AI products in pastoral care.
What are the limitations, ethical dilemmas, and threats arising from the use of AI in pastoral care? Can interaction with a machine replace human relationships? Human life is more than a series of biological processes, and emotions are more than a biochemical algorithm. According to Stoddart (2023, p. 673), carebots can assist but never replace humans in pastoral care. AI can generate artificial sensitivity, compassion, and even some signs of sympathy, but it will remain a soulless and hopeless machine. It is incapable of grasping human conditionality and learning kenotic love. A bot makes no sacrifice; it merely fulfils its technical function. A human caregiver expresses their humanity in their relationship with another. Stoddart expressed the differences between human caregivers and artificial carebots in eight aphorisms:
Biological “feelings” of love do not exhaust love. Complex pattern recognition does not imply intelligence. Reaction to physical stimuli is not equivalent to empathy. Gathering information is not the same as knowledge. Probabilistic reasoning is not hope. The possibility of being “turned off” is not equivalent to mortality. Assisted activities do not replace care. Technical skills are not wisdom.
(p. 673)
A person cannot delegate their responsibility for another person, especially for a person in existential need, to AI technologies. Pastoral care must be based on a horizon of genuine unpredictability and mortality, and this requires a genuine presence, a dialogue that draws on the resources of wisdom and not just on mere information, as well as sacrificial (agapeic) love, which is not a decision-making process but a way of living in communion with others. Since the Second Vatican Council, pastoral ministry, at least in the Catholic Church, has been viewed in the spirit of ecclesiology of communion. To build an authentic community (koinonia), real relationships are necessary between people who are capable of mutual commitment, shared life, and responsibility for one another. AI tools can contribute to improving relationships between people, but only in an analogous sense and to a limited extent.
Proudfoot (2023, p. 677), citing the work of Herzfeld (2023) and analysing Karl Barth’s four factors of authentic encounter (open and mutual eye contact; speaking to and listening to each other; mutual giving and receiving of help; doing all this with joy), concludes that “an elementary I–You encounter between a human being and an AI agent is possible, although it will lack the full depth of a human–human encounter due to the different nature of the AI agent and its assumed lack of capax Dei”. After a thorough analysis of the four factors of authentic encounter in relation to AI, Proudfoot came to the following conclusion:
If conscious computers ever emerge (and I agree that this is a big “if”), this article has shown that I–You encounters with these new beings can take place, even from a theological point of view. Their potential role in pastoral care will be different and shallower than that which humans can provide. Such a role would still have value—after all, even humans cannot provide the perfect care that only our gracious God can provide.
(p. 693)
Summarising the results of the above analysis, it should be concluded that AI is not capable of replacing humans in pastoral ministry at the current stage of development, but it can effectively complement them within the limits of its capabilities, especially in the creation of global databases of theological and spiritual knowledge, which can be used in religious education. At the moment, carebots are not able to respond to human moral dilemmas in a way that takes into account a person’s experience, successes, failures, traumas, and spiritual sensitivity. Humans reflect the imago Dei in their capacity for social relationships with God and other people. In contrast, automated AI systems are unable to create the relationships necessary for authentic pastoral ministry. Carebots, however sophisticated as machines, are devoid of hope, soulless, and unable to bear witness to faith and kenotic love.
The essence of humanity includes a physical component thanks to which humans are able to express and communicate their innermost experiences and thoughts to others. Disembodied AI tools can simulate empathic affectivity, but they cannot convey the relational presence of a living human being as an integrated and embodied being.
Herzfeld (2023, p. 165) notes that “being unique requires a body. The fact that we are embodied beings is central to the Christian faith.” Jesus shared our physical pains, mental fears, even our sense of emptiness when he cried out from the cross: “My God, my God, why have you forsaken me?” (Mark 15:34). According to Herzfeld (2023, p. 170), “this vulnerability to suffering and death, shared by Jesus, is an obstacle between humans and artificial intelligence. It is unlikely that we will design carebots that can age and die like us.” Artificial intelligences are machines, not living beings. They can be a valuable resource when used well. But they are tools and nothing more. Hence the following statement by Herzfeld (2023, p. 179): “What makes life worth living is not the information encoded in AI tools [...]. It is love. Embodied love that we see, hear, taste, touch and nurture. The love that our God has shared with us and that we will, in some way, take with us to the end.”
Christian Churches face one crucial task: to actively participate in shaping the ethical framework for AI in collaboration with AI technologists and theologians. Otherwise, instead of contributing to human progress, AI may become a source of human deconstruction, dehumanisation, and the collapse of social order on a global scale. An important step in this direction was taken on 26–28 February 2020 at the international workshop The “Good” Algorithm? Artificial Intelligence, Ethics, Law, Health, organised by the Pontifical Academy for Life. At its conclusion, representatives of the Academy, Microsoft, IBM, the Food and Agriculture Organisation of the United Nations (FAO), and the Italian government signed a document entitled “Rome Call for AI Ethics”, “to support an ethical approach to artificial intelligence and promote a sense of responsibility among organisations, governments and institutions to create a future in which digital innovation and technological progress serve human genius and creativity, rather than gradually replacing them” (Pontifical Academy for Life 2020). The Rome Call for AI Ethics recognises that AI offers enormous potential for improving social coexistence and personal well-being, enhancing human capabilities and enabling or facilitating many tasks that can be performed more efficiently and effectively. An important outcome of the Rome meeting was the formulation of six ethical principles for good AI innovation: transparency, inclusiveness, accountability, impartiality, reliability, and the security and privacy of AI systems. The World Council of Churches (2023), concerned about the accelerating development and unregulated use of artificial intelligence (AI), on 27 June 2023 called on theological education institutions to reflect on the ethical issues related to AI and its impact on human self-understanding. WCC member Churches and ecumenical partners were encouraged to lobby their governments for swift action to introduce appropriate regulatory systems and accountability frameworks.
Our critical realist analysis of pastoral care demonstrates that while AI cannot replace the human capacity for love and spiritual accompaniment, it can enter into a symbiotic relationship with human pastors, enhancing their reach while respecting the irreducibly personal nature of authentic pastoral encounter.