Systematic Review

Artificial Intelligence Applied to the Analysis of Biblical Scriptures: A Systematic Review

by
Bruno Cesar Lima
1,*,†,
Nizam Omar
1,
Israel Avansi
1 and
Leandro Nunes de Castro
2,†
1
Graduate Program in Electrical Engineering and Computing, Mackenzie Presbyterian University, São Paulo 01302-907, Brazil
2
Dendritic: A Human-Centered AI and Data Science Institute, Florida Gulf Coast University, 10501 FGCU Boulevard South, Fort Myers, FL 33965, USA
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Analytics 2025, 4(2), 13; https://doi.org/10.3390/analytics4020013
Submission received: 8 November 2024 / Revised: 24 January 2025 / Accepted: 14 March 2025 / Published: 11 April 2025

Abstract

The Holy Bible is the most read book in the world. It was originally written in Aramaic, Hebrew, and Greek over a span of centuries by many authors, and combines various literary styles, such as stories, prophecies, poetry, and instructions. As such, the Bible is a complex text for humans and machines to analyze. This paper provides a systematic survey of the application of Artificial Intelligence (AI) and some of its subareas to the analysis of the Biblical scriptures. Emphasis is given to the types of tasks being solved, the main AI algorithms used, and their limitations. The findings deliver a general perspective on how this field is developing, along with its limitations and gaps. This research follows a procedure based on three steps: planning (defining the review protocol), conducting (performing the survey), and reporting (formatting the report). The results show that there are seven main tasks solved by AI in Bible analysis: machine translation, authorship identification, part of speech (PoS) tagging, semantic annotation, clustering, categorization, and Biblical interpretation. Also, the classes of AI techniques with the best performance when applied to Biblical text research are machine learning, neural networks, and deep learning. The main challenges in the field involve the nature and style of the language used in the Bible, among others.

1. Introduction

All over the world, religion plays an important role in society, closely influencing the culture of a nation [1,2,3]. Of the world's population of approximately 7.9 billion people, around 5.75 billion (72.78%) have a full Bible available (https://www.wycliffe.net/resources/statistics/, accessed on 20 May 2024). As such, the Holy Bible is a highly studied document, raising interest not only from religious people but also from academics. Since the Holy Bible is composed of several books and possesses unique linguistic characteristics, it offers a significant challenge to scholars, from both a literary and a computational perspective. Considerable effort is required to extract insights, making the whole process complex from a human standpoint. Given this scenario, the use of Artificial Intelligence (AI) methods to automatically analyze the Bible becomes a relevant topic of research.
Over the past years, AI has been extensively used in the analysis of documents, mainly combining text mining, natural language processing, neural networks, machine learning, and other fields of investigation. Altogether, they provide a powerful suite of tools for machine translation, authorship identification, tagging, semantic annotation, and even interpretation. In this direction, to the best of the authors’ knowledge, this paper brings the first systematic survey on the general use of AI in the analysis of the Biblical Scriptures.
The paper combines Kitchenham's and the PRISMA-P protocols, considering nine different, but complementary, search expressions over the Scopus and Web of Science repositories. Of the 147 papers retrieved, 85 were included in the survey based on the inclusion and exclusion criteria. Three research questions were proposed:
  • What are the main tasks solved by AI methods in the analysis of the Bible?
  • What are the main AI algorithms used in the analysis of the Bible?
  • What are the main limitations of AI approaches in the analysis of the Bible?
We applied some standard text mining methods to the 85 selected papers and found that machine learning, neural networks, and deep learning are the most common AI techniques used in the literature. We also observed the following main tasks being solved: machine translation; authorship identification; tagging; semantic annotation; clustering; categorization; and interpretation. Based on this knowledge, we performed the systematic review by answering the three research questions and discussing the contents of most papers in each of these sections. It is in this context that the present work contributes to the scientific community by providing a systematic review of the use of AI in Biblical scripture analysis. Greater emphasis is given to the advancements and limitations of the field, as well as the most widely used algorithms and their achieved results.
The unique characteristics of Biblical scriptures present both challenges and opportunities that distinguish this area of study. The Bible’s linguistic diversity, spanning ancient Hebrew, Aramaic, and Greek, and its combination of literary genres such as poetry, prophecy, and narrative, require specialized approaches to natural language processing and machine learning. Furthermore, its cultural and spiritual significance invites interdisciplinary collaboration, uniting computer science, linguistics, theology, and history. These complexities show the Bible’s dual role as both a challenging dataset for AI research and a source of profound insights into linguistic, historical, and theological traditions. By exploring these dimensions, this paper aims to illuminate how AI methodologies can both respect and reveal the intricate layers of meaning within the Biblical corpus.
The paper is organized as follows. Section 2 and Section 3 provide a general overview of the Biblical Scripture and Artificial Intelligence, respectively. Section 4 presents the employed methodology, including the research sources and terms, the inclusion and exclusion criteria, and the research questions. Section 5 provides the systematic review itself, surveying the selected papers and their context, and the AI methods and applications in Biblical text analysis. Section 6 summarizes the results and discussions obtained from the review. The paper is concluded with some final remarks and future works. Appendix A brings a table summarizing the eligible papers, including information about the authors, types of applications, AI method used, year of publication, and where the paper was published.

2. The Holy Bible: A Complex Topic

The Holy Bible is a singular document in history. According to Guinness World Records (https://www.guinnessworldrecords.com/world-records/best-selling-book-of-non-fiction, accessed on 10 December 2024), the Bible is the world's best-selling book, reaching 5 billion printed copies. The influence of the Bible achieved overall relevance in the literature not only through its extraordinary number of copies, but also through its crucial role in the literacy of medieval European populations, particularly in 16th century Germany [4], and its part in shaping the early American political rationale of the 18th century [5]. Such prestige made the Bible a book studied for hundreds of years by theologians, critics, and enthusiasts.
There are different versions of the Holy Bible. For example, the Tanakh is the Holy Bible of Judaism and corresponds to the Old Testament of Christianity, containing 24 books. In Christianity, there are variations in the Biblical canon: the Holy Bible of Western Roman Catholicism contains 73 books, while the Holy Bible of the Greek Orthodox Church includes 76 books, and the Protestant Bible consists of 66 books. The differences among these versions are more related to the canon of included books rather than the content of their message. Therefore, this section will specifically describe the characteristics of the Protestant Bible to illustrate the textual complexity of the Biblical corpus [6,7].
The document widely identified as “The Holy Bible–Protestant version” is a collection of 66 books written by 40 authors from the Levant region, whose lives span a period of roughly 1600 years. It is divided into 1189 chapters and 31,102 verses (https://www.biblebelievers.com/believers-org/kjv-stats.html, accessed on 10 December 2024). Studying the Holy Bible through its numbers reveals its interpretative complexity. For example, when the Old Testament is tokenized, it generates 1.5 million tokens (American Standard Version) [8]. It also possesses a diversity of literary genres, such as poetry, narrative, and metaphor, among others [6].
The Bible has been translated, in whole or in part, into more than 3600 languages. Among these, 736 languages have a complete translation, covering about 72% of the world’s population (approximately 6 billion people). Additionally, 1658 languages have the New Testament and portions of the Old Testament translated, reaching over 824 million people. However, there are still more than 1200 languages worldwide awaiting the start of translation work. This article focuses on analyzing the Holy Bible in its English translation, as the main works and techniques in artificial intelligence are limited to the English language (https://www.wycliffe.sg/post/2023-global-scripture-access, accessed on 10 December 2024; https://www.scottishbiblesociety.org/blog/bible-translation-statistics-2023, accessed on 10 December 2024).
The traditional method of Biblical knowledge extraction is human-based hermeneutics, the theory and methodology of interpretation, especially of scriptural text [9,10]. Exegesis, on the other hand, is the result of the critical analysis of a Biblical text, that is, the product of hermeneutics [11]. Hermeneutics offers a framework for approaching the scripture systematically, while exegesis provides insights as a result of textual analysis. Together, they ensure that interpretations remain grounded in historical, linguistic, and cultural contexts.
The study of the Bible through hermeneutics and exegesis focuses on human interpretation, guided by theological, historical, and philosophical frameworks to uncover the spiritual and doctrinal meaning of scripture. These approaches are deeply influenced by faith, the interpreter’s theological background, and the cultural context in which the text is read [12,13]. In contrast, the use of AI in Biblical studies leverages data-driven techniques such as Natural Language Processing (NLP) and machine learning to analyze the Bible at scale, identifying linguistic patterns and themes without spiritual or theological insight [14,15]. AI serves as a complementary tool to human scholarship, aiding in textual analysis while leaving deeper theological interpretation to human scholars.
The Biblical corpus, due to its uniqueness, is an ideal dataset for studying the performance of artificial intelligence techniques. Many studies use corpora without literary diversity, meaning they are limited to one or another literary genre. The Biblical corpus, however, stands out precisely for containing this literary diversity. Therefore, an investigation of academic works that have, in some way, applied artificial intelligence to Biblical literature is necessary. The challenges are not small when applying artificial intelligence to the Holy Bible, as the Biblical corpus has been a subject of debate and discussion for centuries, highlighting its degree of interpretative complexity.

3. Topics in Artificial Intelligence

Artificial Intelligence (AI) is a broad field of research that can be divided into several subjects, each with its own specificities. Defining artificial intelligence can be a complex task, since the word intelligence, from a psychological and neuroscientific point of view, does not have a simple definition. Some attempts at a definition have been constructed over time. One states that “artificial intelligence is the ability of computers to solve problems that are normally associated with the higher intellectual processing capacities of human beings” [16]. Another is that “artificial intelligence is the simulation of human intelligence in a machine, making the machine efficient at identifying and using the right piece of ’knowledge’ at a particular stage of problem-solving” [17,18,19].
This survey paper deals with the problem of extracting knowledge from the Holy Bible and, thus, involves some specific areas within AI, mainly computational intelligence, machine learning, data mining, text mining, Natural Language Processing (NLP), data science, neural networks, and deep learning. The survey will be performed taking into account the convergence of all these areas and the analysis of texts from the Holy Bible. In the following paragraphs we will provide a brief description of each of these terminologies.
Computational Intelligence (CI) is a terminology formalized as a discipline to refer to the combination of three AI areas, namely, neural networks, evolutionary computing, and fuzzy systems [20]. With time, CI broadened its scope to include nature-inspired algorithms [21,22], such as artificial immune systems, swarm intelligence, and others [23].
Machine Learning (ML) is the area within AI focused on the development of algorithms capable of automatically learning to solve problems based on experience, usually represented as sample data [24,25]. There are several machine-learning paradigms, including supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Supervised learning involves training a model on labeled data, where each input has a corresponding output and the model learns to map inputs to the correct outputs by minimizing errors between its predictions and actual results. Supervised learning is commonly used for tasks such as classification and regression. Unsupervised learning works with unlabeled data, meaning the model must find patterns and structures on its own. The aim is to discover hidden relationships or groupings within the data, and this paradigm is frequently used for clustering and dimensionality reduction. Semi-supervised learning is a blend of supervised and unsupervised learning, utilizing labeled data along with unlabeled data. This method is useful when labeling data is expensive or time-consuming. The labeled data helps guide the model, while the unlabeled data enhances learning. Reinforcement learning focuses on training a model to make decisions through interaction with an environment. The model learns by receiving rewards or penalties based on its actions and seeks to maximize cumulative rewards over time [26,27,28].
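As a toy illustration of the supervised paradigm described above, the following sketch maps labeled inputs to outputs with a 1-nearest-neighbor rule; the data points are invented and not taken from any surveyed study:

```python
# Minimal supervised learning: a 1-nearest-neighbor classifier that maps
# labeled inputs (feature, label) to outputs by choosing the label of the
# closest training point.

def nearest_neighbor_predict(train, x):
    """Return the label of the training point closest to x."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Labeled training data: (feature, label) pairs.
train = [(1.0, "short"), (2.0, "short"), (8.0, "long"), (9.0, "long")]

print(nearest_neighbor_predict(train, 1.5))  # near the "short" group
print(nearest_neighbor_predict(train, 8.5))  # near the "long" group
```

Unsupervised learning would instead drop the labels and group the points by proximity alone, which is the essence of the clustering tasks discussed later in the survey.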
Deep Learning (DL) is a specific branch of machine learning that deals with models (neural networks) with multiple layers of processing units. With the advancement of computational power and data production, neural networks have become more complex and with multiple layers, and it is due to these multiple layers of neural networks that the term deep learning was coined. Currently, deep learning is one of the most robust segments of the AI era [28].
Data Mining (DM), by contrast, refers to the process of extracting knowledge from (usually large) datasets. The DM process can be divided into data acquisition and selection, pre-processing, descriptive analysis, inferential (modeling) analysis, and evaluation [29,30,31]. Therefore, the data mining process can use any of the methods previously described, including neural networks, machine learning, and computational intelligence algorithms.
Data Science is a younger term referring to an interdisciplinary area that can be defined as the study of all aspects of data, from its generation to its processing into a valuable source of knowledge. Another definition is that data science is a set of processes to extract useful and non-obvious patterns from large datasets. Data science uses concepts from mathematics, statistics, machine learning, data mining, and artificial intelligence [32,33,34,35]. In essence, it is not different from data mining, though the latter is more focused on structured data, while the former deals with both structured and unstructured data.
Of these two main data categories, structured and unstructured, texts have always deserved particular attention, not only for their specificity but also for their pervasiveness. A great deal of our communication is performed using text, and this requires some very specific analysis tools and techniques. Therefore, some novel research areas have emerged to deal particularly with text, such as text mining and Natural Language Processing (NLP) [36,37,38].
Text mining is the area dedicated to the extraction of knowledge from text data, independently of its origin. It normally starts by structuring the documents by means of parsers, tokenization, stopwords removal, stemming, and the application of other methods capable of preparing the texts to be analyzed. Once the documents are structured, the next step usually involves the application of some intelligent algorithm to perform classification, clustering, or other tasks. Finally, a specific evaluation or interpretation approach is used to complete the process [36,37]. Note that text mining follows a very similar process to data mining, but tailored to text data.
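A minimal sketch of the structuring steps just described (tokenization, stopword removal, and stemming); the stopword list and suffix-stripping rule are simplified illustrations, standing in for the full resources a real text mining pipeline would use:

```python
# Structuring a document for text mining: tokenize, drop stopwords,
# then apply a crude suffix-stripping stemmer.

STOPWORDS = {"the", "and", "of", "in", "a", "is", "to"}

def tokenize(text):
    """Lowercase and split on whitespace, stripping basic punctuation."""
    return [w.strip(".,;:").lower() for w in text.split()]

def stem(word):
    """Strip one common suffix, keeping at least a 3-letter root."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def prepare(text):
    return [stem(w) for w in tokenize(text) if w not in STOPWORDS]

tokens = prepare("The beginning of the words: parsing and tokenizing texts.")
print(tokens)
```

After this structuring step, the resulting tokens would feed a classification or clustering algorithm, mirroring the data mining process described earlier.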
Natural Language Processing (NLP), by contrast, merges concepts from linguistics with some or all the other methods mentioned previously to deal with the interactions of computers and language, emphasizing the processing and analysis of natural language data [38]. Its applications include text and speech processing, human–computer interactions, sentiment analysis, morphological analysis, text classification, and many others.

4. Survey Protocol

This research combines Kitchenham’s and PRISMA-P protocols to perform systematic reviews [39,40]. Our methodology was organized based on three main steps: planning, conducting, and reporting. The first step, planning, consists of defining the review protocol, that is, specifying the objective, the research questions, keywords, databases, and the inclusion and exclusion criteria. After defining the protocol, the survey of related works is made, selecting primary studies, performing quality assessment, and extracting and synthesizing data. The final step is the papers’ reporting itself, specifying the dissemination mechanisms and formatting the main report.

4.1. Research Sources and Terms

Scopus and Web of Science were chosen as search engines because they index a large number of scientific papers, including high-impact journals. The main search expressions were based on a combination of the following terms: Bible, Artificial Intelligence, Text Mining, Neural Network, NLP, Machine Learning, Deep Learning, Computational Intelligence, Data Science, and Data Mining.
Table 1 summarizes the search expressions, and the number of papers retrieved from each source. The choice of these search terms aimed to cover a wide spectrum of artificial intelligence techniques. Since AI is a broad field of study that is subdivided into more specific branches, using only terms like “artificial intelligence” or “computational intelligence” could overlook important subareas, such as “natural language processing”, among others. Therefore, we used some of the main nomenclatures of key branches of artificial intelligence, aiming to minimize any bias in the search for papers.
English was chosen as the language of the papers eligible for review. Figure 1 summarizes the methodology block diagram, which was implemented following the PRISMA-P protocol with three phases: identification (data capture), screening, and inclusion.

4.2. Inclusion and Exclusion Criteria

The eligibility criteria for inclusion in the systematic review consist of being an original paper written in English, having a full text available, and displaying the application of AI to the Holy Bible texts. A key aspect for inclusion is the presentation and use of AI, and the chosen subareas, as a tool to extract knowledge from the Biblical text. Duplicates, abstract-only papers, and works focused solely on the philosophical study of the Bible and technology (that is, reflections on the impact of technologies on religion) were excluded from the review.
Papers [41,42] are examples of papers that did not meet the inclusion criteria for the review. It is noticeable that they focus more on a philosophical discussion about the rise of artificial intelligence in a broader religious context, rather than on a study applying any AI technique to the Biblical corpus. Table 2 summarizes the inclusion and exclusion criteria.

4.3. Research Questions

The research questions are aimed at summarizing the field of knowledge related to the intelligent data analysis of the Bible, providing a broad view of state-of-the-art works in this area. The research questions that will be addressed are the following:
  • Question 1: What are the main tasks solved by AI methods in the analysis of the Bible?
  • Question 2: What are the main AI algorithms used in the analysis of the Bible?
  • Question 3: What are the main limitations of AI approaches in the analysis of the Bible?
The research questions were formulated with the aim of addressing the gap in the literature regarding the absence of a study that connects these themes. In other words, this paper aims to be the first reference for a qualitative study of works that apply artificial intelligence to the Biblical corpus. Therefore, the qualitative exploration of the eligible papers, guided by these research questions, is presented in a balanced manner.

5. A Systematic Review

This section starts by applying some simple text mining methods to the surveyed papers in order to present their context. The mining was applied to the abstracts of the eligible and the excluded papers. A pre-processing step was implemented, including the removal of irrelevant words and tokenization. The Python language was used along with the NLTK and Scikit-learn libraries. The section then follows with a presentation of the main AI techniques and applications in Biblical text analysis. In this direction, the works were divided into four main groups: machine translation and authorship identification; grammatical and semantic analysis; clustering and categorization; and Biblical interpretation.

5.1. Selected Papers and Their Context

The eligible papers are summarized in Table A1, available in Appendix A. The papers are organized in terms of authorship, type of application, algorithm used, and year and vehicle of publication.
To have a better understanding of the context, the bar charts for bigrams and trigrams extracted from the selected papers are presented in Figure 2 and Figure 3, respectively. The bigrams show mainly the methods used and tasks solved in the literature. It can be observed that the main AI approaches used are machine learning, neural networks, and deep learning, whilst the main tasks solved are machine translation, digital image processing, language processing, information retrieval, PoS tagging (Part of Speech tagging), and topic modeling. The trigrams presented in Figure 3 reinforce these findings.
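Bigram counts of the kind charted in Figure 2 can be extracted with a few lines of Python; the sample abstract below is an invented stand-in for the surveyed papers' abstracts:

```python
# Count bigrams (adjacent word pairs) in a pre-tokenized abstract.
from collections import Counter

def ngrams(tokens, n):
    """Return all contiguous n-grams as space-joined strings."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

abstract = ("machine learning and deep learning models for machine "
            "translation of biblical text using machine learning")
tokens = abstract.split()
bigrams = Counter(ngrams(tokens, 2))
print(bigrams.most_common(2))
```

Trigrams, as in Figure 3, follow by calling `ngrams(tokens, 3)` instead; in practice, stopword removal (as in the pre-processing step above) precedes the counting.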
Figure 4 shows the bigrams from the abstracts of the excluded papers. As can be observed, the majority of the most frequent expressions in these papers are not related to the Bible, nor to artificial intelligence, supporting our exclusion criteria.

5.2. AI Methods and Applications in Biblical Text Analysis

The interest in scientific works applying AI to the Biblical literature is as recent as the beginning of the present millennium (2000), and it has grown more intense over the past five years. After reading the selected papers, it was possible to organize in more detail the main application areas and AI techniques used, as discussed in the following sections. The discussion in each section brings the answers to the three research questions proposed in the paper.
The selected papers show that the Biblical corpus is being used in association with artificial intelligence mostly for the development of automatic translators or authorship identification methods [43,44,45,46,47,48,49,50,51,52,53,54,55,56]. Thus, textual recognition (text mining and natural language processing) is the main methodology in use. The preference for such methods may be explained by the growing demand in the pattern recognition field as a whole [57].

5.2.1. Machine Translation

Algorithms frequently used in papers that perform authorship recognition are those related to deep learning and Support Vector Machines (SVMs) [43,44,47]. The results shown in these works are robust and indicate the capability of these algorithms to recognize textual patterns and identify the textual styles of authors, even in ancient and grammatically complex languages from medieval literature [44,46]. The goal of building translation machines for Biblical corpora is attractive because many dialects and less widely spoken languages suffer from the absence of automatic translators, since AI techniques are most commonly applied to broadly spoken languages, such as English. Limitations in this field include high numbers of false positives in identifying the authorship of medieval manuscripts via SVMs [45], the high number of hyperparameters in deep neural networks [44,46], and, for LSTM networks, a dependence on a source comparable to the training data coupled with a post-processing stage to reduce the character recognition error rate.
In [48], the authors used a Recurrent Neural Network (RNN) to train a machine-translation solution using a standard text mining process: data loading, tokenization, vocabulary building, model training, and evaluation. A Biblical Corpus was used for both training and evaluation purposes. The works of Tschuggnall et al. [51,56] and Rista et al. [55] adopted a similar methodology.
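The vocabulary-building step of such a pipeline, sitting between tokenization and model training, can be sketched as follows; the special tokens and toy corpus are illustrative assumptions, not details from [48]:

```python
# Build a token-to-id vocabulary for a machine-translation pipeline,
# reserving ids for special tokens such as padding and unknown words.

def build_vocab(sentences, specials=("<pad>", "<unk>")):
    """Map each token to an integer id, reserving ids for special tokens."""
    vocab = {tok: i for i, tok in enumerate(specials)}
    for sentence in sentences:
        for token in sentence.lower().split():
            vocab.setdefault(token, len(vocab))
    return vocab

corpus = ["In the beginning", "the beginning of the word"]
vocab = build_vocab(corpus)

# Encode a sentence, falling back to <unk> for out-of-vocabulary tokens.
encoded = [vocab.get(t, vocab["<unk>"]) for t in "the beginning was".split()]
print(vocab)
print(encoded)
```

The encoded integer sequences are what an RNN such as the one in [48] would consume during training and evaluation.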
In the work of Ashengo et al. [43], a bilingual dictionary was used along with a Context-Based Machine-Translation network (CBMT). Also, a Recurrent Neural Network Machine-Translation (RNNMT) was used as the output of the CBMT. In this case, a Biblical corpus was used to evaluate the model. In the research of Ziran et al. [52], the authors used two Faster R-CNN models, the first one focused on recognizing generic words and the second one used to recognize reference words. The Gutenberg Bible was used to evaluate the proposed machine-translation model.
In [43], the authors presented the implementation of a new approach that translates English text into Amharic using a combination of Context-Based Machine Translation (CBMT) and a Recurrent Neural Network (RNN) machine translation. The New Testament of the Holy Bible was used as the base corpus. The authors stated that the results showed that the accuracy of the dictionary and, therefore, the output of CBMT affect the combinatorial approach.

5.2.2. Authorship Identification

The work presented in [44] was conducted with five trained Deep Neural Network (DNN) models: VGG19, ResNet50, InceptionResNetV2, InceptionV3, and NASNetLarge. With a workflow involving transfer learning and fine-tuning, the authors evaluated the performance of the authorship identification solution by making use of a 12th century Biblical corpus. In the work of Eder [45], by contrast, the author proposed a stylometric method combining supervised learning and sequential analysis to assess mixed authorship. Three different versions of the method were proposed: Rolling SVM; Rolling NSC, based on the Nearest Shrunken Centroids method; and Rolling Delta, based on the Burrowsian measure of similarity. Among different datasets, he used a 15th century translation of the Bible into Polish.
In the research of Cilia et al. [47], the adopted methodology was the creation of two Convolutional Neural Networks (CNNs), the first focused on detecting each line of the manuscript, and the second responsible for assigning an author to each line. The processing occurred in two training stages: transfer learning and fine-tuning. A copy of a medieval Bible (the Avila Bible) was utilized to evaluate the model. The authors justified the use of the medieval Bible because it is a robust source of data that allows for authorship identification of its scribes. In the work of Stefano et al. [50], by contrast, a set of algorithms was implemented, including decision trees, K-nearest neighbors, neural networks, and SVMs. A similar work was carried out by Krishna [58].
In the works of Cilia et al. [46,47], the authors proposed a system to identify the writer in medieval documents, and used digitized manuscripts of the Avila Bible to assess their proposal. The authorship identification process was divided into three stages: a detector of objects to identify each line on a page; the construction of a deep neural network with transfer learning; and a weighted majority vote to assign a writer to each page. The goal was to investigate the use of DL with relatively small datasets. The following works applied a similar approach: [53,54].
Popović et al. [59] implemented deep learning for authorship identification in the Dead Sea Scrolls (specifically, fragments of the Isaiah scroll). The paper does not specify the configuration of the neural network used, focusing instead on the results achieved.

5.2.3. Part of Speech Tagging (PoS Tagging)

Other common tasks solved by AI in the analysis of the Bible include part of speech tagging and semantic annotation [15,60,61,62,63,64,65,66,67,68,69,70,71]. These works usually aim to collect information related to word use in an attempt to build dictionaries. The most commonly used algorithms for this kind of research are the classic ones from machine learning, such as decision trees, support vector machines, Conditional Random Fields (CRFs), K-Nearest Neighbors (KNNs), bagging, random forests, gradient tree boosting, and topic modeling. The AI techniques that presented the most promising results were SVMs, whose performance exceeded those of related papers [63], and topic modeling, which generated insights regarding the diachrony of Hebrew literature and supported the creation of a historical Hebrew ontology [62]. The main limitation of these works is that performance depends on the size of the dataset.
In the work of Dione et al. [60], the authors created tags in accordance with the EAGLES guidelines to develop PoS tagging. These tags were conceived to reflect verbal inflections within sentences or in their context. The authors designed and developed annotated corpus resources to support PoS tagging for Wolof, a Niger-Congo language. As the first effort to build a publicly available NLP resource for Wolof, they used part of the Bible as the gold standard corpus. For training the PoS tags, 26,846 tokens were generated from the Gospel of Matthew in the Wolof Bible. Two well-known machine-learning taggers were used, namely the TnT tagger and TreeTagger, and the results were compared with a baseline that assigns the most frequent tag from the training set to each known word.
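The most-frequent-tag baseline mentioned above can be sketched in a few lines; the tiny tagged corpus here is an invented English stand-in for the Wolof Gospel of Matthew data:

```python
# Baseline PoS tagger: assign each known word its most frequent tag from
# the training set; unknown words fall back to a default open-class tag.
from collections import Counter, defaultdict

def train_baseline(tagged_tokens):
    """Learn the most frequent tag per word from (word, tag) pairs."""
    counts = defaultdict(Counter)
    for word, tag in tagged_tokens:
        counts[word][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag(words, model, default="NOUN"):
    return [model.get(w, default) for w in words]

train = [("the", "DET"), ("word", "NOUN"), ("was", "VERB"),
         ("light", "NOUN"), ("light", "ADJ"), ("light", "NOUN")]
model = train_baseline(train)
print(tag(["the", "light", "shines"], model))
```

Trained taggers such as TnT and TreeTagger improve on this baseline chiefly by using the tags of neighboring words and the internal structure of unknown words.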
In the work of Francis et al. [61], a hybrid PoS tagging model was proposed that combines elements from a rule-based method and an n-gram model. In both approaches, the authors performed morphological, lexical, and syntactic analyses. When presenting their results, the authors showed that the proposed machine learning-based solution performed better than SVM.
Yu et al. [63] proposed a delexicalized tagging method for cross-language transfer of PoS tagging models that requires only a raw corpus of the target language. Their proposal comprises four main steps. The first focuses on identifying the source languages (for which labels were already available) and the target languages. In the second step, feature vectors are produced and the target-language labels are obtained. In the third step, the source-language vectors are used as the training set for machine-learning classifiers, such as KNN and SVM. Finally, in the fourth step, the method's efficiency is evaluated on the target language.
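As an illustration of the third step, a nearest-neighbour classifier over delexicalized feature vectors can be sketched in a few lines of plain Python. The feature vectors and tags below are invented toy data, not those of [63]; delexicalized features would in practice be language-independent statistics such as context-tag distributions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_predict(train_vecs, train_labels, query, k=1):
    """k-NN by cosine similarity: vote among the k most similar vectors."""
    ranked = sorted(range(len(train_vecs)),
                    key=lambda i: cosine(train_vecs[i], query), reverse=True)
    top = [train_labels[i] for i in ranked[:k]]
    return max(set(top), key=top.count)

# Toy delexicalized vectors for a labeled source language...
source_vecs = [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]]
source_tags = ["NOUN", "VERB", "ADJ"]
# ...and an unlabeled target-language token vector.
print(knn_predict(source_vecs, source_tags, [0.8, 0.2, 0.0]))  # → NOUN
```

The appeal of this setup is that, because the features contain no word forms, the classifier trained on the source language can be applied directly to the target language.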
Azawi et al. [65] proposed a new approach for the normalization of historical texts for applications such as PoS tagging. They made use of Martin Luther’s Bible (1545 version) as the textual corpus. They also implemented a deep neural network (LSTM—Long Short-Term Memory), using words from modern and ancient German, to make comparisons during training.
In [69], the authors presented a study on estimating lexical complexity for the Russian language, investigating the following topics: how the morphological, semantic, and syntactic properties of a word represent its complexity; and whether or not the surrounding context significantly affects the accuracy of the complexity estimate. To this end, they used the Russian Synodal Bible.
Kann [70] analyzes the reliability and generalization of word order statistics extracted from the Bible corpus from two perspectives: stability across different translations in the same language and comparability with Universal Dependencies corpora and typological database classifications such as URIEL and Grambank. The paper does not clarify which specific techniques are used, only mentioning that NLP interventions and PoS tagging are applied. The authors emphasize that the results of this article do not suggest that it is advisable to rely solely on Biblical texts for quantitative typology or NLP applications for low-resource languages.
In [71], the authors produce the first linguistically annotated corpus of any Judaeo-Arabic dialect (TAJA). The corpus is a collection of various genres of modern written Algerian Judaeo-Arabic texts, including translations of the Bible and liturgical texts, commentaries, and original Judaeo-Arabic books and periodicals. The TAJA corpus was manually annotated with part-of-speech (POS) tags and detailed morphological tags. This annotated corpus serves as the foundation for the development of NLP tools specific to Judaeo-Arabic, enabling automatic POS tagging and morphological annotation of large collections of previously unexplored texts in Algerian Judaeo-Arabic and other Judaeo-Arabic varieties.

5.2.4. Semantic Annotation

Coeckelbergs et al. [62] adopted a dataset with annotations from the Hebrew Bible (SHEBANQ) to run a topic modeling technique, more specifically the LDA algorithm. They created a tool that readers can use to study the Biblical text in Hebrew. In the work of Varghese et al. [15], the authors started from the hypothesis that there are intersections between the Bible, the Tanakh, and the Quran. The goal was to perform text analytics on sacred texts to find similarities among them using NLP, ontology, and ML methods. In the research of Östling et al. [64], a project on the semantic relations of languages was conceived, aiming to infer connections among them using continuous vector representations of languages. The experiments used a voluminous Biblical corpus with 1303 translations in 990 languages. Their model was based on a deep neural network (LSTM) and various word embeddings. The papers by Cernansky et al. [66], Bilovich et al. [67], and Jaenisch et al. [68] adopt similar methodologies.
Östling [72] aims to systematically explore a general research question: What typological features can be discovered using neural models trained on a massively parallel corpus? To this end, the authors used a corpus of Bible translations collected from online sources. In the version used, the corpus contains 1846 translations in 1401 languages, representing a total of 132 language families or isolated languages. The authors trained a language model to predict the embedding of the next word in a fixed multilingual embedding space. This model consists of a simple left-to-right LSTM conditioned on the previous word and a language embedding.
In [73], the authors introduce GUIDE, a language-independent tool that uses a Graph Neural Network (GNN) to create and populate semantic domain dictionaries, using seed dictionaries and Bible translations as a parallel text corpus. The authors transformed the dataset into a graph, where each node represents a word in one of the 20 languages. The unique key for each node is the language code and the word itself. The GNN adopts a Graph Convolutional Network (GCN) architecture. The authors conclude that they have implemented a framework that creates and populates multilingual semantic dictionaries in 20 languages from seven language families.

5.2.5. Clustering

The number of works related to the grouping or categorization of Biblical texts is modest when compared to the other aforementioned fields [8,74,75,76,77,78,79,80,81,82,83,84]. The main idea in segmenting the Biblical text is to investigate semantic similarities in order to evaluate the capacity of some algorithms to infer contextual synonymity.
There are few works in this area, and the main obstacles are that the algorithms have to categorize a multilingual textual corpus [74], identify similarities among authors from different geographical regions [75], and identify the vectors used to represent words or sentences [8].
The algorithms employed in this field are mostly unsupervised, such as the Rocchio algorithm, the Widrow–Hoff algorithm, the Kivinen–Warmuth algorithm, Learning Vector Quantization (LVQ), the Vector Space Model (VSM), Latent Semantic Analysis, Self-Organizing Maps (SOMs), K-means, and some deep neural networks. This makes it evident that the techniques used to find textual patterns in the Biblical literature follow the same paradigm as those used in data mining, where the unsupervised approach is not guided by a content search. This reinforces the need for neutrality and for removing biases when grouping Biblical texts, so that textual patterns can in fact be identified.
The main limitation of this approach is that, when building the representation vectors, the number of dimensions leads to excessive memory requirements for processing the texts [8,66].
Widdows et al. [75] conducted comprehensive research on the clustering of a Biblical corpus using machine-learning and linear-algebra techniques. Their research aimed to investigate whether the algorithms are capable of identifying similarities among the three synoptic gospels (Matthew, Mark, and Luke) in contrast with the gospel of John. In another stage of the research, the authors attempted to relate Biblical characters to geographical regions. The machine-learning algorithm used to perform the clustering was K-means. The papers by Bleiweiss [8], Popa et al. [76], and Geßner et al. [77] adopted similar methodologies. Schrader and Gultepe [78] conducted a study using word and document embeddings, combined with deep-learning clustering techniques, applied to the Bible in different languages. The study revealed the Bible’s ability to preserve meaning across languages, highlighting similarities between different language families.
In the work of Valdivia et al. [74], the authors proposed a text categorization method based on unsupervised and competitive learning paradigms. The authors reported comparisons between Learning Vector Quantization (LVQ) and other machine-learning algorithms, concluding that LVQ presented better results for text categorization. The other algorithms compared were the Rocchio, Widrow–Hoff, and Kivinen–Warmuth algorithms. Two Biblical translations were used to train the categorization algorithms: the Reina Valera edition in Spanish and the American Standard Version in English.
Visa et al. [79] presented a prototype solution whose goal was to extract knowledge from textual content. The solution used Self-Organizing Maps (SOMs) and a vector representation of the words. Categorization was performed at three levels: word, sentence, and paragraph. The authors reported having reached their goals with the proposed method. The paper by Ari et al. [80] used a similar approach.
In the approach presented in [81], association and word cloud analyses were performed for the Bible and the Epic of Gilgamesh, respectively. Through these analyses, the keywords represented in the Bible and the Epic of Gilgamesh, as well as the relationships between the words, were identified. The authors concluded that the intertextuality analysis revealed the interrelationship between the Epic of Gilgamesh and the Bible. Furthermore, text mining helped verify the association in the intertextuality analysis. As a result, the study proposed a research method for the study of civilization exchanges, objectively addressing the flow and directionality of exchanges between civilizations in the ancient Mediterranean regions.
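Association analysis of the kind used in [81] can be approximated by counting how often pairs of words co-occur in the same sentence. The sketch below is a minimal plain-Python illustration with invented example sentences, not the actual Bible or Gilgamesh corpora.

```python
from collections import Counter
from itertools import combinations

def cooccurrence(sentences):
    """Count how often each unordered word pair appears in the same sentence."""
    pairs = Counter()
    for s in sentences:
        toks = sorted(set(s.lower().split()))  # unique tokens, stable order
        pairs.update(combinations(toks, 2))
    return pairs

# Invented toy sentences echoing a shared flood motif.
texts = ["the flood covered the earth",
         "a great flood destroyed the earth",
         "he built a boat before the flood"]
pairs = cooccurrence(texts)
print(pairs.most_common(3))
# The pair ('flood', 'the') co-occurs in all three sentences.
```

Thresholding such counts (or derived measures like lift) yields the word associations that an intertextuality analysis would then compare across the two corpora.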
Martinjak et al. [82] studied the use of deep learning and NLP techniques to analyze the distribution and appearance of names in the Polish, Croatian, and English translations of the Gospel of Mark. An entity graph was built for Named Entity Recognition (NER). The authors stated that newer NER models performed worse than older models on Biblical texts, contrary to standard benchmarks, which should be researched and explained in depth.
In [83], the authors presented a lexical-based language kinship analysis for low-resource Ethiopian languages. The Wolaita, Dawuro, Gamo, and Gofa languages belong to the Omotic language family and share rich cultural and linguistic similarities. However, the extent of their interrelationship remains unknown. To address this gap, the authors collected and prepared new corpora from the Bible and academic texts. The TF-IDF technique was employed for feature extraction, and the cosine similarity method was used to measure the similarities between these languages.
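The TF-IDF plus cosine-similarity pipeline named above is easy to sketch from scratch. The snippet below is a minimal plain-Python illustration; the short "documents" are invented stand-ins, not the actual Omotic-language corpora of [83].

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF over whitespace tokens; returns one dict-vector per document."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))  # document frequency
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: (tf[t] / len(toks)) * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse dict-vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["taani yesuus kiristoosa",   # invented stand-ins for parallel
        "taani yesuus kiristoza",    # verses in two related languages
        "in the beginning was the word"]
v = tfidf_vectors(docs)
print(cosine(v[0], v[1]), cosine(v[0], v[2]))  # related pair scores higher
```

Averaging such similarities over a parallel corpus gives a single kinship score per language pair, which is essentially how lexical-overlap analyses quantify relatedness.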
Schrader and Gultepe [78] implemented three methods for constructing phylogenetic trees and grouping languages without using language-specific information. The input for the methods is a set of document vectors trained on a parallel Bible translation corpus for 22 Indo-European languages, representing four language families: Indo-Iranian, Slavic, Germanic, and Romance. This text corpus consists of a set of 532,092 Biblical verses, with 24,186 identical verses translated into each language. The methods were (A) hierarchical clustering using centroid distance between language vectors, (B) hierarchical clustering using a network-derived distance measure, and (C) Deep Embedded Clustering (DEC) of language vectors.
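Method (A), hierarchical clustering by centroid distance, can be sketched in plain Python: repeatedly merge the two clusters whose centroids are closest until the desired number of clusters remains. The four 2-D "language vectors" and language codes below are invented toy data, not the document vectors of [78].

```python
import math

def centroid(cluster):
    """Mean vector of a cluster of equal-length vectors."""
    return [sum(d) / len(cluster) for d in zip(*cluster)]

def dist(u, v):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def agglomerate(vectors, labels, n_clusters):
    """Merge the two clusters with the closest centroids until n remain."""
    clusters = [[v] for v in vectors]
    names = [{l} for l in labels]
    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(centroid(clusters[ij[0]]),
                                       centroid(clusters[ij[1]])))
        clusters[i] += clusters.pop(j)
        names[i] |= names.pop(j)
    return names

# Invented toy "language vectors": two related pairs.
vecs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
print(agglomerate(vecs, ["de", "en", "ru", "pl"], 2))
# → two groups: {de, en} and {ru, pl}
```

Recording the order of the merges, rather than stopping at a fixed cluster count, yields the dendrogram that is read as a phylogenetic tree.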
In [84], the authors used a machine-learning method to organize chapters into sections based on the author’s word choices. The method employed synonym pairs in a hierarchical clustering algorithm within the R statistical software. The authors concluded that this type of analysis can validate findings from other methods, but some inherent biases and linguistic ambiguities make it unreliable as a primary method of investigation for the Hebrew Bible.

5.2.6. Biblical Interpretation

When it comes to using AI to interpret the Bible, there is an even smaller number of scientific contributions, and only a few papers were found in our search [85,86,87,88,89]. The AI techniques employed are more related to representation than to extracting implicit knowledge. Thus, an exploration of AI techniques, such as DNNs and segmentation algorithms, that examine semantic aspects as a whole could be of interest; indeed, only a few studies have used LLMs or embeddings. One limitation identified in this area is the difficulty of extracting contextual knowledge, such as historical and cultural facts. However, the methods used are those of representation, and few extrapolation techniques were applied [85]. In the work of Zhao et al. [87], it was noticed that the simplest model (RNN) displayed better results than a more complex one (BiDAF). The authors concluded that Bible translations that follow a more literal style eventually display lower performance, since the encoding was limited to considering only the semantics of the sentences and not their syntax.
In the work of Hu [86], an analysis of the Book of Psalms and the Book of Proverbs was conducted in order to reach conclusions similar to those of hermeneutics scholars. The authors reported being able to extract novel information from the texts. The applied method was relatively simple, making use of the Latent Dirichlet Allocation (LDA) algorithm. In the paper by Zhao et al. [87], a question-answering system over the Biblical text was developed. To reach this goal, the authors used two textual datasets (SQuAD and BibleQA) as well as a word embedding model (Word2Vec).
In [88], a chatbot was developed for the analysis of Biblical texts in the task of psychological counseling. The authors did not clearly detail their methodology; however, it is reported that natural language processing techniques were used to enable the chatbot.
In [90], the authors conducted a study evaluating the use of ChatGPT in religious education for children. The work focused on analyzing the performance and impact of tools like chatbots in education; the study therefore does not conduct a quantitative analysis of interpretive performance on the Biblical corpus.
Samosir [89] presented the creation of a semantic search model for the Bible in Indonesian, called IndoBerea, which aimed to provide relevant verses based on a user query, mirroring the practice of the Berean Jews who examined the Scriptures. The main motivation for this study was the lack of research related to this NLP task, especially in the domain of the Bible text in Indonesian. The semantic search model was implemented using the Sentence Transformer architecture, with the base model for fine-tuning being IndoBERT, an adaptation of BERT for the Indonesian language, since the base models of Sentence Transformer were not trained with Indonesian texts. The authors stated that IndoBerea was designed to generate a single verse per query. However, incorporating a broader context, such as a group of verses or an entire chapter, could align the model’s judgments more closely with human interpretations.
In [91], the authors proposed using deep-learning-based language models to detect metaphors in the Bhagavad Gita and the Sermon on the Mount from the Holy Bible. Selected English translations of the Bhagavad Gita and the Sermon on the Mount were used to evaluate the impact of translation-induced vocabulary changes on metaphor detection using Large Language Models (LLMs). The results showed that the LLMs recognized most of the metaphors and the metaphorical counts in the respective translations of the selected religious texts. However, it is worth noting that the Sermon on the Mount is a small excerpt from the Gospels. Therefore, a more comprehensive analysis of at least the entire New Testament would be necessary for a more accurate assessment. Another study [92] employed a similar architecture and, according to the authors, achieved 20% accuracy using the Bhagavad Gita dataset and 72% accuracy using the Bible dataset.

6. Results, Discussion, and Future Trends

The Biblical literature has been society’s preferred reading for centuries. It is the rule of faith and practice for the two oldest monotheistic religions of the world: Judaism (Old Testament) and Christianity (Old and New Testaments). Knowledge extraction from this hegemonic and relevant literature is still performed mostly through classic, human-based hermeneutics.
The Biblical literature is relevant for society, given its publication statistics and reach. Thus, the knowledge obtained via text analysis aids the interpretation of the Bible. However, the findings of this systematic review showed that the use of AI in the interpretation cycle of the Biblical text is among the scarcest when compared with other, more constrained, tasks. It was possible to observe that the algorithms employed in the reviewed papers are, for the most part, the same as those applied in typical text mining and natural language processing applications.
The present systematic review investigated scientific works that used AI applications for the discovery of implicit knowledge in the Bible. Three research questions guided the review: the main tasks solved by AI, the main algorithms used, and the limitations of AI approaches when applied to Bible text analysis. In terms of the three research questions, the following were the main findings:
  • Question 1: What are the main tasks solved by AI methods in the analysis of the Bible? The review identified seven primary tasks: machine translation, authorship identification, part-of-speech tagging, semantic annotation, clustering, categorization, and Biblical interpretation. Among these, machine translation and authorship identification emerged as the most explored areas, driven by advancements in neural networks and deep learning. However, tasks like Biblical interpretation remain underexplored, highlighting a need for future research in developing AI tools capable of contextual and symbolic reasoning.
  • Question 2: What are the main AI algorithms used in the analysis of the Bible? The techniques most commonly used are KNN, K-means, deep learning (LSTM, RNN, DNN, CNN), SVM, embeddings, decision trees and self-organizing maps. It is worth noting that deep neural networks were the preferred method, achieving consistent results in most works reviewed. Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Support Vector Machines (SVMs) were the most commonly employed techniques, reflecting the linguistic and structural complexity of the Biblical text. The effectiveness of these algorithms in handling such complexity reinforces their potential applicability to other challenging corpora. However, reliance on these established techniques indicates a limited exploration of emerging AI methodologies, such as Large Language Models (LLMs) and transformer-based architectures, which could offer significant improvements.
  • Question 3: What are the main limitations of AI approaches in the analysis of the Bible? The limitations found in the papers correspond to classical problems generally found in data mining, such as memory storage [8], dataset size [61], and asymmetry between the training data and the real or test data [53]. However, some limitations are specific to AI applications in the Biblical literature, such as the identification of contextual elements in Biblical texts [85], which is a highly complex task. This complexity can be explained by the diversity of genres that compose the Biblical text and its semantics heavily charged with symbology and typology. This is a bottleneck that must be overcome for the application of AI in Biblical interpretation. Other limitations include a high number of false positives in classification tasks [45], the difficulty in finding suitable deep network architectures for dealing with the Bible text [87], and the lack of standardized performance metrics in the field.
The findings showed that this field is still recent, with scarce literature. The main goal of using AI in Bible text analysis has been the development of machine translation systems rather than knowledge extraction, as might be expected. Recurrent neural networks have shown the best performance for machine translation and authorship identification. It is worth noting that there is a gap when it comes to applying AI to Bible text interpretation. It is reasonable to expect that scientific development in this direction will contribute to the work of theologians and to natural language processing research in general, because the Biblical literature encompasses an emblematic and diverse textual format. Algorithms that achieve a satisfactory performance level when applied to the Biblical literature may also do so on texts with similar characteristics.
The search involving the use of AI in Biblical text analysis was performed using nine search terms in Scopus and Web of Science, and retrieved 147 papers, of which 85 were selected for review. Most of these works are recent, mainly due to the popularization of techniques like machine learning and deep learning. As reviewed, most works deal with machine translation, authorship identification, PoS tagging, semantic annotation, clustering, categorization, and Biblical interpretation.
Recurrent neural networks were the most frequently used approach in Biblical text analysis, largely because the Biblical literature abounds in symbology, typology, and textual genres, which makes its analysis non-trivial. The Bible also has two literary genres that are more prevalent than the others: narrative and poetry.
In terms of data, the Avila Bible was the most frequently used, perhaps because it was written by different scribes in the Medieval period, which enabled the design and implementation of authorship identification solutions. The King James Version is, however, the most widely adopted by the general public.
We also noted that not all analysis techniques used were open-source or freely accessible; some papers chose proprietary frameworks. This makes the scientific discussion of these methods difficult, as the AI mechanisms behind such frameworks are not detailed. This is the case for the Text2Onto and Gertrude frameworks.
Last, to better understand the performance of the AI methods used, a clearer description of the metrics used to assess them is necessary. However, many of the papers surveyed did not contain such a description, and better formalism in this direction is certainly a subject for future study.
Regarding future work in the field, we highlight the use of AI in the interpretation of Biblical corpora, that is, as a hermeneutic tool, as well as a deeper investigation of performance measures in the analysis of the Bible with AI (for future quantitative analyses), experimentation with other AI algorithms (e.g., text mining, LLMs, and NLP approaches), and the adoption of visualization tools for the knowledge extracted from the Bible texts.

Author Contributions

Conceptualization, methodology: B.C.L., N.O. and L.N.C.; writing—review and editing: B.C.L., L.N.C. and I.A.; investigation: B.C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financed by the São Paulo Research Foundation (FAPESP), Brasil, process number 2021/11905-0, CNPq, process number 444999/2024-8, and by the Dendritic Institute at FGCU, USA.

Data Availability Statement

Data are contained within this article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. A Summary of the Eligible Papers

Table A1 summarizes the eligible papers, including information about the authors, types of applications, AI method used, year of publication, and where the paper was published.
Table A1. List of eligible papers, including authors, type of application, AI technique employed, publication year, and publication vehicle.
AuthorApplication TypeAlgorithm/MethodYearTitle of the Periodical/Conference
Ashengo YA, Aga RT, Abebe SL [43]Automatic translationDeep Learning (DL)/RNN2021Machine Translation
Bria A, Cílio ND, Stefano C, Fontanella F, Marrocos C, Molinara M, Freca AS, Tortorella F [44]Authorship identification (ancient manuscripts)Deep Neural Network (DNN)20182018 IEEE International Conference on Metrology for Archaeology and Cultural Heritage, MetroArchaeo 2018—Proceedings
Bilovich A, Bryson JJ [67]Identifications of beliefs in textsSemantic spacy theory2008AAAI Fall Symposium—Technical Report
Jaenisch HM, Handley JW, Case CT, Songy CG [68]Identification of textual correlationArtificial Imagination Algorithm (AIM)2002Proceedings of SPIE—The International Society for Optical Engineering
Eder M [45]Authorship identification (ancient manuscripts)Support Vector Machines (SVMs)/Nearest Shrunken Centroids (NSC)/Delta in classical burrowsian2016Digital Scholarship in the Humanities
Ziran Z, Xavier P, Innocenti SU, Mugnai D, Marinai S [52]Textual recognitionConvolutional Neural Network (Faster R-CNN)2020Pattern Recognition Letters
Cilia ND, Stefano CD, Fontanella F, Marrocco C, Molinara M, Freca ASD [46]Authorship identification (ancient manuscripts)Deep Learning Network (DNN)2020Pattern Recognition Letters
Cília ND, Stefano CD, Fontanella F, Marrocos C, Molinara M, Freca ASD [47]Authorship identification (ancient manuscripts)Deep learning/Convolutional Neural Networks (CNN)/Decision Tree (DT)/Random Forest (RF)/Multilayer Perceptron (MLP)2020Journal of Imaging
Geßner A, Kötteritzsch C, Lauer G [77]Reutilization of textsText mining/tool Gertude2013ACM International Conference Proceeding Series
Óní QJ, Asahiah FO [53]Textual recognitionLong Short Term Memory (LSTM)2020Scientific African
Östling R, Tiedemann J [64]Continuous representation of languageLong Short Term Memory (LSTM)201715th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017—Proceedings of Conference
Dione CMB, Kuhn J, Zarrieß S [60]Set of labels of grammatical class for WolofTNT tagger (hidden Markov model)/Tree tagger (decision tree model)/Support Vector Machine (SVMTool)2010Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC 2010
Esan A, Oladosu J, Oyeleye C, Adeyanju I, Olaniyan O, Okomba N, Omodunbi B, Adanigbo O [48]Translation machineRecurrent Neural Network (RNN)2020International Journal of Advanced Computer Science and Applications
Francis M, Nair KNR [61]Grammatical class taggingSupport Vector Machines (SVMs)/Conditional Random Fields(CRF)2014Proceedings of the 2014 International Conference on Advances in Computing, Communications and Informatics, ICACCI 2014
Yu Z, Mareček D, Žabokrtský Z, Zeman D [63]Delexicalized tagging (PNL and POS)Baseline/K-Nearest Neighbors (KNNs)/Support Vector Machines (SVMs)/Bagging/Random forest/Gradient tree boosting2016Proceedings of the 10th International Conference on Language Resources and Evaluation, LREC 2016
Coeckelbergs M, Hooland SV [62]Semantic annotationTopic modeling2016CEUR Workshop Proceedings
Azawi MA, Afzal MZ, Breuel TM [65]Language modelingRecurrent Neural Network (RNN)/LSTM2013ACM International Conference Proceeding Series
Visa A, Vanharanta H, Back B [79]Knowledge discoverySelf Organizing Maps (SOM)2001Proceedings of the 34th Hawaii International Conference on System Sciences
Cernansky M, Makula M, Trebaticky P, Lacko P [66]Textual correctionVariable Length Markov Models (VLMMs)/Recurrent Neural Network (RNN)2007CEUR Workshop Proceedings
Thomas D, Valenzuela ROC [56]Textual formalityText Mining/Sentiment analysis2020Journal of Research on Christian Education
Golovin SF, Shaus A, Sober B, Levin D, Na’aman N, Sass B, Turkel E, Piasetzky E, Finkelstein I [49]Authorship identification (ancient manuscripts)Machine learning2016Proceedings of the National Academy of Sciences of the United States of America
Valdivia MTM, Vega MG, López LAU [74]Categorization of textsRocchio algorithm Widrow–Hoff algorithm/Kivinen –Warmuth algorithm/Learning vector Quantization (LVQ)/Vector Space Model (VSM)2003Neurocomputing
Schrader SR, Gultepe E [78]Discovery of similarities and dissimilaritiesDocument vectors/Hierarchical clustering/Deep Embedded Clustering (DEC)2023Informatics
Stefano CD, Maniaci M, Fontanella F, Freca ASD [50]Authorship identification (ancient manuscripts)Decision Tree/K-Nearest Neighbors (KNNs)/Support Vector Machines (SVMs)2018Engineering Applications of Artificial Intelligence
Widdows D, Cohen T [75]Discovery of similarities and dissimilaritiesLatent semantic analysis2009Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Covington MA, Potter I, Snodgrass T [54]StylometryEuclidean distance/Manhattan distance2015Digital Scholarship in the Humanities
Bleiweiss A [8]Semantic groupingDeep learning/CBOW2017ICAART 2017—Proceedings of the 9th International Conference on Agents and Artificial Intelligence
Visa A, Toivonen J, Vanharanta H, Back B [79]Information recoveryEuclidean distance/prototype2001Proceedings of the 34th Annual Hawaii International Conference on System Sciences
Murai H [85]Interpretation of textsTF-IDF2013Studies in Computational Intelligence
Popa RC, Goga N, Goga M [76]Extraction of Biblical knowledgeText2Onto20192019 International Conference on Automation, Computational and Technology Management, ICACTM 2019
Zhao HJ, Liu J [87]Extraction of Biblical knowledgeRecurrent Neural Network (RNN)/Convolutional Neural Network (CNN)/BI-Direction Attention Flow (BIDAF) model/Long Short-Term Memory (LSTM)2018Proceedings of the International Joint Conference on Neural Networks
Tschuggnall M, Specht G [51]Authorship identificationNaive Bayes/Support Vector Machines (LibSVMs)2016Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Varghese N, Punithavalli M [15]Semantic analysisLatent Semantic Analysis (LSA)/Euclidean distance/Multinomial Naïve Bayes/Support Vector Machines (SVMs)2019International Journal of Scientific and Technology Research
Hu W [86]Identification of correlations/groupingLatent Dirichlet Allocation (LDA)/K-means2012Sociology Mind
Rista A Kadriu A [55]Speech recognitionCASR: A Corpus for Albanian Speech Recognition2021International Convention on Information, Communication and Electronic Technology (MIPRO)
Loekito J A Tjahyanto A Indraswari R [88]Interpretation of textsNatural Language Processing20243rd International Conference on Creative Communication and Innovative Technology (ICCIT)
Popović et al. [59]Authorship identification (ancient manuscripts)Deep learning2021PLoS ONE
Chrostowski Najda [90]Interpretation of textsNatural Language Processing/ChatGPT2024J. Relig. Educ.
Kang, J Kim, S [81]Categorization of textsAssociation/Word cloud2022Jahr–European Journal of Bioethics
Abramov, A Ivanov, V. Solovyev, V [69](NLP and POS)Embeddings/NLP2023Computación y Sistemas
Östling, R Kurfalı, M [72](Semantic annotation)RNNs/LSTM2023Computational Linguistics
Kann, A [70](NLP and POS)NLP2024LREC-COLING 2024
Samosir, F V P S [89](Interpretation of texts)Transformer2023Eighth International Conference on Informatics and Computing (ICIC)
Martinjak, M Lauc, D Skelac, I [82]Correlations/ groupingDeep learning/NER2023International Journal of Advanced Computer Science and Applications (IJACSA)
Tirosh-Becker O Becker, O M Skelac, I [71]POSPOS2022Journal of Jewish Languages
Bade, G Y Kolesnikova, O Oropeza, J L Sidorov, G [83]Categorization of textsTF-IDF2024Procedia Computer Science
Janetzki, J Melo, G Nemecek, J Whitenack, D [73]Semantic analysisGNN2024Proceedings of the 6th Workshop on Research in Computational Linguistic Typology and Multilingual NLP (SIGTYP 2024)
Krishna, K. et al. [58]Authorship identification (ancient manuscripts)Decision tree/Random forest202313th International Conference on Computing Communication and Networking Technologies (ICCCNT)
Campbell, N J [84]Correlations/
grouping
Hclust2021Old Testament Essays
Chandra, R et al. [91]Interpretation of TextsLLMs2021IEEE Access
Mishra, K et al. [92]Interpretation of textsTransformer2023International Journal of Computer Information Systems and Industrial Management Applications

References

  1. Maoz, Z.; Henderson, E.A. The world religion dataset, 1945–2010: Logic, estimates, and trends. Int. Interact. 2013, 39, 265–291. [Google Scholar]
  2. Amore, R.C. Religion and Politics: New Developments Worldwide; MDPI: Basel, Switzerland, 2019. [Google Scholar] [CrossRef]
  3. Beyers, J. Religion and culture: Revisiting a close relative. HTS Theol. Stud. 2017, 73, 1–9. [Google Scholar]
  4. Gawthrop, R.; Strauss, G. Protestantism and Literacy in Early Modern Germany. In The Past and Present Society; Oxford University Press: Oxford, UK, 1984; pp. 31–55. [Google Scholar]
  5. Lutz, D.S. The relative influence of European writers on late eighteenth-century American political thought. Am. Politi. Sci. Rev. 1984, 78, 189–197. [Google Scholar] [CrossRef]
  6. Rogerson, J.W.; Lieu, J.M. (Eds.) The interpretation of the Bible. In The Oxford Handbook of Biblical Studies; Oxford University Press Inc.: Oxford, UK, 2006. [Google Scholar]
  7. Bruce, F.F. New Testament. In The Canon of Scripture; InterVarsity Press: Lisle, IL, USA, 1988. [Google Scholar]
  8. Bleiweiss, A. A Hierarchical Book Representation of Word Embeddings for Effective Semantic Clustering and Search. In Proceedings of the 9th International Conference on Agents and Artificial Intelligence—ICAART 2017, Porto, Portugal, 24–26 January 2017; pp. 154–163. [Google Scholar]
  9. Zimmermann, J. Hermeneutics: A Very Short Introduction; Oxford University Press: Oxford, UK, 2015. [Google Scholar]
  10. Schmidt, L.K. Understanding Hermeneutics; Routledge: London, UK, 2016. [Google Scholar]
  11. Patte, D. What is structural exegesis? In What Is Structural Exegesis? Wipf and Stock Publishers: Eugene, OR, USA, 2015. [Google Scholar]
  12. Virkler, H.A.; Ayayo, K.G. Hermeneutics: Principles and Processes of Biblical Interpretation; Baker Books: Ada, MI, USA, 2023. [Google Scholar]
  13. Bartholomew, C.G. Introducing Biblical Hermeneutics: A Comprehensive Framework for Hearing God in Scripture; Baker Academic: Ada, MI, USA, 2015. [Google Scholar]
  14. Hemenway, M.; Barber, J.O.; Goodwin, S.; Saxton, M.; Beal, T. Bible as Interface: Reading Bible with Machines. In Theology of the Digital Workshop Projects; 2019; Available online: https://cursor.pubpub.org/pub/hemenway-bible-interface/release/3 (accessed on 10 December 2024).
  15. Varghese, N.; Punithavalli, M. Lexical and semantic analysis of sacred texts using machine learning and natural language processing. Int. J. Sci. Technol. Res. 2019, 8, 3133–3140. [Google Scholar]
  16. Ertel, W. Introduction to Artificial Intelligence, 2nd ed.; Springer: Weingarten, Germany, 2017. [Google Scholar]
  17. Konar, A. Artificial Intelligence and Soft Computing: Behavioral and Cognitive Modeling of the Human Brain; CRC Press: New York, NY, USA, 1999. [Google Scholar]
  18. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach; Pearson Education Ltd.: London, UK, 2002. [Google Scholar]
  19. Hunt, E.B. Artificial Intelligence; Academic Press: Cambridge, MA, USA, 2014. [Google Scholar]
  20. Zurada, J.M.; Marks, R.J.; Robinson, C.J. Computational Intelligence Imitating Life; IEEE Illustrated Edition: New York, NY, USA, 1994. [Google Scholar]
  21. de Castro, L.N. Fundamentals of natural computing: An overview. Phys. Life Rev. 2007, 4, 1–23. [Google Scholar] [CrossRef]
  22. Brabazon, A.; O’Neill, M.; McGarraghy, S. Natural Computing Algorithms; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  23. Kacprzyk, J.; Pedrycz, W. Handbook of Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  24. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar]
  25. Wang, H.; Lei, Z.; Zhang, X.; Zhou, B.; Peng, J. Machine learning basics. Deep Learn. 2016, 98, 164. [Google Scholar]
  26. Murphy, K.P. Machine Learning: A Probabilistic Perspective; The MIT Press: Cambridge, MA, USA; London, UK, 2012. [Google Scholar]
  27. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  28. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  29. Aggarwal, C.C. Data Mining: The Textbook. In Data Mining; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  30. Larose, D.T.; Larose, C.D. Discovering Knowledge in Data: An Introduction to Data Mining; John Wiley & Sons: Hoboken, NJ, USA, 2014. [Google Scholar]
  31. de Castro, L.N.; Ferrari, D.G. An Introduction to Data Mining: Basic Concepts, Algorithms and Applications; Saraiva Educação SA: São Paulo, Brazil, 2017. [Google Scholar]
  32. Qamar, U.; Raza, M.S. Data Science: Concepts and Techniques with Applications, 2nd ed.; Springer: Cham, Switzerland, 2023. [Google Scholar]
  33. Kelleher, J.D.; Tierney, B. Data Science; The MIT Press: London, UK, 2018. [Google Scholar]
  34. Dhar, V. Data science and prediction. Commun. ACM 2013, 56, 64–73. [Google Scholar] [CrossRef]
  35. Agarwal, R.; Dhar, V. Big data, data science, and analytics: The opportunity and challenge for IS research. Inf. Syst. Res. 2014, 25, 443–448. [Google Scholar] [CrossRef]
  36. Berry, M.W.; Kogan, J. Text Mining: Applications and Theory; John Wiley & Sons: Hoboken, NJ, USA, 2010. [Google Scholar]
  37. Weiss, S.M.; Indurkhya, N.; Zhang, T.; Damerau, F.J. Text Mining: Predictive Methods for Analyzing Unstructured Information; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  38. Chowdhary, K.R. Natural Language Processing. Fundamentals of Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  39. Kitchenham, B. Procedures for Performing Systematic Reviews. Keele Univ. 2004, 33, 1–26. [Google Scholar]
  40. Moher, D.; Shamseer, L.; Clarke, M.; Ghersi, D.; Liberati, A.; Petticrew, M.; Shekelle, P.; Stewart, L.A. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst. Rev. 2015, 4, 1. [Google Scholar] [CrossRef] [PubMed]
  41. Hoff, J. The Eclipse of Sacramental Realism in the Age of Reform: Re-thinking Luther’s Gutenberg Galaxy in a Post-Digital Age. In New Blackfriars; Wiley: Hoboken, NJ, USA, 2017. [Google Scholar]
  42. Sherwin, B.L. Golems in the Biotech Century. Zygon J. Relig. Sci. 2007, 42, 133–144. [Google Scholar]
  43. Asefa Ashengo, Y.; Tsegaye Aga, R.; Lemma Abebe, S. Context-based machine translation with recurrent neural network for English–Amharic translation. Mach. Transl. 2021, 35, 19–36. [Google Scholar] [CrossRef]
  44. Bria, A.; Cilia, N.D.; De Stefano, C.; Fontanella, F.; Marrocco, C.; Molinara, M.; Scotto di Freca, A.; Tortorella, F. Deep transfer learning for writer identification in medieval books. In Proceedings of the 2018 Metrology for Archaeology and Cultural Heritage (MetroArchaeo), Rome, Italy, 26–28 September 2018; pp. 455–460. [Google Scholar]
  45. Eder, M. Digital Scholarship in the Humanities. Digit. Scholarsh. 2016, 31, 457–469. [Google Scholar] [CrossRef]
  46. Cilia, N.D.; De Stefano, C.; Fontanella, F.; Marrocco, C.; Molinara, M.; Scotto Di Freca, A. An end-to-end deep learning system for medieval writer identification. Pattern Recognit. Lett. 2020, 129, 137–143. [Google Scholar] [CrossRef]
  47. Cilia, N.D.; De Stefano, C.; Fontanella, F.; Marrocco, C.; Molinara, M.; Scotto di Freca, A. An experimental comparison between deep learning and classical machine learning approaches for writer identification in medieval documents. J. Imaging 2020, 6, 89. [Google Scholar] [CrossRef]
  48. Esan, A.; Oladosu, J.; Oyeleye, C.; Adeyanju, I.; Olaniyan, O.; Okomba, N.; Omodunbi, B.; Adanigbo, O. Development of a recurrent neural network model for English to Yoruba machine translation. Int. Adv. Comput. Sci. Appl. 2020, 11, 602–609. [Google Scholar] [CrossRef]
  49. Faigenbaum-Golovin, S.; Shaus, A.; Sober, B.; Finkelstein, I. Algorithmic handwriting analysis of Judah’s military correspondence sheds light on composition of biblical texts. Proc. Natl. Acad. Sci. USA 2016, 113, 4664–4669. [Google Scholar] [CrossRef]
  50. De Stefano, C.; Maniaci, M.; Fontanella, F.; Scotto di Freca, A. Reliable writer identification in medieval manuscripts through page layout features: The “Avila” Bible case. Eng. Appl. Artif. Intell. 2018, 72, 99–110. [Google Scholar] [CrossRef]
  51. Tschuggnall, M.; Specht, G. From plagiarism detection to Bible analysis: The potential of machine learning for grammar-based text analysis. Lect. Notes Comput. Sci. Incl. Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinform. 2016, 9853, 245–248. [Google Scholar]
  52. Ziran, Z.; Pic, X.; Innocenti, S.U.; Mugnai, D.; Marinai, S. Text alignment in early printed books combining deep learning and dynamic programming. Pattern Recognit. Lett. 2020, 133, 109–115. [Google Scholar] [CrossRef]
  53. Oni, O.J.; Asahiah, F.O. Computational modelling of an optical character recognition system for Yoruba printed text images. Sci. Afr. 2020, 7, e00415. [Google Scholar] [CrossRef]
  54. Covington, M.A.; Potter, I.; Snodgrass, T. Stylometric Classification of Different Translations of the Same Text into the Same Language. Digit. Scholarsh. Humanit. 2020, 30, 322–325. [Google Scholar] [CrossRef]
  55. Rista, A.; Kadriu, A. CASR: A corpus for Albanian speech recognition. In Proceedings of the International Convention on Information, Communication and Electronic Technology (MIPRO), Opatija, Croatia, 19–22 May 2021; pp. 438–441. [Google Scholar]
  56. Thomas, D.; Valenzuela, R.O. Concerns and implications for ESL readers: Text mining analysis of the King James Version and New International Version. J. Res. Christ. Educ. 2020, 29, 259–271. [Google Scholar] [CrossRef]
  57. Vinotheni, C.; Lakshmana Pandian, S. A state of art approaches on handwriting recognition models. In Proceedings of the International Conference on Science Technology Engineering and Mathematics (ICONSTEM), Chennai, India, 21–23 August 2019; pp. 98–103. [Google Scholar]
  58. Krishna, K.; Velu, D.S.; Mahalingappa, D.R.; Srivastva, S. Avila Categorization Using Machine Learning. In Proceedings of the 2022 13th International Conference on Computing Communication and Networking Technologies (ICCCNT), Kharagpur, India, 3–5 October 2022; pp. 1–5. [Google Scholar] [CrossRef]
  59. Popović, M.; Dhali, M.A.; Schomaker, L. Artificial Intelligence Based Writer Identification Generates New Evidence for the Unknown Scribes of the Dead Sea Scrolls Exemplified by the Great Isaiah Scroll (1QIsaa). PLoS ONE 2021, 16, e0249769. [Google Scholar] [CrossRef]
  60. Dione, C.M.B.; Kuhn, J.; Zarrieß, S. Design and development of part-of-speech-tagging resources for Wolof (Niger-Congo, spoken in Senegal). In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC 2010), Valletta, Malta, 17–23 May 2010; pp. 2806–2813. [Google Scholar]
  61. Francis, M.; Ramachandran Nair, K.N. Hybrid part of speech tagger for Malayalam. In Proceedings of the 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Jaipur, India, 24–27 September 2014; pp. 1744–1750. [Google Scholar]
  62. Coeckelbergs, M.; van Hooland, S. Modeling the Hebrew Bible: Potential of topic modeling techniques for semantic annotation and historical analysis. In Proceedings of the CEUR Workshop Proceedings, Paris, France, 10–12 June 2016; Volume 1595, pp. 47–52. [Google Scholar]
  63. Yu, Z.; Mareček, D.; Žabokrtský, Z.; Zeman, D. If you even don’t have a bit of Bible: Learning delexicalized POS taggers. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC), Portorož, Slovenia, 23–28 May 2016; pp. 96–103. [Google Scholar]
  64. Östling, R.; Tiedemann, J. Continuous multilinguality with language vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, Valencia, Spain, 3–7 April 2017; Volume 2, pp. 644–649. [Google Scholar]
  65. Al Azawi, M.; Afzal, M.Z.; Breuel, T.M. Normalizing historical orthography for OCR historical documents using LSTM. In Proceedings of the ACM International Conference, Dresden, Germany, 9–12 December 2013; pp. 80–85. [Google Scholar]
  66. Cernanský, M.; Makula, M.; Trebatický, P.; Lacko, P. Text correction using approaches based on Markovian architectural bias. In Proceedings of the CEUR Workshop Proceedings, Budapest, Hungary, 19–21 September 2007; Volume 2, pp. 644–649. [Google Scholar]
  67. Bilovich, A.; Bryson, J.J. Detecting the evolution of semantics and individual beliefs through statistical analysis of language use. In Proceedings of the AAAI Fall Symposium—Technical Report, Arlington, VA, USA, 7–9 November 2008; pp. 21–26. [Google Scholar]
  68. Jaenisch, H.M.; Handley, J.W.; Case, C.T.; Songy, C.G. Graphics-based intelligent search and abstracting using data modeling. Soc. Photo-Opt. Instrum. Eng. SPIE 2002, 4788, 135–146. [Google Scholar]
  69. Abramov, A.V.; Ivanov, V.V.; Solovyev, V.D. Lexical Complexity Evaluation Based on Context for Russian Language. Comput. Sist. 2023, 27, 127–139. [Google Scholar] [CrossRef]
  70. Kann, A. Massively Multilingual Token-Based Typology Using the Parallel Bible Corpus. In Proceedings of the LREC-COLING 2024, Torino, Italy, 20–25 May 2024; ELRA Language Resource Association: Paris, France, 2024; pp. 11070–11079. [Google Scholar]
  71. Tirosh-Becker, O.; Becker, O.M. TAJA Corpus: Linguistically Tagged Written Algerian Judeo-Arabic Corpus. J. Jew. Lang. 2022, 10, 24–53. [Google Scholar] [CrossRef]
  72. Östling, R.; Kurfalı, M. Language Embeddings Sometimes Contain Typological Generalizations. Comput. Linguist. 2023, 49, 1003–1051. [Google Scholar] [CrossRef]
  73. Janetzki, J.; de Melo, G.; Nemecek, J.; Whitenack, D. GUIDE: Creating Semantic Domain Dictionaries for Low-Resource Languages. In Proceedings of the 6th Workshop on Research in Computational Linguistic Typology and Multilingual NLP (SIGTYP 2024), St. Julian’s, Malta, 22 March 2024; pp. 10–24. [Google Scholar]
  74. Martin-Valdivia, M.T.; Garcia-Vega, M.; Ureña-López, L.A. LVQ for text categorization using a multilingual linguistic resource. Neurocomputing 2003, 55, 665–679. [Google Scholar] [CrossRef]
  75. Widdows, D.; Cohen, T. Semantic vector combinations and the synoptic gospels. Lect. Notes Comput. Sci. Incl. Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinform. 2009, 5494, 251–265. [Google Scholar]
  76. Popa, R.C.; Goga, N.; Goga, M. Extracting knowledge from the Bible: A comparison between the Old and the New Testament. In Proceedings of the International Conference on Automation, Computational and Technology Management (ICACTM), London, UK, 24–26 April 2019; pp. 505–510. [Google Scholar]
  77. Geßner, A.; Kötteritzsch, C.; Lauer, G. Biblical intertextuality in a digital world: The tool GERTRUDE. In Proceedings of the ACM International Conference, Dresden, Germany, 9–12 December 2013; Volume 6, pp. 1–5. [Google Scholar]
  78. Schrader, S.R.; Gultepe, E. Analyzing Indo-European Language Similarities Using Document Vectors. Informatics 2023, 10, 76. [Google Scholar] [CrossRef]
  79. Visa, A.; Toivonen, J.; Vanharanta, H.; Back, B. Prototype matching—Finding meaning in the books of the Bible. In Proceedings of the 34th Annual Hawaii International Conference on System Sciences, Maui, HI, USA, 6 January 2001; Volume 6, pp. 505–510. [Google Scholar]
  80. Visa, A.; Toivonen, J.; Vanharanta, H.; Back, B. Contents matching defined by prototypes: Methodology verification with books of the bible. J. Manag. Inf. Syst. 2014, 18, 87–100. [Google Scholar] [CrossRef]
  81. Kang, J.; Kim, S. A Study on the Analysis of the Interrelationship between the Epic of Gilgamesh and the Bible Using Text Mining: Focusing on the Episode of the Great Flood. Jahr–Eur. J. Bioeth. 2022, 13, 371–392. [Google Scholar] [CrossRef]
  82. Martinjak, M.; Lauc, D.; Skelac, I. Towards Analysis of Biblical Entities and Names Using Deep Learning. Int. J. Adv. Comput. Sci. Appl. IJACSA 2023, 14, 491–497. [Google Scholar] [CrossRef]
  83. Bade, G.Y.; Kolesnikova, O.; Oropeza, J.L.; Sidorov, G. Lexicon-Based Language Relatedness Analysis. Procedia Comput. Sci. 2024, 244, 268–277. [Google Scholar] [CrossRef]
  84. Campbell, N.J. Counting the Jeremiahs: Machine Learning and the Jeremiah Narratives. Old Testam. Essays 2021, 34, 718–740. [Google Scholar] [CrossRef]
  85. Murai, H. Exegetical science for the interpretation of the bible: Algorithms and software for quantitative analysis of christian documents. Stud. Comput. Intell. 2013, 492, 67–86. [Google Scholar]
  86. Hu, W. Unsupervised learning of two bible books: Proverbs and psalms. Sociol. Mind 2012, 2, 325–334. [Google Scholar] [CrossRef]
  87. Zhao, H.J.; Liu, J. Finding answers from the word of God: Domain adaptation for neural networks in biblical question answering. In Proceedings of the International Joint Conference on Neural Networks, Rio de Janeiro, Brazil, 8–13 July 2018; pp. 475–482. [Google Scholar]
  88. Loekito, J.A.; Tjahyanto, A.; Indraswari, R. Design Science Research in Developing a Bible-Based Chatbot for Holistic Counseling. In Proceedings of the 3rd International Conference on Creative Communication and Innovative Technology (ICCIT), Tangerang, Indonesia, 7–8 August 2024. [Google Scholar]
  89. Samosir, F.V.P.; Mendrofa, S. IndoBerea: Evolving Semantic Search in Theological Context. In Proceedings of the 2023 Eighth International Conference on Informatics and Computing (ICIC), Manado, Indonesia, 8–9 December 2023. [Google Scholar]
  90. Chrostowski, M.; Najda, A.J. ChatGPT as a modern tool for Bible teaching in confessional religious education: A German view. J. Relig. Educ. 2024, 73, 75–94. [Google Scholar] [CrossRef]
  91. Chandra, R.; Tiwari, A.; Jain, N.; Badhe, S. Large Language Models for Metaphor Detection: Bhagavad Gita and Sermon on the Mount. IEEE Access 2024, 12, 84452–84469. [Google Scholar] [CrossRef]
  92. Mishra, K.; Shaikh, A.; Chauhan, J.; Kanojia, M. Sanskrit to English Translation: A Comprehensive Survey and Implementation Using Transformer-Based Model. Int. J. Comput. Inf. Syst. Ind. Manag. Appl. 2023, 15, 9. [Google Scholar]
Figure 1. PRISMA Protocol.
Figure 2. Bigrams generated from the selected papers.
Figure 3. Trigrams generated from the selected papers.
Figure 4. Bigrams generated from the excluded papers.
Table 1. Search terms and papers retrieved from each search engine.
Keywords | Scopus | Web of Science | Selected Papers
Bible AND Artificial Intelligence | 21 | 7 | 28
Bible AND Text Mining | 6 | 2 | 8
Bible AND Neural Network | 14 | 4 | 18
Bible AND NLP | 19 | 6 | 25
Bible AND Machine Learning | 20 | 7 | 27
Bible AND Computation Intelligence | 0 | 0 | 0
Bible AND Data Science | 2 | 0 | 2
Bible AND Data Mining | 10 | 2 | 12
Bible AND Deep Learning | 15 | 12 | 27
Number of selected papers | | | 147
Table 2. Inclusion and exclusion criteria.
Inclusion Criteria | Exclusion Criteria
Original papers | Duplicates
Papers written in English | Non-English languages
Complete text | Abstract only (partial content)
Application of AI techniques in the Holy Bible | Lack of utilization of AI techniques in the Holy Bible
Use of AI to interpret the Biblical text | Purely philosophical papers

Share and Cite

MDPI and ACS Style

Lima, B.C.; Omar, N.; Avansi, I.; de Castro, L.N. Artificial Intelligence Applied to the Analysis of Biblical Scriptures: A Systematic Review. Analytics 2025, 4, 13. https://doi.org/10.3390/analytics4020013
