Article

Multi–Dimensional Data Analysis of Deep Language in J.R.R. Tolkien and C.S. Lewis Reveals Tight Mathematical Connections

by
Emilio Matricciani
Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, 20133 Milan, Italy
AppliedMath 2024, 4(3), 927-949; https://doi.org/10.3390/appliedmath4030050
Submission received: 17 June 2024 / Revised: 29 July 2024 / Accepted: 31 July 2024 / Published: 1 August 2024

Abstract: Scholars of English Literature unanimously say that J.R.R. Tolkien influenced C.S. Lewis's writings. For the first time, we investigate this issue mathematically, using an original multi-dimensional analysis of linguistic parameters based on surface deep language variables and linguistic channels. To set our investigation in the framework of English Literature, we consider some novels written by earlier authors, such as C. Dickens, G. MacDonald and others. The deep language variables and the linguistic channels discussed in the paper are likely due to writers' unconscious design and reveal connections between texts far beyond the writers' awareness. In summary, the capacity of the extended short-term memory required of readers, the universal readability index of texts, the geometrical representation of texts and the fine tuning of linguistic channels within texts (all tools discussed at length in the paper) revealed strong connections between The Lord of the Rings (Tolkien), The Chronicles of Narnia and The Space Trilogy (Lewis) and novels by MacDonald, therefore agreeing with what the scholars of English Literature say.

1. Introduction

Unanimously, in a large number of papers—some of which are recalled here [1,2,3,4,5,6,7,8] from the vast literature on the topic—scholars of English Literature state that J.R.R. Tolkien influenced C.S. Lewis’s writings. The purpose of the present paper is not to review the large wealth of literature based on the typical approach used by scholars of literature—which is not our specialty—but to investigate this issue mathematically and statistically—a study that has never been conducted before—by using recent methods devised by researching the impact of the surface deep language variables [9,10] and linguistic channels [11] in literary texts. Since scholars mention the influence of George MacDonald on both, we consider some novels written by this earlier author. To set all these novels in the framework of English Literature, we consider some novels written by other earlier authors, such as C. Dickens and others.
After this introduction, in Section 2, we introduce the literary texts (novels) considered. In Section 3, we report the series of words, sentences and interpunctions versus chapters for some novels, and define an index useful to synthetically describe regularity due to what we think is a conscious design by authors. In Section 4, we start exploring the four deep language variables; to avoid misunderstanding, these variables, and the linguistic channels derived from them, refer to the “surface” structure of texts, not to the “deep” structure mentioned in cognitive theory. In Section 5, we report results concerning the extended short-term memory and a universal readability index; both topics address human short-term memory buffers. In Section 6, we represent literary texts geometrically in the Cartesian plane by defining linear combinations of deep language variables and calculate the probability that a text can be confused with another. In Section 7, we show the linear relationships existing between linguistic variables in the novels considered. In Section 8, we report the theory of linguistic channels. In Section 9, we apply it to the novels presently studied. Finally, in Section 10, we summarize the main findings and conclude. Several Appendices report numerical data.

2. Database of Literary Texts (Novels)

Let us first introduce the database of literary texts used in the present paper. Table 1 lists some basic statistics of the novels by Tolkien, Lewis and MacDonald. To set these texts in the framework of earlier English Literature, we consider novels by Charles Dickens (Table 2) and other authors (Table 3).
We used the digital text of each novel (a WinWord file) and counted, for each chapter, the number of characters, words, sentences and interpunctions (punctuation marks). Before doing so, we deleted the titles, footnotes and other extraneous material present in the digital texts, a burdensome task. The count is very simple, although time-consuming. WinWord directly provides the number of characters and words. The number of sentences was calculated by using WinWord to replace every full stop with a full stop: of course, this action does not change the text, but it reports the number of substitutions and therefore the number of full stops. The same procedure was repeated for question marks and exclamation marks. The sum of the three totals gives the total number of sentences in the text analyzed. The same procedure gives the total number of commas, colons and semicolons. The sum of these latter values with the total number of sentences gives the total number of interpunctions.
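The same counts can be reproduced without WinWord. A minimal Python sketch follows; the simple tokenization rules are our assumption for illustration, not the exact procedure used in the paper:

```python
import re

def count_marks(text: str) -> dict:
    """Count the quantities used in the paper: words, characters,
    sentences (., ?, !) and interpunctions (sentence marks plus , : ;)."""
    sentences = text.count(".") + text.count("?") + text.count("!")
    interpunctions = sentences + text.count(",") + text.count(":") + text.count(";")
    words = re.findall(r"[A-Za-z']+", text)  # crude word tokenizer (assumption)
    return {
        "words": len(words),
        "characters": sum(len(w) for w in words),
        "sentences": sentences,
        "interpunctions": interpunctions,
    }

sample = "The road goes ever on and on, down from the door where it began. Now far ahead the road has gone!"
print(count_marks(sample))
```

With this sample, the two sentence-ending marks give 2 sentences and, with the comma, 3 interpunctions.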
Some homogeneity can be noted in novels by the same author. The stories in The Space Trilogy and The Chronicles of Narnia, by Lewis, are told with about the same number of chapters, words and sentences, as is the case for a couple of MacDonald's novels, such as At the Back of the North Wind and Lilith: A Romance. Some homogeneity can be found in David Copperfield, Bleak House and Our Mutual Friend (by Dickens) and in The Adventures of Oliver Twist and A Tale of Two Cities. These numerical values, we think, are not due to chance but consciously managed by the authors, a topic we pursue further in the next section.

3. Conscious Design of Texts: Words, Sentences and Interpunctions versus Chapters

First, we study the linguistic variables which we think the authors deliberately designed. Specifically, we show the series of words, sentences and interpunctions versus chapter.
Let us consider a literary work (a novel) and its subdivision into disjointed blocks of text long enough to give reliable average values. Let n_S be the number of sentences contained in a text block, n_W the number of words contained in the n_S sentences, n_C the number of characters contained in the n_W words and n_I the number of punctuation marks (interpunctions) contained in the n_S sentences.
Figure 1 shows the series n_W versus the normalized chapter number for The Lord of the Rings, The Chronicles of Narnia and The Space Trilogy.
For example, the normalized value of chapter 10 in The Chronicles of Narnia is 10/110 ≈ 0.09 on the x scale of Figure 1. This normalization allows the synoptic display of novels with different numbers of chapters.
In The Chronicles of Narnia (in the following, Narnia, for brevity), we can notice a practically constant value of n_W compared to The Lord of the Rings (Lord) and The Space Trilogy (Trilogy).
Let us define a synthetic index to describe the series drawn in Figure 1, namely the coefficient of variation δ, given by the standard deviation σ_{n_W} divided by the mean value <n_W>:

δ = σ_{n_W} / <n_W>    (1)
Table 4 and Table 5 report δ for n_W, n_S and n_I. Since n_S and n_I are very well correlated with n_W, the three coefficients of dispersion are about the same.
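As a sketch of this computation (function name and sample data are ours, for illustration only):

```python
from statistics import mean, pstdev

def coefficient_of_variation(series):
    """delta = standard deviation / mean of a per-chapter series, Eq. (1)."""
    return pstdev(series) / mean(series)

# Hypothetical per-chapter word counts: nearly uniform chapters give a small delta.
n_w = [5200, 4800, 5100, 4950, 5000]
print(round(coefficient_of_variation(n_w), 3))  # 0.027
```

A nearly flat series such as Narnia's would yield a small δ, while chapters of very different lengths push δ up.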
In Narnia, δ = 0.16; in Lord, δ = 0.34; and in Trilogy, δ = 0.60. Let us also notice the minimum value δ = 0.07 in The Screwtape Letters (Screwtape).
The overall (words, sentences and interpunctions mixed together) mean value is <δ> = 0.44 and the standard deviation is σ_δ = 0.18. Therefore, Screwtape is practically more than 2 × σ_δ from the mean, as is Silmarillion on the other side, while Narnia is at about 1.5 × σ_δ. In contrast, Trilogy, Lord and The Hobbit (Hobbit) are within 1 × σ_δ.
From these results, it seems that Lewis designed the chapters of Narnia and Screwtape with an almost uniform distribution of words, sentences and interpunctions, very likely because of the intended audience of Narnia (i.e., kids) and the "letters" fictional device used in Screwtape. In Trilogy, the design seems very different (δ = 0.60, though still within 1 × σ_δ), likely due to the development of the science fiction story narrated.
Tolkien acted differently from Lewis, in that he seems to have designed chapters more randomly, although within 1 × σ_δ, as Hobbit and Lord show. An exception is The Silmarillion, published posthumously, which is far from being a "novel".
Finally, notice that the novels by MacDonald show more homogeneous values, very similar to Hobbit and Trilogy and to the other novels listed in Table 5.
In conclusion, the analysis of series of words, sentences and interpunctions per chapter does not indicate likely connections between Tolkien, Lewis and MacDonald. Each author structured their use of words, sentences and punctuation according to distinct plans, which varied not only between authors but also between different novels by the same author.
There are, however, linguistic variables that—as we have reported for modern and ancient literary texts—are not consciously designed/managed by authors; therefore, these variables are the best candidates to reveal hidden mathematical/statistical connections between texts. In the next section, we start dealing with these variables, with the specific purpose of comparing Tolkien and Lewis, although this comparison is set in the more general framework of the authors mentioned in Section 2.

4. Surface Deep Language Variables

We start exploring the four stochastic variables we called deep language variables, following our general statistical theory on alphabetical languages [9,10,11]. To avoid possible misunderstandings, these variables, and the linguistic channels derived from them, refer to the “surface” structure of texts, not to the “deep” structure mentioned in cognitive theory.
In contrast to the variables studied in Section 3, the deep language variables are likely due to unconscious design. As shown in [9,10,11], they reveal connections between texts far beyond writers' awareness; therefore, the geometrical representation of texts [10] and the fine tuning of linguistic channels [11] are tools better suited to revealing connections. They can also likely indicate the influence of one author on another.
We defined the number of characters per chapter, n_C, and the number of word intervals per chapter, n_{I_P}. The four deep language variables are [9] the number of characters per word, C_P:

C_P = n_C / n_W    (2)

the number of words per sentence, P_F:

P_F = n_W / n_S    (3)

the number of words per interpunction, referred to as the word interval, I_P:

I_P = n_W / n_I    (4)

and the number of word intervals per sentence, M_F:

M_F = n_{I_P} / n_S    (5)

Equation (5) can also be written as M_F = P_F / I_P.
Table 6, Table 7, Table 8 and Table 9 report the mean and standard deviation of these variables. Notice that these values have been calculated by weighing each chapter with its number of words, to avoid short chapters weighing as much as long ones. For example, chapter 1 of Lord has 10,097 words; therefore, its statistical weight is 10097/472173 ≈ 0.021, not 1/62 ≈ 0.016. Notice, also, that the coefficient of dispersion used in Section 3 was calculated by weighing each chapter 1/62, not 10097/472173, to visually agree with the series drawn in Figure 1.
Specifically, let M be the number of samples (i.e., chapters); then, the mean value <P_F> is given by

<P_F> = Σ_{k=1}^{M} P_F,k × n_W,k / Σ_{k=1}^{M} n_W,k    (6)

Therefore, notice, to avoid being misled, that <P_F> ≠ (1/M) Σ_{k=1}^{M} P_F,k and <P_F> ≠ Σ_{k=1}^{M} n_W,k / Σ_{k=1}^{M} n_S,k = W/S. In other words, <P_F> is given neither by the total number of words W divided by the total number of sentences S, nor by assigning the weight 1/M to every chapter. The three values coincide only if all the text blocks contain the same number of words and the same number of sentences, which does not occur. The same observations apply to all the other variables.
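The word-weighted mean of Equation (6) can be sketched as follows (the per-chapter values are illustrative, not taken from the paper's tables):

```python
def weighted_mean(values, weights):
    """Word-weighted chapter mean, Eq. (6): sum(v * w) / sum(w)."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Hypothetical per-chapter P_F (words per sentence) and word counts n_W.
p_f = [12.0, 20.0, 16.0]
n_w = [10000, 2000, 4000]

print(weighted_mean(p_f, n_w))  # 14.0: long chapters weigh more
print(sum(p_f) / len(p_f))      # 16.0: naive 1/M mean, generally different
```

The two results differ whenever chapter lengths differ, which is the point made in the text above.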
The following characteristics can be observed from Table 6, Table 7, Table 8 and Table 9. Lord and Narnia share the same <P_F>. Silmarillion is distinctly different from Lord and Hobbit, in agreement with its different coefficient of dispersion. Screwtape is distinctly different from Narnia and Trilogy. There is great homogeneity in Dickens's novels, and large homogeneity in <C_P> across all novels.
In the next sections, we use < P F > , < I P > and < M F > to calculate interesting indices connected to the short-term memory of readers.

5. Extended Short-Term Memory of Writers/Readers and Universal Readability Index

In this section, we deal with the linguistic variables that, very likely, are not consciously managed by writers who, of course, act also as readers of their own text. We first report findings concerning the extended short-term memory and then those concerning a universal readability index. Both topics address human short-term memory buffers.

5.1. Extended Short-Term Memory and Multiplicity Factor

In [12,13], we have conjectured that the human short-term memory is sensitive to two independent variables, which apparently engage two short-term memory buffers in series, constituents of what we have called the extended short-term memory (E-STM). The first buffer is modeled according to the number of words between two consecutive interpunctions, i.e., the word interval I_P, which follows Miller's 7 ± 2 law [14]; the second buffer is modeled according to the number of word intervals contained in a sentence (i.e., the variable M_F), ranging approximately from 1 to 7.
In [13], we studied the patterns (which depend on the size of the two buffers) that determine the number of sentences that theoretically can be recorded in the E–STM of a given capacity. These patterns were then compared with the number of sentences actually found in novels of Italian and English literature. We have found that most authors write for readers with short memory buffers and, consequently, are forced to reuse sentence patterns to convey multiple meanings. This behavior is quantified by the multiplicity factor α , defined as the ratio between the number of sentences in a novel and the number of sentences theoretically allowed by the two buffers, a function of I P and M F .
We found that α > 1 is more likely than α < 1, and often α ≫ 1. In the latter case, writers reuse the same patterns of numbers of words many times. Few novels show α < 1; in this case, writers do not use some or most of the theoretically available patterns. The values of α found in the novels presently studied are reported in Table 10 and Table 11.

5.2. Universal Readability Index

In Reference [14], we have proposed a universal readability index given by

G_U = 89 − 10 k C_P + 300/P_F − 6(I_P − 6)    (7)

k = <C_P,ITA> / <C_P,ENG>    (8)

In Equation (8), <C_P,ITA> = 4.48 and <C_P,ENG> = 4.24. By using Equations (7) and (8), the average value <k C_P> of any language is forced to be equal to that found in Italian, namely 4.48. The rationale for this choice is that C_P is a parameter typical of a language which, if not scaled, would bias G_U without really quantifying the reading difficulty for readers, who in their own language are used, on average, to reading shorter or longer words than in Italian. This scaling, therefore, avoids changing G_U for the only reason that a language has, on average, words shorter (as English) or longer than Italian. In any case, C_P affects Equation (7) much less than P_F or I_P.
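Under our reading of Equations (7) and (8) (the exact form of the I_P term is reconstructed from the garbled source and should be checked against [14]), the index can be sketched as:

```python
C_P_ITA = 4.48  # mean characters per word in Italian, Eq. (8)
C_P_ENG = 4.24  # mean characters per word in English, Eq. (8)

def universal_readability(c_p, p_f, i_p, k=C_P_ITA / C_P_ENG):
    """Universal readability index G_U, Eq. (7), with language scaling k."""
    return 89 - 10 * k * c_p + 300 / p_f - 6 * (i_p - 6)

# Illustrative values only (not taken from the paper's tables).
print(round(universal_readability(c_p=4.24, p_f=14.0, i_p=7.0), 1))  # 59.6
```

Note how, with c_p equal to the English mean, the scaled term 10 k C_P reduces to 10 × 4.48, i.e., the Italian reference, as the text explains.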
The values of <G_U>, calculated like the other linguistic variables (i.e., by weighing chapters according to their number of words), are reported in Table 10 and Table 11. The reader may be tempted to calculate Equation (7) by introducing the mean values reported in Table 6, Table 7, Table 8 and Table 9. This, of course, can be done, but it should be noted that the values so obtained are always less than or equal to the means calculated from the samples (hence they are lower bounds; see Appendix A). For example, for Lord, instead of 64.9, we would obtain 61.9.
It is interesting to "decode" these mean values into the minimum number of school years, Y, necessary to make a novel "easy" to read, according to the Italian school system, assumed as the reference; see Figure 1 of [15]. The results are also listed in Table 10 and Table 11.

5.3. Discussion

Several intriguing observations can be drawn from the results presented in the preceding subsections.
(a).
Silmarillion, with α = 0.2, is quite different from Tolkien's other writings. Mathematically, this is due to its large <M_F> = 3.62 and <I_P> = 8.58. In practice, the number of theoretical sentences allowed by the E-STM to read this text is only 1/α = 5 times the number of sentence patterns actually used in the text. The reader needs a powerful E-STM and reading ability, since G_U = 38.7 and Y > 13. This does not occur for Hobbit (α = 39.4, G_U = 52.4, Y = 9.9) and Lord (α = 368.1, G_U = 64.2, Y = 7.4), in which Tolkien reuses patterns many times, especially in Lord.
(b).
Lord and Narnia show very large values, α = 368.1 and α = 297.7, and very similar G_U values and school years: G_U = 64.2, Y = 7.4 and G_U = 61.1, Y = 7.9, respectively. Sentence patterns are reused many times by Lewis in Narnia, but not in Screwtape (α = 1.4), which is more difficult to read (G_U = 33.5) and requires more years of schooling, Y > 13. Moreover, Lord and Narnia have practically the same <P_F> ≈ 14.
(c).
In general, Narnia is closer to Lord than to Trilogy, although the numbers of words and sentences in Trilogy and Narnia are quite similar (Table 1). This difference between Trilogy (G_U = 56.2, Y = 9) and Narnia (G_U = 61.1, Y = 7.9) might depend on the different readers addressed (kids for Narnia, adults for Trilogy), with different reading abilities, as G_U indicates.
(d).
The novels by MacDonald show values of α and G U very similar to those of the other English novels.
(e).
Notice the homogeneity in Dickens's novels, which require about Y = 7 to 8 years of school, with readability index <G_U> = 59 to 65.
In conclusion, among the novels by Tolkien and Lewis, Lord and Narnia are the pair that address readers with very similar E-STM buffers, reuse sentence patterns in similar ways, contain the same number of words per sentence, and require the same reading ability and school years. The mathematical connections between Lord and Narnia will be further pursued in the next section, where the four deep language parameters are used to represent texts geometrically.

6. Geometrical Representation of Texts

The mean values of Table 6, Table 7, Table 8 and Table 9 can be used to assess how "close", or mathematically similar, texts are in the Cartesian coordinate plane, by defining linear combinations of deep language variables. Texts are then modeled as vectors; the representation is discussed in detail in [9,10] and briefly recalled here. An extension of this geometrical representation allows the calculation of the probability that a text may be confused with another one, a two-dimensional extension of the problem discussed in [16]. The values of the conditional probability between two texts (authors) can be considered an index indicating who influenced whom.

6.1. Vector Representation of Texts

Let us consider the following six vectors, whose components are the indicated deep language variables: R_1 = (<C_P>, <P_F>), R_2 = (<M_F>, <P_F>), R_3 = (<I_P>, <P_F>), R_4 = (<C_P>, <M_F>), R_5 = (<I_P>, <M_F>), R_6 = (<I_P>, <C_P>), and their resulting vector sum:

R = Σ_{k=1}^{6} R_k = x i + y j    (9)
The choice of which parameter represents the component in the abscissa and ordinate axes is not important because, once the choice is made, the numerical results will depend on it, but not the relative comparisons and general conclusions.
In the first quadrant of the Cartesian coordinate plane, two texts are likely mathematically connected—they show close ending points of vector (9)—if their relative Pythagorean distance is small. A small distance means that texts share a similar mathematical structure, according to the four deep language variables.
By considering the vector components x and y of Equation (9), we obtain the scatterplot shown in Figure 2, where X and Y are normalized coordinates calculated by setting Lord at the origin (X = 0, Y = 0) and Silmarillion at (X = 1, Y = 1), according to the linear transformations:

X = (x − x_Lord) / (x_Silma − x_Lord)    (10)

Y = (y − y_Lord) / (y_Silma − y_Lord)    (11)
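A sketch of Equations (9)-(11); the mean values below are placeholders, not the paper's tables, and the component layout follows the six vectors listed above:

```python
def text_vector(c_p, p_f, i_p, m_f):
    """Vector sum R of the six component vectors, Eq. (9).
    Abscissas of R_1..R_6: C_P, M_F, I_P, C_P, I_P, I_P;
    ordinates: P_F, P_F, P_F, M_F, M_F, C_P."""
    x = 2 * c_p + m_f + 3 * i_p
    y = 3 * p_f + 2 * m_f + c_p
    return x, y

def normalize(p, origin, unit):
    """Map point p so that `origin` -> (0, 0) and `unit` -> (1, 1), Eqs. (10)-(11)."""
    return ((p[0] - origin[0]) / (unit[0] - origin[0]),
            (p[1] - origin[1]) / (unit[1] - origin[1]))

# Hypothetical deep language means for the two reference texts.
lord = text_vector(c_p=4.1, p_f=14.0, i_p=6.5, m_f=2.1)
silma = text_vector(c_p=4.5, p_f=21.0, i_p=8.6, m_f=3.6)
print(normalize(lord, lord, silma))  # (0.0, 0.0) by construction
```

Any other text is then placed in the same normalized plane by `normalize(text_vector(...), lord, silma)`.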
From Figure 2, we can notice that Silmarillion and Screwtape are very far from all the other texts examined, marking their striking diversity, as already remarked; therefore, in the following analyses, we neglect them. Moreover, Pride, Vanity, Moby and Floss are grouped together and far from Trilogy, Narnia and Lord; therefore, we will not consider them further.
The complete set of Pythagorean distances d between pairs of texts is reported in Appendix B. These data synthetically describe the proximity of texts and may point scholars of literature to connections between texts not considered before.
Figure 3 shows examples of these distances concerning Lord, Narnia and Trilogy. By referring to the cases in which d < 0.2, we can observe the following:
(a).
The closest texts to Lord are Narnia, Back, Lilith, Mutual and Peter.
(b).
The closest texts to Narnia are Lord, Lilith, Bleak, Martin and Peter.
(c).
The closest texts to Trilogy are Hobbit, Martin and Peter.
Besides the proximity with earlier novels, Lord and Narnia show close proximity with each other and with two novels by MacDonald.
These remarks, however, refer to the "average" display of vectors, whose ending point depends only on mean values. The standard deviations of the four deep language variables, reported in Table 6, Table 7, Table 8 and Table 9, introduce data scattering; therefore, in the next subsection, we study this issue by calculating the probability (called "error" probability) that a text may be mathematically confused with another one.

6.2. Error Probability: An Index to Assess Who Influenced Whom

Besides the vector R of Equation (9) (due to mean values), we can consider another vector ρ, due to the standard deviation of the four deep language variables, that adds to R. In this case, the final random vector describing a text is given by

T = R + ρ    (12)

Now, to obtain some insight into this new description, we consider the area of a circle centered at the ending point of R.
We fix the magnitude (radius) ρ as follows. First, we add the variances of the deep language variables that determine the components x and y of R; let them be σ_x^2 and σ_y^2. Then, we calculate the average value σ_ρ^2 = 0.5 × (σ_x^2 + σ_y^2) and, finally, we set

ρ = σ_ρ    (13)
Now, since in calculating the coordinates x and y of R a deep language variable can be summed twice or more, we add its standard deviation (referred to as sigma) twice or more times before squaring. For example, in the x component, I_P appears three times; therefore, its contribution to the total variance along the x axis is 9 times the variance calculated from the standard deviation reported in Table 6, Table 7, Table 8 and Table 9. For Lord, for example, it is 9 × 0.51^2. After these calculations, the values of the 1-sigma circle are transformed into the normalized coordinates X, Y according to Equations (10) and (11).
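A sketch of this radius computation, under our reading of the component multiplicities (C_P twice, M_F once and I_P three times in x; P_F three times, M_F twice and C_P once in y):

```python
import math

def one_sigma_radius(s_cp, s_pf, s_ip, s_mf):
    """1-sigma radius rho = sigma_rho, Eqs. (12)-(13).
    Each variable's variance is scaled by the square of the number of
    times it appears in the component (so I_P contributes 9x its
    variance to the x axis, as in the paper's Lord example)."""
    var_x = 4 * s_cp**2 + 1 * s_mf**2 + 9 * s_ip**2
    var_y = 9 * s_pf**2 + 4 * s_mf**2 + 1 * s_cp**2
    return math.sqrt(0.5 * (var_x + var_y))

# Hypothetical standard deviations (0.51 for I_P echoes the Lord example).
print(round(one_sigma_radius(s_cp=0.2, s_pf=4.0, s_ip=0.51, s_mf=0.6), 2))  # 8.61
```

The resulting radius, like the circle's center, is then mapped into the normalized X, Y plane.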
Figure 4 shows a significant example involving Lord, Narnia, Trilogy, Back and Peter. We see that Lord can be almost fully confused with Narnia, and partially with Trilogy, but not vice versa. Lord can also be confused with Peter and Back, therefore indicating strong connections with these earlier novels.
Now, we can estimate the (conditional) probability that a text is confused with another by calculating the ratio of areas. This procedure is correct if we assume that the bivariate density of the normalized coordinates ρ_X, ρ_Y, centered at R, is uniform. Under this hypothesis, we can calculate probabilities as ratios of areas [17,18].
The hypothesis of substantial uniformity around R can be justified by noting that the coordinates X, Y are likely distributed according to a log-normal bivariate density, because the logarithm of the four deep language variables, which combine linearly in Equation (9), can be modeled as Gaussian. By the central limit theorem, we should expect an approximately Gaussian model on the linear values, but with a significantly larger standard deviation than that of the single variables. Therefore, in the area close to R, the bivariate density function should not be peaked, hence the uniform density modeling.
Now, we can calculate the following probabilities. Let A be the common area of two 1-sigma circles (i.e., the area proportional to the joint probability of two texts), let A_1 be the area of the 1-sigma circle of text 1 and A_2 the area of the 1-sigma circle of text 2. Since probabilities are proportional to areas, we obtain the following relationships:

A / A_1 = P(A_1, A_2) / P(A_1) = P(A_2/A_1) P(A_1) / P(A_1) = P(A_2/A_1)    (14)

A / A_2 = P(A_1, A_2) / P(A_2) = P(A_1/A_2) P(A_2) / P(A_2) = P(A_1/A_2)    (15)
In other words, A/A_1 gives the conditional probability P(A_2/A_1) that part of text 2 can be confused (or "contained") with text 1; A/A_2 gives the conditional probability P(A_1/A_2) that part of text 1 can be confused with text 2. Notice that these conditional probabilities depend on the distance between the two texts and on the 1-sigma radii (Appendix C).
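The area ratios of Equations (14) and (15) can be sketched with the standard circle-circle intersection (lens) formula; the paper does not state how A is computed, so this geometric implementation is our assumption:

```python
import math

def lens_area(r1, r2, d):
    """Intersection area of two circles with radii r1, r2 at center distance d."""
    if d >= r1 + r2:
        return 0.0                         # disjoint circles
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2)**2    # one circle inside the other
    a1 = r1**2 * math.acos((d**2 + r1**2 - r2**2) / (2 * d * r1))
    a2 = r2**2 * math.acos((d**2 + r2**2 - r1**2) / (2 * d * r2))
    a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                         * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - a3

def conditional_probabilities(r1, r2, d):
    """P(A2/A1) = A/A1 and P(A1/A2) = A/A2, Eqs. (14)-(15)."""
    a = lens_area(r1, r2, d)
    return a / (math.pi * r1**2), a / (math.pi * r2**2)

# Hypothetical radii and distance: a small circle well inside a large one.
p21, p12 = conditional_probabilities(r1=0.1, r2=0.3, d=0.05)
print(round(p21, 3), round(p12, 3))  # 1.0 0.111
```

The asymmetry of the two probabilities is exactly what the paper exploits: a small circle fully inside a large one can be fully confused with it, but not vice versa.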
Of course, these joint probabilities can be extended to three or more texts; e.g., in Figure 4, we could calculate the area shared by Lord, Narnia and Trilogy and the corresponding joint probability, which, however, is not done in the present paper.
We think that the conditional probabilities and the visual display of 1–sigma circles give useful clues to establish possible hidden connections between texts and, maybe, even between authors, because the variables involved are not consciously managed by them.
In Table 12, the conditional probability P ( A 2 / A 1 ) is reported in the columns; therefore, A 1 refers to the text indicated in the upper row. P ( A 1 / A 2 ) is reported in the rows; therefore, A 2 refers to the text indicated in the left column.
Notice that P ( A 2 / A 1 ) = 1 means A = A 1 ; therefore, text 1 can be fully confused with text 2. P ( A 1 / A 2 ) = 1 means A = A 2 ; therefore, text 2 can be fully confused with text 1.
For example, assuming Lord as text 1 (column 1 of Table 12) and Narnia as text 2 (row 3), we find P(A_2/A_1) = 0.974. Conversely, if we assume Narnia as text 1 (column 3) and Lord as text 2 (row 1), we find P(A_2/A_1) = 0.356. These data indicate that Lord can be confused with Narnia with a probability close to 1, but not vice versa. In other words, in the data bank considered in this paper, if a machine randomly extracts a chapter from Lord, another machine, unaware of this choice, could attribute it to Lord, but also, with decreasing probability, to Back, Peter, Narnia and Lilith.
On the contrary, if the text is extracted from Narnia, then it is more likely attributed to Peter or Trilogy than to Lord or other texts.
We think that these conditional probabilities indicate who influenced whom more. In other words, Tolkien influenced Lewis more than the opposite.
Now, we can define a synthetic parameter which highlights how much, on average, two texts can be erroneously confused with each other. The parameter is the average conditional probability (see [16] for a similar problem):

p_e = P(A_2/A_1) P(A_1) + P(A_1/A_2) P(A_2)    (16)

Since, in comparing two texts, we can assume P(A_1) = P(A_2) = 0.5, we obtain

p_e = 0.5 × [P(A_2/A_1) + P(A_1/A_2)]    (17)
If p_e = 0, there is no intersection between the two 1-sigma circles; the two texts cannot be confused with each other, and therefore there is no mathematical connection involving the deep language parameters (this happens for Screwtape and Silmarillion, which can be confused with each other, but not with the other texts). If p_e = 1, the two texts can be totally confused, and the two 1-sigma circles coincide. Appendix D reports the values of p_e for all pairs of novels.
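Equation (17), applied to the Lord/Narnia conditional probabilities quoted above from Table 12, can be sketched as (the 0.5 threshold used here is the fair-coin reference discussed next):

```python
def average_error_probability(p21, p12):
    """p_e = 0.5 * (P(A2/A1) + P(A1/A2)), Eq. (17)."""
    return 0.5 * (p21 + p12)

def confusable(p21, p12, threshold=0.5):
    """True when two texts can be confused beyond the fair-coin threshold."""
    return average_error_probability(p21, p12) > threshold

# Conditional probabilities for Lord vs. Narnia, from Table 12.
print(round(average_error_probability(0.974, 0.356), 3))  # 0.665
print(confusable(0.974, 0.356))                           # True
```

With p_e = 0.665 > 0.5, Lord and Narnia fall on the "confusable" side of the threshold.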
Now, just to allow some rough analysis, it is reasonable to assume p_e = 0.5 as a reference threshold, i.e., the probability of obtaining heads or tails when flipping a fair coin. If p_e > 0.5, then two texts can be confused beyond chance; if p_e ≤ 0.5, then two texts cannot likely be confused.
To visualize p_e, Figure 5 draws p_e when text 1 is Lord (column 1 of Table 12), Narnia (column 3) or Trilogy (column 4). We notice that p_e > 0.5 in the following cases:
(a).
Lord as text 1: Narnia, Back, Lilith, Mutual, Peter.
(b).
Narnia as text 1: Lord, Trilogy, Back, Lilith, Bleak, Mutual, Martin, Peter.
(c).
Trilogy as text 1: Hobbit, Narnia, Bleak, Martin, Back.
We can reiterate that Tolkien (Lord) appears significantly connected to Lewis (Narnia), to MacDonald (Back, Lilith) and to Barrie (Peter), but not to Dickens's novels, to which, on the contrary, Lewis appears connected.
In the next section, the four deep language variables are singled out to consider linguistic channels existing in texts. This is the analysis we have called the “fine tuning” of texts [11].

7. Linear Relationships in Literary Texts

The theory of linguistic channels, which will be revisited in the next section, is based on the regression line between linguistic variables:

y = m x    (18)
Therefore, we show examples of these linear relationships found in Lord and Narnia.
Figure 6a shows the scatterplot of n_S versus n_W for Lord and Narnia. In Narnia, the slope of the regression line is m = 0.0729 and the correlation coefficient is r = 0.7610; in Lord, m = 0.0731 and r = 0.9199. The average relationships, i.e., Equation (18), are practically identical (see also the values of <P_F> in Table 6 and Table 7), while the correlation coefficients, i.e., the scattering of the data, are not; this difference will impact the sentence channel discussed in Section 9.
Similar observations can be carried out for Figure 6b, which shows n I versus n S in Lord and Narnia. We find m = 2.0372 ,   r = 0.9609 in Lord, and m = 1.9520 and r = 0.9384 in Narnia. Appendix E reports the complete set of these parameters.
Figure 7 shows the scatterplots of Lord and Trilogy. In Trilogy, for n_S versus n_W, m = 0.0672 and r = 0.9325; for n_I versus n_S, m = 1.9664 and r = 0.9830.
Figure 8 shows the scatterplots for Lord and Back or Lilith. We see similar regression lines and data scattering. In Back (left panel), the regression line between n_S and n_W gives m = 0.0681, r = 0.9416; in Lilith (right panel), m = 0.0676, r = 0.8890. Since these values differ from those of most other novels, the similarity likely indicates the influence of MacDonald on Tolkien's writings.
In conclusion, the regression lines of Lord, Narnia and Trilogy are very similar, but they can differ in the scattering of the data. Regression lines, however, describe only one aspect of the relationship, namely the relationship between conditional average values in Equation (18); they do not consider the other aspect of the relationship, namely the scattering of data, which may not be the same even when two regression lines almost coincide, as shown above. The theory of linguistic channels, discussed in the next section, on the contrary, considers both slopes and correlation coefficients and provides a “fine tuning” tool to compare two sets of data by singling out each of the four deep language parameters.
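The slope and correlation coefficient used throughout this section can be sketched as follows (a least-squares line forced through the origin, which is our reading of Equation (18); the chapter data are illustrative):

```python
import math

def slope_through_origin(x, y):
    """Least-squares slope m of y = m*x, Eq. (18)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def correlation(x, y):
    """Pearson correlation coefficient r, measuring the scattering of the data."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical chapters: words per chapter n_W vs. sentences per chapter n_S.
n_w = [4000, 5000, 6000, 7000]
n_s = [290, 370, 430, 515]
print(round(slope_through_origin(n_w, n_s), 4), round(correlation(n_w, n_s), 4))
```

Two novels can share nearly the same m (the average relationship) while differing in r (the scattering), which is exactly the distinction the channel theory of the next section captures.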

8. Theory of Linguistic Channels

In this section, we recall the general theory of linguistic channels [11]. In a literary work, an independent (reference) variable x (e.g., n W ) and a dependent variable y (e.g., n S ) can be related by the regression line given by Equation (18).
Let us consider two different text blocks Y k and Y j , e.g., the chapters of work k and work j . Equation (18) does not give the full relationship between two variables because it links only conditional average values. We can write more general linear relationships, which take care of the scattering of the data—measured by the correlation coefficients r k and r j , respectively—around the average values (measured by the slopes m k and m j ):
y_k = m_k x + n_k
y_j = m_j x + n_j
The linear models of Equations (19) and (20) introduce additive "noise" through the stochastic variables n_k and n_j, with zero mean value [9,11,15]. The noise is due to the correlation coefficient r < 1.
We can compare two literary works by eliminating x; therefore, we compare the output variable y for the same value of the input variable x. For example, we can compare the number of sentences in two novels, for an equal number of words, by considering not only the average relationship, Equation (18), but also the scattering of the data, measured by the correlation coefficient, Equations (19) and (20). We refer to this communication channel as the "sentences channel", S–channel, and to this processing as "fine tuning" because it deepens the analysis of the data and can provide more insight into the relationship between two literary works or any other texts.
By eliminating x from Equations (19) and (20), we obtain the linear relationship between the input number of sentences in work Y k (now the reference, input text) and the number of sentences in text Y j (now the output text):
y_j = (m_j/m_k) y_k − (m_j/m_k) n_k + n_j
Compared to the new reference work Y k , the slope m j k is given by
m_jk = m_j/m_k
The noise source that produces the correlation coefficient between Y k and Y j is given by
n_jk = −(m_j/m_k) n_k + n_j = −m_jk n_k + n_j
The “regression noise–to–signal ratio”, R_m, due to m_jk ≠ 1, of the new channel is given by
R_m = (m_jk − 1)^2
The unknown correlation coefficient r j k between y j and y k is given by
r_jk = cos[arccos(r_j) − arccos(r_k)]
The “correlation noise–to–signal ratio”, R r , due to r j k < 1 , of the new channel from text Y k to text Y j is given by
R_r = [(1 − r_jk^2)/r_jk^2] × m_jk^2
Because the two noise sources are disjoint and additive, the total noise-to-signal ratio of the channel connecting text Y k to text Y j is given by
R = R m + R r
Notice that Equation (27) can be represented graphically [10]. Finally, the total and the partial signal-to-noise ratios are given by
Γ_dB = 10 × log10(1/R)
Γ_m,dB = 10 × log10(1/R_m)
Γ_r,dB = 10 × log10(1/R_r)
Of course, no channel can yield r_jk = 1 and m_jk = 1 (which would give R = 0, hence Γ_dB = ∞, a case referred to as the ideal channel) unless a text is compared with itself. In practice, we always find r_jk < 1 and m_jk ≠ 1. The slope m_jk measures the multiplicative "bias" of the dependent variable compared to the independent variable; the correlation coefficient r_jk measures how "precise" the linear best fit is.
In conclusion, the slope m j k is the source of the regression noise R m , and the correlation coefficient r j k is mostly the source of the correlation noise of the channel R r .
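The chain of Equations (21)–(30) can be sketched numerically. The following minimal Python sketch (illustrative, not part of the original software) computes the channel signal-to-noise ratios from the slopes and correlation coefficients of two texts; the input values are those of the S–channel from Lord (reference) to Narnia listed in Appendix E:

```python
import math

def channel_snr(m_k, r_k, m_j, r_j):
    """Signal-to-noise ratios (in dB) of the linguistic channel from
    reference text k to text j; see Equations (21)-(30)."""
    m_jk = m_j / m_k                                   # Eq. (22): channel slope
    r_jk = math.cos(math.acos(r_j) - math.acos(r_k))   # Eq. (25): channel correlation
    R_m = (m_jk - 1.0) ** 2                            # Eq. (24): regression noise-to-signal
    R_r = (1.0 - r_jk ** 2) / r_jk ** 2 * m_jk ** 2    # Eq. (26): correlation noise-to-signal
    R = R_m + R_r                                      # Eq. (27): total noise-to-signal
    db = lambda v: 10.0 * math.log10(1.0 / v)          # Eqs. (28)-(30)
    return db(R), db(R_m), db(R_r)

# S-channel, Lord (reference) -> Narnia; slopes/correlations from Appendix E
gamma, gamma_m, gamma_r = channel_snr(0.0731, 0.9199, 0.0729, 0.7610)
```

The result, Γ_dB ≈ 10.12 dB with Γ_m,dB ≈ 51.26 dB, reproduces the values discussed in Section 9: because Narnia's correlation coefficient is low (0.7610), almost all of the noise comes from the correlation term, while the regression lines are nearly identical.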

9. Linguistic Channels

In long texts (such as novels, essays, etc.), we can define at least four linguistic linear channels [11], namely:
(a) Sentence channel (S–channel);
(b) Interpunctions channel (I–channel);
(c) Word interval channel (WI–channel);
(d) Characters channel (C–channel).
In S–channels, the number of sentences of two texts is compared for the same number of words. These channels describe how many sentences the author of text j writes, compared to the author of text k (the reference text), by using the same number of words. Therefore, these channels are more linked to P F than to the other parameters. It is very likely that they reflect the style of the writer.
In I–channels, the number of word intervals of two texts is compared for the same number of sentences. These channels describe how many short texts between two contiguous punctuation marks (of length I P ) two authors use; therefore, these channels are more linked to M F than to other parameters. Since M F is very likely connected with the E–STM, I–channels are more related to the second buffer of readers’ E–STM than to the style of the writer.
In WI–channels, the number of words contained in a word interval (i.e., I P ) is compared for the same number of interpunctions. These channels are more linked to I P than to other parameters. Since I P is very likely connected with the E–STM, WI–channels are more related to the first buffer of readers’ E–STM than to the style of the writer.
In C–channels, the number of characters of two texts is compared for the same number of words. These channels are more related to the language used (e.g., English) than to the other parameters, unless essays or scientific/academic texts are considered, because the latter texts use, on average, longer words [9].
As an example, Table 13 reports the total and the partial signal-to-noise ratios Γ d B , Γ m , d B , Γ r , d B in the four channels by considering Lord as reference (input) text. In other words, text j is compared to text k (reference text, i.e., Lord).
Appendix F reports Γ d B for all novels considered in the paper.
Let us make some fundamental remarks on Table 13, applicable whichever text is the reference. The signal-to-noise ratios of the C–channels are practically the largest ones, ranging from 19.17 dB (Lilith) to 31.19 dB (Back). These results simply say that all authors use the same language and write texts of the same kind, namely novels, not essays or scientific/academic papers. These channels are therefore not apt to distinguish texts or authors, or to assess large differences between them.
In the three other channels, we can notice that Trilogy, Back and Lilith have the largest signal-to-noise ratios, about ~19 to ~22 dB; therefore, these novels are very similar to Lord. In other words, these channels seem to confirm the likely influence of MacDonald on both Lord and Trilogy, and the connection between Lord and Trilogy.
On the contrary, Narnia shows poor values in the S–channel (10.12 dB) and the WI–channel (7.94 dB). These low values are determined by the correlation noise, because R = R_m + R_r ≈ R_r. If we consider only Γ_m,dB (i.e., only the regression line), then we notice a strong connection with Lord, since Γ_m,dB = 51.26 dB. As we have already observed regarding Figure 6, the regression lines are practically identical but the spreading of the data is not. In Narnia, Lewis is less "regular" in (unconsciously) shaping these two linguistic channels than he is in Trilogy, or than Tolkien is in Lord.

10. Summary and Conclusions

Scholars of English Literature unanimously say that J.R.R. Tolkien influenced C.S. Lewis’s writings. For the first time, we have investigated this issue mathematically by using an original multi-dimensional analysis of linguistic parameters, based on the surface deep language variables and linguistic channels.
To set our investigation in the framework of English Literature, we have also considered some novels written by earlier authors, such as Charles Dickens and others, including George MacDonald, because scholars mention his likely influence on Tolkien and Lewis.
In our multi-dimensional analysis, only the series of words, sentences and interpunctions per chapter were, in our opinion, consciously planned by the authors and, specifically, they do not indicate strong connections between Tolkien, Lewis and MacDonald. The distribution of words, sentences and interpunctions differs from author to author and, sometimes, even from novel to novel by the same author.
On the contrary, the deep language variables and the linguistic channels, discussed in the paper, are likely due to unconscious design and can reveal connections between texts far beyond writers’ awareness.
In summary, the buffers of the extended short-term memory required of readers, the universal readability index of texts, the geometrical representation of texts and the fine tuning of linguistic channels (all tools largely discussed in the paper) have revealed strong connections between The Lord of the Rings (Tolkien), The Chronicles of Narnia and The Space Trilogy (Lewis), as well as strong connections with some novels by MacDonald, therefore substantially agreeing with what scholars of English Literature say.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The author wishes to thank the many scholars who, with great care and love, maintain digital texts available to readers and scholars of different academic disciplines, such as Perseus Digital Library and Project Gutenberg.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Universal Readability Mean Index Lower Bound

The mean value of G U is given by
<G_U> = 89 − 10k<C_P> + 300<1/P_F> − 6(<I_P> − 6)
The value calculated by introducing the mean of the variables is given by
G_U,mean = 89 − 10k<C_P> + 300/<P_F> − 6(<I_P> − 6)
Therefore
<G_U> − G_U,mean = 300 × (<1/P_F> − 1/<P_F>)
Now, it can be proved with the Cauchy–Schwarz inequality that 1/<x> ≤ <1/x>; therefore, <G_U> − G_U,mean ≥ 0; hence, <G_U> ≥ G_U,mean.
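The inequality is easy to check numerically. A quick sketch with illustrative P_F values (made-up numbers, not taken from the novels):

```python
# Numerical check of 1/<x> <= <1/x> for positive samples, hence <G_U> >= G_U,mean.
# The P_F values below are illustrative, NOT taken from the novels in the paper.
P_F = [13.5, 14.2, 12.8, 16.0, 13.9]
n = len(P_F)
mean_PF = sum(P_F) / n                       # <P_F>
mean_inv_PF = sum(1.0 / v for v in P_F) / n  # <1/P_F>
gap = 300.0 * (mean_inv_PF - 1.0 / mean_PF)  # <G_U> - G_U,mean, Equation (A3)
```

For any positive sample, gap is non-negative, confirming that the mean index <G_U> is bounded below by the index of the means, G_U,mean.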

Appendix B. Pythagorean Distance d between Pairs of Texts

Table A1. Pythagorean distance d between pairs of texts.
Novel     Lord    Hobbit  Narnia  Trilogy  Back    Lilith  Oliver  David   Bleak   Tale    Mutual  Martin  Bask    Peter
Lord      0
Hobbit    0.488   0
Narnia    0.150   0.348   0
Trilogy   0.355   0.140   0.211   0
Back      0.099   0.499   0.203   0.379    0
Lilith    0.112   0.453   0.174   0.336    0.048   0
Oliver    0.244   0.518   0.307   0.426    0.146   0.141   0
David     0.320   0.620   0.407   0.532    0.222   0.234   0.106   0
Bleak     0.231   0.312   0.170   0.217    0.201   0.153   0.211   0.316   0
Tale      0.267   0.381   0.252   0.305    0.201   0.161   0.143   0.239   0.096   0
Mutual    0.146   0.479   0.218   0.369    0.055   0.044   0.099   0.190   0.170   0.151   0
Martin    0.230   0.2612  0.109   0.139    0.241   0.197   0.294   0.400   0.096   0.192   0.230   0
Bask      0.424   0.0964  0.277   0.072    0.451   0.408   0.496   0.602   0.286   0.371   0.441   0.211   0
Peter     0.098   0.4744  0.183   0.355    0.025   0.024   0.146   0.232   0.177   0.182   0.048   0.217   0.427   0

Appendix C. Common Area between Circles

We list the Matlab code to calculate the common area between texts, downloaded from https://it.mathworks.com/matlabcentral/answers/273066-overlapping-area-between-two-circles (accessed on 15 June 2024).
Let the distance between the centers of two circles be d and their two radii be r1 and r2. Then, the area, A, of the overlap region of the two circles can be calculated as follows using Matlab's 'atan2' function:
t = sqrt((d+r1+r2)*(d+r1-r2)*(d-r1+r2)*(-d+r1+r2));
A = r1^2*atan2(t,d^2+r1^2-r2^2)+r2^2*atan2(t,d^2-r1^2+r2^2)-t/2;
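For readers without Matlab, an equivalent Python version is given below. This is an unofficial port of the snippet above; it assumes the circles partially overlap, i.e., |r1 − r2| < d < r1 + r2:

```python
import math

def circle_overlap_area(d, r1, r2):
    """Overlap area of two circles of radii r1 and r2 whose centers are
    a distance d apart. Assumes partial overlap, |r1 - r2| < d < r1 + r2;
    otherwise the square root below would receive a negative argument."""
    t = math.sqrt((d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (-d + r1 + r2))
    return (r1 ** 2 * math.atan2(t, d ** 2 + r1 ** 2 - r2 ** 2)
            + r2 ** 2 * math.atan2(t, d ** 2 - r1 ** 2 + r2 ** 2)
            - t / 2.0)
```

As a sanity check, two unit circles whose centers are one radius apart give the classical lens area 2·arccos(1/2) − √3/2 ≈ 1.2284.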

Appendix D. Conditional Error Probability

Table A2. Error probability between the indicated texts.
Text      Lord    Hobbit  Narnia  Trilogy  Back    Lilith  Oliver  David   Bleak   Tale    Mutual  Martin  Bask    Peter
Lord      ––
Hobbit    0.065   ––
Narnia    0.665   0.387   ––
Trilogy   0.320   0.759   0.627   ––
Back      0.711   0.148   0.603   0.335    ––
Lilith    0.701   0.173   0.639   0.382    0.866   ––
Oliver    0.314   0.051   0.358   0.203    0.666   0.641   ––
David     0.033   0       0.101   0        0.496   0.372   0.664   ––
Bleak     0.307   0.387   0.633   0.593    0.543   0.560   0.397   0.040   ––
Tale      0.280   0.270   0.475   0.428    0.554   0.601   0.615   0.300   0.725   ––
Mutual    0.505   0.036   0.545   0.279    0.633   0.681   0.686   0.273   0.428   0.558   ––
Martin    0.395   0.499   0.757   0.716    0.478   0.528   0.273   0.003   0.732   0.517   0.367   ––
Bask      0.164   0.830   0.504   0.873    0.211   0.241   0.077   0       0.439   0.286   0.095   0.591   ––
Peter     0.649   0.231   0.669   0.412    0.853   0.761   0.687   0.568   0.642   0.644   0.594   0.587   0.294   ––

Appendix E. Slope and Correlation Coefficient of the Regression Lines

Table A3. Slope/correlation coefficient of the regression line   y = m x , Equation (18), modeling the indicated variables (dependent/independent). We keep four digits because some novels differ only at the third and fourth digit.
Novel     Sentences/Words   Interpunctions/Sentences   Words/Interpunctions   Characters/Words
Lord      0.0731/0.9199     2.0372/0.9609              6.6134/0.9609          4.0367/0.9982
Hobbit    0.0608/0.9532     2.1010/0.9936              7.6902/0.9532          4.1014/0.9996
Narnia    0.0729/0.7610     1.9520/0.9384              6.9062/0.7991          4.0907/0.9919
Trilogy   0.0672/0.9325     1.9664/0.9830              7.3380/0.9696          4.2002/0.9976
Back      0.0681/0.9416     2.1640/0.9759              6.6045/0.9799          3.8496/0.9976
Lilith    0.0676/0.8890     2.2619/0.9488              6.2926/0.9800          4.1174/0.9863
Oliver    0.0566/0.9059     3.0893/0.9544              5.6302/0.9638          4.2248/0.9977
David     0.0537/0.9390     3.2949/0.9657              5.5775/0.9882          4.0474/0.9966
Bleak     0.0600/0.9258     2.5324/0.9715              6.5125/0.9694          4.2235/0.9923
Tale      0.0573/0.9574     2.7972/0.9785              6.1323/0.9912          4.2417/0.9983
Mutual    0.0618/0.9299     2.6814/0.9549              5.9766/0.9740          4.2197/0.9940
Martin    0.0658/0.8583     2.2243/0.9364              6.6785/0.9514          4.3171/0.9939
Bask      0.0684/0.7706     1.8517/0.9366              7.6984/0.9005          4.1320/0.9949
Peter     0.0687/0.8686     2.3080/0.9679              6.1018/0.9152          4.1117/0.9968

Appendix F. Total Signal-to-Noise Ratios Γ_dB in the Four Linguistic Channels

Table A4, Table A5, Table A6 and Table A7 report the signal-to-noise ratio Γ_dB in the channels between the input (reference) text k, reported in the first row, and the output text j, reported in the left column. For example, in Table A4, if the input text is Lord and the output text is Trilogy, then Γ_dB = 21.27 dB; vice versa, Γ_dB = 20.44 dB. A slight asymmetry is typical of linguistic channels [12,15].
Table A4. Total signal-to-noise ratios Γ d B , S–Channels.
Novel     Lord    Hobbit  Narnia  Trilogy  Back    Lilith  Oliver  David   Bleak   Tale    Mutual  Martin  Bask    Peter
Lord      ––      12.65   10.08   20.44    20.23   18.92   10.61   8.68    13.19   10.18   14.63   14.51   9.79    17.14
Hobbit    14.60   ––      8.21    19.11    19.02   14.75   15.97   17.00   21.64   24.02   23.05   12.73   8.50    13.08
Narnia    10.12   5.30    ––      8.21     7.70    11.54   6.81    4.19    6.85    4.15    7.11    13.32   23.39   13.54
Trilogy   21.27   18.00   9.58    ––       30.77   19.49   13.81   11.96   18.29   14.21   21.14   15.10   9.69    16.57
Back      21.10   17.94   8.87    30.56    ––      17.45   12.67   11.43   16.83   14.07   19.31   13.65   8.86    15.12
Lilith    19.92   13.16   12.79   19.39    17.58   ––      13.99   10.37   15.86   10.98   16.86   23.03   13.29   26.92
Oliver    12.87   17.07   10.19   15.51    14.60   15.61   ––      19.50   22.67   16.83   19.93   15.65   11.20   14.50
David     11.43   18.20   8.42    13.92    13.49   12.83   20.29   ––      19.17   21.60   17.53   12.38   9.09    11.86
Bleak     14.91   21.86   9.79    19.30    18.05   17.27   21.95   18.12   ––      19.17   30.17   15.67   10.43   15.34
Tale      12.66   24.57   7.84    15.85    15.69   13.21   16.61   20.78   19.89   ––      19.45   11.91   8.23    11.93
Mutual    16.13   22.78   9.70    21.87    20.24   18.07   18.92   16.27   29.88   18.43   ––      15.64   10.19   15.76
Martin    16.00   11.43   14.86   15.46    14.23   23.46   13.93   9.79    14.29   9.78    14.62   ––      16.34   26.65
Bask      10.92   6.53    23.97   9.38     8.78    13.09   8.49    5.56    8.33    5.38    8.51    15.69   ––      15.21
Peter     18.10   11.21   14.52   16.19    14.96   26.66   12.56   9.04    13.59   9.38    14.24   26.20   15.13   ––
Table A5. Total signal-to-noise ratios Γ d B , I–Channels.
Novel     Lord    Hobbit  Narnia  Trilogy  Back    Lilith  Oliver  David   Bleak   Tale    Mutual  Martin  Bask    Peter
Lord      ––      15.57   21.18   19.50    21.74   19.50   9.35    8.36    14.05   11.16   12.37   19.14   17.61   18.44
Hobbit    15.04   ––      11.25   19.77    19.33   13.61   9.23    8.52    13.97   11.74   11.60   12.29   10.07   16.07
Narnia    21.83   12.48   ––      15.46    16.18   17.10   8.65    7.72    12.20   9.93    11.23   18.24   25.28   15.11
Trilogy   20.07   20.66   15.33   ––       20.28   15.03   8.61    7.83    12.86   10.53   11.05   14.25   14.16   15.95
Back      20.96   18.83   14.72   19.35    ––      19.45   10.31   9.26    16.69   12.90   13.81   17.18   12.61   23.09
Lilith    18.47   12.40   15.76   13.22    18.74   ––      11.43   10.00   17.75   13.45   16.06   27.65   12.92   23.22
Oliver    5.72    5.21    4.61    4.42     7.06    8.72    ––      22.73   12.64   16.56   16.36   8.04    3.42    9.25
David     4.18    4.22    3.04    3.25     5.57    6.66    22.01   ––      10.38   14.45   12.63   5.99    1.96    7.38
Bleak     12.10   11.84   9.57    10.56    15.30   16.42   14.53   12.69   ––      20.10   21.84   14.23   7.91    20.13
Tale      8.26    9.01    6.36    7.46     10.66   11.20   17.84   16.02   19.15   ––      19.40   9.88    5.04    13.18
Mutual    9.97    8.68    8.40    8.02     11.71   14.57   17.59   14.48   21.08   20.07   ––      13.28   6.83    15.34
Martin    18.04   11.32   17.10   12.46    16.71   27.92   10.97   9.59    15.87   12.48   15.04   ––      13.93   19.38
Bask      18.77   12.04   25.74   15.14    14.57   14.71   7.92    7.11    10.98   9.06    10.12   15.52   ––      13.36
Peter     17.31   14.69   13.28   14.34    22.40   22.88   11.85   10.47   20.95   14.94   16.77   18.77   11.10   ––
Table A6. Total signal-to-noise ratios Γ d B , WI–Channels.
Novel     Lord    Hobbit  Narnia  Trilogy  Back    Lilith  Oliver  David   Bleak   Tale    Mutual  Martin  Bask    Peter
Lord      ––      16.96   8.69    19.72    21.94   20.11   15.14   12.42   28.75   14.96   18.33   29.47   13.83   15.46
Hobbit    15.61   ––      7.80    22.03    13.72   11.77   8.65    7.24    14.25   9.46    10.34   16.38   16.86   10.64
Narnia    7.94    9.59    ––      7.96     6.03    5.48    5.46    3.05    6.96    3.80    5.44    8.92    13.77   10.57
Trilogy   18.74   22.69   6.92    ––       18.24   15.14   10.32   9.40    17.94   12.40   12.81   18.24   13.92   10.84
Back      21.96   15.48   6.80    19.30    ––      26.10   14.37   14.32   26.03   19.46   19.22   19.02   11.69   12.07
Lilith    20.86   13.90   7.07    16.59    26.52   ––      17.01   17.17   24.86   22.59   24.34   18.32   11.15   12.89
Oliver    16.54   11.40   8.63    12.64    15.98   18.25   ––      18.55   17.28   16.43   23.12   15.86   10.46   16.18
David     14.43   10.55   6.50    12.03    15.89   18.35   18.72   ––      15.66   20.68   20.29   13.45   9.04    11.74
Bleak     29.00   15.86   7.97    18.98    26.27   24.36   15.99   13.95   ––      17.22   20.71   23.33   12.70   14.34
Tale      16.13   12.16   5.82    14.41    20.40   23.01   15.15   19.82   18.16   ––      19.83   14.60   9.60    10.71
Mutual    19.40   12.72   7.83    14.60    20.15   24.90   22.43   19.37   21.49   20.26   ––      17.75   10.94   14.62
Martin    29.30   17.61   9.50    19.33    18.82   17.40   14.28   11.19   22.92   13.25   16.43   ––      14.97   16.69
Bask      11.77   16.84   12.10   13.11    9.38    8.32    7.06    4.92    10.38   6.29    7.77    13.14   ––      11.52
Peter     16.67   13.01   12.48   13.17    13.36   13.42   14.94   10.26   15.40   10.80   14.27   17.92   13.59   ––
Table A7. Total signal-to-noise ratios Γ d B , C–Channels.
Novel     Lord    Hobbit  Narnia  Trilogy  Back    Lilith  Oliver  David   Bleak   Tale    Mutual  Martin  Bask    Peter
Lord      ––      29.12   23.37   27.97    26.10   19.51   26.91   32.93   22.42   26.31   23.84   21.90   26.70   31.43
Hobbit    28.87   ––      20.03   26.66    22.08   17.21   26.33   24.95   20.20   27.13   21.51   20.64   22.77   25.73
Narnia    23.14   20.07   ––      24.11    21.11   28.25   23.63   26.63   30.01   22.43   29.10   25.22   31.08   26.48
Trilogy   27.61   26.30   23.69   ––       20.81   19.94   44.44   27.92   25.21   36.70   27.87   26.30   28.81   32.34
Back      26.52   22.80   21.89   21.57    ––      19.06   21.03   25.94   19.85   20.63   20.44   18.83   22.56   23.80
Lilith    19.17   17.15   28.14   20.28    18.09   ––      20.09   21.25   26.43   19.28   24.45   23.08   23.79   21.31
Oliver    26.50   25.91   23.13   44.38    20.22   19.65   ––      26.67   24.97   39.76   27.56   26.57   27.81   30.39
David     32.88   25.17   26.81   28.28    25.48   21.54   27.08   ––      24.77   25.80   26.30   23.38   31.25   36.01
Bleak     21.76   19.71   29.73   25.11    18.84   26.05   24.98   24.22   ––      23.63   36.70   31.88   29.79   25.53
Tale      25.88   26.71   21.87   36.57    19.78   18.78   39.69   25.31   23.56   ––      25.70   25.32   25.80   28.24
Mutual    23.24   21.04   28.76   27.79    19.51   24.05   27.59   25.83   36.72   25.79   ––      32.93   32.78   27.93
Martin    21.11   19.88   24.71   25.90    17.72   22.43   26.23   22.73   31.63   25.03   32.73   ––      26.77   24.53
Bask      26.34   22.64   30.92   29.07    21.84   23.73   28.13   30.99   30.08   26.19   32.99   27.17   ––      33.27
Peter     31.19   25.68   26.39   32.56    23.22   21.33   30.67   35.87   25.93   28.59   28.28   25.08   33.36   ––

References

1. Carpenter, H. The Inklings: C.S. Lewis, J.R.R. Tolkien, Charles Williams, and Their Friends; Houghton Mifflin: Boston, MA, USA, 1979.
2. Glyer, D.P. The Company They Keep: C.S. Lewis and J.R.R. Tolkien as Writers in Community; Kent State University Press: Kent, OH, USA, 2007.
3. Duriez, C.; Porter, D. The Inklings Handbook: A Comprehensive Guide to the Lives, Thought, and Writings of C.S. Lewis, J.R.R. Tolkien, Charles Williams, Owen Barfield, and Their Friends; Chalice Press: Nashville, TN, USA, 2001.
4. Isley, W.L. C.S. Lewis on Friendship. Inklings Forever 2008, 6, 9. Available online: https://pillars.taylor.edu/inklings_forever/vol6/iss1/9 (accessed on 30 May 2024).
5. Sammons, M.C. War of the Fantasy Worlds: C.S. Lewis and J.R.R. Tolkien on Art and Imagination; Praeger: Westport, CT, USA, 2010.
6. Duriez, C. The Oxford Inklings: Lewis, Tolkien, and Their Circle; Lion Books: Oxford, UK, 2015.
7. Hooper, W. The Inklings. In C.S. Lewis and His Circle: Essays and Memoirs from the Oxford C.S. Lewis Society; White, R., Wolfe, J., Wolfe, B.N., Eds.; Oxford University Press: New York, NY, USA; Oxford, UK, 2015; pp. 197–213.
8. Gokulapriya, T. J.R.R. Tolkien's Literary Works: A Review. Int. Rev. Lit. Stud. 2022, 4, 31–39.
9. Matricciani, E. Deep Language Statistics of Italian throughout Seven Centuries of Literature and Empirical Connections with Miller's 7 ∓ 2 Law and Short-Term Memory. Open J. Stat. 2019, 9, 373–406.
10. Matricciani, E. A Statistical Theory of Language Translation Based on Communication Theory. Open J. Stat. 2020, 10, 936–997.
11. Matricciani, E. Multiple Communication Channels in Literary Texts. Open J. Stat. 2022, 12, 486–520.
12. Matricciani, E. Is Short-Term Memory Made of Two Processing Units? Clues from Italian and English Literatures down Several Centuries. Information 2024, 15, 6.
13. Matricciani, E. A Mathematical Structure Underlying Sentences and Its Connection with Short-Term Memory. AppliedMath 2024, 4, 120–142.
14. Miller, G.A. The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. Psychol. Rev. 1956, 63, 81–97.
15. Matricciani, E. Readability Indices Do Not Say It All on a Text Readability. Analytics 2023, 2, 296–314.
16. Matricciani, E. Linguistic Mathematical Relationships Saved or Lost in Translating Texts: Extension of the Statistical Theory of Translation and Its Application to the New Testament. Information 2022, 13, 20.
17. Papoulis, A. Probability & Statistics; Prentice Hall: Hoboken, NJ, USA, 1990.
18. Lindgren, B.W. Statistical Theory, 2nd ed.; MacMillan Company: New York, NY, USA, 1968.
Figure 1. Series of words versus the normalized chapter number. Blue line: The Lord of the Rings (Lord); red line: The Chronicles of Narnia (Narnia); green line: The Space Trilogy (Trilogy).
Figure 2. Normalized coordinates X and Y of the ending point of vector (5) such that Lord, blue square, is at (0,0) and Silmarillion, blue triangle pointing left, is at (1,1). Narnia: red square; Trilogy: red circle; Hobbit: blue triangle pointing right; Screwtape: red triangle pointing upward; Back: cyan triangle pointing left; Lilith: cyan triangle pointing downward; Phantastes: cyan triangle pointing right; Princess: cyan triangle pointing upward; Oliver: blue circle; David: green circle; Tale: cyan circle; Bleak: magenta circle; Mutual: black circle; Pride: magenta triangle pointing right; Vanity: magenta triangle pointing left; Moby: magenta triangle pointing downward; Mill: magenta triangle pointing upward; Alice: yellow triangle pointing right; Jungle: yellow triangle pointing downward; War: yellow triangle pointing right; Oz: green triangle pointing left; Bask: green triangle pointing right; Peter: green triangle pointing upward; Martin: green square; Finn: black triangle pointing right.
Figure 3. Pythagorean distance d between pairs of texts considering Lord (the distances referring to this case are labeled with blue circles), Narnia (red squares) and Trilogy (red circles). Key: Lord 1, Hobbit 2, Narnia 3, Trilogy 4, Back 5, Lilith 6, Oliver 7, David 8, Bleak 9, Tale 10, Mutual 11, Martin 12, Bask 13, Peter 14.
Figure 4. Normalized coordinates X and Y of the ending point of vector (5) and 1–sigma circles, such that Lord, blue square, is at (0,0) and Silmarillion, blue triangle pointing left, is (1,1). Lord: blue square (blue 1–sigma circle); Narnia: red square (red 1–sigma circle); Trilogy: red circle (dashed red 1–sigma circle); Back: cyan triangle pointing left (cyan 1–sigma circle); Peter: green triangle pointing upward (green 1–sigma circle).
Figure 5. Error probability p e versus text 2. Lord (the probabilities referring to this case are labeled with blue circles), Narnia (red squares) and Trilogy (red circles). Text key: Lord 1, Hobbit 2, Narnia 3, Trilogy 4, Back 5, Lilith 6, Oliver 7, David 8, Bleak 9, Tale 10, Mutual 11, Martin 12, Bask 13, Peter 14.
Figure 6. (a) Scatterplot of n S versus n W in Lord (blue) and Narnia (red); (b) n I versus n S in Lord (blue) and Narnia (red).
Figure 7. (a) Scatterplot of n S versus n W in Lord (blue) and Trilogy (red); (b) n I versus n S in Lord (blue) and Trilogy (red).
Figure 8. Scatterplot of the number of sentences n S versus the number of words n W : (a) Lord (blue) and Back (cyan); (b) Lord (blue) and Lilith (cyan).
Table 1. Novels written by Tolkien, Lewis and MacDonald, with year of publication. Number of chapters (i.e., the number of samples considered in calculating the regression lines reported below), total number of characters contained in the words ( C ) , total number of words W and sentences ( S ). Titles, footnotes and other extraneous material present in the digital texts have been deleted.
John R.R. Tolkien (1892–1973)  Chapters (M)  Characters (C)  Words (W)  Sentences (S)
The Hobbit (1937)  19  394,154  95,914  5890
The Lord of the Rings (1954–1955)  62  1,906,531  472,173  34,601
The Silmarillion (posthumous, 1977)  24  429,639  101,627  3346
Clive S. Lewis (1898–1963)
The Screwtape Letters (1942)  31  135,204  31,040  1330
The Space Trilogy (1938–1945)  123  1,243,141  295,240  20,124
The Chronicles of Narnia (1950–1956)  110  1,318,482  322,544  23,515
George MacDonald (1824–1905)
Phantastes: A Fairie Romance for Men and Women (1858)  25  283,676  67,551  3274
At the Back of the North Wind (1871)  38  349,041  90,697  5017
The Princess and the Goblin (1872)  32  208,325  51,090  3205
Lilith: A Romance (1895)  47  386,522  94,127  6271
Table 2. Novels by Charles Dickens, with year of publication. Number of chapters ( M , i.e., the number of samples considered in calculating the regression lines reported below), total number of characters contained in the words ( C ) , total number of words W and sentences ( S ).
Novel (Year of Publication)  Chapters (M)  Characters (C)  Words (W)  Sentences (S)
The Adventures of Oliver Twist (1837–1839)  53  679,008  160,604  9121
David Copperfield (1849–1850)  64  1,469,251  363,284  19,610
Bleak House (1852–1853)  64  1,480,523  350,020  20,967
A Tale of Two Cities (1859)  45  607,424  142,762  8098
Our Mutual Friend (1864–1865)  67  1,394,753  330,593  17,409
Table 3. Novels by authors of English Literature, with year of publication. Number of chapters ( M , i.e., the number of samples considered in calculating the regression lines reported below), total number of characters contained in the words ( C ) , total number of words W and sentences ( S ).
Novel (Author, Year)  Chapters (M)  Characters (C)  Words (W)  Sentences (S)
Pride and Prejudice (J. Austen, 1813)  61  537,005  121,934  6013
Vanity Fair (W. Thackeray, 1847–1848)  66  1,285,688  277,716  13,007
Moby Dick (H. Melville, 1851)  132  922,351  203,983  9582
The Mill On The Floss (G. Eliot, 1860)  57  888,867  207,358  9018
Alice’s Adventures in Wonderland (L. Carroll, 1865)  12  107,452  27,170  1629
Adventures of Huckleberry Finn (M. Twain, 1884)  42  427,473  110,997  5887
The Jungle Book (R. Kipling, 1894)  9  209,935  51,090  3214
The War of the Worlds (H.G. Wells, 1897)  27  265,499  60,556  3306
The Wonderful Wizard of Oz (L.F. Baum, 1900)  22  156,973  39,074  2219
The Hound of The Baskervilles (A.C. Doyle, 1901–1902)  15  245,327  59,132  4080
Peter Pan (J.M. Barrie, 1902)  17  194,105  47,097  3177
Martin Eden (J. London, 1908–1909)  45  601,672  139,281  9173
Table 4. The coefficient of dispersion in the series of words, sentences and interpunctions in the indicated novels by Tolkien, Lewis and MacDonald.
Novel  Words  Sentences  Interpunctions  Average
The Hobbit  0.49  0.48  0.50  0.49
The Lord of the Rings  0.34  0.36  0.34  0.35
The Silmarillion  0.73  0.86  0.80  0.80
The Screwtape Letters  0.07  0.17  0.14  0.13
The Space Trilogy  0.60  0.61  0.58  0.59
The Chronicles of Narnia  0.16  0.20  0.20  0.19
At the Back of the North Wind  0.54  0.61  0.56  0.57
Phantastes: A Fairie Romance for Men and Women  0.66  0.73  0.63  0.67
Lilith: A Romance  0.43  0.53  0.46  0.47
The Princess and the Goblin  0.53  0.75  0.62  0.64
Table 5. The coefficient of dispersion in the series of words, sentences and interpunctions in the indicated novels.

| Novel | Words | Sentences | Interpunctions | Average |
|---|---|---|---|---|
| The Adventures of Oliver Twist | 0.31 | 0.33 | 0.32 | 0.32 |
| David Copperfield | 0.37 | 0.38 | 0.37 | 0.37 |
| Bleak House | 0.28 | 0.31 | 0.30 | 0.30 |
| A Tale of Two Cities | 0.52 | 0.57 | 0.52 | 0.54 |
| Our Mutual Friend | 0.26 | 0.29 | 0.27 | 0.27 |
| Martin Eden | 0.29 | 0.33 | 0.29 | 0.31 |
| The Hound of The Baskervilles | 0.26 | 0.29 | 0.25 | 0.27 |
| Peter Pan | 0.29 | 0.41 | 0.33 | 0.34 |
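The coefficients of dispersion in Tables 4 and 5 compare the spread of a chapter-by-chapter series to its mean. Assuming the usual definition in this line of work, i.e., the ratio of the standard deviation to the mean of the series (a coefficient of variation), it can be computed as follows; the per-chapter word counts below are hypothetical, not taken from any novel in the tables:

```python
from statistics import mean, stdev

def dispersion(series):
    """Coefficient of dispersion, assumed here to be the ratio of the
    sample standard deviation to the mean of a chapter-by-chapter series."""
    return stdev(series) / mean(series)

# Hypothetical per-chapter word counts, for illustration only.
words_per_chapter = [5200, 7400, 3100, 6900, 4800]
d = dispersion(words_per_chapter)
print(round(d, 2))  # 0.31
```

A value near 0 (e.g., The Screwtape Letters, 0.13) indicates chapters of nearly uniform size; values above 0.5 (e.g., The Silmarillion, 0.80) indicate highly uneven chapters.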
Table 6. John R.R. Tolkien. Mean value and standard deviation (in parentheses) of ⟨C_P⟩, ⟨P_F⟩, ⟨I_P⟩, ⟨M_F⟩ in the indicated novels. Mean and standard deviation have been calculated by weighting each chapter with its number of words.

| Novel | ⟨C_P⟩ | ⟨P_F⟩ | ⟨I_P⟩ | ⟨M_F⟩ |
|---|---|---|---|---|
| The Hobbit | 4.11 (0.06) | 16.54 (2.03) | 7.93 (0.98) | 2.09 (0.12) |
| The Lord of the Rings | 4.04 (0.08) | 13.92 (1.98) | 6.68 (0.51) | 2.08 (0.20) |
| The Silmarillion | 4.23 (0.08) | 31.21 (5.32) | 8.58 (0.58) | 3.62 (0.42) |
Table 7. Clive S. Lewis. Mean value and standard deviation (in parentheses) of ⟨C_P⟩, ⟨P_F⟩, ⟨I_P⟩, ⟨M_F⟩ in the indicated novels. Mean and standard deviation have been calculated by weighting each chapter with its number of words.

| Novel | ⟨C_P⟩ | ⟨P_F⟩ | ⟨I_P⟩ | ⟨M_F⟩ |
|---|---|---|---|---|
| The Screwtape Letters | 4.36 (0.12) | 23.95 (3.82) | 9.72 (1.00) | 2.47 (0.32) |
| The Space Trilogy | 4.21 (0.16) | 15.25 (3.05) | 7.47 (0.98) | 2.03 (0.22) |
| The Chronicles of Narnia | 4.09 (0.09) | 13.97 (1.94) | 7.10 (0.89) | 1.97 (0.15) |
Table 8. George MacDonald. Mean value and standard deviation (in parentheses) of ⟨C_P⟩, ⟨P_F⟩, ⟨I_P⟩, ⟨M_F⟩ in the indicated novels. Mean and standard deviation have been calculated by weighting each chapter with its number of words.

| Novel | ⟨C_P⟩ | ⟨P_F⟩ | ⟨I_P⟩ | ⟨M_F⟩ |
|---|---|---|---|---|
| At the Back of the North Wind | 3.85 (0.11) | 15.52 (3.34) | 6.76 (0.77) | 2.29 (0.35) |
| Phantastes: A Fairie Romance for Men and Women | 4.20 (0.12) | 21.15 (3.58) | 6.43 (0.49) | 3.28 (0.45) |
| Lilith: A Romance | 4.11 (0.25) | 15.87 (3.89) | 6.43 (0.57) | 2.45 (0.42) |
| The Princess and the Goblin | 4.08 (0.14) | 17.81 (4.05) | 7.09 (1.22) | 2.46 (0.53) |
Table 9. Other authors. Mean value and standard deviation (in parentheses) of ⟨C_P⟩, ⟨P_F⟩, ⟨I_P⟩, ⟨M_F⟩ in the indicated novels. Mean and standard deviation have been calculated by weighting each chapter with its number of words.

| Novel | ⟨C_P⟩ | ⟨P_F⟩ | ⟨I_P⟩ | ⟨M_F⟩ |
|---|---|---|---|---|
| The Adventures of Oliver Twist | 4.23 (0.09) | 18.04 (3.13) | 5.70 (0.52) | 3.16 (0.34) |
| David Copperfield | 4.04 (0.12) | 18.83 (2.50) | 5.61 (0.30) | 3.35 (0.33) |
| Bleak House | 4.23 (0.14) | 16.95 (2.21) | 6.59 (0.49) | 2.57 (0.21) |
| A Tale of Two Cities | 4.26 (0.12) | 18.27 (4.24) | 6.19 (0.46) | 2.93 (0.46) |
| Our Mutual Friend | 4.22 (0.12) | 16.46 (2.01) | 6.03 (0.37) | 2.73 (0.27) |
| Pride and Prejudice | 4.40 (0.14) | 21.31 (5.02) | 7.16 (0.46) | 2.95 (0.46) |
| Vanity Fair | 4.63 (0.08) | 21.95 (3.67) | 6.73 (0.63) | 3.25 (0.39) |
| Moby Dick | 4.52 (0.16) | 23.82 (7.44) | 6.45 (0.99) | 3.64 (0.80) |
| The Mill On The Floss | 4.29 (0.13) | 23.84 (4.99) | 7.09 (0.69) | 3.35 (0.48) |
| Alice’s Adventures in Wonderland | 3.96 (0.08) | 17.19 (3.20) | 5.79 (0.55) | 2.95 (0.28) |
| Adventures of Huckleberry Finn | 3.85 (0.10) | 19.39 (3.12) | 6.63 (0.67) | 2.94 (0.48) |
| The Jungle Book | 4.11 (0.09) | 16.46 (3.09) | 7.14 (0.53) | 2.29 (0.30) |
| The War of the Worlds | 4.38 (0.18) | 19.22 (4.13) | 7.67 (0.92) | 2.48 (0.31) |
| The Wonderful Wizard of Oz | 4.02 (0.10) | 17.90 (2.23) | 7.63 (0.64) | 2.34 (0.15) |
| The Hound of The Baskervilles | 4.15 (0.12) | 15.07 (3.16) | 7.83 (0.94) | 1.91 (0.22) |
| Peter Pan | 4.12 (0.09) | 15.65 (3.98) | 6.35 (0.92) | 2.44 (0.37) |
| Martin Eden | 4.32 (0.13) | 15.61 (2.71) | 6.76 (0.64) | 2.30 (0.26) |
Table 10. Multiplicity factor α, universal readability index ⟨G_U⟩ and number of school years Y in the indicated novels by Tolkien, Lewis, MacDonald.

| Novel | α | ⟨G_U⟩ | Y |
|---|---|---|---|
| The Hobbit | 39.4 | 52.4 | 9.9 |
| The Lord of the Rings | 368.1 | 64.2 | 7.4 |
| The Silmarillion | 0.2 | 38.7 | >13 |
| The Screwtape Letters | 1.4 | 33.5 | >13 |
| The Space Trilogy | 186.3 | 56.2 | 9.0 |
| The Chronicles of Narnia | 297.7 | 61.1 | 7.9 |
| At the Back of the North Wind | 26.3 | 63.9 | 7.5 |
| Phantastes: A Fairie Romance for Men and Women | 24.3 | 56.6 | 8.9 |
| Lilith: A Romance | 1.4 | 63.0 | 7.5 |
| The Princess and the Goblin | 8.5 | 58.2 | 8.4 |
Table 11. Multiplicity factor α, universal readability index ⟨G_U⟩ and number of school years Y in the indicated novels of English Literature.

| Novel | α | ⟨G_U⟩ | Y |
|---|---|---|---|
| The Adventures of Oliver Twist | 9.46 | 63.19 | 7.5 |
| David Copperfield | 12.63 | 64.78 | 7.2 |
| Bleak House | 56.98 | 58.74 | 8.3 |
| A Tale of Two Cities | 11.89 | 59.91 | 8.0 |
| Our Mutual Friend | 43.41 | 62.68 | 7.6 |
| Pride and Prejudice | 5.20 | 50.33 | 10.4 |
| Vanity Fair | 5.26 | 49.74 | 10.5 |
| Moby Dick | 1.56 | 52.63 | 9.9 |
| The Mill On The Floss | 2.17 | 50.22 | 10.5 |
| Alice’s Adventures in Wonderland | 2.90 | 59.27 | 8.1 |
| Adventures of Huckleberry Finn | 7.05 | 60.42 | 8.0 |
| The Jungle Book | 14.10 | 57.59 | 8.6 |
| The War of the Worlds | 6.72 | 49.05 | 10.8 |
| The Wonderful Wizard of Oz | 7.02 | 53.83 | 9.5 |
| The Hound of The Baskervilles | 43.87 | 54.87 | 9.2 |
| Peter Pan | 13.07 | 63.60 | 7.5 |
| Martin Eden | 46.33 | 58.53 | 8.2 |
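The universal readability index ⟨G_U⟩ of Tables 10 and 11 generalises, in the author's earlier work [9,10], the Italian GULPEASE index so that it can be applied across languages; the language-dependent constants of the universal index are not reproduced in these tables. As a hedged sketch of the underlying GULPEASE form only (not the exact index tabulated above), applied to the Pride and Prejudice totals of Table 3:

```python
def gulpease(C, W, S):
    """Italian GULPEASE readability index, computed from total characters C,
    words W and sentences S.  The universal index G_U used in the tables
    adapts this form with constants not given in this excerpt."""
    return 89 + (300 * S - 10 * C) / W

# Totals for Pride and Prejudice (Table 3).
G = gulpease(C=537_005, W=121_934, S=6_013)
print(round(G, 2))  # 59.75
```

The tabulated ⟨G_U⟩ = 50.33 for the same novel differs from this value, as expected from the universal index's additional corrections; higher values of the index correspond to fewer school years Y required of readers.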
Table 12. Conditional probability between the indicated novels. P(A₂/A₁) is reported in the columns; therefore, A₁ refers to the text indicated in the upper row. P(A₁/A₂) is reported in the rows; therefore, A₂ refers to the text indicated in the left column. For example, assuming Lord as text 1 (column 1) and Narnia as text 2 (row 3), we find P(A₂/A₁) = 0.974; vice versa, assuming Narnia as text 1 (column 3) and Lord as text 2 (row 1), we find P(A₂/A₁) = 0.356.

| Novel | Lord | Hobbit | Narnia | Trilogy | Back | Lilith | Oliver | David | Bleak | Tale | Mutual | Martin | Bask | Peter |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 Lord | 1 | 0.031 | 0.356 | 0.142 | 0.423 | 0.511 | 0.277 | 0.041 | 0.307 | 0.231 | 0.619 | 0.301 | 0.078 | 0.299 |
| 2 Hobbit | 0.099 | 1 | 0.421 | 0.731 | 0.171 | 0.225 | 0.074 | 0 | 0.592 | 0.376 | 0.060 | 0.665 | 0.833 | 0.227 |
| 3 Narnia | 0.974 | 0.354 | 1 | 0.550 | 0.647 | 0.781 | 0.489 | 0.166 | 0.927 | 0.625 | 0.886 | 0.949 | 0.462 | 0.602 |
| 4 Trilogy | 0.498 | 0.786 | 0.704 | 1 | 0.400 | 0.510 | 0.297 | 0 | 0.921 | 0.608 | 0.473 | 0.978 | 0.908 | 0.421 |
| 5 Back | 1 | 0.124 | 0.559 | 0.270 | 1 | 0.997 | 0.866 | 0.796 | 0.763 | 0.692 | 1 | 0.566 | 0.179 | 0.707 |
| 6 Lilith | 0.891 | 0.121 | 0.498 | 0.254 | 0.735 | 1 | 0.741 | 0.559 | 0.762 | 0.661 | 1 | 0.546 | 0.169 | 0.521 |
| 7 Oliver | 0.352 | 0.029 | 0.227 | 0.108 | 0.466 | 0.540 | 1 | 0.913 | 0.444 | 0.579 | 0.917 | 0.239 | 0.043 | 0.378 |
| 8 David | 0.024 | 0 | 0.035 | 0 | 0.195 | 0.186 | 0.416 | 1 | 0.029 | 0.173 | 0.262 | 0.001 | 0 | 0.168 |
| 9 Bleak | 0.307 | 0.182 | 0.339 | 0.264 | 0.323 | 0.437 | 0.350 | 0.051 | 1 | 0.598 | 0.526 | 0.558 | 0.208 | 0.296 |
| 10 Tale | 0.330 | 0.165 | 0.327 | 0.248 | 0.417 | 0.541 | 0.650 | 0.427 | 0.852 | 1 | 0.774 | 0.484 | 0.176 | 0.385 |
| 11 Mutual | 0.390 | 0.012 | 0.204 | 0.085 | 0.266 | 0.361 | 0.454 | 0.285 | 0.331 | 0.342 | 1 | 0.205 | 0.031 | 0.188 |
| 12 Martin | 0.490 | 0.333 | 0.565 | 0.455 | 0.389 | 0.509 | 0.307 | 0.004 | 0.906 | 0.551 | 0.529 | 1 | 0.396 | 0.384 |
| 13 Bask | 0.250 | 0.826 | 0.545 | 0.838 | 0.244 | 0.312 | 0.110 | 0 | 0.669 | 0.397 | 0.160 | 0.785 | 1 | 0.289 |
| 14 Peter | 1 | 0.234 | 0.736 | 0.403 | 1 | 1 | 0.996 | 0.968 | 0.988 | 0.904 | 1 | 0.790 | 0.300 | 1 |
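Because the matrix of Table 12 is asymmetric, the order of lookup matters: the column selects the reference text A₁ and the row the text A₂. A minimal sketch of the lookup, encoding only the 3 × 3 corner needed to reproduce the caption's example (all values copied from the table):

```python
# P[a2][a1] = P(A2/A1): row a2 is the target text, column a1 the reference.
# Only the first three novels of Table 12 are encoded here.
P = {
    "Lord":   {"Lord": 1.0,   "Hobbit": 0.031, "Narnia": 0.356},
    "Hobbit": {"Lord": 0.099, "Hobbit": 1.0,   "Narnia": 0.421},
    "Narnia": {"Lord": 0.974, "Hobbit": 0.354, "Narnia": 1.0},
}

def p_cond(a2, a1):
    """Conditional probability P(A2/A1) read column-first from Table 12."""
    return P[a2][a1]

print(p_cond("Narnia", "Lord"))  # 0.974, as in the caption
print(p_cond("Lord", "Narnia"))  # 0.356, the reverse direction
```

The asymmetry (0.974 versus 0.356) is what lets the analysis distinguish which text plausibly "contains" the other in the geometrical representation discussed in the paper.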
Table 13. Total and partial signal-to-noise ratios Γ_dB, Γ_m,dB, Γ_r,dB in the four channels, by considering Lord as reference (input) text.

| Novel | Γ_dB (S) | Γ_m,dB (S) | Γ_r,dB (S) | Γ_dB (I) | Γ_m,dB (I) | Γ_r,dB (I) | Γ_dB (WI) | Γ_m,dB (WI) | Γ_r,dB (WI) | Γ_dB (C) | Γ_m,dB (C) | Γ_r,dB (C) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Hobbit | 14.60 | 15.48 | 21.94 | 15.04 | 30.08 | 15.18 | 15.61 | 15.77 | 30.19 | 28.87 | 35.90 | 29.83 |
| Narnia | 10.12 | 51.26 | 10.12 | 21.83 | 27.57 | 23.18 | 7.94 | 27.08 | 7.99 | 23.14 | 37.47 | 23.30 |
| Trilogy | 21.27 | 21.86 | 30.24 | 20.07 | 29.18 | 20.64 | 18.74 | 19.21 | 28.63 | 27.61 | 27.85 | 40.30 |
| Back | 21.10 | 23.30 | 25.11 | 20.96 | 24.12 | 23.82 | 21.96 | 57.42 | 21.96 | 26.52 | 26.68 | 41.05 |
| Lilith | 19.92 | 22.47 | 23.44 | 18.47 | 19.15 | 26.87 | 20.86 | 26.28 | 22.33 | 19.17 | 33.98 | 19.31 |
| Oliver | 12.87 | 12.93 | 31.51 | 5.72 | 5.74 | 29.30 | 16.54 | 16.56 | 40.83 | 26.50 | 26.63 | 41.73 |
| David | 11.43 | 11.52 | 28.37 | 4.18 | 4.19 | 30.77 | 14.43 | 16.10 | 19.37 | 32.88 | 51.53 | 32.94 |
| Bleak | 14.91 | 14.93 | 38.01 | 12.10 | 12.29 | 25.80 | 29.00 | 36.33 | 29.88 | 21.76 | 26.69 | 23.45 |
| Tale | 12.66 | 13.31 | 21.25 | 8.26 | 8.56 | 19.99 | 16.13 | 22.76 | 17.20 | 25.88 | 25.89 | 55.01 |
| Mutual | 16.13 | 16.22 | 33.05 | 9.97 | 10.00 | 31.20 | 19.40 | 20.33 | 26.55 | 23.24 | 26.87 | 25.70 |
| Martin | 16.00 | 20.01 | 18.20 | 18.04 | 20.74 | 21.38 | 29.30 | 40.14 | 29.68 | 21.11 | 23.16 | 25.34 |
| Bask | 10.92 | 23.84 | 11.14 | 18.77 | 20.81 | 23.03 | 11.77 | 15.70 | 14.02 | 26.34 | 32.54 | 27.53 |
| Peter | 18.10 | 24.41 | 19.25 | 17.31 | 17.53 | 30.45 | 16.67 | 22.23 | 18.09 | 31.19 | 34.62 | 33.81 |
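The tabulated entries are consistent with combining the two partial signal-to-noise ratios as independent noise sources, i.e., adding noise powers in linear units, 1/Γ = 1/Γ_m + 1/Γ_r, and converting back to decibels. A sketch of this combination, checked against the S-channel of The Hobbit (Table 13, partials 15.48 dB and 21.94 dB, total 14.60 dB):

```python
import math

def total_snr_db(gamma_m_db, gamma_r_db):
    """Combine two partial SNRs (in dB) by summing noise powers in linear
    units, 1/Gamma = 1/Gamma_m + 1/Gamma_r, then convert back to dB."""
    gm = 10 ** (gamma_m_db / 10)
    gr = 10 ** (gamma_r_db / 10)
    return 10 * math.log10(1 / (1 / gm + 1 / gr))

# S-channel of The Hobbit: the partials reproduce the tabulated total.
total = total_snr_db(15.48, 21.94)
print(f"{total:.2f} dB")
```

Note how the total is always dominated by the smaller partial: in the Narnia S-channel, Γ_m = 51.26 dB is so large that the total coincides with Γ_r = 10.12 dB.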