Article

A Mathematical Structure Underlying Sentences and Its Connection with Short–Term Memory

by
Emilio Matricciani
Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, 20133 Milan, Italy
AppliedMath 2024, 4(1), 120-142; https://doi.org/10.3390/appliedmath4010007
Submission received: 9 December 2023 / Revised: 12 January 2024 / Accepted: 15 January 2024 / Published: 18 January 2024

Abstract

The purpose of the present paper is to further investigate the mathematical structure of sentences—proposed in a recent paper—and its connections with human short–term memory. This structure is defined by two independent variables which apparently engage two short–term memory buffers in series. The first buffer is modelled according to the number of words between two consecutive interpunctions—a variable referred to as the word interval, $I_P$—which follows Miller’s 7 ± 2 law; the second buffer is modelled by the number of word intervals contained in a sentence, $M_F$, ranging approximately from one to seven. These values result from studying a large number of literary texts belonging to ancient and modern alphabetical languages. After studying the numerical patterns (combinations of $I_P$ and $M_F$) that determine the number of sentences that can theoretically be recorded in the two memory buffers—a number which increases with $I_P$ and $M_F$—we compare the theoretical results with those actually found in novels from Italian and English literature. We have found that most writers, in both languages, write for readers with small memory buffers and, consequently, are forced to reuse sentence patterns to convey multiple meanings.

1. Does the Short–Term Memory Process Words with Two Independent Buffers in Series?

Recently [1], we proposed a well–grounded conjecture that a sentence—read or pronounced, as the two activities are processed similarly by the brain [2]—is elaborated by the short–term memory (STM) with two independent processing units in series, which have similar buffer sizes. The clues for conjecturing this model emerged from considering many novels belonging to Italian and English literature. In [1], we showed that there are no significant mathematical/statistical differences between the two literary corpora, according to the surface deep–language variables. In other words, the mathematical surface structure of alphabetical languages—a creation of the human mind—seems to be deeply rooted in humans, independently of the particular language used.
A two–unit STM processing can be justified according to how a human mind seems to memorize “chunks” of information written in a sentence. Although simple and related to the surface of language, the model seems to describe mathematically the input–output characteristics of a complex and largely unknown mental process.
According to [1], the first processing unit is linked to the number of words between two contiguous interpunctions—the variable indicated by $I_P$ and termed the word interval (Appendix A lists the mathematical symbols used in the present paper)—approximately ranging within Miller’s 7 ± 2 law [3,4,5,6,7,8,9,10,11,12]. The second unit is linked to the number $M_F$ of $I_P$’s contained in a sentence, referred to as the extended STM, or E–STM, ranging approximately from one to six. We have shown that the capacity (expressed in words) required to process a sentence ranges from 8.3 to 61.2 words, values that can be converted into time by assuming a reading speed. This conversion gives the range 2.6~19.5 s for a fast reader [13] and 5.3~30.1 s for an average reader of novels, values that are well supported by the experiments reported in the literature [14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29].
The E–STM must not be confused with the intermediate memory [30,31]. It is not modelled by studying neuronal activity, but by studying the surface aspects of human communication, such as words and interpunctions, whose effects writers and readers have experienced since the invention of writing.
A model of STM processing based on two units in series has never been considered in the literature before [1,32]. The reader is very likely aware that the literature on the STM and its various facets is very large and multidisciplinary, but nobody—as far as we know—has ever considered the connections we found and discussed in [1,32]. Moreover, a sentence conveys meaning; therefore, the theory we further develop in the present paper might be a starting point for arriving at an information theory that includes meaning.
Currently, many scholars are attempting to arrive at a “semantic communication” theory or a “semantic information” theory, but the results are still, in our opinion, in their infancy [33,34,35,36,37,38,39,40,41]. These theories, like those concerning the STM, have not considered the main “ingredients” of our theory—namely $I_P$ and $M_F$—as a starting point for including meaning, which is still a very open issue.
Figure 1 sketches the flowchart of the two processing units [1]. The words $p_1, p_2, \dots, p_j$ are stored in the first buffer, up to $j$ items—approximately in Miller’s range—until an interpunction is introduced to fix the length of $I_P$. The word interval $I_P$ is then stored in the second buffer, up to $k$ items (from about one to six), until the sentence ends. The process is then repeated for the next sentence.
The purpose of the present paper is to further investigate the mathematical structure underlying sentences, both theoretically and experimentally, by considering the novels previously mentioned [1] listed in Table A1 for Italian literature and in Table A2 for English literature.
After this introduction, in Section 2 we study the probability distribution function (PDF) of sentence size—measured in words—recordable by an E–STM buffer made of $C_F$ cells (this parameter plays the role of $M_F$). In other words, we study and discuss the lengths of the sentences that humans can conceive with an E–STM made of $C_F$ memory cells.
In Section 3, we study the number of sentences, with the same number of words, that $C_F$ cells can process. This is the complementary issue of Section 2: how many sentences with a constant number of words humans can conceive based solely on an E–STM of $C_F$ cells.
In Section 4, we compare the number of sentences that authors of Italian and English literature actually wrote for their novels to the number of sentences theoretically available to them, by defining a multiplicity factor. In Section 5, we define a mismatch index, which synthetically measures to what extent a writer uses the number of sentences that are theoretically available. In Section 6, we show that the parameters studied increase with the year of novel publication. Finally, in Section 7, we summarize the main results and propose future work.

2. Probability Distribution of Sentence Length versus E–STM Buffer Size

First, we study the conditional PDF of sentence length, measured in words $W$—i.e., the parameter which, in long texts such as chapters, gives the $P_F$ of each chapter—recordable in an E–STM buffer made of $C_F$ cells, i.e., the parameter which gives $M_F$ in chapters. Second, we study the overlap of the PDFs, because this overlap gives interesting indications.

2.1. Probability Distribution of Sentence Length

To estimate the PDF of sentence length, we ran a Monte Carlo simulation based on the PDF of $I_P$ obtained in [1] by merging the two literatures mentioned in Section 1.
In [1], we have shown that the PDFs of $I_P$, $P_F$ and $M_F$—as previously mentioned, these averages refer to single chapters of the novels—can be modelled with a three-parameter log–normal density function [42] (natural logs):
$$f(x)=\frac{1}{\sqrt{2\pi}\,\sigma_x\,(x-1)}\exp\left\{-\frac{1}{2}\left[\frac{\ln(x-1)-\mu_x}{\sigma_x}\right]^2\right\},\qquad x\ge 1\tag{1}$$
In Equation (1), $\mu_x$ and $\sigma_x$ are, respectively, the mean value and the standard deviation of the log–normal PDF. Table 1 reports these values for the three deep-language variables.
The Monte Carlo simulation steps are as follows (a minimal code sketch implementing them follows the list):
  • Consider a buffer made of $C_F$ cells. The sentence then contains $C_F$ word intervals: for example, if $C_F = 3$, the sentence contains two interpunctions followed by a full stop, a question mark, or an exclamation mark.
  • Generate $C_F$ independent values of $I_P$ according to the log–normal model given by Equation (1) and Table 1. The independence of $I_P$ from one cell to another is reasonable [1]. In detail, from a random number generator of standard Gaussian variables $X_i$ (zero mean and unit standard deviation), we use the relationship $X_i = (y_i - \mu_x)/\sigma_x$; therefore, the three-parameter log–normal variable $I_{P,i} \ge 1$ is given by $I_{P,i} = \exp(y_i) + 1 = \exp(\sigma_x X_i + \mu_x) + 1$.
  • Add the numbers of words contained in the $C_F$ cells to obtain $W$:
$$W=\sum_{i=1}^{C_F} I_{P,i}\tag{2}$$
  • Repeat steps one through three many times (we repeated them 100,000 times, i.e., we simulated 100,000 sentences of different lengths) to obtain a stable conditional PDF of $W$.
  • Repeat steps one through four for another $C_F$ to obtain another PDF.
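The following minimal Python sketch (our own illustration, not the author’s original code) implements the five steps with NumPy, using the Table 1 parameters for $I_P$; the means and standard deviations of its output should scale as $C_F$ and $\sqrt{C_F}$, as derived below.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Log-normal parameters of I_P (natural logs), from Table 1.
MU_IP, SIGMA_IP = 1.689, 0.180

def simulate_sentence_lengths(c_f, n_sentences=100_000):
    """Steps 1-3: draw C_F independent word intervals per sentence from the
    three-parameter log-normal of Equation (1) and sum them, Equation (2)."""
    x = rng.standard_normal((n_sentences, c_f))   # standard Gaussian X_i
    i_p = np.exp(SIGMA_IP * x + MU_IP) + 1.0      # I_{P,i} = exp(sigma_x X_i + mu_x) + 1
    return i_p.sum(axis=1)                        # W = sum of the C_F word intervals

# Steps 4-5: one conditional PDF of W per buffer size C_F.
for c_f in range(2, 9):
    w = simulate_sentence_lengths(c_f)
    print(f"C_F = {c_f}: mean = {w.mean():6.2f} words, std = {w.std():5.2f} words")
```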
Figure 2 shows the conditional PDF for several values of $C_F$. Each PDF can be very well modelled by a Gaussian PDF $f_{C_F}(x)$, because the probability of getting unacceptable negative values is negligible in all of the PDFs shown in Figure 2. For example, for $C_F = 3$, the mean value and the standard deviation are, respectively, $m_3 = 18.00$ words and $s_3 = 1.79$ words.
In general terms [42], the mean value of Equation (2) is given by:
$$m_{C_F}=\langle W\rangle=\left\langle\sum_{i=1}^{C_F} I_{P,i}\right\rangle=\sum_{i=1}^{C_F}\langle I_{P,i}\rangle=C_F\,\langle I_P\rangle\tag{3}$$
Therefore, $m_{C_F}$ is proportional to $C_F$. As for the standard deviation of $W$: if the $I_{P,i}$’s are independent—as we assume—then the variance $s_{C_F}^2$ of $W$ is given by:
$$s_{C_F}^2=\sum_{i=1}^{C_F}\sigma_{I_P,i}^2=C_F\,\sigma_{I_P}^2\tag{4}$$
Therefore, the standard deviation $s_{C_F}$ is proportional to $\sqrt{C_F}$. Finally, according to the central limit theorem [42], the PDF can be modelled as Gaussian in a significant range about the mean.
In conclusion, the Monte Carlo simulation produces a Gaussian PDF with a mean value proportional to $C_F$ and a standard deviation proportional to $\sqrt{C_F}$. These findings are clearly evident in the PDFs shown in Figure 2, in which $m_{C_F}$ and $s_{C_F}$ increase as theoretically expected; therefore, the mean values and standard deviations of the other PDFs can be calculated by scaling the values found for $C_F = 3$. For example, for $C_F = 6$: $m_6 = 2 \times 18.00 = 36.00$ words and $s_6 = \sqrt{2} \times 1.79 = 2.53$ words.
Figure 3 shows the histograms corresponding to Figure 2. The number of samples for each conditional PDF, out of the 100,000 considered in the Monte Carlo simulation, is obtained by distributing the samples according to the PDF of $M_F$ given by Equation (1) and Table 1. The case $C_F = 3$ gives the largest sample size.
The results shown above have an experimental basis: the relationship between $\langle P_F\rangle$—the average number of words per sentence in the entire novel, calculated by weighting the $P_F$ of each chapter with its fraction of the novel’s total words, as discussed in [32]—and $\langle M_F\rangle$—the novel average of $M_F$, calculated in the same way—is linear, as Figure 4 shows by drawing $\langle P_F\rangle$ versus $\langle M_F\rangle$ for the Italian and English novels mentioned above.

2.2. Overlap of the Conditional Probability Distributions

Figure 2 shows that the conditional PDFs overlap; therefore, some sentences can be processed by buffers of different $C_F$ sizes, either larger or smaller. Let us define the probability of these overlaps.
Let $W_{th}$ be the intersection of two contiguous Gaussian PDFs, for example, $f_{C_F-1}(x)$ and $f_{C_F}(x)$; then the probability $p_{H\to L}$ that a sentence length can be found in the nearest lower Gaussian PDF (going from $C_F\to C_F-1$) is given by [42]:
$$p_{H\to L}=\int_{-\infty}^{W_{th}} f_{C_F}(x)\,dx\cong\int_{0}^{W_{th}} f_{C_F}(x)\,dx\tag{5}$$
Similarly, the probability that a sentence length can be found in the nearest higher Gaussian PDF (going from $C_F-1\to C_F$) is given by:
$$p_{L\to H}=\int_{W_{th}}^{\infty} f_{C_F-1}(x)\,dx\tag{6}$$
For example, the threshold value between $f_{C_F=3}(x)$ and $f_{C_F=4}(x)$ is $W_{th} = 20.9$ words, and $p_{H\to L} = 6.6\%$, while $p_{L\to H} = 5.5\%$.
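As a numerical check, the following sketch computes the intersection $W_{th}$ of two contiguous Gaussian PDFs and the overlap probabilities of Equations (5) and (6). It assumes (our assumption) that the Gaussian parameters are obtained by scaling $m_3 = 18.00$ and $s_3 = 1.79$; since the paper presumably uses the simulated moments of each PDF, the output matches the quoted values only approximately.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

M3, S3 = 18.00, 1.79  # mean and standard deviation for C_F = 3 (Section 2.1)

def gauss_params(c_f):
    """Mean scales as C_F, standard deviation as sqrt(C_F), relative to C_F = 3."""
    return M3 * c_f / 3.0, S3 * np.sqrt(c_f / 3.0)

def overlap(c_f):
    """W_th between f_{C_F-1}(x) and f_{C_F}(x), then Equations (5) and (6)."""
    m_lo, s_lo = gauss_params(c_f - 1)
    m_hi, s_hi = gauss_params(c_f)
    # Intersection of the two Gaussian densities, searched between the two means.
    w_th = brentq(lambda x: norm.pdf(x, m_lo, s_lo) - norm.pdf(x, m_hi, s_hi),
                  m_lo, m_hi)
    p_hl = norm.cdf(w_th, m_hi, s_hi)  # Eq. (5): mass of the higher PDF below W_th
    p_lh = norm.sf(w_th, m_lo, s_lo)   # Eq. (6): mass of the lower PDF above W_th
    return w_th, p_hl, p_lh

print(overlap(4))  # close to the quoted W_th = 20.9 words, 6.6% and 5.5%
```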
Figure 5 draws these probabilities (%) versus $C_F - 1$ (the lower $C_F$). Because $s_{C_F}$ increases with $C_F$, $p_{H\to L} > p_{L\to H}$. However, this is not only a mathematically obvious result; it is also meaningful because it indicates that: (a) a human mind can process sentences of lengths belonging to the contiguous lower or higher $M_F$ (the probability of going to more distant PDFs is negligible), and (b) the number of these sentences is larger in the case $C_F \to C_F - 1$, which simply means that an E–STM buffer can process to a larger extent data matched to a smaller-capacity buffer than data matched to a larger-capacity buffer.
Finally, notice that each sentence conveys meaning—theoretically, any sequence of words might be meaningful, although this may not always be the case, and we do not know the proportion—therefore, the PDFs found above are also the PDFs associated with meaning. Moreover, the same numerical sequence of $W$ words can carry different meanings, according to the words used. Multiplicity of meaning, therefore, is “built into” a sequence of $W$ words. We will further explore this issue in the next sections by considering the number of sentences that authors of Italian and English literature actually wrote.
So far, we have explored the processing of the words of a sentence by simulating sentences of diverse lengths, conditioned on the E–STM buffer size. In the next section, we explore the complementary processing, concerning the number of sentences that contain the same number of words.

3. Theoretical Number of Sentences Recordable in $C_F$ Cells

We study the number of sentences of $W$ words that an E–STM buffer made of $C_F$ cells can theoretically process. In summary, we ask the following question: how many sentences $S_W(C_F)$ containing the same number of words $W$ (Equation (2)) can theoretically be written in $C_F$ cells?
Table 2 reports these numbers as a function of $W$ and $C_F$. We calculated these data first by running a code and then by finding the recursive formula that generates them:
$$S_W(C_F)=S_{W-1}(C_F)+S_{W-1}(C_F-1)\tag{7}$$
For example, if $W = 20$ words and $C_F = 4$, we read $S_{19}(C_F = 4) = 816$ and $S_{19}(C_F - 1 = 3) = 153$; therefore, $S_{20}(C_F = 4) = 816 + 153 = 969$.
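A minimal dynamic-programming sketch of Equation (7), assuming the boundary conditions visible in the first rows of Table 2 ($S_W(1) = 1$ for every $W \ge 1$ and $S_W(C_F) = 0$ for $W < C_F$); with these boundary conditions, the recursion counts the compositions of $W$ into $C_F$ positive parts:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def s(w: int, c_f: int) -> int:
    """Number of sentence patterns of w words recordable in c_f cells, Eq. (7)."""
    if c_f == 1:
        return 1 if w >= 1 else 0  # a single word interval containing all w words
    if w < c_f:
        return 0                   # each cell must hold at least one word
    return s(w - 1, c_f) + s(w - 1, c_f - 1)

print(s(20, 4))  # 969, as in the worked example above
```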
Figure 6 draws the data reported in some lines of Table 2 for a quick overview. We see how fast the number of sentences changes with $C_F$ for constant $W$. For example, if $W = 20$ words, then $S_{W=20}$ ranges from 1 ($C_F = 1$) to 52,698 sentences ($C_F = 8$). Maxima are clearly visible for $W = 5$ and $W = 10$ words, at $C_F = 3$ and $C_F = 5$ or 6, respectively. Values become fantastically large for larger $W$ and $C_F$, well beyond the ability and creativity of single writers, as we will show in Section 4.
Figure 7 draws the data reported in some columns of Table 2, i.e., the number of sentences $S_W(C_F)$ versus $W$, for fixed $C_F$. In this case, it is useful to adopt an efficiency factor $\varepsilon$, defined as the ratio between $S_W(C_F)$ and $W$ for a given $C_F$:
$$\varepsilon=\frac{S_W(C_F)}{W}\tag{8}$$
This factor expresses, in summary, how efficient a buffer of $C_F$ cells is in providing sentences with a given number of words; its units are sentences per word.
Figure 8 shows $\varepsilon$ versus $W$. It is interesting to note that for $W \le 10$ words, the buffer $C_F = 2$ can be more efficient than the others. Beyond $W = 10$, the larger buffers become very efficient, with very large $\varepsilon$.
If a writer uses short buffers—deliberately because of his/her style, or necessarily because of the reader’s E–STM size—then he/she has to repeat the same numerical sequence of words many times, according to the number of meanings conveyed. For example, if $C_F = 2$ and $W = 10$, the writer has only nine different choices, or patterns, of two numbers whose sum is 10 (Table 2). Therefore, Table 2 gives the minimum number of meanings that can be conveyed. The larger $C_F$ is, the larger the variety of sentences that can be written with $W$ words.
The following question naturally arises: how many sentences do authors write in their texts, as compared to the theoretical number available to them? In the next section, we compare these two sets of data by studying the novels taken from the Italian and English literature listed in Appendix B, by assuming their average values $\langle P_F\rangle$ and $\langle M_F\rangle$, and by defining a multiplicity factor.

4. Experimental Multiplicity Factor of Sentences

We compare the number of sentences that authors of Italian and English literature actually wrote for each novel to the number of sentences theoretically available to them, according to the $\langle P_F\rangle$ and $\langle M_F\rangle$ of each novel. In this analysis, we do not consider the values of $P_F$ and $M_F$ of each chapter of a novel, because the detail would be so fine as to miss the general trend given by the average values $\langle P_F\rangle$, $\langle M_F\rangle$ of the complete novel.
As is well known, the average value and the standard deviation of integers are very likely not integers themselves, as is always the case for the linguistic parameters; therefore, to apply the mathematical theory of the previous sections, we must interpolate, and only at the end of the calculation round to integers.
Let us compare the experimental number of sentences $S_{\langle P_F\rangle\langle M_F\rangle}$ in a novel, as reported in Table A1 and Table A2, to the theoretical number $S_W(C_F)$ available to the author, according to the experimental values $\langle P_F\rangle$ (which plays the role of $W$) and $\langle M_F\rangle$ (which plays the role of $C_F$) of the novel.
By referring to Figure 7, the interpolation between the integers of Table 2 to find the curve of constant $C_F$—given by the real number $\langle M_F\rangle$—is linear along both axes. At the intersection of the vertical line (corresponding to the real number $\langle P_F\rangle$) and the new curve (corresponding to the real number $\langle M_F\rangle$), we find the theoretical $S_W(C_F)$ by rounding the value to the nearest integer toward zero. For example, for David Copperfield, in Table A2 we read $S_{\langle P_F\rangle\langle M_F\rangle} = 19{,}610$, and the interpolation gives $S_W(C_F) = 1553$. Figure 9 shows the result of this exercise. We see that $S_W(C_F)$ increases rapidly with $\langle M_F\rangle$. The most displaced (red) circle is due to Robinson Crusoe.
The comparison between $S_{\langle P_F\rangle\langle M_F\rangle}$ and $S_W(C_F)$ is performed by defining a multiplicity factor $\alpha$, the ratio between the experimental and the theoretical number of sentences:
$$\alpha=\frac{S_{\langle P_F\rangle\langle M_F\rangle}}{S_W(C_F)}\tag{9}$$
The values of $\alpha$ for each novel are reported in Table A1 and Table A2. For example, for David Copperfield, $\alpha = 19{,}610/1553 = 12.63$. Figure 10 shows $\alpha$ versus $S_{\langle P_F\rangle\langle M_F\rangle}$. We notice a fairly significant increasing trend of $\alpha$ with $S_{\langle P_F\rangle\langle M_F\rangle}$.
Figure 11 shows $\alpha$ versus $S_W(C_F)$. An inverse power law is a good fit:
$$\alpha=3886/S_W(C_F)\qquad\text{(Italian)}\tag{10}$$
$$\alpha=6028/S_W(C_F)\qquad\text{(English)}\tag{11}$$
The correlation coefficient of log values is 0.9873 for Italian and 0.9710 for English.
Based on Equations (10) and (11), $\alpha = 1$ when $S_W(C_F) = 3886$ for Italian novels and $S_W(C_F) = 6028$ for English novels; therefore, novels with sentences in the range of 4000 to 6000 use, on average, the number of sentences theoretically available for their averages $\langle P_F\rangle$ and $\langle M_F\rangle$.
Figure 12 shows $\alpha$ versus $\langle M_F\rangle$. In this case, an exponential law is a good fit:
$$\alpha=26{,}027\times e^{-2.923\,\langle M_F\rangle}\qquad\text{(Italian)}\tag{12}$$
$$\alpha=15{,}855\times e^{-2.649\,\langle M_F\rangle}\qquad\text{(English)}\tag{13}$$
For the Italian literature in question (the correlation coefficient of the linear–log values is 0.9697), $\alpha = 1$ when $\langle M_F\rangle = 3.48$; for the English literature (correlation coefficient of the linear–log values is 0.9603), $\alpha = 1$ when $\langle M_F\rangle = 3.65$. Therefore, novels with sentences in the range of 4000 to 6000 use, on average, the same E–STM buffer size of $\langle M_F\rangle \approx 3.5$ cells.
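The following sketch shows how such a law can be obtained, assuming (as the quoted correlation coefficients suggest) an ordinary least-squares fit on the linear–log values; the data pairs are the $\langle M_F\rangle$ and $\alpha$ columns of Table A2.

```python
import numpy as np

# (<M_F>, alpha) pairs for the thirty English novels of Table A2.
m_f = np.array([3.16, 3.35, 2.57, 2.93, 2.73, 3.90, 7.40, 2.95, 2.97, 3.25,
                3.64, 3.35, 2.95, 2.85, 3.09, 2.94, 1.72, 2.21, 2.29, 2.48,
                2.34, 1.91, 2.44, 2.09, 2.30, 2.22, 1.62, 1.56, 1.32, 1.54])
alpha = np.array([9.46, 12.63, 56.98, 11.89, 43.41, 0.15, 2e-5, 5.20, 9.83,
                  5.26, 1.56, 2.17, 2.90, 17.34, 3.79, 7.05, 130.27, 33.02,
                  14.10, 6.72, 7.02, 43.87, 13.07, 46.97, 46.33, 216.86,
                  294.34, 237.94, 356.00, 133.19])

# Exponential law alpha = A * exp(-b * <M_F>): linear fit of ln(alpha) vs <M_F>.
slope, intercept = np.polyfit(m_f, np.log(alpha), 1)
A, b = np.exp(intercept), -slope
r = np.corrcoef(m_f, np.log(alpha))[0, 1]
print(A, b, r)  # roughly A = 15,855, b = 2.649, |r| = 0.9603, as in Equation (13)
```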
From Figure 10, Figure 11 and Figure 12, we can draw the following conclusion: in general, $\alpha > 1$ is more likely than $\alpha < 1$, and often $\alpha \gg 1$. When $\alpha \gg 1$, the writer reuses the same pattern of numbers of words many times. The multiplicity factor, therefore, also indicates the minimum multiplicity of meaning conveyed by an E–STM, besides, of course, the many diverse meanings conveyed by the same sequence of $I_P$’s obtainable by only changing words. A few novels show $\alpha < 1$. In these cases, the writer has enough diverse patterns to convey meaning, but most of them are not used.
Finally, it is interesting to relate $\alpha$ to the universal readability index $G_U$, which is a function of both $P_F$ and $I_P$ [43].
The universal readability index, as compared to the current readability indices for the few languages for which they are available (mainly for English [43]), considers also the reader’s short-term memory processing capacity. It can be used to assess the readability of texts written in any alphabetical language, as described in [43].
Figure 13 shows $\alpha$ versus $G_U$. Because the readability of a text increases as $G_U$ increases, we can see that the novels with $\alpha < 1$ tend to be less readable than those with $\alpha > 1$. The less-readable novels have, in general, large values of $P_F$ and therefore may contain more E–STM cells (large $M_F$).
In conclusion, if a writer does use the full variety of sentence patterns available, or even overuses them, then he/she writes texts that are easier to read. On the other hand, if a writer does not use the full variety of sentence patterns available, then he/she tends to write texts that are more difficult to read. In the next section, we define a useful index, the mismatch index, which describes these cases.

5. Mismatch Index

We define a useful index, the mismatch index $I_M$, which measures to what extent a writer uses the number of sentences theoretically available according to the averages $\langle P_F\rangle$ and $\langle M_F\rangle$ of the novel:
$$I_M=\frac{S_{\langle P_F\rangle\langle M_F\rangle}-S_W(C_F)}{S_{\langle P_F\rangle\langle M_F\rangle}+S_W(C_F)}=\frac{\alpha-1}{\alpha+1}\tag{14}$$
According to Equation (14), $I_M = 0$ when $S_{\langle P_F\rangle\langle M_F\rangle} = S_W(C_F)$, hence when $\alpha = 1$; in this case, experiment and theory are perfectly matched. They are overmatched when $I_M > 0$ ($\alpha > 1$) and undermatched when $I_M < 0$ ($\alpha < 1$). For example, for David Copperfield, $\alpha = 12.63$ gives $I_M = (12.63 - 1)/(12.63 + 1) = 0.85$, as reported in Table A2.
Figure 14 shows the scatterplot of $I_M$ versus $\langle M_F\rangle$. The mathematical models drawn are calculated by substituting Equations (12) and (13) into Equation (14). We can reiterate that when $I_M > 0$ (overmatching, $\langle M_F\rangle < 3.5$), the writer repeats sentence patterns because there are not enough diverse patterns to convey all the meanings; these texts are easier to read. When $I_M < 0$ (undermatching, $\langle M_F\rangle > 3.5$), the writer theoretically has many sentence patterns to choose from, but he/she uses only a few of them; these texts are more difficult to read.
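Explicitly, writing the exponential laws of Equations (12) and (13) as $\alpha = A\,e^{-b\langle M_F\rangle}$ and substituting into Equation (14), the model curves of Figure 14 are (our restatement of the substitution):
$$I_M(\langle M_F\rangle)=\frac{A\,e^{-b\langle M_F\rangle}-1}{A\,e^{-b\langle M_F\rangle}+1},\qquad\begin{cases}A=26{,}027,\ b=2.923 & \text{(Italian)}\\ A=15{,}855,\ b=2.649 & \text{(English)}\end{cases}$$
which crosses $I_M = 0$ at $\langle M_F\rangle=\ln A/b\approx 3.5$, the matching point discussed above.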
Figure 15 shows the scatterplot of $I_M$ versus $S_W(C_F)$. The mathematical models drawn were calculated by substituting Equations (10) and (11) into Equation (14). Overmatching is found for $S_W(C_F) < 3886$ for Italian and $S_W(C_F) < 6028$ for English.
Finally, Figure 16 shows $I_M$ versus $\alpha$, Equation (14), a picture that summarizes the entire analysis of mismatch.
As we can see by reading the years of publication in Table A1 and Table A2, the novels span a long period. Do the parameters studied depend on time? In the next section, we show that the answer to this question is positive.

6. Time Dependence

The novels considered in Table A1 and Table A2 were published in a period spanning several centuries. We show that the multiplicity factor $\alpha$ and the mismatch index $I_M$ do depend on time.
Figure 17 shows the multiplicity factor versus the year of publication of the novels since 1800. It is evident that writers tend to use larger values of $\alpha$—and therefore smaller E–STM buffers—as we approach the present epoch, with a possible saturation at $\alpha \approx 100$. The English literature shows a stable increasing pattern, while the Italian literature seems to contain samples from two distinct sets of data: one evolved in agreement with the English literature; the other (given by the novels labelled with “*” in Table A1) also increases with time, but with a different slope.
Figure 18 shows the mismatch index versus the year of novel publication.
Figure 19 shows the universal readability index versus time. In both Figure 18 and Figure 19, we can observe the same trends shown in Figure 17, which reinforces the conjecture that: (a) writers are partially changing their style with time by making their novels more readable, i.e., more matched to less-educated readers, according to the relationship between $G_U$ and schooling years in the Italian school system discussed in [43]; (b) a saturation seems to occur in all parameters for the novels written in the second half of the twentieth century, at least according to the novels of Appendix B.

7. Summary and Future Work

In the present paper, we have further investigated the mathematical structure of sentences and its connections with human short–term memory. This structure is defined by two independent variables which apparently engage two short-term memory buffers in series. The first buffer is modelled according to the number of words between two consecutive interpunctions—the variable termed the word interval, $I_P$—which follows Miller’s 7 ± 2 law; the second buffer is modelled by the number of word intervals contained in a sentence, $M_F$, ranging approximately from one to seven. These values arise from an extensive analysis of alphabetical texts [44].
We have studied the numerical patterns (combinations of $I_P$ and $M_F$) that determine the number of sentences that can theoretically be recorded in the two memory buffers—a number which increases with $I_P$ and $M_F$—and we have compared the theoretical results with those actually found in novels from Italian and English literature. We have found that most writers, in both languages, write for readers with small memory buffers and, consequently, are forced to reuse sentence patterns to convey multiple meanings. In this case, texts are easier to read, according to the universal readability index.
Future work should consider other literatures to confirm what, in our opinion, is general because the topic is connected to the human mind. The same analysis performed on ancient languages, such as Greek and Latin—for which there are large literary corpora—would show whether these ancient writers/readers displayed similar short–term memory buffers.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The author wishes to thank the many scholars who, with great care and love, maintain digital texts to be available to readers and scholars of different academic disciplines, such as Perseus Digital Library and Project Gutenberg.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. List of Mathematical Symbols

Symbol | Definition
$C_F$ | Cells of the E–STM buffer
$G_U$ | Universal readability index
$I_M$ | Mismatch index
$I_P$ | Word interval
$M_F$ | Word intervals in a sentence, chapter average
$\langle M_F\rangle$ | Word intervals in a sentence, novel average
$P_F$ | Words in a sentence, chapter average
$\langle P_F\rangle$ | Words in a sentence, novel average
$S_{\langle P_F\rangle\langle M_F\rangle}$ | Experimental number of sentences
$S_W(C_F)$ | Theoretical number of sentences written in $C_F$ cells
$W$ | Words in a sentence
$f(x)$ | Three-parameter log–normal density function
$f_{C_F}(x)$ | Gaussian PDF
$m_{C_F}$ | Mean value of the Gaussian PDF
$s_{C_F}$ | Standard deviation of the Gaussian PDF
$\alpha$ | Multiplicity factor
$\varepsilon$ | Efficiency factor
$\mu_x$ | Mean value of the log–normal PDF
$\sigma_x$ | Standard deviation of the log–normal PDF

Appendix B. List of the Novels Considered from Italian and English Literature

Table A1 and Table A2 list the authors, the titles of the novels, and their years of publication in Italian and English literature, as considered in the paper, with deep–language average statistics, the multiplicity factor $\alpha$, and the mismatch index $I_M$. The averages $\langle C_P\rangle$, $\langle P_F\rangle$, $\langle I_P\rangle$, and $\langle M_F\rangle$ have been calculated by weighting each chapter value with its fraction of the total number of words in the novel, as described in [32].
Table A1. Authors and novels of Italian literature: number of total sentences (sentences ending with full stops, question marks, or exclamation marks); average number of characters per word, $\langle C_P\rangle$; average number of words per sentence, $\langle P_F\rangle$; average word interval, $\langle I_P\rangle$; average number of word intervals per sentence, $\langle M_F\rangle$; multiplicity factor $\alpha$; and mismatch index $I_M$.
Author (Literary Work, Year) | Sentences | $\langle C_P\rangle$ | $\langle P_F\rangle$ | $\langle I_P\rangle$ | $\langle M_F\rangle$ | $\alpha$ | $I_M$
Anonymous (I Fioretti di San Francesco, 1476) | 1064 | 4.65 | 37.70 | 8.24 | 4.56 | 0.004 | −0.99
Bembo Pietro (Prose, 1525) | 1925 | 4.37 | 37.91 | 6.42 | 5.92 | 0.001 | −1.00
Boccaccio Giovanni (Decameron, 1353) | 6147 | 4.48 | 44.27 | 7.79 | 5.69 | 0.001 | −1.00
Buzzati Dino (Il deserto dei tartari, 1940) | 3311 | 5.10 | 17.75 | 6.63 | 2.67 | 6.90 | 0.75
Buzzati Dino (La boutique del mistero, 1968 *) | 4219 | 4.82 | 15.45 | 6.37 | 2.41 | 18.83 | 0.90
Calvino (Il barone rampante, 1957 *) | 3864 | 4.63 | 19.87 | 6.73 | 2.91 | 4.37 | 0.63
Calvino Italo (Marcovaldo, 1963 *) | 2000 | 4.74 | 17.60 | 6.59 | 2.67 | 4.28 | 0.62
Cassola Carlo (La ragazza di Bube, 1960 *) | 5873 | 4.48 | 11.93 | 5.64 | 2.11 | 87.66 | 0.98
Collodi Carlo (Pinocchio, 1883) | 2512 | 4.60 | 16.92 | 6.19 | 2.72 | 5.74 | 0.70
Da Ponte Lorenzo (Vita, 1823) | 5459 | 4.71 | 26.15 | 6.91 | 3.78 | 0.51 | −0.32
Deledda Grazia (Canne al vento, 1913, Nobel Prize 1926) | 4184 | 4.51 | 15.08 | 6.06 | 2.48 | 18.35 | 0.90
D’Azeglio Massimo (Ettore Fieramosca, 1833) | 3182 | 4.64 | 29.77 | 7.36 | 4.03 | 0.12 | −0.78
De Amicis Edmondo (Cuore, 1886) | 4775 | 4.55 | 19.43 | 5.61 | 3.41 | 2.48 | 0.42
De Marchi Emilio (Demetrio Panelli, 1890) | 5363 | 4.70 | 18.95 | 7.06 | 2.68 | 8.95 | 0.80
D’Annunzio Gabriele (Le novelle delle Pescara, 1902) | 3027 | 4.91 | 17.99 | 6.38 | 2.79 | 5.35 | 0.68
Eco Umberto (Il nome della rosa, 1980 *) | 8490 | 4.81 | 21.08 | 7.46 | 2.81 | 8.70 | 0.79
Fogazzaro (Il santo, 1905) | 6637 | 4.79 | 14.84 | 6.33 | 2.34 | 37.08 | 0.95
Fogazzaro (Piccolo mondo antico, 1895) | 7069 | 4.79 | 16.08 | 6.10 | 2.64 | 20.98 | 0.91
Gadda (Quer pasticciaccio brutto… 1957 *) | 5596 | 4.76 | 18.43 | 4.98 | 3.68 | 2.69 | 0.46
Grossi Tommaso (Marco Visconti, 1834) | 5301 | 4.59 | 28.07 | 6.56 | 4.23 | 0.16 | −0.72
Leopardi Giacomo (Operette morali, 1827) | 2694 | 4.70 | 31.78 | 6.90 | 4.54 | 0.03 | −0.95
Levi Primo (Cristo si è fermato a Eboli, 1945 *) | 3611 | 4.73 | 22.94 | 5.70 | 4.02 | 0.47 | −0.36
Machiavelli Niccolò (Il principe, 1532) | 702 | 4.71 | 40.17 | 6.45 | 6.23 | 0.0001 | −1.00
Manzoni Alessandro (I promessi sposi, 1840) | 9766 | 4.60 | 24.83 | 5.30 | 4.63 | 0.33 | −0.51
Manzoni Alessandro (Fermo e Lucia, 1821) | 7496 | 4.75 | 30.98 | 7.17 | 4.30 | 0.12 | −0.78
Moravia Alberto (Gli indifferenti, 1929 *) | 2830 | 4.81 | 36.00 | 6.74 | 5.34 | 0.003 | −0.99
Moravia Alberto (La ciociara, 1957 *) | 4271 | 4.56 | 29.93 | 7.28 | 4.12 | 0.12 | −0.78
Pavese Cesare (La bella estate, 1940) | 2121 | 4.54 | 12.37 | 5.97 | 2.06 | 31.19 | 0.94
Pavese Cesare (La luna e i falò, 1949 *) | 2544 | 4.47 | 17.83 | 6.83 | 2.60 | 5.64 | 0.70
Pellico Silvio (Le mie prigioni, 1832) | 3148 | 4.80 | 17.27 | 6.50 | 2.69 | 7.00 | 0.75
Pirandello Luigi (Il fu Mattia Pascal, 1904, Nobel Prize 1934) | 5284 | 4.63 | 14.57 | 4.94 | 2.93 | 16.72 | 0.89
Sacchetti Franco (Trecentonovelle, 1392) | 8060 | 4.37 | 22.43 | 5.82 | 3.83 | 1.41 | 0.17
Salernitano Masuccio (Il Novellino, 1525) | 1965 | 4.40 | 19.20 | 5.14 | 3.68 | 0.79 | −0.12
Salgari Emilio (Il corsaro nero, 1899) | 6686 | 4.99 | 15.09 | 6.36 | 2.36 | 34.46 | 0.94
Salgari Emilio (I minatori dell’Alaska, 1900) | 6094 | 5.01 | 15.24 | 6.25 | 2.44 | 27.21 | 0.93
Svevo Italo (Senilità, 1898) | 4236 | 4.86 | 16.04 | 7.75 | 2.07 | 32.34 | 0.94
Tomasi di Lampedusa (Il gattopardo, 1958 *) | 2893 | 4.99 | 26.42 | 7.90 | 3.33 | 0.47 | −0.36
Verga (I Malavoglia, 1881) | 4401 | 4.46 | 20.45 | 6.82 | 3.00 | 4.21 | 0.62
Table A2. Authors and novels of English literature: number of total sentences; average number of characters per word, $\langle C_P\rangle$; average number of words per sentence, $\langle P_F\rangle$; average word interval, $\langle I_P\rangle$; average number of word intervals per sentence, $\langle M_F\rangle$; multiplicity factor $\alpha$; and mismatch index $I_M$. Notice that for Dickens’ novels, Table 1 of [45] reported only the number of sentences ending with full stops; sentences ending with question marks and exclamation marks were not reported, contrary to all the other literary texts reported there. Moreover, the analysis conducted in [45] considered only the sentences ending with full stops; this is why the values of $P_F$ and $M_F$ reported there are larger (upper bounds) than those listed below.
Literary Work (Author, Year) | Sentences | $\langle C_P\rangle$ | $\langle P_F\rangle$ | $\langle I_P\rangle$ | $\langle M_F\rangle$ | $\alpha$ | $I_M$
The Adventures of Oliver Twist (C. Dickens, 1837–1839) | 9121 | 4.23 | 18.04 | 5.70 | 3.16 | 9.46 | 0.81
David Copperfield (C. Dickens, 1849–1850) | 19,610 | 4.04 | 18.83 | 5.61 | 3.35 | 12.63 | 0.85
Bleak House (C. Dickens, 1852–1853) | 20,967 | 4.23 | 16.95 | 6.59 | 2.57 | 56.98 | 0.97
A Tale of Two Cities (C. Dickens, 1859) | 8098 | 4.26 | 18.27 | 6.19 | 2.93 | 11.89 | 0.84
Our Mutual Friend (C. Dickens, 1864–1865) | 17,409 | 4.22 | 16.46 | 6.03 | 2.73 | 43.41 | 0.95
Matthew King James (1611) | 1040 | 4.27 | 22.96 | 5.90 | 3.90 | 0.15 | −0.73
Robinson Crusoe (D. Defoe, 1719) | 2393 | 3.94 | 52.90 | 7.12 | 7.40 | 0.00002 | −1.00
Pride and Prejudice (J. Austen, 1813) | 6013 | 4.40 | 21.31 | 7.16 | 2.95 | 5.20 | 0.68
Wuthering Heights (E. Brontë, 1845–1846) | 6352 | 4.27 | 17.78 | 5.97 | 2.97 | 9.83 | 0.82
Vanity Fair (W. Thackeray, 1847–1848) | 13,007 | 4.63 | 21.95 | 6.73 | 3.25 | 5.26 | 0.68
Moby Dick (H. Melville, 1851) | 9582 | 4.52 | 23.82 | 6.45 | 3.64 | 1.56 | 0.22
The Mill On The Floss (G. Eliot, 1860) | 9018 | 4.29 | 23.84 | 7.09 | 3.35 | 2.17 | 0.37
Alice’s Adventures in Wonderland (L. Carroll, 1865) | 1629 | 3.96 | 17.19 | 5.79 | 2.95 | 2.90 | 0.49
Little Women (L.M. Alcott, 1868–1869) | 10,593 | 4.18 | 18.09 | 6.30 | 2.85 | 17.34 | 0.89
Treasure Island (R. L. Stevenson, 1881–1882) | 3824 | 4.02 | 18.93 | 6.05 | 3.09 | 3.79 | 0.58
Adventures of Huckleberry Finn (M. Twain, 1884) | 5887 | 3.85 | 19.39 | 6.63 | 2.94 | 7.05 | 0.75
Three Men in a Boat (J.K. Jerome, 1889) | 5341 | 4.25 | 10.55 | 6.14 | 1.72 | 130.27 | 0.98
The Picture of Dorian Gray (O. Wilde, 1890) | 4292 | 4.19 | 14.30 | 6.29 | 2.21 | 33.02 | 0.94
The Jungle Book (R. Kipling, 1894) | 3214 | 4.11 | 16.46 | 7.14 | 2.29 | 14.10 | 0.87
The War of the Worlds (H.G. Wells, 1897) | 3306 | 4.38 | 19.22 | 7.67 | 2.48 | 6.72 | 0.74
The Wonderful Wizard of Oz (L.F. Baum, 1900) | 2219 | 4.017 | 17.90 | 7.63 | 2.34 | 7.02 | 0.75
The Hound of The Baskervilles (A.C. Doyle, 1901–1902) | 4080 | 4.15 | 15.07 | 7.83 | 1.91 | 43.87 | 0.96
Peter Pan (J.M. Barrie, 1902) | 3177 | 4.12 | 15.65 | 6.35 | 2.44 | 13.07 | 0.86
A Little Princess (F.H. Burnett, 1902–1905) | 4838 | 4.18 | 14.26 | 6.79 | 2.09 | 46.97 | 0.96
Martin Eden (J. London, 1908–1909) | 9173 | 4.32 | 15.61 | 6.76 | 2.30 | 46.33 | 0.96
Women in love (D.H. Lawrence, 1920) | 16,048 | 4.26 | 11.62 | 5.22 | 2.22 | 216.86 | 0.99
The Secret Adversary (A. Christie, 1922) | 8536 | 4.28 | 8.97 | 5.52 | 1.62 | 294.34 | 0.99
The Sun Also Rises (E. Hemingway, 1926) | 7614 | 3.92 | 9.43 | 6.02 | 1.56 | 237.94 | 0.99
A Farewell to Arms (E. Hemingway, 1929) | 10,324 | 3.94 | 9.05 | 6.80 | 1.32 | 356.00 | 0.99
Of Mice and Men (J. Steinbeck, 1937) | 3463 | 4.02 | 8.63 | 5.61 | 1.54 | 133.19 | 0.99

References

  1. Matricciani, E. Is Short-Term Memory Made of Two Processing Units? Clues from Italian and English Literatures down Several Centuries. Information 2024, 15, 6. [Google Scholar] [CrossRef]
  2. Deniz, F.; Nunez-Elizalde, A.O.; Huth, A.G.; Gallant Jack, L. The Representation of Semantic Information Across Human Cerebral Cortex During Listening Versus Reading Is Invariant to Stimulus Modality. J. Neurosci. 2019, 39, 7722–7736. [Google Scholar] [CrossRef]
  3. Miller, G.A. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychol. Rev. 1956, 63, 81–97. [Google Scholar] [CrossRef]
  4. Crowder, R.G. Short-term memory: Where do we stand? Mem. Cogn. 1993, 21, 142–145. [Google Scholar] [CrossRef]
  5. Lisman, J.E.; Idiart, M.A.P. Storage of 7 ± 2 Short-Term Memories in Oscillatory Subcycles. Science 1995, 267, 1512–1515. [Google Scholar] [CrossRef]
  6. Cowan, N. The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behav. Brain Sci. 2001, 24, 87–114. [Google Scholar] [CrossRef]
  7. Bachelder, B.L. The Magical Number 7 ± 2: Span Theory on Capacity Limitations. Behav. Brain Sci. 2001, 24, 116–117. [Google Scholar] [CrossRef]
  8. Saaty, T.L.; Ozdemir, M.S. Why the Magic Number Seven Plus or Minus Two. Math. Comput. Model. 2003, 38, 233–244. [Google Scholar] [CrossRef]
  9. Burgess, N.; Hitch, G.J. A revised model of short-term memory and long-term learning of verbal sequences. J. Mem. Lang. 2006, 55, 627–652. [Google Scholar] [CrossRef]
  10. Richardson, J.T.E. Measures of short-term memory: A historical review. Cortex 2007, 43, 635–650. [Google Scholar] [CrossRef]
  11. Mathy, F.; Feldman, J. What’s magic about magic numbers? Chunking and data compression in short-term memory. Cognition 2012, 122, 346–362. [Google Scholar] [CrossRef] [PubMed]
  12. Gignac, G.E. The Magical Numbers 7 and 4 Are Resistant to the Flynn Effect: No Evidence for Increases in Forward or Backward Recall across 85 Years of Data. Intelligence 2015, 48, 85–95. [Google Scholar] [CrossRef]
  13. Trauzettel-Klosinski, S.; Dietz, K. Standardized Assessment of Reading Performance: The New International Reading Speed Texts IreST. Investig. Ophthalmol. Vis. Sci. 2012, 53, 5452–5461. [Google Scholar] [CrossRef] [PubMed]
  14. Melton, A.W. Implications of Short-Term Memory for a General Theory of Memory. J. Verbal Learn. Verbal Behav. 1963, 2, 1–21. [Google Scholar] [CrossRef]
  15. Atkinson, R.C.; Shiffrin, R.M. The Control of Short-Term Memory. Sci. Am. 1971, 225, 82–91. [Google Scholar] [CrossRef]
  16. Murdock, B.B. Short-Term Memory. Psychol. Learn. Motiv. 1972, 5, 67–127. [Google Scholar]
  17. Baddeley, A.D.; Thomson, N.; Buchanan, M. Word Length and the Structure of Short-Term Memory. J. Verbal Learn. Verbal Behav. 1975, 14, 575–589. [Google Scholar] [CrossRef]
  18. Case, R.; Midian Kurland, D.; Goldberg, J. Operational efficiency and the growth of short-term memory span. J. Exp. Child Psychol. 1982, 33, 386–404. [Google Scholar] [CrossRef]
  19. Grondin, S. A temporal account of the limited processing capacity. Behav. Brain Sci. 2000, 24, 122–123. [Google Scholar] [CrossRef]
  20. Pothos, E.M.; Joula, P. Linguistic structure and short-term memory. Behav. Brain Sci. 2000, 138–139. [Google Scholar] [CrossRef]
  21. Conway, A.R.A.; Cowan, N.; Michael, F.; Bunting, M.F.; Therriaulta, D.J.; Minkoff, S.R.B. A latent variable analysis of working memory capacity, short-term memory capacity, processing speed, and general fluid intelligence. Intelligence 2002, 30, 163–183. [Google Scholar] [CrossRef]
  22. Jonides, J.; Lewis, R.L.; Nee, D.E.; Lustig, C.A.; Berman, M.G.; Moore, K.S. The Mind and Brain of Short-Term Memory. Annu. Rev. Psychol. 2008, 69, 193–224. [Google Scholar] [CrossRef] [PubMed]
  23. Barrouillest, P.; Camos, V. As Time Goes by: Temporal Constraints in Working Memory. Curr. Dir. Psychol. Sci. 2012, 21, 413–419. [Google Scholar] [CrossRef]
  24. Potter, M.C. Conceptual short term memory in perception and thought. Front. Psychol. 2012, 3, 113. [Google Scholar] [CrossRef]
  25. Jones, G.; Macken, B. Questioning short-term memory and its measurements: Why digit span measures long-term associative learning. Cognition 2015, 144, 1–13. [Google Scholar] [CrossRef] [PubMed]
  26. Chekaf, M.; Cowan, N.; Mathy, F. Chunk formation in immediate memory and how it relates to data compression. Cognition 2016, 155, 96–107. [Google Scholar] [CrossRef]
  27. Norris, D. Short-Term Memory and Long-Term Memory Are Still Different. Psychol. Bull. 2017, 143, 992–1009. [Google Scholar] [CrossRef]
  28. Houdt, G.V.; Mosquera, C.; Napoles, G. A review on the long short-term memory model. Artif. Intell. Rev. 2020, 53, 5929–5955. [Google Scholar] [CrossRef]
  29. Islam, M.; Sarkar, A.; Hossain, M.; Ahmed, M.; Ferdous, A. Prediction of Attention and Short-Term Memory Loss by EEG Workload Estimation. J. Biosci. Med. 2023, 11, 304–318. [Google Scholar] [CrossRef]
  30. Rosenzweig, M.R.; Bennett, E.L.; Colombo, P.J.; Lee, P.D.W. Short-term, intermediate-term and Long-term memories. Behav. Brain Res. 1993, 57, 193–198. [Google Scholar] [CrossRef]
  31. Kaminski, J. Intermediate-Term Memory as a Bridge between Working and Long-Term Memory. J. Neurosci. 2017, 37, 5045–5047. [Google Scholar] [CrossRef]
  32. Matricciani, E. Deep Language Statistics of Italian throughout Seven Centuries of Literature and Empirical Connections with Miller’s 7 ∓ 2 Law and Short-Term Memory. Open J. Stat. 2019, 9, 373–406. [Google Scholar] [CrossRef]
  33. Strinati, E.C.; Barbarossa, S. 6G Networks: Beyond Shannon towards Semantic and Goal-Oriented Communications. Comput. Netw. 2021, 190, 107930. [Google Scholar] [CrossRef]
  34. Shi, G.; Xiao, Y.; Li, Y.; Xie, X. From semantic communication to semantic-aware networking: Model, architecture, and open problems. IEEE Commun. Mag. 2021, 59, 44–50. [Google Scholar] [CrossRef]
  35. Xie, H.; Qin, Z.; Li, G.Y.; Juang, B.H. Deep learning enabled semantic communication systems. IEEE Trans. Signal Process. 2021, 69, 2663–2675. [Google Scholar] [CrossRef]
  36. Luo, X.; Chen, H.H.; Guo, Q. Semantic communications: Overview, open issues, and future research directions. IEEE Wirel. Commun. 2022, 29, 210–219. [Google Scholar] [CrossRef]
  37. Yang, W.; Du, H.; Liew, Z.Q.; Lim, W.Y.B.; Xiong, Z.; Niyato, D.; Chi, X.; Shen, X.; Miao, C. Semantic Communications for Future Internet: Fundamentals, Applications, and Challenges. IEEE Commun. Surv. Tutor. 2023, 25, 213–250. [Google Scholar] [CrossRef]
  38. Xie, H.; Qin, A. A lite distributed semantic communication system for internet of things. IEEE J. Sel. Areas Commun. 2021, 39, 142–153. [Google Scholar]
  39. Bellegarda, J.R. Exploiting Latent Semantic Information in Statistical Language Modeling. Proc. IEEE 2000, 88, 1279–1296. [Google Scholar] [CrossRef]
  40. D’Alfonso, S. On Quantifying Semantic Information. Information 2011, 2, 61–101. [Google Scholar] [CrossRef]
  41. Zhong, Y. A Theory of Semantic Information. China Commun. 2017, 14, 1–17. [Google Scholar] [CrossRef]
  42. Papoulis, A. Probability & Statistics; Prentice Hall: Hoboken, NJ, USA, 1990. [Google Scholar]
  43. Matricciani, E. Readability Indices Do Not Say It All on a Text Readability. Analytics 2023, 2, 296–314. [Google Scholar] [CrossRef]
  44. Matricciani, E. The Theory of Linguistic Channels in Alphabetical Texts; Cambridge Scholars Publishing: Newcastle upon Tyne, UK, 2024. [Google Scholar]
  45. Matricciani, E. Capacity of Linguistic Communication Channels in Literary Texts: Application to Charles Dickens’ Novels. Information 2023, 14, 68. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the two processing units of a sentence. The words $p_1, p_2, \dots, p_j$ are stored in the first buffer, up to $j$ items, to complete a word interval $I_P$—approximately in Miller’s range—when an interpunction is introduced. $I_P$ is then stored in the E–STM buffer, up to $k$ items, i.e., in $M_F$ cells (approximately one to six), until the sentence ends.
Figure 2. Conditional PDFs of words per sentence versus an E–STM buffer of $C_F$ cells, from two to eight. Each PDF can be modelled with a Gaussian PDF $f_{C_F}(x)$ with a mean value proportional to $C_F$ and a standard deviation proportional to $\sqrt{C_F}$.
Figure 3. Conditional histograms of words per sentence versus an E–STM buffer of $C_F$ cells, from two to eight, obtained from Figure 2 by simulating 100,000 sentences weighted with the PDF of $M_F$.
Figure 4. Scatterplot of $\langle P_F\rangle$ versus $\langle M_F\rangle$ for Italian novels (blue circles) and English novels (red circles).
Figure 5. Overlap probability (%) versus $C_F - 1$ (lower $C_F$); $p_{H\to L}$ and $p_{L\to H}$ are given by Equations (5) and (6).
Figure 6. Number of sentences $S_W(C_F)$ made of $W$ words versus an E–STM buffer capacity of $C_F$ cells.
Figure 7. Number of sentences $S_W(C_F)$ recordable in an E–STM buffer of capacity $C_F$ versus words per sentence $W$.
Figure 8. Efficiency $\varepsilon$, Equation (8), of an E–STM buffer of $C_F$ cells versus words per sentence $W$.
Figure 9. Theoretical number of sentences $S_W(C_F)$ versus $\langle M_F\rangle$ for Italian (blue circles) and English (red circles) novels. The most displaced (red) circle is due to Robinson Crusoe.
Figure 10. Multiplicity factor $\alpha$ versus $S_{\langle P_F\rangle\langle M_F\rangle}$ for Italian (blue circles) and English (red circles) novels.
Figure 11. Multiplicity factor $\alpha$ versus the theoretical number of sentences $S_W(C_F)$ for Italian (blue circles and blue line) and English (red circles and red line) novels.
Figure 12. Multiplicity factor $\alpha$ versus $\langle M_F\rangle$ for Italian (blue circles and blue line) and English (red circles and red line) novels.
Figure 13. Multiplicity factor $\alpha$ versus the universal readability index $G_U$ for Italian (blue circles) and English (red circles) novels.
Figure 14. Mismatch index $I_M$ versus $\langle M_F\rangle$ for Italian (blue circles and blue line) and English (red circles and red line) novels.
Figure 15. Mismatch index $I_M$ versus the theoretical number of sentences $S_W(C_F)$ for Italian (blue circles and blue line) and English (red circles and red line) novels.
Figure 16. Mismatch index $I_M$ versus the multiplicity factor $\alpha$ for Italian (blue circles) and English (red circles) novels.
Figure 17. Multiplicity factor $\alpha$ versus the year of novel publication for Italian (blue circles) and English (red circles) novels.
Figure 18. Mismatch index $I_M$ versus the year of novel publication for Italian (blue circles) and English (red circles) novels.
Figure 19. Universal readability index $G_U$ versus the year of novel publication for Italian (blue circles) and English (red circles) novels.
Table 1. Mean value $\mu_x$ and standard deviation $\sigma_x$ of the log–normal PDF of the indicated variable [1].
Variable | $\mu_x$ | $\sigma_x$
$I_P$ | 1.689 | 0.180
$P_F$ | 3.038 | 0.441
$M_F$ | 0.849 | 0.483
Table 2. Theoretical number of sentences $S_W(C_F)$ (columns) recordable in an E–STM buffer made of $C_F$ cells with the same number of words (items) indicated in the first column.
Words $W$ (items storeable) | $C_F = 1$ | 2 | 3 | 4 | 5 | 6 | 7 | 8
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
2 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
3 | 1 | 2 | 1 | 0 | 0 | 0 | 0 | 0
4 | 1 | 3 | 3 | 1 | 0 | 0 | 0 | 0
5 | 1 | 4 | 6 | 4 | 1 | 0 | 0 | 0
6 | 1 | 5 | 10 | 10 | 5 | 1 | 0 | 0
7 | 1 | 6 | 15 | 20 | 15 | 6 | 1 | 0
8 | 1 | 7 | 21 | 35 | 35 | 21 | 7 | 1
9 | 1 | 8 | 28 | 56 | 70 | 56 | 28 | 8
10 | 1 | 9 | 36 | 84 | 126 | 126 | 84 | 36
11 | 1 | 10 | 45 | 120 | 210 | 252 | 210 | 120
12 | 1 | 11 | 55 | 165 | 330 | 462 | 462 | 330
13 | 1 | 12 | 66 | 220 | 495 | 792 | 1254 | 792
14 | 1 | 13 | 78 | 286 | 715 | 1287 | 2046 | 2046
15 | 1 | 14 | 91 | 364 | 1001 | 2002 | 3333 | 4092
16 | 1 | 15 | 105 | 455 | 1365 | 3003 | 5335 | 7425
17 | 1 | 16 | 120 | 560 | 1820 | 4368 | 8338 | 12,760
18 | 1 | 17 | 136 | 680 | 2380 | 6188 | 12,706 | 21,098
19 | 1 | 18 | 153 | 816 | 3060 | 8568 | 18,894 | 33,804
20 | 1 | 19 | 171 | 969 | 3876 | 11,628 | 27,462 | 52,698
21 | 1 | 20 | 190 | 1140 | 4845 | 15,504 | 39,090 | 80,160
22 | 1 | 21 | 210 | 1330 | 5985 | 20,349 | 54,594 | 119,250
23 | 1 | 22 | 231 | 1540 | 7315 | 26,334 | 74,943 | 173,844
24 | 1 | 23 | 253 | 1771 | 8855 | 33,649 | 101,277 | 248,787
25 | 1 | 24 | 276 | 2024 | 10,626 | 42,504 | 134,926 | 350,064
26 | 1 | 25 | 300 | 2300 | 12,650 | 53,130 | 177,430 | 484,990
27 | 1 | 26 | 325 | 2600 | 14,950 | 65,780 | 230,560 | 662,420
28 | 1 | 27 | 351 | 2925 | 17,550 | 80,730 | 296,340 | 892,980
29 | 1 | 28 | 378 | 3276 | 20,475 | 98,280 | 377,070 | 1,189,320
30 | 1 | 29 | 406 | 3654 | 23,751 | 118,755 | 475,350 | 1,566,390
31 | 1 | 30 | 435 | 4060 | 27,405 | 142,506 | 594,105 | 2,041,740
32 | 1 | 31 | 465 | 4495 | 31,465 | 173,971 | 768,076 | 2,635,845
33 | 1 | 32 | 496 | 4960 | 35,960 | 205,436 | 973,512 | 3,403,921
34 | 1 | 33 | 528 | 5456 | 40,920 | 241,396 | 1,178,948 | 4,377,433
35 | 1 | 34 | 561 | 5984 | 46,376 | 282,316 | 1,461,264 | 5,556,381
36 | 1 | 35 | 595 | 6545 | 52,360 | 328,692 | 1,789,956 | 7,017,645
37 | 1 | 36 | 630 | 7140 | 58,905 | 381,052 | 2,118,648 | 8,807,601
38 | 1 | 37 | 666 | 7770 | 66,045 | 439,957 | 2,499,700 | 10,926,249
39 | 1 | 38 | 703 | 8436 | 73,815 | 506,002 | 2,939,657 | 13,425,949
40 | 1 | 39 | 741 | 9139 | 82,251 | 579,817 | 3,519,474 | 16,365,606
41 | 1 | 40 | 780 | 9880 | 91,390 | 662,068 | 4,099,291 | 19,885,080
42 | 1 | 41 | 820 | 10,660 | 101,270 | 753,458 | 4,761,359 | 23,984,371
43 | 1 | 42 | 861 | 11,480 | 111,930 | 854,728 | 5,514,817 | 29,499,188
44 | 1 | 43 | 903 | 12,341 | 123,410 | 966,658 | 6,369,545 | 35,014,005
45 | 1 | 44 | 946 | 13,244 | 135,751 | 1,090,068 | 7,336,203 | 41,383,550
46 | 1 | 45 | 990 | 14,190 | 148,995 | 1,225,819 | 8,426,271 | 48,719,753
47 | 1 | 46 | 1035 | 15,180 | 163,185 | 1,374,814 | 9,652,090 | 58,371,843
48 | 1 | 47 | 1081 | 16,215 | 178,365 | 1,537,999 | 11,026,904 | 68,023,933
49 | 1 | 48 | 1128 | 17,296 | 194,580 | 1,716,364 | 12,564,903 | 79,050,837
50 | 1 | 49 | 1176 | 18,424 | 211,876 | 1,910,944 | 14,281,267 | 91,615,740
51 | 1 | 50 | 1225 | 19,600 | 230,300 | 2,122,820 | 16,192,211 | 105,897,007
52 | 1 | 51 | 1275 | 20,825 | 251,125 | 2,353,120 | 18,315,031 | 122,089,218
53 | 1 | 52 | 1326 | 22,100 | 273,225 | 2,604,245 | 20,668,151 | 140,404,249
54 | 1 | 53 | 1378 | 23,426 | 296,651 | 2,877,470 | 23,272,396 | 161,072,400
55 | 1 | 54 | 1431 | 24,804 | 320,077 | 3,174,121 | 26,149,866 | 184,344,796
56 | 1 | 55 | 1485 | 26,235 | 344,881 | 3,494,198 | 29,323,987 | 210,494,662
57 | 1 | 56 | 1540 | 27,720 | 371,116 | 3,839,079 | 32,818,185 | 239,818,649
58 | 1 | 57 | 1596 | 29,260 | 398,836 | 4,210,195 | 36,657,264 | 272,636,834
59 | 1 | 58 | 1653 | 30,856 | 428,096 | 4,609,031 | 40,867,459 | 309,294,098
60 | 1 | 59 | 1711 | 32,509 | 458,952 | 5,037,127 | 45,476,490 | 350,161,557