1. Introduction
Thermodynamics has the virtue of embedding a myriad of degrees of freedom into a simple collective response to system boundary conditions, based on maximum likelihood for equilibrium states. For very global systems, however, such as the economy or the planet’s ecological system, which are fed by external sources such as the sun, the challenge is different: non-equilibrium systems must be described, while the variety of their constituents is huge. The idea of a “simple collective response” thus becomes unlikely. Rather, a description of the largest events becomes the logical focus, hence a study of the role of stability and instability. It is now 50+ years since R.M. May [1,2] introduced the use of random matrices to study the stability of ecological systems, pointing out that the largest eigenvalues, related to system stability, were an interesting “proxy” for tackling stability issues while ignoring much of the system’s detail (hence, a thermodynamic inspiration).
Nowadays, while May’s approach is still debated (i.e., whether more interactions beget instability in real ecological systems), the description of our own economic growth as actors of the Anthropocene relies on extreme simplifications. These are, for instance, the GDP (gross domestic product) or sectorial analyses limited to a few dozen sectors. Big data, on the other hand, are intensively exploited in the financial realm. Some prominent contributors (J.-P. Bouchaud [3] and M. Smerlak [4], for instance, among others) are exploring uncharted interdisciplinary territories with all the power of statistical physics and random matrix theory (RMT).
Almost all of the literature on RMT, however, studies matrices of a given size N and the asymptotics N → ∞. For instance, it is well known that for real matrices (symmetric or from the Ginibre set), the largest eigenvalue is distributed in an interval whose width scales with N^(−2/3), given by the Tracy–Widom law (see [5] for a good analytical approximation). But how the largest eigenvalues and eigenvectors evolve when adding an extra row and column (N → N + 1) is hardly documented (with a provision for the so-called “cavity method” [6], used for the recursive deduction of matrix properties in the N → ∞ case).
In a recent contribution [7], the “matrix inflation” proposal was put forward to describe the growth of complex systems with a profusion of distinct “elements”. It was most notably aimed at describing their “punctuated” evolution according to the paradigm proposed by S.J. Gould for biological evolution: the occurrence of long stasis separated by short “quakes” [8,9]. The economy is another domain marked by a long series of technical disruptions. These disruptions were accelerated by the introduction of fossil energies two centuries ago, with such fuels becoming a conundrum for the future of societies. The core idea of the “matrix inflation” proposal is that the time-wise addition of a new row and column to an “iteration matrix” gives the representative vector a kind of “kick” that is adapted to model the role of successive innovations. The variable role of innovations, kicking large changes or not, is also a tenet of ecological systems’ evolution. The adoption of a new paradigm (for either case) occurs when the dominant eigenvalue and eigenvector make a leap instead of incrementally evolving under this action. This is possible in a privileged manner in a non-Hermitian setting because the moduli of extreme eigenvalues can leapfrog each other. In a Hermitian setting, on the contrary, the repulsion of neighbor eigenvalues translates into a no-leapfrog rule and an anti-crossing eigenvector exchange scenario (although, if not in an adiabatic limit, one could invoke Landau–Zener tunneling, for which a study within a non-Hermitian setting has recently appeared [10]).
The issue of what is a “quake” in the economy is not obvious, because some global quantities still show some continuity in their trends. One example is the correlation of PEC (primary energy consumption) to GDP. It holds well on the 1820–2020 interval [11] (linear until 1920, sublinear from 1920 on), in spite of major events (WWI and WWII, flu epidemics, collectivized economy in the USSR, and the Cold War) and major overhauls in energy choices. Wood, coal, and oil have been added to the mix, which is still made from mostly fossil-sourced and carbon-spewing energies. Some minor quakes are apparent [11]; however, it is appropriate to emphasize that technical innovation has no privileged time scale: while, e.g., in France, cars caused the horse market to fully collapse within a decade pre-WWI, nuclear power plants and ammonia synthesis to mass-produce fertilizers were ramped up over three decades in the second half of the 20th century. Thus, it is likely that each technological disruption has had only a weak individual effect on our thermodynamic fate. To capture the scale of the changes to be made in our material civilization, and compare it to known “quakes”, a model of our growth and of its internal dynamics in terms of the profusion of objects and of the network of interactions would be enlightening. This was, remotely, the scope of the previous contribution [7].
In this paper, I re-examine the operation of this model with the aim of establishing more thoroughly to what degree the succession of dominant eigenvalues/eigenvectors is its principle of operation. It is, thus, an exploration of a little-charted swath of random matrix theory, so as to consolidate its further use to describe large real systems. The emerging features could also trigger new explorations of how to cast the complexity of our society’s path in innovation. This may be useful for taming the emerging trends of energy use, for which current mitigation strategies have unclear perspectives.
Furthermore, I examine the case of entirely complex matrices, in which eigenvalues do not come in conjugate pairs, so that some tracking tools like the Rayleigh quotient have a simpler use. The methodology is, thus, to assess the role of a renewal strategy when “inflating” and “iterating” the model’s matrices by tracking the eigenvalues and the Rayleigh quotient. The latter is a good tool to capture the way an eigenvector evolves, with quakes and stasis, or with a more continuous evolution. One point of this methodology is to relate a discrete matrix model, in which a change in size is the key event, to the more continuous frame of real-world evolution. Thus, I explore how the model works depending on the way it is iterated in time.
Once such a basis is consolidated, the use of the model to deal with the profusion of actual objects/goods in the economy (or of species/individuals in ecosystems) could be developed. A possible perspective, in the spirit of thermodynamics, would be to apply various metrics of non-ergodicity (the subject has been interestingly linked to inequalities in the economy by O. Peters et al. [12–15]). This approach is complementary to the “microscopic” aspect of the renewal strategy. The issue of the non-ergodicity of the evolution described by the model is certainly an important “macroscopic” aspect. It can be approached from the angle of the type of long-term memory the system possesses of its past, i.e., of the way in which its current trajectory is dictated by its previous states. I will propose an initial exploration of this question.
The knowledge gained through these explorations could then be integrated into a more general vision. A desirable objective is to find the proper scaling in terms of the efforts needed to modify growth in a way compatible with current IPCC reports, for example: how big are the “quakes” the economy needs to shift to an acceptable trajectory. If we refer to related scientific areas, in order to act in a highly entangled economy, the desired new “quakes” and the following “stasis” could be better defined in the grammar of network theory, which closely parallels that of non-Hermitian Hamiltonians and RMT. Physicists practicing non-equilibrium thermodynamics would thus have the opportunity to contribute to radical transformations. Econophysics has shown some aspects of this understanding, but, in my opinion, it has failed to describe the “network complexity” of the real world. In a related area, impressive studies of actual networks have provided many new insights (e.g., labor flows and firm size in the work of R. Axtell [16]), but with more emphasis on the nature of the links between entities (firms, people) than on their relation to goods and energy. Despite the substantial changes that have occurred in the capitalist era (from the single-earner family of the industrial era to the two-earner family of the post-industrial era, for example), changes captured through such a prism might constitute only a partial picture of our material society, limiting its ability to foster large-scale changes.
After defining these aims and examining the related methodological issues, the paper proceeds as follows: in Section 2, I recall the main basis of the model, and the quantities I have tracked to give a reasonable idea of the relevance of the central concept of “dominance of successive eigenvalues”, which I see as an asset in meeting the knowledge challenges suggested above. In Section 3, I present the behavior of the model under different “iteration schemes” and show that only a subset of the eigenvalues is concerned if slower, “sluggish” iteration schemes are implemented. I show that there are only minor differences between the Ginibre set of (purely real) random matrices and the fully complex set with respect to the eigenvalue changes upon “inflation”. A “pragmatic” sensitivity analysis concludes this section. I discuss the possible role of non-ergodicity in Section 4 through some perspectives, and I conclude in Section 5.
2. The Matrix Inflation Model and Its Typical Output
The two aspects I have combined in the matrix inflation model [7] can be described as follows.
(A) First, in an iteration scheme v(t + 1) = G v(t), where G is an eventually large N × N matrix and v(t) a discrete-time vector function (t = 0, 1, 2, …, akin to those producing Krylov suites), the vector v(t) tends toward the eigenvector with the largest-modulus eigenvalue (in short the DEV(N), dominant eigenvector at size N). Possible degeneracies appear for real nonsymmetric G (the Ginibre set), with a majority of complex eigenvalues coming as conjugate pairs, but only accidentally for fully complex G. Moreover, for random matrices with all elements taken from the scaled centered normal law (variance 1/N per element, i.e., real and imaginary parts each of variance 1/(2N) in the complex case), a majority of eigenvalues cluster asymptotically in the unit-radius disc. A small fraction remains beyond it, comprising the extreme ones, lying in the “tail” of the distribution according, e.g., to various versions of the Tracy–Widom laws [5,17]. Note that this tail is substantially more extended towards the very largest outliers in the complex case than in the real case.
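The convergence of point (A) can be sketched in a few lines. The following is a minimal NumPy illustration (the paper’s own computations were done in Matlab); the size, seed, and iteration count are assumptions chosen only for the demonstration, with the 1/√(2N) scaling giving variance 1/N per complex element as above:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
# Fully complex random matrix; real and imaginary parts each of variance
# 1/(2N), so the bulk of the spectrum fills the unit-radius disc.
G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

# Iterate v(t+1) = G v(t), renormalizing to keep only the direction.
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
v /= np.linalg.norm(v)
for _ in range(5000):
    v = G @ v
    v /= np.linalg.norm(v)

# Compare with the eigenvector of the largest-modulus eigenvalue, DEV(N).
w, V = np.linalg.eig(G)
dev = V[:, np.argmax(np.abs(w))]
overlap = abs(np.vdot(dev, v))  # modulus of the scalar product; 1 means aligned
print(overlap)
```

With a clear gap between the two largest moduli, the overlap approaches 1 geometrically, at the rate |λ2|/|λ1| per step.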
(B) Secondly, when the size of the matrix is increased (one new row and one new column, the rest unchanged), the new eigenvectors are generally far from orthogonal to the previous ones. Thus, the DEV(N), padded with a zero component, has a sufficiently large projection on DEV(N + 1), acting as a large seed and resulting in making the subsequent iterations of G at size N + 1 converge to DEV(N + 1). The salt of the process is that the frequent case is a limited change in DEV(N + 1) relative to DEV(N), regardless of the exact change in the dominant eigenvalue, whereas the rare case is that a widely different DEV(N + 1) comes out: this rare case is logically associated (in the sense of “closer to”, using an appropriate scalar product) with one of the large eigenvalues that was just below the dominant one before inflation, and that took advantage of the extra row and column, so to speak, to leapfrog above the eigenvalue of DEV(N). This rare event then corresponds to a quake.
The mechanism that consists of applying v(t + 1) = G v(t) enough times at a given size N to attain a vector close to DEV(N), and then, at later times, of inflating the matrix by one new row and one new column, naturally provides a sequence of quakes and stasis. It also implies that the dominant eigenvectors across the quakes are seemingly unrelated. This is exactly the feature of “growth with disruption”, which I see as a welcome, if highly stylized, representation of the actual Schumpeterian growth picture in economics known as creative destruction (when an innovation entirely displaces a previous dynamic equilibrium) [18,19]; or a representation of species evolution with punctuated equilibria à la Gould [8,9].
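The whole stasis-and-quake mechanism can be condensed into a short loop. In the NumPy sketch below (the original computations were in Matlab), the sizes, the iteration count per size, and the 0.5 “quake” threshold on the cross-inflation overlap are all hypothetical choices made for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
N0, Nmax, p = 40, 120, 400   # initial size, final size, iterations per size (hypothetical)
A = (rng.standard_normal((Nmax, Nmax)) + 1j * rng.standard_normal((Nmax, Nmax))) / np.sqrt(2 * Nmax)

prev = None
overlaps = []                # |<DEV(N) padded, DEV(N+1)>| across each inflation step
for N in range(N0, Nmax + 1):
    G = A[:N, :N]            # active subblock at the current size
    if prev is None:
        v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    else:
        v = np.append(prev, 0.0)   # inflation: seed with the zero-padded previous DEV
    v /= np.linalg.norm(v)
    for _ in range(p):       # iterate v(t+1) = G v(t) toward DEV(N)
        v = G @ v
        v /= np.linalg.norm(v)
    if prev is not None:
        overlaps.append(abs(np.vdot(np.append(prev, 0.0), v)))
    prev = v

quakes = sum(o < 0.5 for o in overlaps)   # a low overlap flags a "quake"
print(len(overlaps), quakes)
```

Stasis corresponds to runs of overlaps near 1; the rarer small values mark steps where a sub-dominant eigenvalue has leapfrogged and the dominant eigenvector has changed abruptly.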
At this point, a few simple remarks can be made. Convergence when using a matrix of a given size N depends on the difference between the dominant eigenvalue and the next (degeneracy allowing); the misalignment decays as (|λ2|/|λ1|)^t after t iterations. The starting point depends on how a previously converged vector, which became an eigenvector pre-inflation (a right eigenvector, in my formalism), projects onto the dominant (still right) eigenvector post-inflation. With my normalized matrices, the spacing of the largest eigenvalues could be considered to scale like an inverse power of N. But the inflation process itself, and the fact that the system is considered just after a “leapfrog” of the previously second dominant eigenvector (I justified this assertion with an auxiliary drift–diffusion model leading to the same kind of q-exponential law of quake spacing as the law of spacing observed in the actual model [7]), could lead to a smaller-than-normal spacing. Nevertheless, I observe that the iteration budgets reached in practice already lead to bona fide model convergence for the quakes observed at sizes N of a few hundred. As for the projection, it is delicate to guess its scaling, as it demands careful tracking of the eigenvectors through the inflation stage to define it correctly. Indeed, this is typical of the subtleties of matrix inflation analysis.
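The geometric convergence rate invoked above can be checked directly. In this NumPy sketch (size, seed, and the two probe times are arbitrary choices for illustration), the misalignment with the dominant eigenvector is measured after two different numbers of iterations and compared with the eigenvalue-ratio prediction:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 150
G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

w, V = np.linalg.eig(G)
order = np.argsort(-np.abs(w))
dev = V[:, order[0]]
rate = abs(w[order[1]]) / abs(w[order[0]])   # |lambda_2| / |lambda_1| < 1

def misalignment(t):
    """1 - |<dev, v(t)>| after t normalized iterations from a fixed start."""
    v = np.ones(N, dtype=complex) / np.sqrt(N)
    for _ in range(t):
        v = G @ v
        v /= np.linalg.norm(v)
    return 1.0 - abs(np.vdot(dev, v))

early, late = misalignment(200), misalignment(2000)
print(rate, early, late)
```

The later misalignment is (much) smaller than the earlier one, shrinking roughly like rate**t until floating-point noise is reached.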
I can now present the results of Figure 1a–d, whereby I use parameters such that the convergence is good in case (a), with many iterations per inflation step, and weak in case (b), with far fewer. The computation (done with Matlab©) is run up to a final matrix size of a few hundred. I discard the absolute growth of the eigenvector at this stage, as it is easily rescaled arbitrarily. Figure 1a,c thus essentially show, in the form of two color maps, the growth of the vector v(t) with its stasis and quakes, through the moduli of its (nonzero) components. The final time is normalized to 1 for simplicity. Below the figure, to find out whether the “local dominant eigenvalue” is adopted (if so, one can reasonably infer that the corresponding eigenvector is also adopted, although the non-Hermitian setting calls for certain provisions), I make use of the simple tool of the Rayleigh quotient [20]:

R(t) = v(t)† G v(t) / (v(t)† v(t)),

where G here indicates the active N(t) × N(t) part of the matrix (technically, I first draw a random matrix at the final size, and use its leading subblock of the appropriate size at each time step).
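In code, the Rayleigh quotient on the active subblock reads as below. This NumPy sketch (sizes and normalization are illustrative assumptions; the paper’s computations were in Matlab) also checks the textbook property that the quotient returns the eigenvalue when fed the exact eigenvector:

```python
import numpy as np

rng = np.random.default_rng(3)
Nmax = 120
# Draw the full-size matrix once; at a given time only the leading
# N x N subblock is "active", as described in the text.
A = (rng.standard_normal((Nmax, Nmax)) + 1j * rng.standard_normal((Nmax, Nmax))) / np.sqrt(2 * Nmax)

def rayleigh(G, v):
    """Rayleigh quotient v† G v / v† v (complex-valued for non-Hermitian G)."""
    return np.vdot(v, G @ v) / np.vdot(v, v)

N = 80
G = A[:N, :N]
w, V = np.linalg.eig(G)
k = np.argmax(np.abs(w))
ok = np.isclose(rayleigh(G, V[:, k]), w[k])  # equals the dominant eigenvalue at the DEV
print(ok)
```

During the iterations, the distance of R(t) to the nearest large eigenvalue thus tells how close v(t) is to being adopted as the local DEV.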
Below the two color maps, I plot the corresponding real and imaginary parts of the Rayleigh quotient, as well as those of the 6 largest eigenvalues, in Figure 1c,d.
Comparing the two cases, it is clear that for Figure 1c,d, the number of iterations per step is largely insufficient to converge to the local eigenvalue. The pattern of the eigenvalues themselves features either a smooth variation upon an inflation step or, in some cases, a more severe disruption. Just as in Francis Galton’s old (about as old as Darwin’s) polyhedron explanation for irreversible abrupt changes in evolution (see the figure, e.g., in Erwin’s review of Gould’s work [21]), there are intrinsic tipping points in the inflation process, at least as far as the ranked eigenvalues are concerned. In the case of weak convergence, it can be seen that the convergence to the eigenvalues is prevented by the ongoing inflation steps. In this case, the tipping effects are smoothed, but the dynamics may still exhibit abrupt sequences.
I am now in a position to examine the various results of the next section.
4. Discussion
I have examined the extension of the growth model based on “matrix inflation” that I had previously introduced for the sole case of real matrices (non-symmetric, the Ginibre set) in [7]. As could be anticipated from the basic idea of eigenvalues “leapfrogging” one another through an inflation step, while carrying their eigenvector only marginally modified with them, the resulting dynamics of the inflation process appear very similar. I did not carry out the further analysis leading to the q-exponential law that I had found in [7], but the results seem qualitatively quite analogous, from the point of view of the spacing of quakes and the kind of changes they induce through the successive dominant eigenvectors.
I did not take into account absolute growth, which may occur more freely here given that eigenvalues do not come in conjugate pairs (pairs that lead to “cosine” oscillations during the iterations). However, it is much more difficult to imagine that absolute growth can have a deep meaning in terms of “punctuated equilibria”, if one thinks of the classical issues of ecological systems, whereby proliferation is generally tamed by the ecosystem as a whole. Weeds, for example, have an enormous growth potential, if one counts the ability to produce more than 100 seeds from a single one per year. Human agriculture can therefore be conceived as a race against weeds in time and space to obtain plants that provide, relatively slowly, staple food rather than many new seeds. But it is also known from the same systems that weeds are constrained to a much lower effective growth rate, except in invasion sequences (often triggered by man). In such invasion sequences, the issues at stake are those of qualitative rather than merely quantitative growth. Plants that have some kind of robust rhizome (bamboo) or, worse, a rhizome that survives subdivision into small pieces (e.g., Japanese knotweed, which circulates when soil is removed during river, road, and house works and redeposited) lead to striking invasions, while brambles and nettles are tamed after a while, e.g., as the forests that host them evolve over the years.
Apart from these general considerations, I have shown that even a relatively simple change in the model, the “amount of renewing” of the new vector with the old one, has an important, subtle effect, in terms of eigenvalues. It will be interesting to see, looking at actual systems, what are the proper ways to understand their inertia, and whether it favors any of the eigenvalue patterns discussed above. I anticipate that forcing the eigenvalues to lie in a closer corner of the disc in the case of partial renewal (long inertia) would also lead to a different correlation pattern of successive eigenvectors, which will undoubtedly deserve further investigation.
A final consideration concerns non-ergodicity. The “pragmatic” sensitivity analysis in Figure 5 has given an estimate of the ensemble deviation. But it is clear that, as the model is run, each sequence (in fact, each random matrix and, in practice, each realization) is a particular case. The ensemble average of these sequences would clearly, asymptotically, wash out the details and yield “gray”, “average” vectors containing no useful information. Furthermore, in the perfectly nonstationary context of growth, it is apparently impossible to define a sensible time average to perform an ergodic test. This is a concern, given that the model deals with quakes and crises, for the understanding of which non-ergodicity could play an important role. Such a role could be investigated in the apparently simple case of a geometric Brownian motion (GBM) series [12–15].
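As a reminder of why GBM is the canonical playground here, the following NumPy sketch (drift and volatility values are arbitrary choices for illustration) contrasts the ensemble growth rate, close to μ, with the time-averaged growth rate of a single long trajectory, close to μ − σ²/2, which can have the opposite sign; this is the non-ergodic effect stressed by Peters et al.:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.05, 0.4        # hypothetical drift and volatility

# Ensemble view: average many short trajectories; growth rate ~ mu.
T_e, M = 1.0, 200000
Z = rng.standard_normal(M)
X = np.exp((mu - 0.5 * sigma**2) * T_e + sigma * np.sqrt(T_e) * Z)
ens_rate = np.log(X.mean()) / T_e

# Time view: one very long trajectory; growth rate ~ mu - sigma^2/2 (< 0 here).
T_t = 10000.0
log_X = (mu - 0.5 * sigma**2) * T_t + sigma * np.sqrt(T_t) * rng.standard_normal()
time_rate = log_X / T_t

print(ens_rate, time_rate)
```

The two averages disagree even in this stationary-increment case: the ensemble grows while the typical single trajectory decays.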
However, it is perhaps possible to play the same game that historians are often asked to play, i.e., giving “some kind” of prevision (to be fair, historians, as agents of a professional community, warn rather starkly that however elaborate their understanding of the past may be, it has no direct scientifically based predictive power). Transposed into my framework, this means asking, for each given case, how much the evolution over past times (thus concerning the few basic components), used as a temporal average, makes sense for future changes (which, at the time of prediction, may look like an ensemble average). Of course, this is admittedly a quite skewed vision of ergodicity, but the conversation on this point may receive more attention as more data are collected that carry quantities of “historic” information, which could at least be used to inform a variety of scenarios.
A first step in the proposed direction is shown in Figure 6. Here, the matrix is inflated only in one particular case up to “half-way” of the above simulation (with enough iterations per step to qualitatively follow the DEV). Then, the rest of the inflation is performed with variable random additions of rows and columns while keeping the same “core” of the matrix (of this half-way size), which represents “the past”. By running different possible “futures”, one can see the role of “the past”, now in the ensemble manner as well as the time manner at will, thus somewhat addressing the tenets of ergodicity. More details can be grasped from the caption of Figure 6a,b. The chosen process reveals that memory, under these conditions chosen for preliminary exploration, clearly displays two time scales: a fast one related to the spacing of quakes (as described, e.g., by the q-exponential law of [7]), and a longer time scale, during which a more “sluggish” decay to the mean value of the scalar product (0.072 for the case studied) takes place. Thus, there are signs of long memory, with a possible appearance of non-ergodic aspects in the evolution of this model. I intend to address these in future work.
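The shared-past/branching-futures protocol can be mimicked in a few lines. In this NumPy sketch (core and final sizes and the number of futures are arbitrary, eigenvectors are obtained by direct diagonalization rather than iteration, and no claim is made of reproducing the 0.072 figure above), the same core is completed by independent random borders, and the memory of the past DEV is read from the final scalar products:

```python
import numpy as np

rng = np.random.default_rng(11)
Nc, Nf = 60, 120             # core ("past") size and final size (hypothetical)
core = rng.standard_normal((Nc, Nc)) + 1j * rng.standard_normal((Nc, Nc))

def dev(M):
    """Unit eigenvector of the largest-modulus eigenvalue."""
    w, V = np.linalg.eig(M)
    return V[:, np.argmax(np.abs(w))]

v_past = dev(core)
finals = []
for _ in range(20):          # independent random "futures" grafted on the same past
    A = rng.standard_normal((Nf, Nf)) + 1j * rng.standard_normal((Nf, Nf))
    A[:Nc, :Nc] = core       # identical core across all futures
    v = dev(A)
    finals.append(abs(np.vdot(np.append(v_past, np.zeros(Nf - Nc)), v)))

mem = float(np.mean(finals)) # ensemble-averaged memory of the past DEV
print(mem)
```

Averaging this overlap over many futures, and repeating at later and later branching times, gives access to both the ensemble-wise and time-wise views of memory discussed above.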
5. Conclusions and Perspectives
To conclude, I addressed some simple mathematical-physical issues related to the use of the matrix inflation model, bearing in mind its application to economic systems and ecological systems, in order to explain, among other things, their “quakes and stasis” patterns. I suggested that complex matrices, while unnatural for describing the usual quantities of economics, might be a tempting approach. The resulting evolution of eigenvectors obeys rules quite similar to those of a real matrix, and I defer to further work the full check that a q-exponential law also applies to the spacing of quakes in time, as in the real case [7]. And the avoidance of certain complexities that arise with conjugate pairs of eigenvectors could be considered a simplification.
I discussed how “inertia”, introduced by partial renewal, biases the set of successive dominant eigenvalues of the underlying matrix in an interesting, subtle way, favoring those closer to the right of the unit disc in the complex plane. The pattern of eigenvalue modulus evolution was also found to be very similar in appearance. There are now several ways to exploit the possible outcomes of such a domain.
Some look challenging, such as importing the notion of ergodicity into a non-stationary context (although the network context could be useful to examine the issue; see [23] for an example of network evolution in a non-Hermitian context). Others are closer to what the existence of big data troves allows: exploring the growth dynamics as I have carried out earlier [7] for the Web Of Science (WOS) bibliographic records (via the number of journals in a few tested WOS domains), and tracking whether changes in the patterns map those among different eigenvectors of an inflating matrix, as I proposed here. Other possible domains for such attempts could be catalogs that combine hierarchy and curation to a sufficiently high level. The European REACH legislation from the ECHA (European Chemicals Agency) comes with a catalog of chemicals that could serve this kind of purpose. The ecosystem of semiconductor products could also lend itself to the exercise, as this specific industry maintains high supply standards to ensure its viability in the face of the rapid evolution of companies in the sector. These proposed uses of non-Hermitian random matrices would then acquire one more universal characteristic feature, which adds to the many features of random matrices in general.