Article

Algorithmic Information Distortions in Node-Aligned and Node-Unaligned Multidimensional Networks

1 National Laboratory for Scientific Computing (LNCC), Petropolis 25651-075, RJ, Brazil
2 Laboratoire de Recherche Scientifique (LABORES) for the Natural and Digital Sciences, Algorithmic Nature Group, 75005 Paris, France
3 The Alan Turing Institute, British Library, 96 Euston Rd, London NW1 2DB, UK
4 Oxford Immune Algorithmics, Reading RG1 3EU, UK
5 Center for Molecular Medicine, Algorithmic Dynamics Lab, Unit of Computational Medicine, Department of Medicine Solna, Karolinska Institute, SE-171 77 Stockholm, Sweden
* Author to whom correspondence should be addressed.
Entropy 2021, 23(7), 835; https://doi.org/10.3390/e23070835
Submission received: 27 February 2021 / Revised: 18 May 2021 / Accepted: 21 May 2021 / Published: 29 June 2021

Abstract

In this article, we investigate limitations of importing methods based on algorithmic information theory from monoplex networks into multidimensional networks (such as multilayer networks) that have a large number of extra dimensions (i.e., aspects). In the worst-case scenario, it has been previously shown that node-aligned multidimensional networks with non-uniform multidimensional spaces can display exponentially larger algorithmic information (or lossless compressibility) distortions with respect to their isomorphic monoplex networks, so that these distortions grow at least linearly with the number of extra dimensions. In the present article, we demonstrate that node-unaligned multidimensional networks, either with uniform or non-uniform multidimensional spaces, can also display exponentially larger algorithmic information distortions with respect to their isomorphic monoplex networks. However, unlike the node-aligned non-uniform case studied in previous work, these distortions in the node-unaligned case grow at least exponentially with the number of extra dimensions. On the other hand, for node-aligned multidimensional networks with uniform multidimensional spaces, we demonstrate that any distortion can only grow up to a logarithmic order of the number of extra dimensions. Thus, these results establish that isomorphisms between finite multidimensional networks and finite monoplex networks do not preserve algorithmic information in general and highlight that the algorithmic information of the multidimensional space itself needs to be taken into account in multidimensional network complexity analysis.

1. Introduction

Algorithmic information theory (AIT) [1,2,3,4] has been playing an important role in the investigation of network complexity. Comprehensive surveys of previous work on the algorithmic information (or lossless compression) of networks or graphs, together with comparisons to entropy-based methods, can be found in [5,6,7]. For example, AIT has contributed to the challenge of causality discovery in network modeling [8], network summarization [9,10], automorphism group size [11], network topological properties [11,12], and the principle of maximum entropy and network topology reprogrammability [13]. Beyond monoplex networks (or graphs), Santoro and Nicosia [14] investigated the reducibility problem of multiplex networks. As the study of multidimensional networks, such as multilayer networks and dynamic multilayer networks, has become one of the central topics in network science, further exploration of algorithmic information in this setting has become relevant.
In this sense, we show that the currently existing methods that are based on AIT applied to monoplex networks (or graphs) cannot be straightforwardly imported into the multidimensional case without a proper evaluation of the algorithmic information distortions that might be present. Our results show the importance of multidimensional network encodings into which the necessary information for determining the multidimensional space itself is also embedded.
This article explores the possible combinations of node alignment and uniformity that can generate algorithmic information distortions and establishes worst-case error margins for these distortions in multidimensional network complexity analyses. We present a theoretical investigation of worst-case algorithmic information content (or lossless compressibility) distortions in four types of multidimensional networks that have sufficiently large multidimensional spaces. We show that:
  • node-unaligned multidimensional networks with non-uniform multidimensional spaces display exponentially larger distortions with respect to their respective isomorphic monoplex networks and that these worst-case distortions in the node-unaligned non-uniform case grow at least exponentially with the number of extra node dimensions;
  • node-unaligned multidimensional networks with uniform multidimensional spaces also display exponentially larger distortions with respect to their respective isomorphic monoplex networks and that these worst-case distortions in the node-unaligned uniform case also grow at least exponentially with the number of extra node dimensions;
  • node-aligned multidimensional networks with non-uniform multidimensional spaces also display exponentially larger distortions with respect to their respective isomorphic monoplex networks, but these worst-case distortions in the node-aligned non-uniform case grow at least linearly with the number of extra node dimensions;
  • node-aligned multidimensional networks with uniform multidimensional spaces can only display distortions up to a logarithmic order of the number of extra node dimensions.
These four results are demonstrated in our final Theorem 5 by combining previous results from [15] (which are briefly recalled in Section 2) with new results from Section 4 of this article. We highlight that, unlike the node-aligned non-uniform case studied in [15] (in which the worst-case distortions were shown to grow at least linearly with the number of extra node dimensions), we demonstrate that the worst-case distortions in the node-unaligned cases grow at least exponentially with the number of extra node dimensions. In addition, we highlight that the node-aligned uniform case is shown to be the one in which the algorithmic information content of a multidimensional network and that of its isomorphic monoplex network are proved to be the least distorted as the number of node dimensions increases. This occurs because any algorithmic information distortion can only grow up to a logarithmic order of the number of extra node dimensions, which contrasts with the exponential growth in the node-unaligned cases and with the linear growth in the node-aligned non-uniform case.
The remainder of the article is organized as follows: In Section 2, we present the previous results achieved in [15], which correspond to the node-aligned non-uniform case. In Section 3, we study basic properties of encoded node-unaligned multidimensional networks. In Section 4, we demonstrate the new mathematical results. In Section 5, we discuss the limitations and conditions for importing monoplex network algorithmic information into multidimensional network algorithmic information. Section 6 concludes the paper.
This article is an extended version of a previous conference paper [15], whose results correspond to the node-aligned non-uniform case presented in Section 2. A preprint version of the present article containing additional proofs is available at [16].

2. Previous Work: The Node-Aligned Non-Uniform Case

Theorem 1 and Corollary 1 below restate the results in [15], which show that, although for every node-aligned multidimensional network there is a monoplex network that is isomorphic to this multidimensional network [17], they are not always equivalent in terms of algorithmic information. In addition, they demonstrate that these distorted values of algorithmic information content grow at least linearly with the value of p (i.e., number of extra node dimensions).
Theorem 1.
There are arbitrarily large encodable simple node-aligned MAGs $\mathcal{G}_c$ given $\langle\tau(\mathcal{G}_c)\rangle$ with arbitrarily large non-uniform multidimensional spaces such that
$$ K\big(\tau(\mathcal{G}_c)\big) + O(1) \geq K\big(\langle\mathcal{E}(\mathcal{G}_c)\rangle \mid x\big) \geq K\big(\tau(\mathcal{G}_c)\big) - O\big(\log_2 K(\tau(\mathcal{G}_c))\big), $$
with $K\big(\langle\mathcal{E}(\mathcal{G}_c)\rangle\big) \geq p - O(1)$ and $K(x) = O(\log_2 p)$, where $x$ is the respective (node-aligned) characteristic string and $p$ is the order of the MAG $\mathcal{G}_c$.
Corollary 1.
There are an infinite family $F_1$ of simple (node-aligned) MAGs and an infinite family $F_2$ of classical graphs, where every classical graph in $F_2$ is MAG-graph-isomorphic to at least one MAG in $F_1$, such that, for every constant $c \in \mathbb{N}$, there are $\mathcal{G}_c \in F_1$ and a $G_{\mathcal{G}_c} \in F_2$ that is MAG-graph-isomorphic to $\mathcal{G}_c$, where
$$ O\big(\log_2 K(\langle\mathcal{E}(\mathcal{G}_c)\rangle)\big) > c + K\big(E(G_{\mathcal{G}_c})\big). $$
Remember from [15] that multidimensional networks are mathematically represented by multiaspect graphs (MAGs) $\mathcal{G} = (\mathcal{A}, \mathcal{E})$ [17,18], where $\mathcal{A}$ is the list of aspects and $\mathcal{E}$ is the composite edge set (i.e., the set of existent composite edges). This way, note that $\mathcal{A} = \big(\mathcal{A}(\mathcal{G})[1], \dots, \mathcal{A}(\mathcal{G})[i], \dots, \mathcal{A}(\mathcal{G})[p]\big)$ is a list of sets, where each set $\mathcal{A}(\mathcal{G})[i]$ in this list is an aspect (or node dimension [19]). (In this article, the terms "aspect" and "node dimension" are employed interchangeably.) The companion tuple of a MAG $\mathcal{G}$ is denoted by $\tau(\mathcal{G})$ [18], where
$$ \tau(\mathcal{G}) = \big(|\mathcal{A}(\mathcal{G})[1]|, \dots, |\mathcal{A}(\mathcal{G})[p]|\big) $$
and $p$ is called the order of the MAG. $\langle\tau(\mathcal{G}_c)\rangle$ denotes an arbitrary encoded form of the companion tuple. We refer to the discrete multidimensional space of a MAG as the discrete cartesian product $\prod_{i=2}^{p}\mathcal{A}(\mathcal{G})[i]$ into which the nodes of the network are embedded. In the particular case in which $\mathcal{A}(\mathcal{G})[i] = \mathcal{A}(\mathcal{G})[j]$ holds for every $i, j \leq p$, we say the multidimensional space of the MAG is uniform.
A MAG is said to be node-aligned iff $\mathbb{V}(\mathcal{G}) = \prod_{i=1}^{p}\mathcal{A}(\mathcal{G})[i]$ and $\mathbb{E}(\mathcal{G}) = \mathbb{V}(\mathcal{G}) \times \mathbb{V}(\mathcal{G})$, where $\mathbb{V}(\mathcal{G})$ denotes the node-aligned set of all composite vertices $v = (a_1, \dots, a_p)$ of $\mathcal{G}$, and $\mathbb{E}(\mathcal{G})$ denotes the set of all possible composite edges $e = \big((a_1, \dots, a_p), (b_1, \dots, b_p)\big)$ of $\mathcal{G}$.
Following the common notation and nomenclature [20,21,22], $G = (V, E)$ denotes a general (directed or undirected) graph, where $V$ is the finite set of vertices and $E \subseteq V \times V$; if a graph only contains undirected edges and does not contain self-loops, then it is called a simple graph. A graph $G$ is said to be (vertex-)labeled when the members of $V$ are distinguished from one another by labels such as $v_1, v_2, \dots, v_{|V|}$. If a simple graph is labeled this way by natural numbers, that is, $V = \{1, \dots, n\}$ with $n \in \mathbb{N}$, then it is called a classical graph. In a direct analogy to classical graphs, a simple MAG $\mathcal{G}_c = (\mathcal{A}, \mathcal{E})$ is an undirected MAG without self-loops, so that the set $\mathbb{E}_c(\mathcal{G}_c)$ of all possible undirected and non-self-loop composite edges is defined by $\mathbb{E}_c(\mathcal{G}_c) := \{\{u, v\} \mid u, v \in \mathbb{V}(\mathcal{G}_c)\}$ and $\mathcal{E}(\mathcal{G}_c) \subseteq \mathbb{E}_c(\mathcal{G}_c)$ always holds. Hence, if a simple MAG $\mathcal{G}_c$ is (composite-vertex-)labeled with natural numbers (i.e., for every $i \leq p$, $\mathcal{A}(\mathcal{G})[i] = \{1, \dots, |\mathcal{A}(\mathcal{G})[i]|\} \subset \mathbb{N}$), then we say that $\mathcal{G}_c$ is a classical MAG. Note that, for classical MAGs, the companion tuples completely determine the discrete multidimensional space of the respective MAGs. For the present purposes of this article, all graphs $G$ will be classical graphs and all MAGs will be simple MAGs (whether node-aligned or node-unaligned). Following the usual definition of encodings, a MAG is encodable given $\langle\tau(\mathcal{G}_c)\rangle$ iff there is a fixed program that, given $\langle\tau(\mathcal{G}_c)\rangle$ as input, can univocally encode any possible $\mathcal{E}(\mathcal{G}_c)$ that shares the same companion tuple. In other words, if the companion tuple $\tau(\mathcal{G}_c)$ of the MAG $\mathcal{G}_c$ is already known, then one can computably retrieve the position of any composite edge $e = \{u, v\}$ in the chosen data representation of $\mathcal{G}_c$ from both composite vertices $u$ and $v$, and vice versa.
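As a concrete illustration of this encodability condition, the following minimal sketch (all names are illustrative assumptions, not the particular encoding fixed in [15,16]) enumerates the composite vertices of a classical MAG from its companion tuple by mixed-radix indexing, in both directions:

```python
# A hedged sketch of one possible computable enumeration underlying
# encodability given the companion tuple; illustrative names only.
from itertools import product

def composite_vertices(tau):
    """Enumerate V(G) = A[1] x ... x A[p] for a classical MAG with companion tuple tau."""
    return list(product(*(range(1, size + 1) for size in tau)))

def vertex_to_index(v, tau):
    """Mixed-radix index of the composite vertex v = (a_1, ..., a_p), starting at 0."""
    index = 0
    for a_i, size in zip(v, tau):
        index = index * size + (a_i - 1)
    return index

def index_to_vertex(index, tau):
    """Inverse of vertex_to_index, recovering (a_1, ..., a_p) from its index."""
    digits = []
    for size in reversed(tau):
        digits.append(index % size + 1)
        index //= size
    return tuple(reversed(digits))

tau = (3, 2, 2)            # a hypothetical companion tuple: |A[1]| = 3, |A[2]| = 2, |A[3]| = 2
v = (2, 1, 2)
i = vertex_to_index(v, tau)
assert index_to_vertex(i, tau) == v
assert len(composite_vertices(tau)) == 3 * 2 * 2
```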
$\langle\mathcal{E}(\mathcal{G}_c)\rangle$ denotes the (node-aligned) composite edge set string $\big\langle\langle e_1, z_1\rangle, \dots, \langle e_n, z_n\rangle\big\rangle$ such that $z_i = 1$ iff $e_i \in \mathcal{E}(\mathcal{G}_c)$, where $z_i \in \{0, 1\}$ with $1 \leq i \leq n = |\mathbb{E}_c(\mathcal{G}_c)|$. Note that a composite edge set string is an encoding of a composite edge list, which in turn is a generalization of edge lists [23] so as to deal with the encoding of MAGs instead of graphs. Thus, the reader may also interchangeably call the composite edge set string a composite edge list encoding. A bit string $x$ of length $|\mathbb{E}_c(\mathcal{G}_c)|$ is said to be a (node-aligned) characteristic string of $\mathcal{G}_c$ iff there is a fixed program that, given $x$ as input, computes the composite edge set $\mathcal{E}(\mathcal{G}_c)$, and there is another fixed program that, given the encoded composite edge set $\langle\mathcal{E}(\mathcal{G}_c)\rangle$ as input, returns the string $x$. Note that the characteristic string of a MAG differs from the composite edge set string by the fact that an encoding of the companion tuple is already embedded into the composite edge set string. On the other hand, Theorem 1 shows this is not always the case for characteristic strings, which in some cases do not carry sufficient information for determining their respective companion tuples.
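The difference between the two encodings can be illustrated as follows (a hedged sketch with assumed names, not the paper's concrete data representation); note that the companion tuple is recoverable from the composite edge set string, whereas the characteristic string keeps only the bits $z_1 \dots z_n$:

```python
# Illustrative contrast between the composite edge set string and the
# characteristic string of a simple node-aligned MAG; names are assumptions.
from itertools import combinations, product

def all_composite_edges(tau):
    """All undirected, non-self-loop composite edges E_c(G_c), in a fixed ordering."""
    vertices = product(*(range(1, size + 1) for size in tau))
    return [frozenset(pair) for pair in combinations(vertices, 2)]

def composite_edge_set_string(edge_set, tau):
    """The (node-aligned) composite edge set string: one pair <e_i, z_i> per possible edge."""
    return [(e, 1 if e in edge_set else 0) for e in all_composite_edges(tau)]

def characteristic_string(edge_set, tau):
    """The (node-aligned) characteristic string: only the bits z_1 ... z_n."""
    return ''.join(str(z) for _, z in composite_edge_set_string(edge_set, tau))

tau = (2, 2)                                        # hypothetical companion tuple, |V| = 4
edges = {frozenset({(1, 1), (2, 2)})}               # a single composite edge
print(characteristic_string(edges, tau))            # a bit string of length |E_c| = 6
# A MAG with tau = (4,) has a characteristic string of the same length 6, which is
# one way to see why x alone need not determine the companion tuple (cf. Theorem 1).
```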
As established in [17], we say a (node-aligned) MAG $\mathcal{G}$ is MAG-graph-isomorphic to a graph $G$ iff there is a bijective function $f : \mathbb{V}(\mathcal{G}) \to V(G)$ such that $e \in \mathcal{E}(\mathcal{G})$ iff $\big(f(\pi_o(e)), f(\pi_d(e))\big) \in E(G)$, where $\pi_o$ is a function that returns the origin composite vertex of a composite edge and $\pi_d$ is a function that returns the destination composite vertex of a composite edge. This way, $G_{\mathcal{G}_c}$ denotes the classical graph which is MAG-graph-isomorphic to the simple MAG $\mathcal{G}_c$. We know from [17] that this classical graph always exists. In order to avoid ambiguities in the nomenclature with the classical isomorphism in graphs (which is usually a vertex label transformation), we call such an isomorphism between a MAG and a graph from [17] a MAG-graph isomorphism; the usual isomorphism between graphs [20,21], a graph isomorphism; and the isomorphism between two MAGs $\mathcal{G}$ and $\mathcal{G}'$, a MAG isomorphism.
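As an illustration (assumed names), the bijection $f$ of a MAG-graph isomorphism can be instantiated by relabeling composite vertices with natural numbers via the same kind of mixed-radix indexing:

```python
# Illustrative instance of an aligning MAG-graph isomorphism; not the
# specific construction of [17], just one valid choice of labeling.
def vertex_label(v, tau):
    """Map the composite vertex v = (a_1, ..., a_p) to a label in {1, ..., n}."""
    index = 0
    for a_i, size in zip(v, tau):
        index = index * size + (a_i - 1)
    return index + 1

def mag_to_classical_graph(composite_edges, tau):
    """Edge set of a classical graph that is (aligning) MAG-graph-isomorphic to the MAG."""
    return {frozenset({vertex_label(u, tau), vertex_label(v, tau)}) for u, v in composite_edges}

tau = (3, 2, 2)                                                 # hypothetical companion tuple
edges = {((1, 1, 1), (2, 1, 2)), ((2, 1, 2), (3, 2, 1))}        # a toy composite edge set
print(mag_to_classical_graph(edges, tau))                       # {frozenset({1, 6}), frozenset({6, 11})}
```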
$K(w)$ denotes the (unconditional) prefix algorithmic complexity of $w$, which is the length $l(w^*)$ of the shortest program $w^*$ such that $\mathbf{U}(w^*) = w$, where $\mathbf{U}$ denotes a universal prefix Turing machine. The conditional prefix algorithmic complexity is denoted by $K(\cdot \mid \cdot)$.
More detailed notation, nomenclature and properties of encodable node-aligned simple MAGs can be found in [15,16].

Beyond the Node-Aligned Case Studied in Previous Work

The results in [15] demonstrate that node-aligned multidimensional networks with non-uniform multidimensional spaces can display exponentially larger algorithmic information distortions with respect to their isomorphic monoplex networks. On the other hand, one may want to also investigate the extent of those distortions when the multidimensional network is node-unaligned and/or the multidimensional space is uniform. The results we demonstrate in the following sections differ from the ones of the present Section 2 (and from [15]) by investigating algorithmic information distortions in node-unaligned multidimensional networks with either uniform or non-uniform multidimensional spaces. Indeed, in the forthcoming node-unaligned case, additional algorithmic information may be necessary in order to determine to which $\alpha \in \prod_{i=2}^{p}\mathcal{A}(\mathcal{G})[i]$ a node does not belong, given that only the necessary and sufficient information to compute $\mathcal{E}$ (or $E_M$) is known a priori. We shall see that this leads to worst-case algorithmic information distortions that grow at least exponentially with the value of $p$, which differs from the linear growth presented by Theorem 1 from [15].

3. The Node-Unaligned Cases

With the purpose of addressing other variations of multidimensional networks, in which a node not belonging to a certain $\alpha \in \prod_{i=2}^{p}\mathcal{A}(\mathcal{G})[i]$ has an important physical meaning, the node alignment can be relaxed. This gives rise to multidimensional networks that are not node-aligned, such as node-unaligned multilayer networks [24] or node-unaligned multiplex networks [25].
As a multiplex network is usually understood as a particular case of a multilayer network [24,26,27] in which there is only one extra node dimension (i.e., $d = 1$), we may focus only on a mathematical formulation of multilayer networks that allows nodes to be not aligned, which is given by $M = (V_M, E_M, V, \mathbf{L})$ [24], where:
  • $V$ denotes the set of all possible vertices $v$;
  • $\mathbf{L} = \{L_a\}_{a=1}^{d}$ denotes a collection of $d \in \mathbb{N}$ sets $L_a$ composed of elementary layers $\alpha \in L_a$;
  • $V_M \subseteq V \times L_1 \times \dots \times L_d$ denotes the subset of all possible vertices paired with elements of $L_1 \times \dots \times L_d$;
  • $E_M \subseteq V_M \times V_M$ denotes the set of interlayer and/or intralayer edges connecting two node-layer tuples $(v, \alpha_1, \dots, \alpha_d) \in V_M$.
In this regard, a multilayer network $M$ is said to be node-aligned iff $V_M = V \times L_1 \times \dots \times L_d$. In the case in which each $\alpha \in \prod_{i=2}^{p}\mathcal{A}(\mathcal{G})[i]$ can be interpreted as (or is representing) a layer, it is important to note that there are some immediate equivalences between $\mathcal{G}$ and $M$ [19]:
  • $V$ is the usual set of vertices, which corresponds to the first aspect $\mathcal{A}(\mathcal{G})[1]$ of a MAG $\mathcal{G}$;
  • each set $L_a$ is the $(a+1)$-th aspect $\mathcal{A}(\mathcal{G})[a+1]$ of a MAG $\mathcal{G}$;
  • $V_M$ is a subset of the set $\mathbb{V}(\mathcal{G})$ of all composite vertices, where every node-layer tuple $(v, \alpha_1, \dots, \alpha_d) \in V_M$ is a composite vertex of $\mathcal{G}$ (i.e., an element of $\mathbb{V}(\mathcal{G})$);
  • $E_M \subseteq \mathbb{E}(\mathcal{G})$ is a subset of the set of all possible composite edges $(u, v)$ for which $u, v \in V_M$.
Thus, if $V_M = V \times L_1 \times \dots \times L_d$, then one will have that $V_M = \mathbb{V}(\mathcal{G})$ and $E_M \subseteq \mathbb{E}(\mathcal{G})$. This way, besides notation distinctions, it directly follows that a node-aligned multilayer network $M$ is totally equivalent to a MAG $\mathcal{G}$; and, therefore, every result in this paper holding for simple node-aligned MAGs $\mathcal{G}_c$ automatically holds for node-aligned multilayer networks $M$ that only have undirected edges and do not contain self-loops. Nevertheless, since the algorithmic information distortions are based on (and are limited by) the algorithmic information of the multidimensional space itself, and the uniformity (or non-uniformity) of the multidimensional space is determined by the aspects, we highlight that representing multidimensional networks with MAGs, aspects, and companion tuples facilitates the achievement of our theoretical results.
With the purpose of extending our results to the node-unaligned case, we need to introduce a variation of MAGs so as to allow into the mathematical formalization the possibility of some vertices not being paired with some $\alpha$'s, where $\alpha \in \prod_{i=2}^{p}\mathcal{A}(\mathcal{G})[i]$. Moreover, we need node-aligned MAGs to become particular cases of our new formalization such that the algorithmic information between the two formalizations becomes equivalent when the MAG is node-aligned, which we will show in Lemma 3. To this end, we introduce a modification in the definition of MAGs so that the major differences are in the set of composite vertices and, consequently, in the set of composite edges.
Definition 1.
We define a node-unaligned MAG as a triple $\mathcal{G}_{ua} = (\mathcal{A}, \mathbb{V}_{ua}, \mathcal{E}_{ua})$ in which
$$ \mathcal{A} = \big(\mathcal{A}(\mathcal{G}_{ua})[1], \dots, \mathcal{A}(\mathcal{G}_{ua})[i], \dots, \mathcal{A}(\mathcal{G}_{ua})[p]\big) $$
is a list of sets (each of which is an aspect of $\mathcal{G}_{ua}$),
$$ \mathbb{V}_{ua}(\mathcal{G}_{ua}) \subseteq \mathbb{V}(\mathcal{G}_{ua}) = \prod_{i=1}^{p} \mathcal{A}(\mathcal{G}_{ua})[i] $$
is the set of existing composite vertices, and
$$ \mathcal{E}_{ua} \subseteq \mathbb{E}_{ua}(\mathcal{G}_{ua}) = \mathbb{V}_{ua}(\mathcal{G}_{ua}) \times \mathbb{V}_{ua}(\mathcal{G}_{ua}) $$
is the set of present composite edges $(u, v)$.
The definition of simple node-unaligned MAGs $\mathcal{G}_{ua_c} = (\mathcal{A}, \mathbb{V}_{ua}, \mathcal{E}_{ua})$ follows analogously to the aligned case by just restricting the set of all possible composite edges, so that $\mathbb{E}_{ua_c}(\mathcal{G}_{ua_c}) := \{\{u, v\} \mid u, v \in \mathbb{V}_{ua}(\mathcal{G}_{ua_c})\}$ and $\mathcal{E}_{ua}(\mathcal{G}_{ua_c}) \subseteq \mathbb{E}_{ua_c}(\mathcal{G}_{ua_c})$ hold. In addition, all other terminology of order, multidimensional space, uniformity, (composite-vertex-)labeling, and classical MAGs applies analogously as in Section 2 (see also [16]).
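To make the correspondence concrete, the following minimal sketch (assumed names and data layout) recasts a node-unaligned multilayer network $M = (V_M, E_M, V, \mathbf{L})$ as the triple $(\mathcal{A}, \mathbb{V}_{ua}, \mathcal{E}_{ua})$ of Definition 1:

```python
# Illustrative only; not the formalism's official data representation.
def multilayer_to_unaligned_mag(V, L, V_M, E_M):
    """Return (aspects, existing composite vertices, existing composite edges)."""
    aspects = [set(V)] + [set(layer) for layer in L]   # A[1] corresponds to V, A[a+1] to L_a
    V_ua = set(V_M)                                    # node-layer tuples become existing composite vertices
    E_ua = set(E_M)                                    # pairs of members of V_M become composite edges
    assert all(u in V_ua and v in V_ua for (u, v) in E_ua)
    return aspects, V_ua, E_ua

# toy example: two vertices, one extra node dimension with two elementary layers,
# and vertex 2 absent from layer 'b' (so the network is not node-aligned)
V, L = {1, 2}, [{'a', 'b'}]
V_M = {(1, 'a'), (2, 'a'), (1, 'b')}
E_M = {((1, 'a'), (2, 'a')), ((1, 'a'), (1, 'b'))}
aspects, V_ua, E_ua = multilayer_to_unaligned_mag(V, L, V_M, E_M)
assert len(V_ua) == 3 < len(V) * len(L[0])
```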
As in the node-aligned case, in order to define an unaligned version of the companion tuple, the latter should completely determine the size of the set $\mathbb{E}_{ua}(\mathcal{G}_{ua})$ and, if $\mathcal{G}_{ua}$ is a classical MAG, the companion tuple should completely determine the multidimensional space of $\mathcal{G}_{ua}$. In this sense, a node-unaligned version of the companion tuple also needs to carry the necessary and sufficient information for computably retrieving the set $\mathbb{V}_{ua}(\mathcal{G}_{ua})$.
Definition 2.
The node-unaligned companion tuple $\tau_{ua}$ of a MAG $\mathcal{G}_{ua}$ is defined by the pair of tuples
$$ \tau_{ua}(\mathcal{G}_{ua}) := \Big( \big(|\mathcal{A}(\mathcal{G}_{ua})[1]|, \dots, |\mathcal{A}(\mathcal{G}_{ua})[p]|\big), \big(m_1, \dots, m_{|\mathbb{V}(\mathcal{G}_{ua})|}\big) \Big), $$
such that, for every $v_i \in \mathbb{V}(\mathcal{G}_{ua})$ in a previously chosen arbitrary computable enumeration of $\mathbb{V}(\mathcal{G}_{ua})$, $v_i \in \mathbb{V}_{ua}(\mathcal{G}_{ua})$ iff $m_i = 1$, where $m_i \in \{0,1\}$ and $1 \leq i \leq |\mathbb{V}(\mathcal{G}_{ua})|$.
As for encoding $\tau_{ua}$, one can also employ the recursive pairing function $\langle\cdot,\cdot\rangle$ as usual:
$$ \langle\tau_{ua}(\mathcal{G}_{ua})\rangle := \Big\langle \big\langle |\mathcal{A}(\mathcal{G}_{ua})[1]|, \dots, |\mathcal{A}(\mathcal{G}_{ua})[p]| \big\rangle, \big\langle m_1, \dots, m_{|\mathbb{V}(\mathcal{G}_{ua})|} \big\rangle \Big\rangle. $$
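A small illustration of Definition 2 follows (assumed names; the fixed computable enumeration is taken here to be the lexicographic one, which the definition itself does not prescribe):

```python
# Illustrative sketch of the node-unaligned companion tuple: aspect sizes
# plus one membership bit per possible composite vertex.
from itertools import product

def unaligned_companion_tuple(aspects, existing_vertices):
    sizes = tuple(len(a) for a in aspects)
    enumeration = product(*(sorted(a) for a in aspects))           # enumeration of V(G_ua)
    bits = tuple(1 if v in existing_vertices else 0 for v in enumeration)
    return sizes, bits

aspects = [{1, 2}, {'a', 'b'}]                                     # A[1] = vertices, A[2] = layers
existing = {(1, 'a'), (2, 'a'), (1, 'b')}                          # V_ua: vertex 2 missing from layer 'b'
print(unaligned_companion_tuple(aspects, existing))                # ((2, 2), (1, 1, 1, 0))
```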
The MAG-graph isomorphism also requires a slight modification:
Definition 3.
$\mathcal{G}_{ua}$ is unaligning MAG-graph-isomorphic to a graph $G$ iff there is a bijective function $f : \mathbb{V}_{ua}(\mathcal{G}_{ua}) \to V(G)$ such that $e \in \mathcal{E}_{ua}(\mathcal{G}_{ua}) \subseteq \mathbb{E}_{ua}(\mathcal{G}_{ua})$ iff $\big(f(\pi_o(e)), f(\pi_d(e))\big) \in E(G)$, where $\pi_o$ is a function that returns the origin composite vertex of a composite edge and $\pi_d$ is a function that returns the destination composite vertex of a composite edge.
This way, we can straightforwardly obtain the following Theorem 2 analogously to the proof of (Theorem 1, p. 54, [17]).
Theorem 2.
For every MAG $\mathcal{G}_{ua}$ of order $p > 0$, where all aspects are non-empty sets, there is a unique (up to a graph isomorphism) graph $G^{ua}_{\mathcal{G}_{ua}} = (V, E)$ that is unaligning MAG-graph-isomorphic to $\mathcal{G}_{ua}$, where
$$ |V(G^{ua}_{\mathcal{G}_{ua}})| = |\mathbb{V}_{ua}(\mathcal{G}_{ua})|. $$
The proof of Theorem 2 can be found in [16].
It is important to note our choice of notation distinction between the node-aligned and the node-unaligned case when a graph is MAG-graph-isomorphic to a MAG. The graph $G_{\mathcal{G}_{ua}}$ is said to be aligning MAG-graph-isomorphic to the MAG when the set of possible composite vertices is complete, that is, when it is taken from $\mathbb{V}(\mathcal{G}_{ua})$. This was the case in Section 2. On the other hand, the graph $G^{ua}_{\mathcal{G}_{ua}}$ is said to be unaligning MAG-graph-isomorphic to the MAG (which is the case of Definition 3, Theorem 2, Corollary 2, and Theorem 5(I)(b)) when the possible composite vertices are taken from $\mathbb{V}_{ua}(\mathcal{G}_{ua})$ instead of $\mathbb{V}(\mathcal{G}_{ua})$.

Encoding Node-Unaligned Multiaspect Graphs

Encodability of node-unaligned MAGs given the companion tuple works in the same way as in the node-aligned case. That is, a node-unaligned simple MAG $\mathcal{G}_{ua_c}$ is encodable given $\langle\tau_{ua}(\mathcal{G}_{ua_c})\rangle$ iff there is a fixed program that, given $\langle\tau_{ua}(\mathcal{G}_{ua_c})\rangle$ as input, can univocally encode any possible $\mathcal{E}_{ua}(\mathcal{G}_{ua_c})$ that shares the same (node-unaligned) companion tuple.
First, as in the node-aligned case, note that the encodability of classical node-unaligned MAGs given the companion tuple can be promptly proved to hold (the proof can be found in [16]):
Lemma 1.
Any arbitrary node-unaligned classical MAG $\mathcal{G}_{ua_c}$ is encodable given $\langle\tau_{ua}(\mathcal{G}_{ua_c})\rangle$.
Secondly, note that the characteristic string of a node-unaligned MAG is defined in a similar way as in [15] (see also Section 2):
Definition 4.
Let $e_1, \dots, e_{|\mathbb{E}_{ua_c}(\mathcal{G}_{ua_c})|}$ be any arbitrary ordering of all possible composite edges between existing composite vertices of a node-unaligned simple MAG $\mathcal{G}_{ua_c}$. We say that a bit string $x'$ with $l(x') = |\mathbb{E}_{ua_c}(\mathcal{G}_{ua_c})|$ is a node-unaligned characteristic string of $\mathcal{G}_{ua_c}$ if, for every $e_j \in \mathbb{E}_{ua_c}(\mathcal{G}_{ua_c})$, one has $e_j \in \mathcal{E}_{ua}(\mathcal{G}_{ua_c})$ iff the $j$-th digit of $x'$ is 1, where $1 \leq j \leq l(x')$.
Now, for the node-unaligned composite edge set string, the definition may seem not so straightforwardly translated from the node-aligned case. As one can see below, it is based on a sequence of the $|\mathbb{E}_c(\mathcal{G}_{ua_c})|$ composite edges, and not on the sequence of the $|\mathbb{E}_{ua_c}(\mathcal{G}_{ua_c})|$ composite edges. This is because one needs to embed into node-unaligned composite edge set strings not only the characteristic function of the set $\mathcal{E}_{ua}(\mathcal{G}_{ua_c})$, as in the node-aligned case, but also the characteristic function of the set $\mathbb{V}_{ua}(\mathcal{G}_{ua_c})$ (which becomes in turn determined by the $k_i$'s and $h_i$'s in the following definition):
Definition 5.
Let $e_1, \dots, e_{|\mathbb{E}_c(\mathcal{G}_{ua_c})|}$ be any arbitrary ordering of all possible composite edges of a node-unaligned simple MAG $\mathcal{G}_{ua_c}$. Then, $\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle$ denotes the node-unaligned composite edge set string $\big\langle\langle e_1, z_1, k_1, h_1\rangle, \dots, \langle e_n, z_n, k_n, h_n\rangle\big\rangle$ such that:
$$ z_i = 1 \iff e_i \in \mathcal{E}_{ua}(\mathcal{G}_{ua_c}), $$
$$ k_i = 1 \iff e_i = (v, u) \text{ with } v \in \mathbb{V}_{ua}(\mathcal{G}_{ua_c}), $$
and
$$ h_i = 1 \iff e_i = (v, u) \text{ with } u \in \mathbb{V}_{ua}(\mathcal{G}_{ua_c}), $$
where $z_i, k_i, h_i \in \{0, 1\}$ with $1 \leq i \leq n = |\mathbb{E}_c(\mathcal{G}_{ua_c})|$.
This way, we guarantee not only that the (node-unaligned) characteristic string can always be computably extracted from $\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle$, but also that the set $\mathbb{V}_{ua}(\mathcal{G}_{ua_c})$ can be computably extracted from $\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle$. This will be important in the proof of Theorem 3 later on. Moreover, provided that the ordering of $\mathbb{E}_c(\mathcal{G}_{ua_c})$ assumed in Definition 5 is preserved by the subsequence that exactly corresponds to the ordering of $\mathbb{E}_{ua_c}(\mathcal{G}_{ua_c})$ assumed in Definition 4, we have in Lemma 2 below that both the node-unaligned simple MAG and its respective node-unaligned characteristic string become "equivalent" in terms of algorithmic information, but again (as occurred in the node-aligned case [15]) except for the minimum information necessary to encode the node-unaligned companion tuple:
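The following sketch illustrates Definition 5 and the extraction property just described (assumed names and orderings; a toy example, not the paper's implementation):

```python
# Illustrative sketch: each possible composite edge over all composite vertices
# is stored with three bits -- z (is the edge present?), k and h (do its
# endpoints belong to V_ua?). Both V_ua and the characteristic string are
# recoverable from this record.
from itertools import combinations, product

def unaligned_composite_edge_set_string(aspects, existing_vertices, present_edges):
    all_vertices = list(product(*(sorted(a) for a in aspects)))    # enumeration of V(G_ua_c)
    record = []
    for u, v in combinations(all_vertices, 2):                     # all possible undirected composite edges
        z = 1 if (u, v) in present_edges or (v, u) in present_edges else 0
        k = 1 if u in existing_vertices else 0
        h = 1 if v in existing_vertices else 0
        record.append(((u, v), z, k, h))
    return record

def recover_existing_vertices(record):
    """Read V_ua back off the k and h bits of the node-unaligned composite edge set string."""
    found = set()
    for (u, v), _z, k, h in record:
        if k:
            found.add(u)
        if h:
            found.add(v)
    return found

aspects = [{1, 2}, {'a', 'b'}]
existing = {(1, 'a'), (2, 'a'), (1, 'b')}
present = {((1, 'a'), (2, 'a')), ((1, 'a'), (1, 'b'))}
record = unaligned_composite_edge_set_string(aspects, existing, present)
assert recover_existing_vertices(record) == existing
x_prime = ''.join(str(z) for _e, z, k, h in record if k and h)     # keep pairs of existing vertices only
assert x_prime == '110'                                            # the node-unaligned characteristic string
```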
Lemma 2.
Let $x$ be a bit string. Let $\mathcal{G}_{ua_c}$ be an encodable node-unaligned simple MAG given $\langle\tau_{ua}(\mathcal{G}_{ua_c})\rangle$ such that $x$ is the respective node-unaligned characteristic string. Then,
$$ K\big(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle \mid x\big) \leq K\big(\tau_{ua}(\mathcal{G}_{ua_c})\big) + O(1), $$ (1)
$$ K\big(x \mid \langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle\big) \leq K\big(\tau_{ua}(\mathcal{G}_{ua_c})\big) + O(1), $$
$$ K(x) = K\big(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle\big) \pm O\big(K(\tau_{ua}(\mathcal{G}_{ua_c}))\big). $$
The proof of Lemma 2 can be found in [16].
As in the node-aligned case, we shall see that node-unaligned characteristic strings are not in general equivalent to composite edge set strings $\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle$. More formally, we shall show in the following section that there are cases in which the algorithmic information necessary for retrieving the encoded form of the node-unaligned simple MAG from its node-unaligned characteristic string is close (except for a logarithmic term) to the upper bound given by Equation (1) in Lemma 2.

4. Worst-Case Algorithmic Information Distortions

In this section, we investigate worst-case algorithmic information distortions for node-unaligned MAGs when the multidimensional space is arbitrarily large. In particular, we study large multidimensional spaces that are non-uniform or uniform.
Before heading toward the theorems themselves, it is important to present the two cases, given in Lemmas 3 and 4 below, for which the set $\mathbb{V}_{ua}(\mathcal{G}_{ua_c})$ trivializes the problem either by reducing it back to the node-aligned case or by reducing it to a problem of just inserting empty nodes.
The first trivializing case guarantees the consistency of our definitions of node-aligned and node-unaligned MAGs:
Lemma 3.
Let $\mathcal{G}_{ua_c}$ be a node-unaligned simple MAG with $\mathbb{V}_{ua}(\mathcal{G}_{ua_c}) = \mathbb{V}(\mathcal{G}_{ua_c})$, where $x$ is its node-aligned characteristic string and $x'$ is its node-unaligned characteristic string. Then,
$$ K\big(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle\big) = K\big(\langle\mathcal{E}(\mathcal{G}_{ua_c})\rangle\big) \pm O(1), $$
$$ K(x') = K(x) \pm O(1), $$
and
$$ K\big(\tau_{ua}(\mathcal{G}_{ua_c})\big) = K\big(\tau(\mathcal{G}_{ua_c})\big) \pm O(1) $$
hold.
The proof of Lemma 3 can be found in [16]. In fact, if $\mathbb{V}_{ua}(\mathcal{G}_{ua_c}) = \mathbb{V}(\mathcal{G}_{ua_c})$, the same proof of Lemma 3 can be employed to show that the strings on the left-hand side of the equations in Lemma 3 are respectively Turing equivalent to their counterparts on the right-hand side. Therefore, for any $\mathcal{G}_{ua_c}$ satisfying Lemma 3, any algorithmic information distortion occurs in the same manner as in the node-aligned case.
The second one guarantees the consistency between network connectedness and empty nodes. An empty node [24] is a totally unconnected node that is added to the network in order to recover the node alignment of a formerly node-unaligned network. Thus, as expected, if all the composite vertices in $\mathbb{V}_{ua}(\mathcal{G}_{ua_c})$ are connected to at least one other composite vertex in $\mathbb{V}_{ua}(\mathcal{G}_{ua_c})$, then all the possibly unconnected composite vertices are those that redundantly are empty nodes:
Lemma 4.
Let $\mathcal{G}_{ua_c}$ be a node-unaligned simple MAG in which every composite vertex in $\mathbb{V}_{ua}(\mathcal{G}_{ua_c})$ is connected to at least one other composite vertex in $\mathbb{V}_{ua}(\mathcal{G}_{ua_c})$. Then,
$$ K\big(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle\big) = K\big(\langle\mathcal{E}(\mathcal{G}_{ua_c})\rangle\big) \pm O(1) $$
holds and, additionally, $\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle$ is in fact Turing equivalent to $\langle\mathcal{E}(\mathcal{G}_{ua_c})\rangle$.
The proof of Lemma 4 can be found in [16].
Note that in Lemma 4 one immediately has that $E(G^{ua}_{\mathcal{G}_{ua_c}})$ can be computed from $E(G_{\mathcal{G}_{ua_c}})$ with a simple algorithm that identifies totally unconnected vertices. Furthermore, one has that $E(G_{\mathcal{G}_{ua_c}})$ can be computed from $E(G^{ua}_{\mathcal{G}_{ua_c}})$ if the value of $|\mathbb{V}(\mathcal{G}_{ua_c}) \setminus \mathbb{V}_{ua}(\mathcal{G}_{ua_c})|$ is also given as input. Therefore, for any MAG satisfying Lemma 4, the algorithmic information distortion between the MAG $\mathcal{G}_{ua_c}$ and the unaligning MAG-graph-isomorphic classical graph $G^{ua}_{\mathcal{G}_{ua_c}}$ can only differ from the algorithmic information distortion between the MAG $\mathcal{G}_{ua_c}$ and the aligning MAG-graph-isomorphic classical graph $G_{\mathcal{G}_{ua_c}}$ by $O\big(\log_2 |\mathbb{V}(\mathcal{G}_{ua_c}) \setminus \mathbb{V}_{ua}(\mathcal{G}_{ua_c})|\big)$ bits of algorithmic information.
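The two simple procedures just mentioned can be sketched as follows (hypothetical names; the extra input in the second direction is exactly the quantity $|\mathbb{V}(\mathcal{G}_{ua_c}) \setminus \mathbb{V}_{ua}(\mathcal{G}_{ua_c})|$ whose logarithm bounds the residual distortion):

```python
# Hypothetical names; a sketch only, not the paper's construction.
def drop_isolated_vertices(vertices, edges):
    """From the aligning graph to the unaligning one: keep only vertices incident to some edge."""
    touched = {w for e in edges for w in e}
    return vertices & touched, set(edges)

def add_empty_nodes(vertices, edges, num_missing):
    """The converse direction: re-insert num_missing totally unconnected (empty) nodes."""
    fresh = {f"empty_{i}" for i in range(num_missing)}   # labels are immaterial up to graph isomorphism
    return vertices | fresh, set(edges)

V = {1, 2, 3, 4}
E = {frozenset({1, 2}), frozenset({2, 3})}
V_ua, _ = drop_isolated_vertices(V, E)                   # {1, 2, 3}: vertex 4 was an empty node
V_back, _ = add_empty_nodes(V_ua, E, num_missing=1)      # 4 vertices again (up to relabeling)
assert len(V_back) == len(V)
```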
Thus, as the reader might notice, the case in which the node non-alignment introduces more irreducible information into the composite edge set string is the one in which $\mathbb{V}_{ua}(\mathcal{G}_{ua_c}) \neq \mathbb{V}(\mathcal{G}_{ua_c})$ and not every unconnected composite vertex is an empty node. Under these conditions, which demand a more careful theoretical analysis than the trivializing cases in Lemmas 3 and 4, we now show that exponential algorithmic information distortions can occur in the node-unaligned case. An extended version of the proof of Theorem 3 can be found in [16].
Theorem 3.
There are encodable node-unaligned simple MAGs $\mathcal{G}_{ua_c}$ given $\langle\tau_{ua}(\mathcal{G}_{ua_c})\rangle$ with arbitrarily large non-uniform multidimensional spaces such that
$$ K\big(\tau_{ua}(\mathcal{G}_{ua_c})\big) + O(1) \geq K\big(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle \mid x\big) \geq K\big(\tau_{ua}(\mathcal{G}_{ua_c})\big) - O\big(\log_2 K(\tau_{ua}(\mathcal{G}_{ua_c}))\big) $$
with
$$ K(x) = O(p) = O\big(\log_2 K(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle)\big), $$
where $x$ is the respective node-unaligned characteristic string and $p$ is the order of $\mathcal{G}_{ua_c}$.
Proof. 
Let $\mathcal{G}_{ua_c}$ be any node-unaligned simple MAG such that $w_1$ and $w_2$ are bit strings that, respectively, are finite initial segments of a 1-random real number $y$, where $l(w_1) = p$, $l(w_2) = |\mathbb{V}(\mathcal{G}_{ua_c})|$, any $|\mathcal{A}(\mathcal{G}_{ua_c})[i]|$ can only assume values in $\{1, 2\}$ in accordance with the bits of $w_1$, and the existence of a composite vertex in $\mathbb{V}(\mathcal{G}_{ua_c})$ is determined by the bits of $w_2$. Remember that, if $y$ is a 1-random real number [4], then $K(y\!\upharpoonright_n) \geq n - O(1)$, where $y\!\upharpoonright_n$ denotes the first $n$ bits of $y$ and $n \in \mathbb{N}$ is arbitrary. From Lemma 1, we have that $\mathcal{G}_{ua_c}$ is encodable given $\langle\tau_{ua}(\mathcal{G}_{ua_c})\rangle$. Therefore, there is a program $q$ such that
$$ K\big(\tau_{ua}(\mathcal{G}_{ua_c})\big) \leq l\big(\big\langle \langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle^{*}, q \big\rangle\big) \leq K\big(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle\big) + O(1) $$
holds by the minimality of $K(\cdot)$ and by our construction of $q$. In addition, by our construction of $\mathcal{G}_{ua_c}$, we will have that
$$ K\big(\tau_{ua}(\mathcal{G}_{ua_c})\big) \leq p + |\mathbb{V}(\mathcal{G}_{ua_c})| + O(\log_2 p) $$
and, since $y$ is 1-random,
$$ |\mathbb{V}(\mathcal{G}_{ua_c})| - O(1) \leq K\big(\tau_{ua}(\mathcal{G}_{ua_c})\big) + O(1). $$
Since the exact number of 1's appearing in $w_2$ is given by $\frac{|\mathbb{V}(\mathcal{G}_{ua_c})|}{2} \pm o\big(|\mathbb{V}(\mathcal{G}_{ua_c})|\big)$, which follows from the Borel normality [3,28] of $w_2$, we will have that $K(l(x)) = O(p)$ holds from the fact that $l(x) = |\mathbb{E}_{ua_c}(\mathcal{G}_{ua_c})|$. Moreover, the Borel normality of $w_1$ also guarantees that $|\mathbb{V}(\mathcal{G}_{ua_c})| = 2^{\frac{p}{2} \pm o(p)}$ and, as a consequence, $\log_2\big(|\mathbb{V}(\mathcal{G}_{ua_c})|\big) = \Omega(p)$. Now, since $\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle$ and $p$ were arbitrary, we can choose any node-unaligned characteristic string $x$ such that $K(x \mid l(x)) = O(1)$ holds and such that there are some composite vertices in $\mathbb{V}_{ua}(\mathcal{G}_{ua_c})$ that are not connected to any other composite vertex in $\mathbb{V}_{ua}(\mathcal{G}_{ua_c})$. Thus, we will have that
$$ K(x) \leq K(l(x)) + O(1) \leq O\big(\log_2 K(\tau_{ua}(\mathcal{G}_{ua_c}))\big) \leq O\big(\log_2 K(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle)\big) $$
and
$$ K\big(\tau_{ua}(\mathcal{G}_{ua_c})\big) \leq K(x) + K\big(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle \mid x\big) + O(1) \leq O\big(\log_2 K(\tau_{ua}(\mathcal{G}_{ua_c}))\big) + K\big(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle \mid x\big) $$
hold. Finally, the proof of $K\big(\tau_{ua}(\mathcal{G}_{ua_c})\big) + O(1) \geq K\big(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle \mid x\big)$ follows directly from Lemma 2. □
For the purpose of comparison, the next immediate question is whether there might be such worst-case distortions between composite edge set strings and characteristic strings when the multidimensional space is uniform and the network is node-aligned. As the reader might expect, we show in Lemma 5 below that node-aligned MAGs with uniform multidimensional spaces are more tightly associated with their characteristic strings in terms of algorithmic information and, thus, cannot display the same distortions as in Theorems 1, 3, and 4. In particular, the distortions in the node-aligned uniform case can only grow up to a logarithmic term of the order $p$; and the algorithmic information necessary to compute the value of $p$ can only grow up to a double logarithmic term of the length of the node-aligned characteristic string:
Lemma 5.
Let $\mathcal{G}_c$ be an arbitrary node-aligned classical MAG with an arbitrarily large uniform multidimensional space, where $|\mathbb{V}(\mathcal{G}_c)| \geq 3$ and $|\mathcal{A}(\mathcal{G}_c)[i]| \geq 2$ for every $i \leq p$. Then,
$$ K(x) \leq K\big(\langle\mathcal{E}(\mathcal{G}_c)\rangle\big) + O(1) \leq K(x) + O(\log_2 p) \leq K(x) + O\big(\log_2 \log_2 l(x)\big), $$
where $x$ is the respective node-aligned characteristic string and $p$ is the order of $\mathcal{G}_c$.
Proof. 
Since the multidimensional space is uniform, there is a simple algorithm that always computes the integer value $|\mathcal{A}(\mathcal{G}_c)[i]|$, for any $i \leq p$, when $\prod_{i=1}^{p}|\mathcal{A}(\mathcal{G}_c)[i]|$ and $p$ are given as inputs. In addition, in this case, $\tau(\mathcal{G}_c)$ can be computably built if $|\mathcal{A}(\mathcal{G}_c)[i]|$, for any $i \leq p$, and the value of $p$ are given as inputs. Moreover, by solving a simple quadratic equation with just one possible positive integer solution, there is also a simple algorithm that always returns the integer value $\prod_{i=1}^{p}|\mathcal{A}(\mathcal{G}_c)[i]|$ when $l(x) = |\mathbb{E}_c(\mathcal{G}_c)|$ is given as input. We have from (Lemma 2, p. 6, [15]) that $\langle\mathcal{E}(\mathcal{G}_c)\rangle$ can always be computed if $x^{*}$ and $\tau(\mathcal{G}_c)$ are given as inputs. Thus, by combining all of these algorithms, we will have that
$$ K\big(\langle\mathcal{E}(\mathcal{G}_c)\rangle\big) \leq K(x) + O(\log_2 p) $$ (12)
holds. Since $|\mathbb{V}(\mathcal{G}_c)| = |\mathcal{A}(\mathcal{G}_c)[1]|^{p}$ and $|\mathcal{A}(\mathcal{G}_c)[1]| \geq 2$, we also have that $p \leq \log_2\big(|\mathbb{V}(\mathcal{G}_c)|\big) \leq \log_2\big(l(x)\big)$. Therefore, from Equation (12),
$$ K\big(\langle\mathcal{E}(\mathcal{G}_c)\rangle\big) \leq K(x) + O(\log_2 p) \leq K(x) + O\big(\log_2 \log_2 l(x)\big) $$
holds. Finally, to prove that $K(x) \leq K\big(\langle\mathcal{E}(\mathcal{G}_c)\rangle\big) + O(1)$, just note that one can always computably extract $x$ from $\langle\mathcal{E}(\mathcal{G}_c)\rangle$. □
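As a small numerical illustration of the two recovery steps in the proof above (assuming, as for simple MAGs, that the possible composite edges are the unordered non-self-loop pairs, so that $l(x) = n(n-1)/2$ with $n = |\mathbb{V}(\mathcal{G}_c)|$; all function names are illustrative):

```python
# Hypothetical helper functions; not the algorithms fixed in the paper.
from math import isqrt

def vertices_from_characteristic_length(lx):
    """Solve n*(n-1)/2 == lx for the unique positive integer n."""
    n = (1 + isqrt(1 + 8 * lx)) // 2
    assert n * (n - 1) // 2 == lx, "lx is not a valid characteristic-string length"
    return n

def aspect_size_from_uniform_space(n, p):
    """Recover |A[i]| from n = |A[1]|**p in the uniform node-aligned case, given p."""
    b = round(n ** (1 / p))
    assert b ** p == n
    return b

lx = 2016                                        # e.g., a characteristic string of length 64*63/2
n = vertices_from_characteristic_length(lx)      # 64 composite vertices
b = aspect_size_from_uniform_space(n, p=6)       # uniform aspects of size 2, since 2**6 == 64
print(n, b)                                      # 64 2
```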
An interesting direction for future research is to investigate whether one can construct a worst-case example of a node-aligned multidimensional network with a uniform multidimensional space that actually displays a distortion of the tight order of $\log_2 p$. In any event, Lemma 5 already establishes an upper bound on the rate at which the worst-case distortion can increase with respect to the value of $p$ (i.e., with the number of node dimensions). In particular, as mentioned before, this upper bound is given by only a logarithmic term of $p$.
On the other hand, although we saw in Lemma 5 that uniform multidimensional spaces can only display very small distortions in the node-aligned case, we show below in Theorem 4 that worst-case distortions that grow exponentially with p are still possible in the node-unaligned case. An extended version of the proof of Theorem 4 can be found in [16].
Theorem 4.
There are encodable node-unaligned simple MAGs $\mathcal{G}_{ua_c}$ given $\langle\tau_{ua}(\mathcal{G}_{ua_c})\rangle$, with $|\mathcal{A}(\mathcal{G}_{ua_c})[i]| \geq 2$ for every $i \leq p$ and with arbitrarily large uniform multidimensional spaces, such that
$$ K\big(\tau_{ua}(\mathcal{G}_{ua_c})\big) + O(1) \geq K\big(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle \mid x\big) \geq K\big(\tau_{ua}(\mathcal{G}_{ua_c})\big) - O\big(\log_2 K(\tau_{ua}(\mathcal{G}_{ua_c}))\big) $$
with
$$ \log_2 K\big(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle\big) = \Omega(p) $$
and
$$ K(x) = O\big(\log_2 K(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle)\big), $$
where $x$ is the respective node-unaligned characteristic string and $p$ is the order of $\mathcal{G}_{ua_c}$.
Proof. 
The underlying idea of this proof is similar to that of the proof of Theorem 3, but with the fundamental distinction in the set of all composite vertices $\prod_{i=1}^{p}\mathcal{A}(\mathcal{G}_{ua_c})[i] = \mathbb{V}(\mathcal{G}_{ua_c})$, so that now $|\mathbb{V}(\mathcal{G}_{ua_c})| = |\mathcal{A}(\mathcal{G}_{ua_c})[1]|^{p}$ holds instead of $|\mathbb{V}(\mathcal{G}_{ua_c})| = 2^{\frac{p}{2} \pm o(p)}$. Since $|\mathcal{A}(\mathcal{G}_{ua_c})[1]| = |\mathcal{A}(\mathcal{G}_{ua_c})[i]| \geq 2$ for every $i$, we now have that
$$ p \leq p\,\log_2\big(|\mathcal{A}(\mathcal{G}_{ua_c})[1]|\big) = \log_2\big(|\mathbb{V}(\mathcal{G}_{ua_c})|\big). $$
Thus, from the Borel normality of $w_2$, there will be a program of length
$$ O\big(p\,\log_2(|\mathcal{A}(\mathcal{G}_{ua_c})[1]|)\big) + O(\log_2 p) $$
that returns the integer value $|\mathbb{E}_{ua_c}(\mathcal{G}_{ua_c})|$ as output, where $|\mathbb{E}_{ua_c}(\mathcal{G}_{ua_c})| = l(x)$. Hereafter, the rest of the proof follows analogously to the proof of Theorem 3. □
The proof of Corollary 2 follows from Theorems 2, 3 and 4 in a totally analogous manner as Corollary 1 follows from (Theorem 1, p. 54, [17]) and (Theorem 2, p. 7, [15]). Thus, we leave the proof to the reader.
Corollary 2.
There are an infinite family $F_1$ of node-unaligned simple MAGs, which may have either uniform or non-uniform multidimensional spaces, and an infinite family $F_2$ of classical graphs, where every classical graph in $F_2$ is unaligning MAG-graph-isomorphic to at least one MAG in $F_1$, such that, for every constant $c \in \mathbb{N}$, there are $\mathcal{G}_{ua_c} \in F_1$ and a $G^{ua}_{\mathcal{G}_{ua_c}} \in F_2$ that is unaligning MAG-graph-isomorphic to $\mathcal{G}_{ua_c}$, where
$$ O\big(\log_2 K(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle)\big) > c + K\big(E(G^{ua}_{\mathcal{G}_{ua_c}})\big). $$
Besides showing that node-unaligned multidimensional networks can display exponentially larger algorithmic information distortions with respect to their isomorphic monoplex networks, Theorems 3 and 4 together with Corollary 2 show that these distorted values of algorithmic information content grow at least exponentially with the order $p$ (i.e., with the number of node dimensions).
Finally, we can combine our results in order to achieve our last theorem, which summarizes the results of the present article:
Theorem 5.
(I)
There are an infinite family $F_1$ of simple MAGs and an infinite family $F_2$ of classical graphs, where every classical graph in $F_2$ is MAG-graph-isomorphic to at least one MAG in $F_1$, such that:
(a)
if the simple MAGs in $F_1$ are node-aligned and have non-uniform multidimensional spaces, then, for every constant $c \in \mathbb{N}$, there are $\mathcal{G}_c \in F_1$ and a $G_{\mathcal{G}_c} \in F_2$ that is (aligning) MAG-graph-isomorphic to $\mathcal{G}_c$, where
$$ O\big(\log_2 K(\langle\mathcal{E}(\mathcal{G}_c)\rangle)\big) > c + K\big(E(G_{\mathcal{G}_c})\big), $$
and this exponential distortion grows at least linearly with the order $p$ of the MAG $\mathcal{G}_c$, that is,
$$ K\big(\langle\mathcal{E}(\mathcal{G}_c)\rangle\big) \geq p - O(1); $$
(b)
if the simple MAGs in $F_1$ are node-unaligned and have either non-uniform or uniform multidimensional spaces, then, for every constant $c \in \mathbb{N}$, there are $\mathcal{G}_{ua_c} \in F_1$ and a $G^{ua}_{\mathcal{G}_{ua_c}} \in F_2$ that is (unaligning) MAG-graph-isomorphic to $\mathcal{G}_{ua_c}$, where
$$ O\big(\log_2 K(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle)\big) > c + K\big(E(G^{ua}_{\mathcal{G}_{ua_c}})\big), $$
and this exponential distortion grows at least exponentially with the order $p$ of the MAG $\mathcal{G}_{ua_c}$, that is,
$$ \log_2 K\big(\langle\mathcal{E}_{ua}(\mathcal{G}_{ua_c})\rangle\big) = \Omega(p). $$
(II)
Let $F_1$ be an arbitrary infinite family of node-aligned classical MAGs with uniform multidimensional spaces. Let $F_2$ be an arbitrary infinite family of classical graphs such that every classical graph in $F_2$ is (aligning) MAG-graph-isomorphic to at least one MAG in $F_1$ and such that the graph and the MAG share the same characteristic string. Then, for every $\mathcal{G}_c \in F_1$ and $G_{\mathcal{G}_c} \in F_2$ that is (aligning) MAG-graph-isomorphic to $\mathcal{G}_c$, one has that
$$ K\big(E(G_{\mathcal{G}_c})\big) \leq K\big(\langle\mathcal{E}(\mathcal{G}_c)\rangle\big) + O(1) \leq K\big(E(G_{\mathcal{G}_c})\big) + O(\log_2 p) $$
and, therefore, any distortion can only grow up to a logarithmic order of $p$.
Proof. 
The proof of Theorem 5(I)(a) follows directly from Theorem 1 and Corollary 1, which were previously demonstrated in [15]. The proof of Theorem 5(I)(b) follows directly from Theorems 3 and 4 together with Corollary 2. The proof of Theorem 5(II) follows directly from Lemma 5 and (Lemma 2, p. 6, [15]) by fixing a computable ordering for both the sets $\mathbb{E}_c(\mathcal{G}_c)$ and $E_c(G_{\mathcal{G}_c})$. □

5. Limitations and Conditions for Importing Monoplex Network Algorithmic Information to Multidimensional Network Algorithmic Information

It is known that applying statistical informational measures, such as those based on entropy, to evaluate the lossless compressibility or irreducible information content of encoded objects may lead to deceiving values in some cases. For example, distortions between the entropy-based lossless compression and the algorithmic-information-based approximation can occur for some particular low-algorithmic-complexity networks that display maximal degree-sequence entropy [29], or for Borel-normal sequences that are in fact computable (and, therefore, logarithmically compressible) [30]. Other discrepancies between entropy and algorithmic complexity in network models can be found in [6].
In this regard, one of the advantages of algorithmic information is that, at least in the asymptotic limit in which computational resources are unbounded, the lossless compression is proven to be optimal. In addition, it gives values that are invariant with respect to the choice of encoding of the object. This is because, given any two distinct encoding methods or any two distinct universal programming languages, the algorithmic complexity of an encoded object represented in one way or the other can only differ by a constant whose value depends only on the choice of encoding methods or universal programming languages, and not on the encoded object being compressed. That is, algorithmic complexity is pairwise invariant for any two arbitrarily chosen encodings.
The present article contributes by showing that, even in the asymptotically optimal case given by algorithmic information, distorted values can occur in multidimensional networks with sufficiently large multidimensional spaces. Given that we only deal with pairs of isomorphic objects, and in addition to the above encoding invariance, some may deem the existence of the distortion phenomena shown in the present work counter-intuitive at first glance, because these distortions can in fact result from only changing the multidimensional spaces into which isomorphic copies of the objects are embedded.
In order to avoid these distortions in future evaluations of multidimensional network complexity, our results demonstrate the importance of network representation methods that take into account the algorithmic complexity of the data structure itself, unlike what happens, for example, with adjacency matrices, tensors, or characteristic strings. One of those representation methods studied in this article that avoids distortions is the use of composite edge set strings. Nevertheless, our results hold for any other form of encoding a simple MAG that is Turing equivalent to a composite edge set string. For example, one can equivalently encode a composite edge set string as a three-dimensional array composed of positive integers, Boolean values, and lists: the first dimension stores the index value $(a_1, \dots, a_p)$ of each composite vertex $v = (a_1, \dots, a_p)$; the second dimension stores a Boolean value determining whether or not the composite vertex in the first dimension exists; and the third dimension stores a list containing the index value of each composite vertex with which the composite vertex in the first dimension forms a composite edge.
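A sketch of this array-based representation, under an assumed concrete layout (Python dictionaries standing in for the three "dimensions"), might look as follows:

```python
# Illustrative only; one record per possible composite vertex, holding its index
# tuple, an existence flag, and the adjacency list of its composite edges.
def array_encoding(all_vertices, existing_vertices, composite_edges):
    table = []
    for v in all_vertices:
        exists = v in existing_vertices
        neighbors = sorted(u for (a, b) in composite_edges
                           for u in (a, b) if exists and v in (a, b) and u != v)
        table.append({"index": v, "exists": exists, "adjacent": neighbors})
    return table

vertices = [(1, 1), (1, 2), (2, 1), (2, 2)]                 # V(G) for a toy companion tuple (2, 2)
existing = {(1, 1), (1, 2), (2, 1)}                         # (2, 2) is a non-existing composite vertex
edges = {((1, 1), (2, 1)), ((1, 1), (1, 2))}
for row in array_encoding(vertices, existing, edges):
    print(row)
```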
On the other hand, when importing previous algorithmic-information-based methods from monoplex networks (or graphs) into the multidimensional case, another way to deal with the algorithmic information distortions is to accept an error margin given by the algorithmic complexity of the companion tuple. This is possible because our results directly establish that the algorithmic information distortions are always upper bounded by the algorithmic information carried by the companion tuple, whether node-aligned or node-unaligned. Thus, even in the worst-case scenario, the value of the algorithmic complexity of the companion tuple can always be applied as an error margin for the algorithmic information distortions between multidimensional networks and their isomorphic graphs.

6. Conclusions

This article presented mathematical results on the network complexity, irreducible information content, and lossless compressibility analysis of node-aligned and node-unaligned multidimensional networks. We studied the limitations on importing methods from algorithmic information theory (AIT) applied to monoplex networks or graphs into multidimensional networks, in particular in the case in which the number of extra node dimensions (i.e., aspects) in these networks is sufficiently large. Our results demonstrate the existence of worst-case algorithmic information distortions when a multidimensional network is compared with its isomorphic monoplex network. More specifically, our proofs show that these distortions exist when a logarithmically compressible network topology of a monoplex network is embedded into a high-algorithmic-complexity multidimensional space.
Previous results in [15] have shown that node-aligned multidimensional networks with non-uniform multidimensional spaces can display an exponentially larger algorithmic complexity than the algorithmic complexity of their isomorphic monoplex networks. In addition, Abrahão et al. [15] showed that these distorted values of algorithmic information content grow at least linearly with the number of extra node dimensions.
When dealing with either uniform or non-uniform multidimensional spaces, we show in this article that node-unaligned multidimensional networks can also display exponential algorithmic information distortions with respect to the algorithmic information content of their respective isomorphic monoplex networks. Unlike the case studied in [15], these worst-case distortions in the node-unaligned case are shown to grow at least exponentially with the number of extra node dimensions. Thus, the node-unaligned case is more impactful than the previous node-aligned one precisely because exponential distortions may take place even with uniform multidimensional spaces. Furthermore, the distortions may grow much faster as the number of extra node dimensions increases.
On the other hand, we demonstrated that node-aligned multidimensional networks with uniform multidimensional spaces are limited to only displaying algorithmic information distortions that grow up to a logarithmic order of the number of extra node dimensions. As one might expect, the node alignment in conjunction with the uniformity of the multidimensional space guarantees that, in any event, the algorithmic information content of the multidimensional network and the algorithmic information content of its isomorphic monoplex network are tightly associated, except maybe for a logarithmic factor of the number of extra node dimensions.
The results in this article show that evaluations of the algorithmic information content of networks may be extremely sensitive to whether or not one is taking into account not only the total number of node dimensions, but also the respective sizes of each node dimension and the order in which they appear in the mathematical representation. Due to the need for additional irreducible information in order to compute the shape of the high-algorithmic-complexity multidimensional space, the present article shows that isomorphisms between finite multidimensional networks and finite monoplex networks do not preserve algorithmic information in general, so that the irreducible information content of a multidimensional network may be highly dependent on the choice of its encoded isomorphic copy. In order to avoid distortions in the general case when studying the network complexity or lossless compression of multidimensional networks, this article also highlights the importance of embedding the information necessary to determine the multidimensional space itself into the encoding of the multidimensional network. To such an end, network representation methods that take into account the algorithmic complexity of the data structure itself (unlike adjacency matrices, tensors, or characteristic strings) are required for importing algorithmic-information-based methods into the multidimensional case. In this way, given the relevance of algorithmic information theory in the challenge of causality discovery in network modeling, network summarization, network entropy, and the compressibility analysis of networks, we believe this paper makes a fundamental contribution to the study of the complexity of multidimensional networks that have a large number of node dimensions, which in turn calls for more sophisticated algorithmic complexity approximation methods than those employed for monoplex networks or graphs.

Author Contributions

Conceptualization, F.S.A., K.W., H.Z., and A.Z.; formal analysis, F.S.A., K.W., and H.Z.; investigation, F.S.A. and K.W.; resources, K.W., H.Z., and A.Z.; writing—original draft preparation, F.S.A.; writing—review and editing, K.W., H.Z., and A.Z.; supervision, A.Z.; project administration, A.Z.; funding acquisition, A.Z. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the partial support from CNPq: F. S. Abrahão (301.322/2020-1), K. Wehmuth (303.193/2020-4), and A. Ziviani (310.201/2019-5). Authors acknowledge the INCT in Data Science – INCT-CiD (CNPq 465.560/2014-8) and FAPERJ (E-26/203.046/2017).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We thank the anonymous reviewers of [15] for remarks and suggested improvements that led to the elaboration of the present article. We also thank Cristian Calude, Mikhail Prokopenko, and Gregory Chaitin for suggestions and directions on related topics investigated in this article. The authors also acknowledge the contribution of Fábio Porto to project administration and supervision. In memory of Artur Ziviani.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chaitin, G. Algorithmic Information Theory, 3rd ed.; Cambridge University Press: Cambridge, UK, 2004.
  2. Li, M.; Vitányi, P. An Introduction to Kolmogorov Complexity and Its Applications, 4th ed.; Texts in Computer Science; Springer: Cham, Switzerland, 2019.
  3. Calude, C.S. Information and Randomness: An Algorithmic Perspective, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2002.
  4. Downey, R.G.; Hirschfeldt, D.R. Algorithmic Randomness and Complexity; Theory and Applications of Computability; Springer: New York, NY, USA, 2010.
  5. Mowshowitz, A.; Dehmer, M. Entropy and the complexity of graphs revisited. Entropy 2012, 14, 559–570.
  6. Morzy, M.; Kajdanowicz, T.; Kazienko, P. On Measuring the Complexity of Networks: Kolmogorov Complexity versus Entropy. Complexity 2017, 2017, 3250301.
  7. Zenil, H.; Kiani, N.; Tegnér, J. A Review of Graph and Network Complexity from an Algorithmic Information Perspective. Entropy 2018, 20, 551.
  8. Zenil, H.; Kiani, N.A.; Zea, A.A.; Tegnér, J. Causal deconvolution by algorithmic generative models. Nat. Mach. Intell. 2019, 1, 58–66.
  9. Zenil, H.; Kiani, N.A.; Abrahão, F.S.; Rueda-Toicen, A.; Zea, A.A.; Tegnér, J. Minimal Algorithmic Information Loss Methods for Dimension Reduction, Feature Selection and Network Sparsification. arXiv 2020, arXiv:1802.05843.
  10. Zenil, H.; Kiani, N.A.; Tegnér, J. Quantifying loss of information in network-based dimensionality reduction techniques. J. Complex Netw. 2016, 4, 342–362.
  11. Zenil, H.; Soler-Toscano, F.; Dingle, K.; Louis, A.A. Correlation of automorphism group size and topological properties with program-size complexity evaluations of graphs and complex networks. Phys. A Stat. Mech. Appl. 2014, 404, 341–358.
  12. Buhrman, H.; Li, M.; Tromp, J.; Vitányi, P. Kolmogorov Random Graphs and the Incompressibility Method. SIAM J. Comput. 1999, 29, 590–599.
  13. Zenil, H.; Kiani, N.A.; Tegnér, J. The Thermodynamics of Network Coding, and an Algorithmic Refinement of the Principle of Maximum Entropy. Entropy 2019, 21, 560.
  14. Santoro, A.; Nicosia, V. Algorithmic Complexity of Multiplex Networks. Phys. Rev. X 2020, 10, 021069.
  15. Abrahão, F.S.; Wehmuth, K.; Zenil, H.; Ziviani, A. An Algorithmic Information Distortion in Multidimensional Networks. In Complex Networks & Their Applications IX; Benito, R.M., Cherifi, C., Cherifi, H., Moro, E., Rocha, L.M., Sales-Pardo, M., Eds.; Studies in Computational Intelligence; Springer International Publishing: Cham, Switzerland, 2021; Volume 944, pp. 520–531.
  16. Abrahão, F.S.; Wehmuth, K.; Zenil, H.; Ziviani, A. Algorithmic Information Distortions in Node-Aligned and Node-Unaligned Multidimensional Networks. Preprints 2021, 2021030056.
  17. Wehmuth, K.; Fleury, É.; Ziviani, A. On MultiAspect graphs. Theor. Comput. Sci. 2016, 651, 50–61.
  18. Wehmuth, K.; Fleury, É.; Ziviani, A. MultiAspect Graphs: Algebraic Representation and Algorithms. Algorithms 2017, 10, 1.
  19. Abrahão, F.S.; Wehmuth, K.; Zenil, H.; Ziviani, A. On incompressible multidimensional networks. arXiv 2018, arXiv:1812.01170.
  20. Bollobás, B. Modern Graph Theory; Graduate Texts in Mathematics; Springer: New York, NY, USA, 1998; p. 394.
  21. Diestel, R. Graph Theory, 5th ed.; Graduate Texts in Mathematics; Springer: New York, NY, USA, 2017; Volume 173, p. 428.
  22. Harary, F. Graph Theory; Addison Wesley Series in Mathematics; CRC Press: Boca Raton, FL, USA, 2018.
  23. Newman, M. Networks: An Introduction; Oxford University Press: Oxford, UK, 2010.
  24. Kivelä, M.; Arenas, A.; Barthelemy, M.; Gleeson, J.P.; Moreno, Y.; Porter, M.A. Multilayer networks. J. Complex Netw. 2014, 2, 203–271.
  25. Cozzo, E.; de Arruda, G.F.; Rodrigues, F.A.; Moreno, Y. Multiplex Networks; SpringerBriefs in Complexity; Springer International Publishing: Cham, Switzerland, 2018.
  26. Boccaletti, S.; Bianconi, G.; Criado, R.; del Genio, C.; Gómez-Gardeñes, J.; Romance, M.; Sendiña-Nadal, I.; Wang, Z.; Zanin, M. The structure and dynamics of multilayer networks. Phys. Rep. 2014, 544, 1–122.
  27. De Domenico, M.; Solé-Ribalta, A.; Cozzo, E.; Kivelä, M.; Moreno, Y.; Porter, M.A.; Gómez, S.; Arenas, A. Mathematical Formulation of Multilayer Networks. Phys. Rev. X 2013, 3, 041022.
  28. Calude, C.S. Borel Normality and Algorithmic Randomness. In Developments in Language Theory; World Scientific Publishing: Singapore, 1994; pp. 113–129.
  29. Zenil, H.; Kiani, N.A.; Tegnér, J. Low-Algorithmic-Complexity Entropy-Deceiving Graphs. Phys. Rev. E 2017, 96, 012308.
  30. Becher, V.; Figueira, S. An Example of a Computable Absolutely Normal Number. Theor. Comput. Sci. 2002, 270, 947–958.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
