Article

Citizen Science and Topology of Mind: Complexity, Computation and Criticality in Data-Driven Exploration of Open Complex Systems

Sony Computer Science Laboratories, Inc., Takanawa Muse Bldg. 3F, 3-14-13, Higashi Gotanda, Shinagawa-ku, Tokyo 141-0022, Japan
Entropy 2017, 19(4), 181; https://doi.org/10.3390/e19040181
Submission received: 30 December 2016 / Revised: 14 April 2017 / Accepted: 20 April 2017 / Published: 22 April 2017
(This article belongs to the Section Complexity)

Abstract: Recently emerging data-driven citizen sciences need to harness an increasing amount of massive data of varying quality. This paper develops essential theoretical frameworks, example models, and a general definition of complexity measure, and examines its computational complexity for an interactive data-driven citizen science within the context of guided self-organization. We first define a conceptual model that incorporates the quality of observation in terms of accuracy and reproducibility, ranging between subjectivity, inter-subjectivity, and objectivity. Next, we examine the database's algebraic and topological structure in relation to informational complexity measures, and evaluate its computational complexities with respect to an exhaustive optimization. Conjectures on criticality are obtained for the self-organizing processes of observation and dynamical model development. An example analysis is demonstrated with the use of a biodiversity assessment database—a process that inevitably involves human subjectivity for management within open complex systems.

1. Introduction

Recent innovations in information and communication technologies (ICT) embedded in real environments are drastically changing the way society interacts with computation. This has been described as the fourth industrial revolution [1]. In particular, ubiquitous sensors and mobile communication tools have led to an increasing capacity for distributed and interactive environmental sensing. These technological supports bring in new effective methodologies to tackle complex self-organising behaviours in social–ecological systems that are difficult to understand with conventional modelling and simulation approaches (e.g., [2,3]). Massive amounts of sparse and heterogeneous data based on internal observation from within various collective phenomena call for an extended analytical framework, ranging from objective measurements (e.g., with sensors) to subjective data such as human evaluations and feedback.
Redefining a standard formalization of computation and its complexity associated with self-organised citizen science can raise multiple criteria for the evaluation of critical phenomena, spread over the dynamical processes of observation, management, and knowledge formation in open complex systems [4,5]. Self-organised criticality appears in various natural and social phenomena, often with scale-free statistical properties [6,7]. These manifest in the power law, which can be reduced to a simple combination of inherent stochastic processes [8], and whose realizations provide proxies of emergent functionality (e.g., [9,10,11]). The large fluctuation of the power law distributes the statistical complexity over multiple scales, which cannot be represented by a simple mean value for predictive purposes. A time series sampled from a power-law distribution encounters intermittent shifts of the sample average due to the infinite variance of the distribution—even with the upper-bounded power laws of the real world (e.g., the magnitude distribution of earthquakes). This situation imposes a statistical limit on prediction based solely on the modelling and simulation of the phenomena, but also presents a positive reason to engage human elements as a practical solution in actual management—especially in tasks involving semantic and cognitive judgements [12,13]. On the technology side, machine learning models have long attempted to optimize the prediction of unknown stochastic sources, implementing interactive estimation processes to exploit the hidden causal structure of temporal observation sequences (e.g., [14]). Modelling studies of guided self-organization have recently been explored with implementations in robotics, simulated neural networks, networks of agents, etc. [15]. Although most of these achievements are discussed within the predictability of a confined experimental setting, a hybrid system with a synergy of human and computational elements always underlies real-world situations, which has been little exploited, except for some prototypical interfaces for the Internet of Things (e.g., [16]). For cost-effective monitoring and control within restricted resources, guided criticality should be introduced on the user side of technology, in order to migrate and abstract the decision-making process from computation to human ability [3,4,17].
In particular, solving global agendas such as the sustainability goals requires a comprehensive approach that makes use of the full potential of self-organisation in coupled social–ecological systems [5,18,19]. In practice, these efforts rely on the engagement of citizens and multi-disciplinary stakeholders as important actors in data acquisition and in the implementation of interactive management through guided self-organization, as a novel type of collective intelligence in the era of the fourth industrial revolution [3,20,21].
Facing the transition of data-driven citizen science towards the achievement of dynamical control in managing real-world open complex systems, this article develops fundamental theories and example models to support the discussion of complexity, computation, and criticality in the most general possible form. We formalize the basic objectives as follows, which are explored in the correspondingly numbered sections:
  • Section 2: How can we formalize and treat databases of varying quality from both machine and human observations, which range from subjective bias to objective fact? How can we set up scientific measures that assure compatibility with the principles of accuracy and reproducibility?
  • Section 3: How can we generalize the concept of complexity measures in application to the human–computer hybrid systems in citizen science?
  • Section 4: What is the nature of computational complexities in actual data processing?
  • Section 5: What is the general condition to yield guided self-organization for cost-effective citizen science?
Although these questions are universal across multiple industries, a common basis for understanding the problems is lacking, and ICT infrastructure is still isolated and developed independently in each sector. Throughout the exploration of these topics, this paper attempts to provide a common terminology and establish a theoretical basis for the realisation of a cost-effective citizen science in open complex systems situations. This is becoming increasingly important for solving transdisciplinary problems through the participation of multiple stakeholders in the real world [5].

2. Inter-Subjective Objectivity Model

We first consider the expression of the quality of data ranging between human subjectivity and machine objectivity in the general form of a database $\mathbb{X}$. As a premise, any information that can be represented in digital computing is compatible with natural number theory. At the infinite limit of computational memory, the representation of the database extends to general sets on a real data type with countably infinite precision, which accepts the definition of a $\sigma$-finite measure in a measure-theoretical formulation. We define the general form of an arbitrary database $\mathbb{X}$ as follows:
$$\mathbb{X} = \mathbb{R}^n \times \mathbb{S}^m \quad (n, m \in \mathbb{N}),$$
where $\mathbb{R}$ is a real data type, $\mathbb{S}^m$ is the $m$ sets $\{\mathbb{S}_i\}_{i=1,2,\ldots,m}$ of arbitrary symbolic sets $\mathbb{S}_i = \{s_1, s_2, \ldots, s_{l_i}\}$, and the dimensions $n$, $m$, and $l_i$ are natural numbers $\mathbb{N}$ including 0. Any variable in this article is assumed to be storable in $\mathbb{X}$. For mathematical simplicity, we hereafter treat the real data type $\mathbb{R}$ as the real numbers. In practice, $\mathbb{R}^n$ describes the values of $n$ real variables (such as time, spatial coordinates, probabilities, etc.), and $\mathbb{S}^m$ represents $m$ discrete sets of symbols (such as the names of variables, occurrences of discrete variables, text data, etc.). Obviously, $\mathbb{S}^m \subset \mathbb{R}^m$ holds in mathematical simplification, but we separate the notations to distinguish between the quantitative and qualitative variable types.
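As a concrete illustration, the following minimal sketch shows how one record of $\mathbb{X} = \mathbb{R}^n \times \mathbb{S}^m$ might be represented in practice; the field names and example values are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Record:
    """One record of the database X = R^n x S^m (illustrative)."""
    reals: List[float]   # R^n: time, spatial coordinates, probabilities, ...
    symbols: List[str]   # S^m: variable names, discrete occurrences, text, ...

# Example: a field observation with n = 3 real and m = 2 symbolic variables.
x = Record(reals=[1490227200.0, 35.62, 139.72],
           symbols=["Passer montanus", "adult"])
```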

2.1. Formalization of Subjectivity, Inter-Subjectivity, Subjective–Objective Unity, and Objectivity

Digital data $\mathbb{X}$ from citizen science vary from subjective human perception to objective sensor measurement, with different degrees of human-induced bias. Here, subjectivity and objectivity matter because they influence the accuracy and reproducibility of data, which are fundamental to establishing scientific analysis. We formalize the nature of observation variables between subjectivity, objectivity, and their interactions as follows:
  • Subjectivity is the quality of observation that is based on human perception without the substantial support of a machine.
  • Inter-Subjectivity is the degree of commonality between the subjectivities of multiple subjects.
  • Objectivity is the quality of observation that is based on a machine measurement whose consequence does not depend on the operator’s will.
  • Subjective–Objective Unity is the degree of commonality between the subjectivity and objectivity.
  • Inter-Subjective Objectivity is the quality of observation that satisfies the coincidence of both inter-subjectivity and subjective–objective unity.
These follow basic concepts in philosophy and social science, adapted to the situation of data analysis. The concept of subjectivity is commonly used in philosophy as the collection of the perceptions, experiences, expectations, and personal or cultural understanding and beliefs specific to a person, which influence, inform, and bias that person's judgments and evaluations. In contrast, objectivity refers to a view of truth or reality that is free from any individual's influence [22]. The most simplistic form of inter-subjectivity in social science employs the term in the sense of having a shared definition of an object, or shared subjectivity [23].
The relations between these classifications are shown in Figure 1a. For example, text data written by humans are subjective data, whether or not the fact described is based on an objective phenomenon. Sensor logs are objective data, even when measured on a human body—such as heart rate—that could be influenced by subjective thought. When multiple subjects give the same subjective evaluation, such as ratings of web content, the commonality augments the degree of inter-subjectivity, which is often applied to crowd-sourced data validation (e.g., [24,25]). When a subjective evaluation coincides with an objective measurement, the commonality represents the degree of subjective–objective unity. A highly reproducible subjective–objective unity can provide on-site practical measurement in field science, typically in biodiversity assessment and soil texture analysis (e.g., [25,26]). This is because these plausible subjective–objective unity measures also coincide with high inter-subjectivity after sufficient training, which guarantees the accuracy of on-site application without confirming the accordance with objective measurement each time. When the methodology is highly established with respect to accuracy and reproducibility, it belongs to inter-subjective objectivity, where each subjective and objective measurement converges to the same result. The developmental process of reproducible subjective evaluations that converge with objective measurements is depicted in Figure 1b. By training the subjective–objective unity of each human observer, their inter-subjectivity increases, and the commonality of measurement grows into a self-organizing loop between subjective–objective unity and inter-subjectivity through mutual feedback, attaining a higher degree of inter-subjective objectivity.
Note that in a philosophical generalization (e.g., phenomenology), all data are derivatives of subjectivity, because a machine observation is also constructed on human perception in the establishment of the measurement principle, the construction of sensing devices and data processing workflows, and the final interpretation. To avoid a trivial argument that does not affect the reproducibility of the results, we adopt the standpoint that separates subjectivity and objectivity by the degree of human versus machine intervention in the observation outcome. We call this conceptual model the inter-subjective objectivity model.

2.2. Representative Model: Buoy–Anchor–Raft Model

In order to apply the inter-subjective objectivity model within a quantitative framework of actual data processing, we develop a general example model with a more familiar and analogical terminology that is intuitively easier to understand: the buoy–anchor–raft model, as schematically expressed in Figure 2. The definitions and the correspondence to the inter-subjective objectivity model are given as follows:
  • Buoy refers to subjective data that fluctuates on the sea surface, representing subjectivity. Buoy can provide subjective estimates of an observation object lying on the objective sea floor, but the observation is biased by subjective fluctuations.
  • Anchor refers to objective data that is fixed on the sea floor representing objectivity, without the influence from the subjective sea surface. Anchors can be connected to buoys, which provide the evaluation of subjective fluctuation with respect to objective machine measurements.
  • Raft represents the relationship between buoys, and refers to inter-subjectivity of data without reference to anchors. A buoy can evaluate another buoy using relative difference of fluctuation on a subjective sea surface, and the overall commonality between buoys is represented as the raft. Nevertheless, it is based on an internal observation between buoys without an objective system of units, and is therefore susceptible to a global drift of collective standard.
  • Buoy–Anchor connection rope defines the degree of subjective–objective unity. The more a buoy's movement is controlled by its anchor, the higher the subjective–objective unity that is assured.
  • Raft–Anchor connection ropes define the degree of inter-subjective objectivity. In addition to the commonality between buoys represented as a raft, the effects of the global drift from subjective sea surface could be controlled with anchors within a plausible range of error with respect to the objective sea floor.
Concrete examples of the buoy, anchor, and raft in various social systems and scientific domains are given in Table 1. While inter-subjective objectivity is a conceptual framework that classifies the quality of observation, the buoy, anchor, and raft refer to actual constructs of databases implemented with ICT. The terms arose from the developmental process of management systems in open systems science [5], sharing the perspective of the transversal question in the grand challenge of AI research regarding the effective extraction of scientific knowledge out of heterogeneous data of varying quality [27]. Without properly positioning the subjective background of a study, it is often the case that established knowledge from large-scale experiments and statistical analyses is revealed to be false in high-throughput discovery-oriented research, resulting in a null field with statistically prevailing bias [28]. As shown in Table 1, conceptual problematics for the implementation of ICT in various fields can be mutually characterized with the use of the buoy–anchor–raft model. This means the ICT infrastructure can be applied and shared in a synergistic way across domains, which is beneficial especially for the open-source development advocated in complex systems science [21]. Recent developments in application programming interfaces for big data integration have increased the support for this challenge, which calls for a general theoretical framework of information processing that the buoy–anchor–raft model can provide (e.g., [29]).
We then consider a mathematical expression of the buoy–anchor–raft model with a view to providing a simplified idea of computation with respect to the evaluation of inter-subjective objectivity. Recently emerging contexts of citizen science make use of buoys as important information sources, in contrast to objective sciences such as traditional physics, which are usually self-contained with anchors. Buoys fluctuate with human subjectivity, which is scientifically called bias. Suppose we cannot directly measure observation objects as anchors. This constraint does not necessarily arise from the observation principle but rather from resource limitations: for example, a field evaluation of biodiversity mostly depends on human observation because massive DNA barcoding is too costly or even ineffective. The accuracy of buoy data should therefore be evaluated with other buoy–anchor connections compatible with the observation objects. By defining buoy data $B \in \mathbb{X}$ and corresponding measurable anchor data $A \in \mathbb{X}$, a buoy–anchor connection $C$ can be defined as an error function $\mathrm{erf}(\cdot)$ between $A$ and $B$:
$$C := \mathrm{erf}(B, A).$$
In the case of $n$ observation objects $A = (a_1, a_2, \ldots, a_n) \in \mathbb{R}^n$ and $B = (b_1, b_2, \ldots, b_n) \in \mathbb{R}^n$ for one observer, a typical example of the buoy–anchor connection $c \in \mathbb{R}$ is given by the regularized mean squared error:
$$c = \frac{1}{n} \sum_{i=1}^{n} \left( \frac{a_i - b_i}{a_i} \right)^2.$$
The regularization makes $c$ accessible to the canonical evaluation of confidence intervals, such as the t-test. As a generalization to $m$ observers, let us describe
$$\mathbf{C} = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_m \end{pmatrix},$$
where
$$c_j = \frac{1}{n} \sum_{i=1}^{n} \left( \frac{a_{ij} - b_{ij}}{a_{ij}} \right)^2 \quad (j = 1, 2, \ldots, m),$$
given that
$$\mathbf{A} = \begin{pmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nm} \end{pmatrix}, \qquad \mathbf{B} = \begin{pmatrix} b_{11} & \cdots & b_{1m} \\ \vdots & \ddots & \vdots \\ b_{n1} & \cdots & b_{nm} \end{pmatrix}.$$
Next, we consider the raft model. In most social systems, the case-wise precise measurement of anchors is impossible, and we call for the raft of common sense and other social feedback as a premise of plausible judgement. Consider $m$ observers with somehow quantifiable opinions (buoys) on $n$ observation objects. We define the raft matrix $\mathbf{R}$ as follows, as a generalization of buoy data to $m$ observers and $n$ observation objects:
$$\mathbf{R} = \begin{pmatrix} r_{11} & \cdots & r_{1m} \\ \vdots & \ddots & \vdots \\ r_{n1} & \cdots & r_{nm} \end{pmatrix},$$
where the raft by definition refers to the commonality contained between these buoys. In a completely equal society where every observer's opinion is equally respected, we obtain the mean inter-subjective evaluation $\mathbf{E} = (e_1, \ldots, e_n)$ on the $n$ objects as follows:
$$\mathbf{E} := \begin{pmatrix} e_1 \\ \vdots \\ e_n \end{pmatrix} = \begin{pmatrix} r_{11} & \cdots & r_{1m} \\ \vdots & \ddots & \vdots \\ r_{n1} & \cdots & r_{nm} \end{pmatrix} \begin{pmatrix} 1/m \\ \vdots \\ 1/m \end{pmatrix}.$$
Decision-making based on the evaluation of the raft can represent the community's mean quantifiable opinions, although it is not free from collective bias; it remains only within the framework of inter-subjectivity. For a better evaluation in terms of inter-subjective objectivity, we need to introduce a connection with anchors. Let us introduce a buoy–anchor connection $\mathbf{C}$ from Equation (4); then an example of the inter-subjective objective evaluation $\mathbf{E} = (e_1, \ldots, e_n)$ in the sense of the raft–anchor connection can be given by:
$$\mathbf{E} := \begin{pmatrix} e_1 \\ \vdots \\ e_n \end{pmatrix} \propto \begin{pmatrix} r_{11} & \cdots & r_{1m} \\ \vdots & \ddots & \vdots \\ r_{n1} & \cdots & r_{nm} \end{pmatrix} [-\log(\mathbf{C})],$$
where
$$[-\log(\mathbf{C})] = \begin{pmatrix} -\log(c_1) \\ \vdots \\ -\log(c_m) \end{pmatrix}.$$
This means that the error function of the buoy–anchor connection is reflected as an entropy that represents the subjective–objective unity of each observer. The opinion of an observer with higher subjective–objective unity is weighted according to the informational scarcity of their subjective errors. Such integrated evaluation incorporating a scoring system on observers' quality is one of the general solutions in web-based citizen science (e.g., [25]). A minimal numerical sketch of this computation follows below.
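The following sketch uses synthetic data and assumes that the buoys themselves serve as the raft matrix $\mathbf{R}$; it computes the buoy–anchor connections $c_j$, the equal-weight raft evaluation, and the $-\log(c_j)$-weighted raft–anchor evaluation. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3  # n observation objects, m observers

# Anchors A (objective measurements) and buoys B (subjective estimates).
A = rng.uniform(1.0, 10.0, size=(n, m))
B = A * (1.0 + 0.1 * rng.standard_normal((n, m)))  # observers with noise/bias

# Buoy-anchor connection: regularized mean squared error c_j per observer.
c = (((A - B) / A) ** 2).mean(axis=0)

# Raft matrix: here the buoys themselves serve as the observers' scores.
R = B

# Mean inter-subjective (raft) evaluation with equal weights 1/m.
E_raft = R @ np.full(m, 1.0 / m)

# Inter-subjective objective (raft-anchor) evaluation: observers weighted
# by the informational scarcity -log(c_j) of their subjective errors.
E_iso = R @ (-np.log(c))

print(E_raft)
print(E_iso)
```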
Note that the $n$ objects of observation can also coincide with the $m$ observers themselves. As $\mathbf{C}$ can be obtained independently from $\mathbf{R}$, the model also accepts subjective objects of observation for which direct anchors do not exist, such as psychological states or the quantification of qualia as in Quality Function Deployment (QFD) [30] and pain scales [31]. In such cases, traditional methods employ only the simple raft evaluation $\mathbf{E}$ without anchors, as formalized in Equation (8). In contrast, with the buoy–anchor–raft model, it is possible to relate indirect anchors to other related objectively quantifiable variables by expanding the database into a more comprehensive system. In either case, this model provides accessibility to the inter-subjective objective evaluation by properly defining the buoy, anchor, raft, and their connections.
The correspondence between the buoy–anchor–raft model and the computational variables developed in the following sections is listed in Table 2.

3. Complexity Measures

We consider the generalization of complexity measures with respect to essential information processing in citizen science, based on the inter-subjective objectivity model with buoy–anchor–raft constructs. The concept and definition of complexity vary across fields, covering algorithmic complexity, statistical complexity, biological complexity, etc. In this paper, we take a generalized definition of a complexity measure as the projection from a system's variables to a one-dimensional quantity, composed to express a distinctive characteristic of the system [32]. This includes classical indices mentioned in the context of complexity, as well as various forms of information expressed as numbers in ICT, such as the feature dimensions of machine learning.

3.1. Complexity Measure and Search Function

We consider general forms of complexity defined on the database $\mathbb{X}$ in relation to the search function. Complexity measures are widely studied in information theory, with the underlying principle of abstracting a low-dimensional representative index of useful features for the functional characterization of complex systems [32]. Usually, complexity measures defined on $n$ real variables are epimorphisms to the one-dimensional real number line, $\mathbb{R}^n \to \mathbb{R}$. The general complexity measure for citizen science is therefore a projection of the database to a real-valued index, $\mathbb{X} \to \mathbb{R}$, with the condition that this transformation provides some utility for the management.
The importance of utility depends on the need for information retrieval in the citizen science process, or the conditions that are practically used in a database search. Indeed, the search function is the retrieval of the data set corresponding to a given condition, such that
$$SR[Q(x)] := \{x \in \mathbb{X} \mid Q(x)\},$$
where $SR$ stands for the search result on database $\mathbb{X}$ with search query $Q(\cdot)$. For example, $Q(\cdot)$ is an if–then construct that can specify the value range of real variables, or a match with a specific symbolic sequence, which returns the corresponding data sets into $SR$.
In order to perform computations such as the evaluation of the buoy–anchor–raft model, the integral $I$ of a $\sigma$-finite measure $\mu$ on $\mathbb{X}$ with respect to the condition $Q(\cdot)$ can be defined as follows, with the indicator function $\mathbf{1}(\cdot \mid Q(\cdot))$:
$$I(Q(x)) := \int_{\mathbb{X}} \mathbf{1}(x \mid Q(x))\, \mu(dx),$$
where
$$\mathbf{1}(x \mid Q(x)) := \begin{cases} 1 & \text{if } x \in SR[Q(x)], \\ 0 & \text{if } x \notin SR[Q(x)]. \end{cases}$$
In the one-dimensional case, $\mu$ can represent either a buoy or an anchor. If we define $\mu: \mathbb{X} \to \mathbb{R}$ as a function of the occurrence probability $p(\cdot)$ of $x \in \mathbb{X}$, such as
$$\mu(x) = -p(x) \log(p(x)),$$
then $I$ coincides with entropy, one of the typical information-theoretical complexity measures. $\mu$ can also include a joint distribution, such that
$$\mu(x, y) = p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)}, \quad x \neq y,\ x, y \in \mathbb{X},$$
in which case the mutual information $I_2$,
$$I_2 := \int_{\mathbb{X}} \mathbf{1}(x, y \mid Q(x), Q(y))\, \mu(dx, dy)$$
can incorporate raft, buoy–anchor, and raft–anchor connections.
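As a minimal sketch of these constructs, the following Python fragment implements the search function $SR$ over a toy discrete database and evaluates the entropy measure restricted to a query; the uniform distribution and the query predicate are illustrative assumptions.

```python
import numpy as np

# A toy database X: discrete records with occurrence probabilities p(x).
X = np.arange(10)
p = np.full(10, 1.0 / 10)

def SR(Q):
    """Search function: records of X satisfying the query predicate Q."""
    return [x for x in X if Q(x)]

def I(Q):
    """Integral of the measure mu(x) = -p(x) log p(x) over SR[Q]:
    the entropy contribution of the records selected by the query."""
    return sum(-p[x] * np.log(p[x]) for x in SR(Q))

# Example query: an if-then construct on the value range of a real variable.
Q = lambda x: 3 <= x <= 6
print(SR(Q))  # [3, 4, 5, 6]
print(I(Q))   # 4 * 0.1 * log(10) ~ 0.921
```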
As a search query, $Q(x)$ provides a value of the complexity measure $I$; we can also inversely use $I$ to specify $SR[Q(x)]$. We consider the invertible map $SR^{-1}: \{x \in \mathbb{X} \mid I\} \to \{Q(x)\}$ that generates all possible queries $\{Q(x)\}$ which return the set of $x$ associated with the given value of the complexity measure $I$. For example, we can search for the dataset whose entropy is higher than a threshold $I_c$ by setting
$$\{Q(x)\} := SR^{-1}\left\{ x \in \mathbb{X} \,\middle|\, \int_x \mu(dx) > I_c \right\}.$$
Nevertheless, complexity measures that specifically define an arbitrary $Q(x)$ are generally not given explicitly. In practice, we usually compare the performance of known complexity measures with respect to their ability to characterize the features on which we focus our analysis. The general task is to invent a novel complexity measure that can exclusively separate patterns in $\mathbb{X}$, given implicitly as $Q(x)$. For that purpose, the following theorem holds:
Theorem 1.
For any search condition $Q(x)$, we can construct an exclusively selective complexity measure $I$ which can sort out effects from other variables, with the function $G(\cdot): \mathbb{R} \to \{Q(x)\}$, such that
$$Q(x) = SR^{-1}\{x \in \mathbb{X} \mid I\} = G(I),$$
$$I = G^{-1}(Q(x)).$$
The definition of the invertibility of $G$ follows that of $SR$.
Proofs of the theorems are given in Appendix A.
The intuitive geometric meaning of the inverse function relationship between complexity measures and search function is shown in Figure 3.

3.2. Observation Commonality as Complexity

Inter-subjective objectivity is based on the commonality among subjectivity, inter-subjectivity, and objectivity. Essential computation is therefore the search for commonality between different observation datasets, whether it be from humans or machines. We consider the observation commonality to be a complexity measure that conforms to inter-subjective objectivity, and analyze its general mathematical structure.
We consider $\sigma$-finite probabilistic measures $\mu_1, \mu_2$ on the measurable database space $(\mathbb{X}, \mathcal{B})$, where $\mathcal{B}$ stands for the Borel $\sigma$-algebra of $\mathbb{X}$. Then, the convolution $*$ of $\mu_1$ and $\mu_2$ is defined as follows:
$$\mu_1 * \mu_2(s_i) := \sum_j \mu_2(s_j)\, \mu_1(s_{i-j}) \quad \text{for } s_i \in \mathcal{B}(\mathbb{S}),\ \{s_i, s_j, s_{i-j}\} \subset \mathbb{S},$$
$$\mu_1 * \mu_2(x) := \int_{\mathbb{R}} \mu_1(x - y)\, \mu_2(dy) \quad \text{for } x \in \mathcal{B}(\mathbb{R}),\ x - y := \{x' - y \mid x' \in x\},$$
where $\mathcal{B}(\mathbb{S})$ and $\mathcal{B}(\mathbb{R})$ represent the $\sigma$-algebras of $\mathbb{S} \subset \mathbb{X}$ and $\mathbb{R} \subset \mathbb{X}$, respectively.
Through an appropriate variable transformation, the convolution of probability measures with real type variables (21) can be expressed as follows, as the probability of the sum of the variables [33]:
$$\mu_1 * \mu_2(x) = \int_{\mathbb{R}} \int_{\mathbb{R}} \mathbf{1}(x' + y \mid x' + y \in x)\, \mu_1(dx')\, \mu_2(dy), \quad x \in \mathcal{B}(\mathbb{R}).$$
By choosing finite sets of $x$, such as a time period, geographic range, or other real type variable ranges, as well as symbols for $\{s_i\}$, such as the name of an observation object, one can define the commonality of observations as a part of the convolution of the probabilities from different observers. The observations $\mu_1$ and $\mu_2$ can be of any nature between subjectivity, inter-subjectivity, and objectivity.
We now consider the condition of valid observation with respect to the regularization of the probability measure as follows, for a general number of observers $i \in \{1, \ldots, N\}$:
$$\int_{\mathbb{R}} \mu_i(dx) = 1.$$
This means that by expanding the scale of the real type variable to infinity, one can observe its occurrence with probability 1. The same formalization also applies to a $\sigma$-finite measure on $(\mathbb{S}, \mathcal{B}(\mathbb{S}))$, which is integrated into the formalization with $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$.
Next, consider a confined variable range $r \subset \mathbb{R}$ with positive probability measure $\mu_i(r) > 0$. This range can be of any complex form as long as it supports positive measure. In a real situation, this can correspond to intermittent observation time intervals, scattered geographical ranges, and other discrete ranges of the real type variable. We define the rate of observation $q_i$ by observer $i$ within the variable range $r$ as
$$q_i(r) := \int_{\mathbb{R}} \mathbf{1}(x \mid x \in r)\, \mu_i(dx) \leq 1,$$
which converges to (23) as $r \to \mathbb{R}$.
The commonality of observation between two observers $i$, $j$ based on $r$ is expressed as the following convolution confined to $r$:
$$\mu_i * \mu_j(r^2) := \int_{\mathbb{R}} \int_{\mathbb{R}} \mathbf{1}(x + y \mid x + y \in r^2;\ x, y \in r)\, \mu_i(dx)\, \mu_j(dy) = \int_{\mathbb{R}} \int_{\mathbb{R}} \mathbf{1}(x + y \mid x, y \in r)\, \mu_i(dx)\, \mu_j(dy) = \int_r \int_r \mu_i(dx)\, \mu_j(dy), \qquad r^2 := \{x_1 + x_2 \mid x_1, x_2 \in r\},$$
which also means taking the sum of the joint distributions $\mu_i \cdot \mu_j$ over all smallest measurable events in $r$. The additional condition $x, y \in r$ in $\mathbf{1}(\cdot)$ limits the integral of each variable within $r$, which includes the formal condition $x + y \in r^2$. The following generalization holds:
Theorem 2.
For $N$ independent and valid observations $\mu_i(r) > 0$ $(i = 1, \ldots, N)$ on the variable range $r \subset \mathbb{R}$, let
$$\lambda_N(r^N) := \mu_1 * \mu_2 * \cdots * \mu_i * \cdots * \mu_N(r^N) := \int_{\mathbb{R}^N} \mathbf{1}\!\left( \Lambda \sum_{i=1}^{N} x_i \,\middle|\, \Lambda \sum_{i=1}^{N} x_i \in r^N;\ x_i \in r\ \forall i \in \{1, \ldots, N\} \right) \prod_{i=1}^{N} \mu_i(dx_i) = \int_{\mathbb{R}^N} \mathbf{1}\!\left( \Lambda \sum_{i=1}^{N} x_i \,\middle|\, x_i \in r\ \forall i \in \{1, \ldots, N\} \right) \prod_{i=1}^{N} \mu_i(dx_i), \qquad r^N := \left\{ \Lambda \sum_{k=1}^{N} x_k \,\middle|\, x_k \in r \right\}, \quad \Lambda \in \mathbb{R} \setminus \{0, \pm\infty\},$$
where the coefficient $\Lambda$ is a free parameter that remains invariant under the convolution. Then
$$\lambda_N(r^N) = \prod_i^N q_i(r).$$
This means that the $1/N$-th power of the multiple convolution $\lambda_N(r^N)$ represents the geometric mean of the $N$ independent valid observation rates. By choosing the regularization factor $\Lambda$, $r^N$ corresponds to the ensemble of possible mean values ($\Lambda = \frac{1}{N}$), integrated values ($\Lambda = 1$), and other weighted sums of $N$ random samplings from $r$. The regularization parameter $\Lambda$ can further be generalized to an arbitrary measurable function $\Lambda(\cdot)$ representing commonality characteristics, taking $\sum_{i=1}^{N} x_i$ as a variable.
With the use of the logarithmic scale, the information of $\lambda_N(r^N)$ is the sum of that of the individual observations:
$$-\log(\mu_1 * \mu_2 * \cdots * \mu_i * \cdots * \mu_N(r^N)) = \sum_i^N (-\log \mu_i(r)).$$
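Theorem 2 can be checked numerically: the following sketch (assuming discrete distributions on a finite support and $\Lambda = 1$, with all names illustrative) convolves $N$ measures restricted to a range $r$ and compares the total mass with the product of observation rates $\prod_i q_i(r)$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 4, 20  # N observers, discrete support {0, ..., K-1}

# Valid observations: each mu_i is a probability distribution on the support.
mus = rng.random((N, K))
mus /= mus.sum(axis=1, keepdims=True)

# Confined variable range r: an arbitrary subset of the support.
r = np.zeros(K, dtype=bool)
r[[2, 3, 7, 11, 12]] = True

# Observation rates q_i(r).
q = np.array([mu[r].sum() for mu in mus])

# lambda_N(r^N) with Lambda = 1: convolve the measures restricted to r,
# then sum over the whole convolved support r^N.
conv = np.where(r, mus[0], 0.0)
for mu in mus[1:]:
    conv = np.convolve(conv, np.where(r, mu, 0.0))

print(conv.sum())  # lambda_N(r^N)
print(q.prod())    # prod_i q_i(r): equal, as stated by Theorem 2
```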
As a similar property related to the geometric mean, note that the following Young's inequality also holds:
$$|\Lambda| \cdot \| \mu_1 * \mu_2 * \cdots * \mu_i * \cdots * \mu_N \| \leq \prod_i^N \| \mu_i \|,$$
where $\|\cdot\|$ denotes the total variation. This assures us that the variation of the commonality remains within the order of the product of each observation's variation.
However, it is important to note that, as a general property of convolution,
$$\lambda_N(r) \neq \prod_i^N q_i(r).$$
The equality only holds in the case $r \to \mathbb{R}$ or $\mu_i(r^N) = \mu_i(r)$ for $i = 1, \ldots, N$, without implication for the independence of the observations. For the convolution on a general subset $r_s \subset r^N$, the exact definition is given by
$$\lambda_N(r_s) := \int_{\mathbb{R}^N} \mathbf{1}\!\left( \Lambda \sum_{i=1}^{N} x_i \,\middle|\, \Lambda \sum_{i=1}^{N} x_i \in r_s;\ x_i \in r \right) \mu_1(dx_1) \cdots \mu_N(dx_N),$$
though it requires direct calculation without relevance to $q_i(r)$. In order to obtain a fast computable form, the following asymptotic generalization holds:
Theorem 3.
As $N \to \infty$, for $r \subset \mathbb{R}$, $\mu_i(r) > 0$, $i = 1, \ldots, N$ and $r_s \subset r^N$, $\lambda_N(r_s)$ converges almost everywhere to the following:
$$\lambda_N(r_s) \to \int_{r_s} \mathcal{N}(\Lambda \nu_N, \Lambda^2 \sigma_N^2)\, m(dx) \times \prod_i^N q_i(r),$$
where $m(\cdot)$ is the Lebesgue measure on $\mathbb{R}$, and $\mathcal{N}(\nu_N, \sigma_N^2)$ represents the normal probability density distribution with mean value $\nu_N$ and variance $\sigma_N^2$ as follows:
$$\nu_N := \sum_{i=1}^{N} \int_{\mathbb{R}} \mathbf{1}(x \mid x \in r)\, x\, \mu_i(dx), \qquad \sigma_N^2 := \sum_{i=1}^{N} \int_{\mathbb{R}} \mathbf{1}(x \mid x \in r)\, x^2\, \mu_i(dx) - \nu_N^2.$$
A numerical example of the convolution $\lambda_N(r^N)$ is presented in Figure 4. Theorems 2 and 3 can be directly generalized to $\mathbb{R}^d$ $(d \in \mathbb{N})$, with $r \subset \mathbb{R}^d$.

3.3. Topological Structure of Complexity 1: Total Order of Observations

We consider the topological structure of inter-subjective objectivity based on complexity, defined as the convolution between different observations. As the commonality within inter-subjective objectivity is defined with multiple different observations, topological ordering based on these complexity measures is possible with $N > 2$ observations of any nature.
We consider the commonality space with respect to each observation dataset as a point, and the commonality between them as the distance between each pair of points. This can be considered as the undirected complete graph with $N$ vertices, with the pair-wise complexity measures as the lengths of its $_N C_2$ edges. The general property of Euclidean space allows a complete graph of size $N$ to be embedded in $N - 1$ dimensions (e.g., any line between two points is a one-dimensional space, and any triangle with three points is a two-dimensional surface, etc.), although an additional quantitative restriction such as the triangle inequality on each triplet of edges is required. In order to treat an arbitrary set of complexity measures and yield general characteristics of the commonality space, we need to focus not on the actual values of complexity, but on the topological order between them.
Let us first consider the total order between complexity values with $N > 2$ observation data contained in $N$ vertices $V := \{v_i\}_{i=1,\ldots,N}$. One can determine the total order between the $_N C_2$ edges $E := \{e_k\}_{k=1,\ldots,{}_N C_2} := \{\{v_i, v_{j \neq i}\} \subset V\}$ by taking a mean order relationship between each pair of edges with the following algorithm (namely, the pair-wise order algorithm):
  • For each pair of edges $\{e_i, e_{j \neq i} \in E\}$, calculate the order relation $e_i \geq e_{j \neq i}$ or $e_i \leq e_{j \neq i}$ with respect to the given complexity measure as an edge attribute such as length.
  • Score each edge $e_i$ by mapping to an integer $z: e_i \to \mathbb{Z}$, adding $+1$ if $e_i \geq e_{j \neq i}$ and adding $-1$ if $e_i \leq e_{j \neq i}$, with respect to all other edges $e_{j \neq i}$.
  • Sorting with the scores $\{z(e_i)\}$ provides the total order of $E$.
Note that the quantitative difference is completely lost in the case of antisymmetry, $(e_i = e_j) \Leftrightarrow (e_i \geq e_j) \wedge (e_i \leq e_j)$. We will consider the meaning of this information loss with respect to other compatible sets of observations in Section 3.4. A minimal sketch of the algorithm follows below.
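The following Python sketch implements the pair-wise order algorithm as described, with edges given as a mapping from edge identifiers to complexity values; all names are illustrative.

```python
from itertools import combinations

def pairwise_order(edges):
    """Pair-wise order algorithm: total order of edges by score z.

    edges: dict mapping an edge id to its complexity value (e.g., length).
    Returns edge ids sorted from smallest to largest score z(e_i).
    """
    z = {e: 0 for e in edges}
    for ei, ej in combinations(edges, 2):
        if edges[ei] >= edges[ej]:   # e_i >= e_j: +1 for e_i, -1 for e_j
            z[ei] += 1
            z[ej] -= 1
        if edges[ei] <= edges[ej]:   # e_i <= e_j: -1 for e_i, +1 for e_j
            z[ei] -= 1
            z[ej] += 1
    # Equal edges trigger both rules, so their contributions cancel to 0,
    # reflecting the information loss by antisymmetry noted above.
    return sorted(edges, key=lambda e: z[e])

print(pairwise_order({"e1": 2.0, "e2": 5.0, "e3": 5.0}))  # ['e1', 'e2', 'e3']
```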
Next, we consider the topological order of complexity for $N > 2$ observations according to the total order of these commonalities. We need here to translate the total order between the edges $E$ into that of the observations $V$. This can be obtained by calculating the $_N C_3$ triplets of the $N > 2$ vertices and the associated total order of edges with the following algorithm (namely, the triplet order algorithm, schematically represented in Figure 5; a sketch follows after the list):
  • For each triplet of observations $V_{i,j,k} := \{v_i, v_{j \neq i}, v_{k \neq i,j} \in V\}$ and associated edges $\{e_i := \{v_i, v_j\},\ e_j := \{v_j, v_k\},\ e_k := \{v_k, v_i\}\}$, update the score of each observation by mapping to an integer $z: V_{i,j,k} \to \mathbb{Z}$ with the following six rules:
  • If $e_i \geq e_j \geq e_k$, then $z(v_i) = z(v_i) - 1$, $z(v_j) = z(v_j) + 1$, $z(v_k) = z(v_k) + 0$.
  • If $e_i \geq e_k \geq e_j$, then $z(v_i) = z(v_i) + 1$, $z(v_j) = z(v_j) - 1$, $z(v_k) = z(v_k) + 0$.
  • If $e_j \geq e_i \geq e_k$, then $z(v_i) = z(v_i) + 0$, $z(v_j) = z(v_j) + 1$, $z(v_k) = z(v_k) - 1$.
  • If $e_j \geq e_k \geq e_i$, then $z(v_i) = z(v_i) + 0$, $z(v_j) = z(v_j) - 1$, $z(v_k) = z(v_k) + 1$.
  • If $e_k \geq e_i \geq e_j$, then $z(v_i) = z(v_i) + 1$, $z(v_j) = z(v_j) + 0$, $z(v_k) = z(v_k) - 1$.
  • If $e_k \geq e_j \geq e_i$, then $z(v_i) = z(v_i) - 1$, $z(v_j) = z(v_j) + 0$, $z(v_k) = z(v_k) + 1$.
  • Sorting with the scores $\{z(v_i) \mid i = 1, \ldots, N\}$ provides the total order of $V$.
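A minimal sketch of the triplet order algorithm, exploiting the pattern of the six rules (the vertex shared by the two largest edges of a triplet gains $+1$, the vertex shared by the largest and smallest edges gains $-1$) and assuming strict edge-value comparisons (tie-breaking conventions are left open):

```python
from itertools import combinations

def triplet_order(vertices, edge_value):
    """Triplet order algorithm: total order of observations (vertices).

    vertices: list of vertex ids.
    edge_value: dict mapping frozenset({u, v}) to the commonality of that edge.
    Returns vertex ids sorted by score z(v_i).
    """
    z = {v: 0 for v in vertices}
    for vi, vj, vk in combinations(vertices, 3):
        edges = [frozenset((vi, vj)), frozenset((vj, vk)), frozenset((vk, vi))]
        smallest, middle, largest = sorted(edges, key=lambda e: edge_value[e])
        for v in (vi, vj, vk):
            if v in largest and v in middle:
                z[v] += 1   # shared by the two largest edges of the triplet
            elif v in largest and v in smallest:
                z[v] -= 1   # shared by the largest and smallest edges
            # shared by the middle and smallest edges: +0
    return sorted(vertices, key=lambda v: z[v])
```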
The commonality order of $V$ represents the topological structure of collective intelligence in citizen science with respect to inter-subjective objectivity, which corresponds to the topological inclusion relation of the Venn diagram in Figure 1.

3.4. Topological Structure of Complexity 2: Permutation between Total Orders of Observations

We expand the situation to two sets of $N > 2$ observations—namely, observations $I$ and $II$: for example, observers $I$ and $II$ observing $N$ objects, or $N$ observers observing two different objects $I$ and $II$. It can also represent the application of two different complexity measures $I$ and $II$ to $N$ observations. For simplicity, we limit the formalization to two sets of $N > 2$ observations, but generalization to a greater number of sets is possible.
In the general case, the total orders $I$ and $II$ do not necessarily coincide. The relationship between two total orders with $N$ observations can be described by a permutation of $N$ elements (Figure 6a). In order to analyze the permutation between total orders, let $G_N$ be the symmetric group of degree $N$. For $g \in G_N$, we define a linear transformation $L_g: S^N \to S^N$ by
$$L_g: (v_1, \ldots, v_N) \mapsto (v_{g(1)}, \ldots, v_{g(N)}),$$
which describes the permutation between the commonality orders $I$ and $II$.
We define a subspace $S(g)$ of $S^N$ by
$$S(g) = \{v_i \in S \mid v_i \neq v_{g(i)}\},$$
which represents the subspace with a compromise of the total order. Meanwhile, by defining its complementary subspace
$$\bar{S}(g) = \{v_i \in S \mid v_i = v_{g(i)}\},$$
we obtain the subspace in which there is no compromise, i.e., the complete matching of the two commonality orders. The whole commonality space can be divided into $S(g)$ and $\bar{S}(g)$:
$$S^N = S(g) \times \bar{S}(g).$$
As depicted in Figure 6a,b, the compromise between two commonality orders is expressed as a non-linear folding relationship between them. Under the assumption that the complexity measure is a continuous function, the integrated complexity measure that supports both commonality orders can be expressed as a folded structure (topologically speaking), such as the shape of the letter “N” (also the capital letter of Non-identical), taking the commonality measures of $I$ and $II$ as an affine coordinate: the example with a red dotted line in Figure 6b shows that we can compose an integrated commonality measure by bending the commonality measure $II$ into an “N” shape with respect to that of $I$ kept straight (in an “I” shape, for Identical), which resolves the compromise. The “N” shape transformation of the commonality measure changes the topology of the commonality order with respect to a permutation $g \in G_N$ ($g(i) > g(j)$, $\exists\, 1 \leq i < j \leq N$), while the “I” shape represents the identical order ($g(i) < g(j)$, $\forall\, 1 \leq i < j \leq N$). The non-compromising part of the two commonality orders conserves its order under projection onto any linear combination of the two commonality measures, which topologically does not require “N” shape folding, but maintains “I” shape matching.
For simplicity, we call the topological compromise between commonality orders the I–N compromise, and we call topologically identical matching I–I matching. Then, the I–I matching subspace $\bar{S}(g)$ can be obtained as the linear combination of the commonality measures $I$ and $II$, and the subspace required for the resolution of the I–N compromise corresponds to the complementary space $S(g)$ (Figure 6b,c).
We call $\bar{S}(g)$ an I–I space that consists of I–I dimensions, and $S(g)$ an I–N resolution space that consists of I–N resolution dimensions. The mean commonality order of the two commonality orders projected onto the I–I space (red solid arrows in Figure 6b,c) can be obtained with the use of the pair-wise order algorithm of Section 3.3, applied not to the commonality itself, but to the commonality orders. We call this the I–N mean commonality order, since it adopts the mean total order of the commonality orders of $I$ and $II$, resolving the I–N compromise. Note that the information lost by the antisymmetry of the pair-wise order algorithm does not affect the division into I–I and I–N resolution subspaces. Geometrical representations of the I–N compromise, I–I matching, their corresponding dimensions and spaces, and the I–N mean commonality order are given in Figure 6.
We finally consider a statistical test on the degree of coincidence (TDC) between two commonality orders.
Theorem 4.
Statistical test on the degree of coincidence (TDC) between two commonality orders:
Given that the commonality orders $I$ and $II$ with $N$ observations follow a uniformly random permutation over $G_N$ as the null hypothesis, the degree of coincidence $d_c$ between the two commonality orders follows a binomial distribution:
$$k_{I\text{–}I} := \#\{(i, j) \mid g(i) < g(j),\ 1 \leq i < j \leq N,\ g \in G_N\}, \qquad P[d_c = k_{I\text{–}I}] := {}_M C_{k_{I\text{–}I}}\, p^{k_{I\text{–}I}} (1 - p)^{M - k_{I\text{–}I}} \sim B(M, p),$$
where $B(M, p)$ signifies a binomial distribution with parameters $M = {}_N C_2$ and $p = 0.5$, $k_{I\text{–}I}$ represents the degree of coincidence as the number of I–I matchings, $\#(\cdot)$ returns the size of the set, and $P[\cdot]$ the probability of the degree of coincidence $d_c$.
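A minimal sketch of the TDC, counting concordant (I–I matching) pairs between two total orders and returning a two-sided p-value under the $B(M, 0.5)$ null; the function names are illustrative.

```python
from itertools import combinations
from math import comb

def tdc_p_value(order_1, order_2):
    """Two-sided p-value of the degree of coincidence between two
    commonality orders, under the uniform random permutation null
    (binomial B(M, 0.5) with M = C(N, 2) vertex pairs)."""
    rank_1 = {v: i for i, v in enumerate(order_1)}
    rank_2 = {v: i for i, v in enumerate(order_2)}
    m = comb(len(order_1), 2)
    # k: number of concordant (I-I matching) pairs.
    k = sum(
        1
        for u, v in combinations(order_1, 2)
        if (rank_1[u] < rank_1[v]) == (rank_2[u] < rank_2[v])
    )
    pmf = [comb(m, i) * 0.5**m for i in range(m + 1)]
    # Sum the probability of outcomes at least as extreme as k.
    return sum(p for i, p in enumerate(pmf) if abs(i - m / 2) >= abs(k - m / 2))

print(tdc_p_value(list("abcde"), list("abcde")))  # fully coincident orders
print(tdc_p_value(list("abcde"), list("edcba")))  # fully reversed orders
```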
With respect to the buoy–anchor–raft model in Section 2.2, the following correspondence is possible:
  • Two observers observing N objects: The commonality orders $I$ and $II$ can correspond to either subjective (buoy) or objective (anchor) observations. The I–N resolution provides an integrated commonality measure, such as the buoy–anchor connection and raft evaluation, according to the nature of the observation. TDC provides connections between buoys and/or anchors.
  • N observers observing two different objects: The commonalities of the $N$ observers—whether subjective (buoy) or objective (anchor)—are ranked with respect to the two different objects $I$ and $II$. The I–N resolution provides a mean ranking of the $N$ observers' commonality upon these observations. TDC provides the reproducibility of commonality among the $N$ observers.
  • Application of two different complexity measures to N observations: For example, the case of the raft–anchor connection where $N$ subjective observers (buoys) are ranked with inter-subjective commonality (raft evaluation) and weighted with two different anchors. The I–N resolution provides a mean ranking of the $N$ observers' inter-subjective objectivity, integrating multiple criteria of inter-subjective and objective evaluation. TDC represents statistical dependencies between the two complexity measures in response to a given inter-subjective objective measurement. While significant matching between two commonality orders assures reproducibility based on the coincidence of observation with these measures, non-significance can also be used to quantify the complementarity of different evaluations [32].

4. Computational Complexity

The computation of complexity measures and commonality orders depends on the exhaustive calculation of combinatorics between observations. The computational complexity of such calculation should also be investigated in terms of topological complexity, in order to yield a general theoretical platform that does not depend on the particularity of the database.

4.1. Topological Complexity of Commonality

First, we investigate the topological order of commonality among $N$ observations. Using the convolution as commonality (27), we define the maximum commonality order $O: \mathbb{X} \to \mathbb{N}$ as follows:
$$O(r \subset \mathbb{X}) := \max\{k \in \{1, \ldots, N\} \mid \lambda_k(r) > 0\}.$$
The general topological structure of $O(\mathbb{X})$ is depicted in Figure 7.
On the cardinality of $O(\mathbb{X})$, the following holds:
Theorem 5.
As $\#(\mathbb{X}) \to \aleph_0$, $\exists\, r \subset \mathbb{X}$ such that $\#(\{r \mid O(r) = \aleph_0\}) = \aleph_0$, where $\aleph_0$ represents aleph-naught.
This means that for any elaborated inter-subjective objectivity, there is always the possibility of developing another different set of observations that attains a higher inter-subjective objectivity by increasing the dataset. This structure assures the representation of a paradigm shift in science, when sufficient contradicting evidence gains a majority compared with an old model. For example, minority reports in biology that may lead to novel discoveries in the future can be properly stored and distinguished from erroneous reports as more evidence accumulates [27].

4.2. Algorithmic Complexity

Secondly, we evaluate the computational complexity with respect to the computing time scale. Since data-driven citizen science requires real-time computation in a highly interactive manner with the observation process, the algorithmic complexity of the calculation of complexity measures is an essential limiting factor of performance. As commonality is based on the intersection of multiple observations, its exhaustive computation confronts combinatorial explosion as datasets increase. Although the computation of complexity itself, or the resolution of a search query, is provable as a mathematical theorem and an algorithmic solution can be found, the computational resources are another practical issue for real-world implementation—especially in distributed observation.
The computational time scales required for the sorting of a database according to a given utility such as commonality are listed in Table 3. Under a general condition with the observation probability database $\mathbb{X}^N$ of size $N$, $\mathbb{X}^N := \{\mu_i(x) \mid x \in \mathbb{X},\ i = 1, \ldots, N\}$, the maximum complexity lies in the calculation of the commonality order based on the intersection of $\lceil N/2 \rceil$ or $\lfloor N/2 \rfloor$ elements, whose sorting time belongs to the factorial order of $N$. The case with $N = 5$ is depicted in Figure 7. This means that an algorithmic burden exists in the calculation of middle-scale commonality with respect to the data size, as the sketch below illustrates. As inter-subjective objectivity successfully increases in citizen science, this peaking of algorithmic complexity at the intermediate scale may hinder the effective feedback necessary for guided self-organization.
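The combinatorial source of this middle-scale peak can be seen directly: the number of observer combinations entering a commonality order $k$ is $_N C_k$, which is maximal at $k = \lfloor N/2 \rfloor$. A small illustration (the values are exact binomial coefficients):

```python
from math import comb

N = 20
# Number of observer combinations whose commonality must be intersected
# at each commonality order k; it peaks at the middle scale k = N // 2.
for k in (1, 2, N // 2, N - 1, N):
    print(k, comb(N, k))
# 1 20 / 2 190 / 10 184756 / 19 20 / 20 1
```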
However, in practical situations, the actual computation time may remain of polynomial order if the effective data size shrinks with respect to the increase of the maximum commonality order:
Theorem 6.
By defining the diminution rate of data combination $\Delta: \mathbb{N} \to \mathbb{R}$ with respect to the maximum commonality order $1 \leq i \leq N \in \mathbb{N}$ as
$$\Delta(i) := \frac{{}_N C_{\dim(\{\mathbb{X}^N \mid O(\mathbb{X}^N) = i\})}}{{}_N C_{\dim(\{\mathbb{X}^N \mid O(\mathbb{X}^N) = i - 1\})}},$$
the order of its product is upper-bounded by the $d$-th root of the maximum computational complexity at $N' = \lceil N/2 \rceil$:
$$\prod_i^N \Delta(i) \leq \sqrt[d]{O(N'^{\,d N'})},$$
where $\dim(\cdot)$ returns the size of the database, and $d > 0$ represents the polynomial order of the algorithm $O(N^d)$ with respect to the data size $N$.
From this result, we can conjecture that for $N' \leq N$,
$$\prod_i^{N'} \Delta(i) \leq O(N^{c-d})$$
will assure exhaustive feedback with a polynomial response time of degree $c$. Since the left side is based on the past calculation of lower maximum commonality orders, we can interactively assess whether the information processing can assure comprehensive feedback. This adds a criterion on the criticality of guided self-organization mediated by computation, which will be explored in Section 5.
A methodology other than exhaustive computing is to implement a local gradient algorithm as a local interaction that leads to a global heuristic solution without top-down control. This can also be achieved with the use of a limited maximum commonality order (e.g., $O(\mathbb{X}) = k < N$), which keeps the computational time within the polynomial order $O(N^{dk})$.

4.3. Big Data Integration

Thirdly, we consider the computational complexity required for big data integration. As open data increasingly gains availability in citizen science, the integration of massive databases from different sources has become one of the most important data processing methods. The conversion of different databases through an application programming interface is a basic protocol when the database is distributed over multiple servers.
The computation required in big data integration is the extensive calculation of commonality in the direct product of multiple databases. For simplicity, we consider the integration of two databases $\mathbb{X}^N$ and $\mathbb{X}^M$, with sizes $N$ and $M \in \mathbb{N}$, $\mathbb{X}^N := \{\mu_i(x) \mid x \in \mathbb{X},\ i = 1, \ldots, N\}$ and $\mathbb{X}^M := \{\mu_i(x) \mid x \in \mathbb{X},\ i = 1, \ldots, M\}$, respectively. A joint distribution between subsets of $\mathbb{X}^N$ and $\mathbb{X}^M$ needs to be determined with respect to common parameters in order to obtain an integrated database, including the calculation of up to $(N + M)$-th order commonality, such as order-wise correlations [32]. Exhaustive computing follows the argument in Section 4.2, giving the extension of Theorem 6:
Theorem 7.
Given the diminution rate of data combination $\Delta: \mathbb{N}^2 \to \mathbb{R}$, with respect to the maximum commonality orders $1 \leq i \leq N \in \mathbb{N}$ and $1 \leq j \leq M \in \mathbb{N}$, during the integration of the two databases $\mathbb{X}^N$ and $\mathbb{X}^M$, respectively, as
$$\Delta(i, j) := \frac{{}_N C_{\dim(\{\mathbb{X}^N \mid O(\mathbb{X}^N) = i\})}}{{}_N C_{\dim(\{\mathbb{X}^N \mid O(\mathbb{X}^N) = i - 1\})}} \cdot \frac{{}_M C_{\dim(\{\mathbb{X}^M \mid O(\mathbb{X}^M) = j\})}}{{}_M C_{\dim(\{\mathbb{X}^M \mid O(\mathbb{X}^M) = j - 1\})}},$$
the order of its product is upper-bounded by the $d$-th root of the maximum computational complexity at $N' = \lceil N/2 \rceil$ and $M' = \lceil M/2 \rceil$:
$$\prod_{(i,j)}^{(N,M)} \Delta(i, j) \leq \sqrt[d]{O([N'^{\,N'} M'^{\,M'}]^d)},$$
where $d > 0$ represents the polynomial order of the algorithm with respect to the data sizes $N$ and $M$.
In this formalization, the computational complexity of database integration also confronts combinatorial explosion with respect to the data size. Similarly to (42), we then explore a practical condition under which the effective maximum commonality order can be treated in polynomial time of degree $c > 0$, such that
$$O([N'^{\,N'} M'^{\,M'}]^d) \leq O([N + M]^c).$$
For that purpose, we set the uniform sparseness $u$ $(0 < u < 1)$ of random databases, representing the density of combinations that support the existence of commonality at each order:
$$\frac{{}_N C_{\dim(\{\mathbb{X}^N \mid O(\mathbb{X}^N) = k\})}}{{}_N C_k} = \frac{{}_M C_{\dim(\{\mathbb{X}^M \mid O(\mathbb{X}^M) = k\})}}{{}_M C_k} = u \quad \text{for } k = 1, \ldots, N \text{ or } M,$$
which keeps the diminution rates of data combination $\Delta$ (40) and $\Delta$ (43) invariant under the definition. With respect to the total size of the database after integration, $L = N + M$, the following holds:
Theorem 8.
As $L \to \infty$ in random data (46), the mean condition of (45) for all $\{N, M \mid N + M = L\}$ converges to the following inequality, which represents polynomial time constraints on the computational complexity for the exhaustive calculation of newly emerging commonality orders within data size $L$:
$$u \leq O(f * f(L)),$$
where
$$f(x) := L^{\frac{c}{4d} \cdot \frac{x - L}{8L}},$$
and $*$ signifies the discrete convolution (20):
$$f * f(L) := \sum_{N=1}^{L-1} f(N)\, f(L - N).$$
Numerical observation of the proof is given in Figure 8.
This signifies that the convolution of the power function of each database's size serves as the complexity measure of big data integration with respect to computational complexity. It provides the condition on the data sparseness $u$ such that the exhaustive calculation of all newly generated commonality orders within size $L$ can be treated with polynomial time order $c$ under the algorithmic constraint $d$. As the inequality indicates, the sparser the data, the easier the calculation of joint commonality.

5. Conjectures on Guided Self-Organization

With effective feedback by computation, the dynamics of citizen science is expected to converge to a critical state where the objective is collectively optimized through the mutual increase of inter-subjective objectivity. However, several aspects may intervene in the resulting self-organized state, which require theoretical interpretation. In this section, important general aspects are exemplified in relation to self-organized criticality.

5.1. Criticality by Limitation

The accuracy and reproducibility of observation is a primary factor that defines the consequent resolution of the information represented in a database. Computational complexity also constrains the speed of information processing for prediction. Several limiting factors may generically arise, such as:
  • Limitation by principle: Deterministic chaos inherent in a natural system does not allow for long-term prediction, because the tiniest observation error of the present state will develop in exponential order [35]. Short-term validity of meteorological prediction is a typical example.
  • Limitation by reproducibility: In a real-world situation, we mostly encounter one-time-only events, which do not allow reproduction under the same conditions [5]. The available data are sparse with respect to latent variables, which causes a quantitative limitation of prediction [4].
  • Limitation by computational complexity: As explored in Section 4.2, extensive feedback based on exhaustive computing is often impossible with the available computing resources. The resolution of feedback may involve time delays or incomplete optimization. The spatio-temporal scale of the forecast also sets a constraint, as a general trade-off between prediction accuracy and computational resources: the coarser the forecast granularity, the more costly the calculation becomes, but the more likely it is to realize an accurate long-term prediction.
These limitations fundamentally regulate the order of significant digits in the prediction process, at the edge of the resulting precision where the accuracy reaches criticality. The whole dynamics is also confined by the criticality of the observed phenomena themselves, by which the observers' behaviour is influenced.

5.2. Criticality by Successful Learning

The motivation of citizen science is not necessarily the construction of a versatile artificial intelligence, but also the integration and augmentation of human capacity [4,12,13]. The success of citizen science can also be defined in terms of information transition from machine to human, at which criticality is assumed to appear.
Let us consider the case where successful learning mediated by computation has transferred an effective prediction model into human cognitive capacity. We take as an example Bayesian estimation, which is also a general model of our brain function [36]. The general formulation of Bayesian estimation updates the parameter of a hypothesized prior probability $P(A)$ with respect to the observed data $P(B)$, and provides an estimation of the posterior probability $P(A \mid B)$ given by Bayes' theorem:
$$P(A \mid B) := \frac{P(B \mid A)\, P(A)}{P(B)},$$
where $P(B \mid A)$ is considered as the likelihood function, which updates $P(A)$ to $P(A \mid B)$.
We now consider that the prior probability $P(A)$—or the model of prediction—depends on the process of computation $C$ and human decision $D$. As the human decision is supported by computation,
$$P(A) := P(D \mid C).$$
This formalization corresponds to Bayesian hierarchical modelling, where the computation $C$ provides the hyperparameter of the human decision $D$ as a prior distribution:
$$P(D, C \mid B) := \frac{P(B \mid D)\, P(D, C)}{P(B)} = \frac{P(B \mid D)\, P(D \mid C)\, P(C)}{P(B)}.$$
When the human has successfully acquired the model represented in the computational model,
$$P(D \mid C) \simeq P(D)$$
as independent identical distributions, and
$$P(D) \simeq P(C)$$
as independent and informationally homologous distributions. A minimal numerical sketch of this independence criterion follows below.
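One way to monitor this criticality is to estimate the statistical dependency between $C$ and $D$ from samples; the following sketch—a toy joint distribution with mutual information as the dependency measure, consistent with $H_2$ in Section 5.3—is illustrative and not the paper's specific protocol.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (nats) of a discrete joint distribution P(D, C)."""
    pd = joint.sum(axis=1, keepdims=True)  # marginal of D
    pc = joint.sum(axis=0, keepdims=True)  # marginal of C
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pd @ pc)[nz])).sum())

# Toy joint distributions P(D, C) before and after successful learning.
before = np.array([[0.30, 0.05],
                   [0.05, 0.60]])           # D strongly depends on C
after = np.outer([0.5, 0.5], [0.5, 0.5])    # P(D | C) ~ P(D): independence

print(mutual_information(before))  # > 0: high machine-human dependency
print(mutual_information(after))   # ~ 0: criticality by successful learning
```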
This criticality qualitatively corresponds to the saturation stage of the Markov chain Monte Carlo (MCMC) method in the optimization of the hierarchical model (52), where the hyperparameter and parameter converge to independent stable distributions. Therefore, by monitoring the dependency of the machine–human interaction with respect to the actual predictability, one can assess whether the computational model or the human observation should change, or whether the actual phenomenon is in transition:
  • When the actual prediction accuracy is high and the human–machine interaction is high, this indicates the successful modelling of the observed phenomenon with the use of computation.
  • When the actual prediction accuracy is high and the human–machine interaction is low, the human has achieved a successful understanding of the phenomenon with less dependency on the machine.
  • When the actual prediction accuracy is low and the human–machine interaction is high, this indicates the possibility that the computational capacity is not sufficient to effectively treat the phenomenon. Otherwise, the observed phenomenon might be in a dynamical transition such that the effective computational model needs to be changed.
  • When the actual prediction accuracy is low and the human–machine interaction is low, more human effort needs to be engaged, both in the actual observation and in the utilization of the machine interface.

5.3. Criticality by Guided Optimization

The actual management task of citizen science is often firmly related to the sustainability of a social–ecological system, where the achievement of robustness and resilience is an important criterion of criticality [3,5]. A universally robust model with respect to an arbitrary variable cost function is canonically given by the uniform distribution, which is commonly adopted as a prior in Bayesian estimation and random search algorithms [14]. It is also widely prevalent in biological phenomena, as the survival rate depends on the geometric mean of evolutionary fitness, which is maximized with uniformity in space, time, and statistical configuration [32,37].
On the other hand, a short-term management goal is usually biased by a given objective. How to reconcile short-term local efficiency and long-term global sustainability is a crucial issue for guided self-organization of management in citizen science.
In order to optimize the balance between different spatio-temporal scales, information geometry can provide a theoretical compromise in terms of informational complexity. Suppose the actual distribution of a variable $X \in \mathbb{X}$ is given by $P_a(X)$, a short-term management goal by $P_s(X)$, and an idealized long-term robust distribution by $P_l(X)$. In many natural systems, the uniformity of $P_l(X)$ supporting robustness as the result of self-organization is expressed with the entropy maximization principle under parameter constraints such as resource availability and energy flux level [38].
For simplicity, take the example of Shannon's diversity index $H$ defined on a discrete distribution $P(X)$ over symbols $X = \{s_0, s_1, \ldots, s_n\}$, such as the frequencies of $n$ species in a biodiversity observation:
$$H := -\sum_{i=0}^{n} P(s_i) \log P(s_i),$$
where $s_0$ represents the non-occurrence of any species. $P(X)$ and $H$ could be either buoy or anchor. Note that $H$ can be generalized to the mutual information $H_2$ to express raft, buoy–anchor, and raft–anchor connections,
$$H_2 := \sum_{i,j} P_2(s_i, s_j) \log \frac{P_2(s_i, s_j)}{P(s_i)\, P(s_j)},$$
where $P_2(\cdot,\cdot)$ denotes the joint distribution on $X \times X$.
By maximizing $H$, we can determine the most diverse distribution $P_l$ as
$$P_l(s_i) = \frac{1}{n+1},$$
which represents the most robust ecosystem under the assumption that every species, including the gap $s_0$, is equally valuable in terms of ecosystem function in a randomly changing environment.
With respect to the short-term management goal, both $H(P_a) < H(P_s)$ and $H(P_a) > H(P_s)$ could occur. However, the general relationship between biodiversity and ecosystem functions imposes $H(P_a) < H(P_s)$, meaning a net positive impact on biodiversity and good management in terms of sustainability. $H$ can be generalized to the complexity measure $G^{-1}$ of Section 3.1 with respect to the commonality $\lambda$ of Section 3.2, which will be detailed in Section 7.
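As a short numerical sketch of these quantities (the species frequencies below are hypothetical), one can compute $H$ for an observed distribution, the uniform maximizer $P_l$, and the mutual-information generalization $H_2$ from a joint table:

```python
import numpy as np

def shannon(p):
    """Shannon diversity H = -sum p log p, ignoring zero entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Hypothetical frequencies over n = 3 species plus the gap symbol s0.
P_a = np.array([0.55, 0.25, 0.15, 0.05])
n = len(P_a) - 1
P_l = np.full(n + 1, 1.0 / (n + 1))       # uniform maximizer, H = log(n+1)
print(shannon(P_a), shannon(P_l), np.log(n + 1))

# Mutual information H2 from a hypothetical joint distribution P2 on X x X.
P2 = np.array([[0.30, 0.10],
               [0.05, 0.55]])
Px, Py = P2.sum(axis=1), P2.sum(axis=0)
H2 = float((P2 * np.log(P2 / np.outer(Px, Py))).sum())
print(H2)  # non-negative; zero iff the two observations are independent
```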
Expressed as an exponential family, $P(X)$ can be parameterized as a statistical manifold in the canonical setting of information geometry, with the dual-flat coordinates $\Theta = \{\theta^i \mid i = 1, \ldots, n\}$ and $H = \{\eta_i \mid i = 1, \ldots, n\}$ and the potential functions $\psi$ and $\phi$, respectively, based on the Fisher information metric $g$ and connection coefficients $\Gamma^{(\alpha)}$ [39,40]:
$$P(X, \Theta) = \exp\left( C(X) + \sum_{i=1}^{n} \theta^i f_i(X) - \psi(\Theta) \right), \quad \partial_{\theta^i}\,\psi = \eta_i, \quad \partial_{\eta_i}\,\phi = \theta^i, \quad \phi(H) = \sum_{i=1}^{n} \theta^i \eta_i - \psi(\Theta),$$
under the correspondence of the following transformation for a discrete distribution:
$$C(X) = 0, \quad f_i(X) = \mathbf{1}(X = s_i), \quad \psi(\Theta) = -\log P(s_0), \quad \theta^i = \log\frac{P(s_i)}{P(s_0)}, \quad \eta_i = E[f_i(X)] = P(s_i).$$
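These transformations are easy to verify numerically; in the sketch below (with a hypothetical discrete distribution), the dual potential $\phi(H) = \sum_i \theta^i \eta_i - \psi(\Theta)$ coincides with the negative Shannon entropy of $P$:

```python
import numpy as np

# Hypothetical discrete distribution over {s0, s1, ..., sn}; P[0] = P(s0).
P = np.array([0.4, 0.3, 0.2, 0.1])
theta = np.log(P[1:] / P[0])              # e-coordinates theta^i
eta = P[1:]                               # m-coordinates eta_i = P(s_i)
psi = -np.log(P[0])                       # potential psi(Theta)
phi = float(theta @ eta - psi)            # dual potential phi(H)

# phi(H) equals the negative Shannon entropy of P:
print(phi, float((P * np.log(P)).sum()))
```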
The elements of the Fisher information metric $g = (g_{ij})$ are given with respect to the dual coordinates as
$$g_{ij} = \partial_{\theta^i}\partial_{\theta^j}\,\psi(\Theta) = \frac{\partial \eta_j}{\partial \theta^i}, \qquad g^{ij} = \partial_{\eta_i}\partial_{\eta_j}\,\phi(H) = \frac{\partial \theta^j}{\partial \eta_i},$$
where $(g^{ij})$ is the inverse matrix of $(g_{ij})$. This relation defines $\Theta$ and $H$ as dual coordinate systems orthogonal to each other with respect to $g$. The $\alpha$-connection coefficients $\Gamma^{(\alpha)} = \Gamma^{(\alpha)}_{ij;k}$ ($i, j, k \in \{1, \ldots, n\}$), for a real number $\alpha$, are given in terms of the Fisher information metric as
$$\Gamma^{(\alpha)}_{ij;k} = \frac{1}{2}\left( \partial_{\theta^i}\, g_{jk} + \partial_{\theta^j}\, g_{ik} - \partial_{\theta^k}\, g_{ij} \right) - \frac{\alpha}{2}\, E\!\left[ \partial_{\theta^i}\log P(X)\; \partial_{\theta^j}\log P(X)\; \partial_{\theta^k}\log P(X) \right],$$
where $E[\cdot]$ is the mean value functional. The values $\alpha = 1$ and $-1$ are essential in information geometry; they define the e- and m-flat connections, respectively, in terms of the invariance of the tangent space under the covariant differential $\nabla^{(\alpha)}$ on arbitrary coordinates $\{\xi^i\}$ ($i = 1, \ldots, n$) of the statistical manifold:
$$\nabla^{(\alpha)}_{\partial_{\xi^i}}\, \partial_{\xi^j} = \sum_{k=1}^{n} \Gamma^{(\alpha)}_{ij;k}\, \partial_{\xi^k},$$
where $\Gamma^{(1)}_{ij;k} = 0$ for $\xi^i = \theta^i$, and $\Gamma^{(-1)}_{ij;k} = 0$ for $\xi^i = \eta_i$. For example, the model $P(X; \Theta)$ is e-flat with respect to the coordinates $\Theta$, and m-flat with respect to the coordinates $H$. $\nabla^{(\pm 1)}$ is called the dual-flat connection of the statistical manifold. The concept of flatness defined by these connections further extends to the concepts of geometric parallelism and geodesics. As autoparallel submanifolds with respect to the connections, the e- and m-flat geodesics $\Theta(w)$ and $H(w)$ between two distributions $P_1(X)$ and $P_2(X)$ are defined as follows, with the one-dimensional parameter $w$:
$$\Theta(w) = w\,\Theta(P_1(X)) + (1-w)\,\Theta(P_2(X)),$$
$$H(w) = w\,H(P_1(X)) + (1-w)\,H(P_2(X)).$$
The unique $\nabla^{(\alpha)}$-divergence $D^{(\alpha)}(P_1(X) : P_2(X))$ that satisfies $D(P_1(X) : P_2(X)) \ge 0$ and $D(P_1(X) : P_2(X)) = 0 \Leftrightarrow P_1(X) = P_2(X)$, and that remains invariant under possible transformations of the dual-flat coordinates with the connections $\nabla^{(\pm\alpha)}$, is given by
$$D^{(\alpha)}(P_1(X) : P_2(X)) = \Psi(P_1(X)) + \Phi(P_2(X)) - \sum_{i=1}^{n} \theta^i(P_1(X))\, \eta_i(P_2(X)),$$
whose dual divergence coincides with the Kullback–Leibler divergence in the case $\alpha = 1$:
$$D^{(1)}(P_1(X) : P_2(X)) = D^{(-1)}(P_2(X) : P_1(X)) = \sum_{X} P_2(X)\, \log\frac{P_2(X)}{P_1(X)}.$$
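Continuing the discrete example above (the distributions are again hypothetical), the canonical form of the divergence can be checked directly against the Kullback–Leibler divergence:

```python
import numpy as np

P1 = np.array([0.4, 0.3, 0.2, 0.1])       # hypothetical distributions
P2 = np.array([0.25, 0.25, 0.25, 0.25])

psi = -np.log(P1[0])                      # psi evaluated at P1
phi = float((P2 * np.log(P2)).sum())      # phi evaluated at P2 (negative entropy)
theta1 = np.log(P1[1:] / P1[0])           # e-coordinates of P1
eta2 = P2[1:]                             # m-coordinates of P2

D = psi + phi - float(theta1 @ eta2)      # canonical divergence D(P1 : P2)
KL = float((P2 * np.log(P2 / P1)).sum())  # Kullback-Leibler D(P2 || P1)
print(D, KL)                              # the two values coincide
```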
From the Pythagorean relation and the projection theorem of the Kullback–Leibler divergence on the dual-flat statistical manifold [39] (p. 63), the following holds:
Theorem 9.
Let $(\Theta_a, H_a)$, $(\Theta_s, H_s)$, and $(\Theta_l, H_l)$ be the dual-flat coordinates of $P_a(X)$, $P_s(X)$, and $P_l(X)$, respectively, with the canonical definition of the e- and m-flat dual connections. We define the optimal distribution $P_o(X)$ with coordinates $(\Theta_o, H_o)$ on the m-flat geodesic between $P_a(X)$ and $P_l(X)$ with parameter $w \in \mathbb{R}$ as
$$H_o := w\,H_a + (1-w)\,H_l.$$
By optimizing $H_o$ with the orthogonal projection of the e-flat geodesic from $\Theta_s$ to $\Theta_o$, i.e.,
$$w = \arg\min_{w}\, D_m(P_o : P_s) = \arg\min_{w}\, D_e(P_s : P_o),$$
the following Pythagorean relations hold:
$$D_m(P_a : P_s) = D_m(P_a : P_o) + D_m(P_o : P_s), \qquad D_m(P_l : P_s) = D_m(P_l : P_o) + D_m(P_o : P_s),$$
where $D_m(\cdot : \cdot)$ and $D_e(\cdot : \cdot)$ are the Kullback–Leibler divergence and its dual divergence, respectively:
$$D_m(P_o : P_s) := D_e(P_s : P_o) := \sum_{i=0}^{n} P_o(s_i)\, \log\frac{P_o(s_i)}{P_s(s_i)}.$$
Figure 9 shows the geometrical structure of this theorem. In this case, supposing $H(P_a) < H(P_s) < H(P_l)$ for the effectiveness of the complexity measure $H$ in management, we want to find the optimal distribution of biodiversity $P_o$, balancing between $P_s$ and $P_l$ with respect to the actual distribution $P_a$, such that
$$H(P_a) < H(P_s) < H(P_o) < H(P_l),$$
based on statistical dependencies between variables that can be orthogonally separated with the Pythagorean relation. As a result, $P_o$ provides the optimized distribution with minimum informational discrepancy from the short-term objective on the ideal transition towards the long-term most diverse state. The meanings of the major components of the Kullback–Leibler divergence to be used as guides of self-organization are listed as follows (a numerical sketch follows the list):
  • $D_m(P_a : P_o)$: Discrepancy between the actual distribution and the optimum portfolio strategy, which orthogonally decomposes and attempts to achieve a balance between the short-term management objective and long-term sustainability.
  • $D_m(P_a : P_s)$: Target risk of the short-term management objective.
  • $D_m(P_o : P_s) = D_e(P_s : P_o)$: Buffering element of the robustness trade-off between the short-term management objective and long-term sustainability.
  • $D_m(P_l : P_o)$: Potential risk of the optimum portfolio w.r.t. long-term sustainability.
  • $D_m(P_l : P_s)$: Potential risk of the short-term management objective w.r.t. long-term sustainability.
  • $D_m(P_l : P_a)$, $D_m(P_a : P_l)$: Potential risk of the actual distribution w.r.t. long-term sustainability.
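For a discrete distribution, the m-geodesic of Theorem 9 is a mixture in the expectation ($\eta$) coordinates, i.e., $P_o = w P_a + (1-w) P_l$, so the optimum can be located by a one-dimensional search on $D_m(P_o : P_s)$. The sketch below uses hypothetical distributions and a simple grid scan in place of a closed-form projection:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for strictly positive vectors."""
    return float((p * np.log(p / q)).sum())

# Hypothetical actual, short-term target, and long-term (uniform) distributions.
P_a = np.array([0.55, 0.25, 0.15, 0.05])
P_s = np.array([0.40, 0.30, 0.20, 0.10])
P_l = np.full(4, 0.25)

# m-geodesic between P_a and P_l: a mixture in the eta coordinates.
ws = np.linspace(0.0, 1.0, 1001)
divs = [kl(w * P_a + (1 - w) * P_l, P_s) for w in ws]
w_opt = ws[int(np.argmin(divs))]
P_o = w_opt * P_a + (1 - w_opt) * P_l

# Pythagorean relation of Theorem 9 (equality up to the grid resolution):
print(w_opt)
print(kl(P_a, P_s), kl(P_a, P_o) + kl(P_o, P_s))
```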

6. Results from Biodiversity Management

We demonstrate the application of the model developed in this article to actual citizen science observation data, taking a biodiversity observation activity supported by an interactive database as a typical example [17]. The sample data contain observations by seven citizen participants on 48 subjective binary indices of species occurrence as buoy data on biological diversity, resulting in 336 samples. On the other hand, a buoy–anchor connection was established separately by an objective evaluation of each participant's ability to detect these species.
Commonality orders among the seven observers were obtained both for inter-subjectivity, based on the mutual information of the buoy data, and for subjective–objective unity, by simply ranking the buoy–anchor connection data. These orders are shown in Figure 10. The binomial test defined in (38) was performed on the comparison between the inter-subjective and subjective–objective commonality orders. The random order distribution hypothesis was rejected at the 4% significance threshold. The matching was more consistent at higher orders of commonality, which implies the intervention of subjective bias at lower orders. With respect to the conjectures on criticality in Section 5, the results can be interpreted as a significant self-organization process towards criticality with the increase of inter-subjective objectivity.
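The reported significance level can be reproduced with a one-sided binomial test on the $M = {}_{7}C_2 = 21$ observer pairs; the count of matching pairs below is a hypothetical illustration chosen to yield the 3.92% tail probability of Figure 10, not a value quoted in the paper.

```python
from math import comb

N = 7                       # observers
M = comb(N, 2)              # 21 comparable pairs
k = 15                      # assumed number of I-I matching pairs (illustrative)

# One-sided binomial tail P(X >= k) under the random-order null with p = 0.5.
p_value = sum(comb(M, i) for i in range(k, M + 1)) / 2**M
print(p_value)              # ~0.0392, i.e., the 3.92% level
```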

7. Discussion

We have tackled the general situation in data-driven citizen science where scientific accuracy and reproducibility can only be discussed at the intersection of subjectivity, inter-subjectivity, and objectivity. Based on the conceptual definition of inter-subjective objectivity, a general topological structure was characterized with respect to complexity measure, search function, computational complexity, and criticality conditions. The results provide theoretical criteria for the development of information and communication technology in view of effective assistance and guidance of citizen science from a complex systems perspective.
The universality of the developed theory and models lies in the generality of the commonality concept formalized as convolution. In reality, a joint distribution of $N$ variables can be represented as a function of a convolution of degree $N$, which allows for an extensive expression of informational complexities [32].
For example, by choosing a time range $T \subset \mathbb{R}$ with positive Lebesgue measure $m(T) > 0$, the marginal distribution $P(x \mid T)$ can be expressed as the time integral of the probability measure $\mu$, such as
$$P(x \mid T) := \int_T \mu(dt) = \int_{\mathbb{R}} \mathbf{1}(t \in T)\, \mu(dt) = q(T),$$
according to (24).
On the other hand, the joint distribution $P(x_1, x_2 \mid T)$ is also the time integral of the products between each variable's probability measures $\mu_1$ and $\mu_2$, within the simultaneous time ranges $\{dT_i\}_{i=1,\ldots,n}$:
$$P(x_1, x_2 \mid T) := \sum_{i}^{n} \frac{\int_{dT_i}\int_{dT_i} \mu_1(dt_1)\, \mu_2(dt_2)}{m(dT_i)}, \qquad \bigcup_{i}^{n} dT_i = T,$$
where $m(\cdot)$ is the Lebesgue measure on $\mathbb{R}$. As defined in (25),
$$\int_{dT_i}\int_{dT_i} \mu_1(dt_1)\, \mu_2(dt_2) = \mu_1 * \mu_2(dT_{2i}), \qquad dT_{2i} := \left\{ \sum_{i=1,2} t_i \,\middle|\, t_i \in dT_i \right\},$$
which derives the practical form for actual data processing as
$$P(x_1, x_2 \mid T) := \sum_{i}^{n} \frac{\mu_1 * \mu_2(dT_{2i})}{m(dT_i)}.$$
Taking $n \to \infty$, we obtain
$$P(x_1, x_2 \mid T) := \int_T \mu_1 * \mu_2(dT_2) = \int_T q_1(dT)\, q_2(dT) = \int_T \mu_1(dT)\, \mu_2(dT),$$
the canonical definition of the joint distribution with real-valued resolution of time.
This leads to the generalization to $N$ variables with (A5) as
$$P(x_1, x_2, \ldots, x_N \mid T) = \sum_{i}^{n} \frac{\lambda_N(dT_{Ni})}{m(dT_i)}, \qquad dT_{Ni} := \left\{ \Lambda \sum_{j=1}^{N} t_j \,\middle|\, t_j \in T_i \right\}, \qquad \bigcup_{i=1}^{n} dT_i = T.$$
Taking $n \to \infty$, it converges to
$$P(x_1, x_2, \ldots, x_N \mid T) = \int_T \lambda_N(dT_N) = \int_T \mu_1(dT)\, \mu_2(dT) \cdots \mu_N(dT).$$
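Numerically, this convolution form can be checked with discretized measures: the sketch below (with hypothetical, unnormalized densities) convolves measures on a common grid and confirms that the total mass factorizes as the product $\prod_i q_i(r)$, as in Theorem 2, while the shape of $\lambda_N$ approaches a normal distribution as in Theorem 3.

```python
import numpy as np

dx = 0.01
x = np.arange(0.0, 1.0, dx)

# Hypothetical (unnormalized) measures on r = [0, 1], discretized on the grid.
mu1 = np.exp(-((x - 0.3) ** 2) / 0.02) * dx
mu2 = (0.5 + x) * dx

lam = mu1.copy()
for mu in (mu2, mu1, mu2):           # lambda_4 = mu1 * mu2 * mu1 * mu2
    lam = np.convolve(lam, mu)       # discrete convolution; support grows to [0, 4]

# Total mass factorizes as the product of the q_i(r) (Theorem 2):
print(lam.sum(), mu1.sum() ** 2 * mu2.sum() ** 2)
```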
Therefore, based on the commonality as convolution, we can derive all orders of the joint distribution necessary for the calculation of known complexity measures. In a general form, any complexity measure incorporating the information of a joint distribution can be described as a function of convolution $G^{-1}(Q(\lambda))$, following the formalization of Section 3.1.
The commonality order is also accessible to existing algorithms that extract a total order of system elements, such as Dulmage–Mendelsohn decomposition [41] and phylogenetic tree analyses [42]. Although the calculation of joint distributions of all orders from matrix data generally confronts exponential computational time, a total order based on partial combinatorics, together with statistical testing against a known distribution of p-values, can provide a quick evaluation of the matching between the results of different algorithms. The pair-wise and triplet order algorithms on $N$ observations can be processed in $O(N^2)$ and $O(N^3)$, respectively, similar to the range of most other ranking algorithms based on low-order statistics. The comparison between total orders of commonality over $N$ observations requires only second-degree polynomial time $O(N^2)$ (38). Taking such partial optimization and algorithm-wise comparison of performance into account, as an extension of the Bayesian estimator including the human in Section 5.2, a deep learning model with the use of massively parallel machine learning can be structurally effective for the interactive recombination of an estimation model based on human feedback [4].
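As one simple realization of the $O(N^2)$ pair-wise stage (not the paper's exact algorithm), pairwise mutual information can be computed for all observer pairs and a total order extracted by ranking each observer's mean commonality with the others; the observation matrix below is randomly generated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 7, 48                                   # observers, binary indices
data = rng.integers(0, 2, size=(N, T))         # hypothetical observation matrix

def mutual_info(a, b):
    """Mutual information of two binary sequences (natural log)."""
    mi = 0.0
    for u in (0, 1):
        for v in (0, 1):
            p_uv = np.mean((a == u) & (b == v))
            if p_uv > 0:
                mi += p_uv * np.log(p_uv / (np.mean(a == u) * np.mean(b == v)))
    return mi

# O(N^2) pair-wise commonality, then rank by total commonality with the others.
C = np.array([[mutual_info(data[i], data[j]) if i != j else 0.0
               for j in range(N)] for i in range(N)])
order = np.argsort(-C.sum(axis=1))
print(order)                                   # inter-subjective commonality order
```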
In order to effectively attain criticality in citizen science, where knowledge acquisition, transfer, and control are optimized through self-organization, we need to reach a collective intelligence that is distributed in parallel both in our subjective minds and in objective reality. The cost of data-driven science sometimes derives from overly weighted objective measurement aimed at complete modelling, which can also hinder the agility of taking action and the opportunity of effective interaction through internal observation [3]. As explored in this article, if there exist natural laws extended over our collective intelligence, much like the physical law in objective nature, we may count on such topological structure, and it may be possible to obtain effective guidance through partial and distributed observation. Such a way of organizing collective intelligence among independent and parallel activity producers could be considered a social–environmental expansion of "intelligence without representation", which is based on a direct interface to the world through perception and action, rather than a comprehensive representation of knowledge isolated from the environment [43]. Data acquisition needs to generate potentially effective action strategies, or affordances under global management principles, instead of modelling the phenomena without essential intervention of the actors [44]. This can be described as data-affordance science, in contrast to exhaustive data-driven science: we substantially depend on the emergent topological structure of inter-subjective objectivity to make decisions in real time, represented at the intersection of the human mind, computation, and natural phenomena. The buoy–anchor–raft model developed as a mutual framework can provide a theoretical basis that expands the external observation of conventional science to the internal observation necessary for management and knowledge extraction in a data-affordance science [5,27]. As a cumulative effect of synergistic efficiency, the cost of observation and data processing could diminish within a computable time scale by implicitly augmenting the knowledge representation incorporated into actual action principles. With measurement–action unity as a process of affordance in both data and reality, a cost-effective interface and a human-dependable system could be realized within the framework of internal observation, as a crucial premise for a sustainable solution. The edge of criticality for a successful citizen science, in terms of its nature and resource restrictions, may find its limits neither in our internal mind nor in the external world, but in the topology of their interactions.

Acknowledgments

This study was funded by Sony Computer Science Laboratories, Inc.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Proof of Theorem 1.
Let us formulate Equation (12) as $I = F^{-1}(Q(x))$. As $I$ is an epimorphism but not necessarily a monomorphism, its inverse function generally retrieves a larger subset of conditions including $Q(x)$:
$$F(I) \supseteq Q(x).$$
Recursively defining $Q(x)$ by specifying the value of $I$ as
$$Q(x) = S_R^{-1}\left( \{ x \in X \mid I = \mathrm{const.} \} \right),$$
one obtains the inverse function that brings us back exactly to the comprehensive search condition, $F(I) = Q(x)$.
Now, we consider the epimorphism $H : \{Q(x)\} \to \{Q'(x)\}$ with its right-sided inverse $H^{-1} : \{Q'(x)\} \to \{Q(x)\}$, $H^{-1} \circ H\, Q(x) = Q(x)$. We set $Q'(x) = H\,Q(x)$, which gives $F(I) = H^{-1} Q'(x)$, then $H \circ F(I) = Q'(x)$. Next, we consider $I'$ such that $F(I') = Q'(x)$. By resolving $F \circ F'(I) = H \circ F(I)$ with respect to $F' : \mathbb{R} \to \mathbb{R}$, we obtain
$$I' = F'(I),$$
then
$$Q'(x) = F \circ F'(I) = F(I'),$$
which shows the coincidence between $F$ and $G$ with the exclusively selective complexity measure $I'$. The exact construction of $Q'$, $F$, $H$, and $F'$ depends on the exhaustive computation process, whose computational complexity is characterized in Section 4. ☐
Proof of Theorem 2.
From Tonelli's theorem,
$$\lambda_N(r^N) := \mu_1 * \mu_2 * \cdots * \mu_i * \cdots * \mu_N(r) = \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} \mathbf{1}\!\left( \Lambda \sum_{i=1}^{N} x_i \,\middle|\, x_i \in r \right) \mu_1(dx_1)\, \mu_2(dx_2) \cdots \mu_i(dx_i) \cdots \mu_N(dx_N) = \prod_{i}^{N} \int_{\mathbb{R}} \mathbf{1}(x_i \in r)\, \mu_i(dx_i) = \prod_{i}^{N} q_i(r). \; ☐$$
Proof of Theorem 3.
The central limit theorem with Lindeberg's condition assures the following convergence as the sampling number $N' \to \infty$ and the number of distributions $N \to \infty$:
$$\frac{1}{N'} \sum_{j=1}^{N'} \sum_{i=1}^{N} x_{ij} \;\to\; \int_{\mathbb{R}} \mathcal{N}(\nu_N, \sigma_N^2)\, m(dx),$$
where the variables $x_{ij} \in X_i = \{x_{i1}, \ldots, x_{iN'}\}$ follow independent distributions $p(X_i)$, $X_i \in \{X_1, \ldots, X_N\}$, with finite mean $\alpha_i = \frac{1}{N'}\sum_{j=1}^{N'} x_{ij}$ and variance $\beta_i^2 = \frac{1}{N'}\sum_{j=1}^{N'} x_{ij}^2 - \alpha_i^2$ taken over the $N'$ samples, and
$$\nu_N = \sum_{i=1}^{N} \alpha_i,$$
$$\sigma_N^2 = \sum_{i=1}^{N} \beta_i^2.$$
Based on the central limit theorem, we consider the numerical convergence of $\lambda_N(r^N)$ in a way accessible to $r_s \subseteq r^N$. The convolution $\lambda_N(r^N)$ represents infinite random sampling of $x \in r^N := \left\{ \Lambda \sum_{k=1}^{N} x_k \,\middle|\, x_k \in r \right\}$ at the limit $N' \to \infty$, from the $N$ independent distributions $\{\mu_i(x) \mid x \in r,\ i = 1, \ldots, N\}$ as the population distributions, with finite means $\alpha_i$ and variances $\beta_i^2$ as follows:
$$\alpha_i = \int_{\mathbb{R}} \mathbf{1}(x \in r)\, x\, \mu_i(dx), \qquad \beta_i^2 = \int_{\mathbb{R}} \mathbf{1}(x \in r)\, x^2\, \mu_i(dx) - \alpha_i^2,$$
where each mean and variance is bounded within the total variation of $r$ as
$$\inf(r) \le \alpha_i \le \sup(r),$$
$$\beta_i \le \frac{\sup(r) - \inf(r)}{2} = \frac{\|r\|}{2}.$$
If the $\mu_i(x \in r)$ are finite measures, we obtain the following from the central limit theorem for independent distributions with finitely bounded means and variances:
$$\lambda_N(r^N) \;\to\; \int_{\mathbb{R}} \mathbf{1}(x \in r^N)\, \mathcal{N}(\Lambda\nu_N, \Lambda^2\sigma_N^2)\, m(dx) \times \int_{\mathbb{R}^N} \mathbf{1}\!\left( \Lambda \sum_{i=1}^{N} x_i \,\middle|\, x_i \in r \right) \mu_1(dx_1) \cdots \mu_i(dx_i) \cdots \mu_N(dx_N) = \int_{\mathbb{R}} \mathbf{1}(x \in r^N)\, \mathcal{N}(\Lambda\nu_N, \Lambda^2\sigma_N^2)\, m(dx) \times \prod_{i}^{N} q_i(r),$$
where
$$\nu_N = \sum_{i=1}^{N} \alpha_i, \qquad \sigma_N^2 = \sum_{i=1}^{N} \beta_i^2,$$
which coincides with (33) as $N \to \infty$. In (A12), the term $\prod_{i}^{N} q_i(r)$ serves as the overall normalisation factor, since $q_i(r)$ is not necessarily normalized as a probability distribution with total probability 1. Since the convolution is replaced by the integral of a normal distribution of a single variable, by restricting to an arbitrary subset $r_s \subseteq r^N$, we obtain the theorem (32):
$$\lambda_N(r_s) \;\to\; \int_{\mathbb{R}} \mathbf{1}(x \in r_s)\, \mathcal{N}(\Lambda\nu_N, \Lambda^2\sigma_N^2)\, m(dx) \times \prod_{i}^{N} q_i(r) = \int_{r_s} \mathcal{N}(\Lambda\nu_N, \Lambda^2\sigma_N^2)\, m(dx) \times \prod_{i}^{N} q_i(r).$$
In the case where $\mu_i(x \in r)$ includes infinite measures that do not guarantee the above convergence, i.e., $\exists x \in r$ such that $\mu_i(x) = \infty$, we nevertheless have
$$m(\{ x \in r \mid \mu_i(x) = \infty \}) = 0,$$
because in the opposite case, $m(\{ x \in r \mid \mu_i(x) = \infty \}) > 0$ implies $\mu_i(r) = q_i(r) = \infty$, which contradicts the definition (24). Since infinite measures can only appear within a countable set of zero Lebesgue measure,
$$\mu_i(\{ x \in r \mid \neg\, \text{Theorem 3} \}) = 0,$$
which means that the theorem holds for almost every $x \in r_s$. ☐
Proof of Theorem 4.
The null hypothesis can be represented as a random order distribution, in which the $M = {}_{N}C_2$ pairs of $N$ observations are each susceptible to generating an I–N compromise between the orders $I$ and $II$. Choose an arbitrary commonality order $I$ and consider the null-hypothesis distribution of $II$.
With respect to an arbitrary pair $(i, j)$ out of the $N$ observations, all permutations in $G_N$ can be divided into two sets $H_{I\text{–}I}$ and $H_{I\text{–}N}$, which correspond to those generating I–I matchings and I–N compromises, respectively:
$$H_{I\text{–}I} := \{ g_{I\text{–}I} \in G_N \mid g_{I\text{–}I}(i) = i,\ g_{I\text{–}I}(j) = j,\ 1 \le i < j \le N \},$$
$$H_{I\text{–}N} := \{ g_{I\text{–}N} \in G_N \mid g_{I\text{–}N}(i) = j,\ g_{I\text{–}N}(j) = i,\ 1 \le i < j \le N \}.$$
Here, $H_{I\text{–}I}$ and $H_{I\text{–}N}$ are not groups, but subsets of the same size,
$$\#(H_{I\text{–}I}) = \#(H_{I\text{–}N}) = \frac{1}{2}\,\#(G_N),$$
because
$$H_{I\text{–}N} = g_{ij} H_{I\text{–}I},$$
$$H_{I\text{–}I} \cup H_{I\text{–}N} = G_N,$$
$$H_{I\text{–}I} \cap H_{I\text{–}N} = \emptyset,$$
where, for $k = 1, \ldots, N$,
$$g_{ij}(k) := \begin{cases} j & \text{if } k = i, \\ i & \text{if } k = j, \\ k & \text{else.} \end{cases}$$
Then, with respect to the random permutation, the probability $p$ that each pair from the $N$ observations will be judged as an I–I matching is given by
$$p = \frac{\#(H_{I\text{–}I})}{\#(G_N)} = 0.5,$$
which leads the occurrence number $k_{I\text{–}I}$ of I–I matchings to follow a binomial distribution with parameters $M = {}_{N}C_2$ and $p$.
Note that the binomial distribution can be approximated by a normal distribution for $N \ge 7$ in this case, according to the conditions on the mean, $Mp > 5$, and the variance, $Mp(1-p) > 5$. ☐
Proof of Theorem 5.
Take $n > m \in \mathbb{N}$ and consider the database $X$, $\#(X) = n$, in which we divide the $m$ observations with $k = \lfloor n/m \rfloor$ elements and their intersections as the commonality structure; $\lfloor \cdot \rfloor$ is the floor function.
As the cardinality of the rational numbers is $\aleph_0$, any positive common fraction, or $\mathbb{N}^2$, can find a unique correspondence to $\mathbb{N}$. Now, for an arbitrary $k = \lfloor n/m \rfloor$, $\exists n'$ such that $k \le n'/m$ (for example, take $n' = m \lceil n/m \rceil$ with the ceiling function $\lceil \cdot \rceil$). Since $k \in \mathbb{N}$, for simplicity, let us consider the correspondence $n = km$ for any $k, m \in \mathbb{N}$. With the use of Cantor's pairing function $\langle \cdot, \cdot \rangle : \mathbb{N}^2 \to \mathbb{N}$, we obtain the unique counting natural number $\langle k, m \rangle$ for all pairs $(k, m)$:
$$\langle k, m \rangle := \frac{1}{2}(k + m)(k + m + 1) + m.$$
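Cantor's pairing function and its inverse are concrete enough to state as code; a minimal sketch:

```python
import math

def pair(k: int, m: int) -> int:
    """Cantor's pairing function <k, m> = (k + m)(k + m + 1)/2 + m."""
    return (k + m) * (k + m + 1) // 2 + m

def unpair(z: int) -> tuple:
    """Inverse of Cantor's pairing function."""
    w = (math.isqrt(8 * z + 1) - 1) // 2       # largest w with w(w+1)/2 <= z
    m = z - w * (w + 1) // 2
    return w - m, m

# Bijectivity check on a finite range of pairs.
assert all(unpair(pair(k, m)) == (k, m) for k in range(50) for m in range(50))
```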
As $n = km$ contains a permutational symmetry with respect to $k$ and $m$, the uniqueness does not hold for $\langle k, m \rangle \to n$; however, from the inverse function of $\langle \cdot, \cdot \rangle$,
$$\lim_{\langle k, m \rangle \to \infty} k = \infty,$$
$$\lim_{\langle k, m \rangle \to \infty} m = \infty,$$
$$\lim_{\langle k, m \rangle \to \infty} km = \lim_{\langle k, m \rangle \to \infty} n = \infty.$$
As $n \to \infty$ is equivalent to either $k \to \infty$ or $m \to \infty$,
$$\lim_{n \to \infty} \langle k, m \rangle = \infty,$$
which results in
$$\lim_{n \to \infty} k = \infty,$$
$$\lim_{n \to \infty} m = \infty.$$
Taking $n = \#(X)$, $m = O(r)$, and $k = \#(\{ r \mid O(r) = m \})$ gives the theorem. ☐
Proof of Theorem 6.
From the definition of $\Delta(i)$, when there is no diminution of data, or equivalently $\lambda_k(X) > 0$ for all $k \in \{1, \ldots, N' = \lceil N/2 \rceil\}$,
$$O\!\left( \prod_{i}^{N'} \Delta(i) \right) = O\!\left( \frac{O(N^2)}{O(1)} \cdot \frac{O(N^3)}{O(N^2)} \cdots \frac{O(N^{N'})}{O(N^{N'-1})} \right) = O(N^{N'}) = O(N^{dN'}) \quad \exists d.$$
As the product monotonically decreases with respect to the decrease of each element, the above relation gives the upper bound. The sorting time of $N$ elements is usually given by $O(N^2)$, i.e., $d = 2$, and can be generalized to algorithms of polynomial order $d > 0$. ☐
Proof of Theorem 7.
From
$$\Delta(i, j) = \Delta(i)\, \Delta(j),$$
we directly obtain
$$O\!\left( \prod_{(i,j)}^{(N,M)} \Delta(i, j) \right) = O\!\left( \prod_{i}^{N} \Delta(i) \right) O\!\left( \prod_{j}^{M} \Delta(j) \right) = O(N^N M^M) = O([N^N M^M]^d) \quad \exists d. \; ☐$$
Proof of Theorem 8.
The condition (45) can be translated into the following with respect to the data sparseness $u$:
$$O\!\left( \left[ {}_{N}C_{\dim(\{X^N \mid O(X) = N\})} \cdot {}_{M}C_{\dim(\{X^M \mid O(X) = M\})} \right]^d \right) \le O([N+M]^c),$$
$$O\!\left( \left[ {}_{uN}C_{N} \cdot {}_{uM}C_{M} \right]^d \right) \le O([N+M]^c),$$
$$O\!\left( \left[ u^2 N^N M^M \right]^d \right) \le O([N+M]^c).$$
Expressed as the order of computational time on both sides of the formula, without $O(\cdot)$ for simplicity,
$$[u^2 N^N M^M]^d \le [N+M]^c,$$
and taking the logarithmic scale,
$$d \log(u^2 N^N M^M) \le c \log L,$$
$$\frac{1}{2} \sum_{k \in \{N, M\}} \frac{k}{2} \log k \le \frac{c}{2d} \log L - \log u.$$
We consider the application of Chebyshev's sum inequality to the left side, such that
$$\frac{1}{2} \sum_{k \in \{N, M\}} \frac{k}{2} \;\cdot\; \frac{1}{2} \sum_{k \in \{N, M\}} \log k \;\le\; \frac{1}{2} \sum_{k \in \{N, M\}} \frac{k}{2} \log k.$$
Since $\lceil k/2 \rceil / (k/2) \to 1$ as $k \to \infty$, and removing the constant coefficient $\frac{1}{2}$, the asymptotic behaviour of (A41) can be evaluated essentially from $f_l(N, M)$ for the left side and $f_r(N, M)$ for the right side, defined as
$$f_l(N, M) := (N + M)(\log N + \log M), \qquad f_r(N, M) := N \log N + M \log M,$$
with which (A41) is described as
$$\frac{1}{2} f_l(N, M) \le f_r(N, M).$$
As $N, M \to \infty$, which becomes dominant as $L \to \infty$,
$$f_l(N, M) \in O((N+M)\log(N+M)), \qquad f_r(N, M) \in O((N+M)\log(N+M)),$$
since $\exists D > 0$, $\exists C > 0$ such that for $N, M \ge D$, $f_l(N, M), f_r(N, M) \le C \cdot [(N+M)\log(N+M)]$. This condition holds with $D \ge 1$, $C \ge 2$ for both $f_l(N, M)$ and $f_r(N, M)$. Note that although an explicit inequality between $f_l(N, M)$ and $f_r(N, M)$ exists in (A43), they converge to the same asymptotic order $O(L \log L)$ for all $N$ and $M$, because, as $L \to \infty$,
$$1 \le \frac{f_r(N, M)}{\frac{1}{2} f_l(N, M)} \le 2, \qquad \frac{1}{2}\, \frac{f_l(N, M)}{L \log L} \le 1,$$
and
$$\frac{f_r(N, M)}{L \log L} \le 1,$$
which remain within ranges of multiplication by a constant. The relations (A45) and (A46) can be proved by examining the minimum and maximum values of $\frac{f_r(N,M)}{\frac{1}{2} f_l(N,M)}$, $\frac{1}{2}\frac{f_l(N,M)}{L \log L}$, and $\frac{f_r(N,M)}{L \log L}$. Considering the range $1 \le N \le \frac{L}{2}$ from the symmetry between $N$ and $M$ ($M = L - N$), we derive the following monotonicity conditions with respect to $N$:
$$\frac{\partial f_l(N, M)}{\partial N} \ge 0,$$
$$\frac{\partial f_r(N, M)}{\partial N} \le 0,$$
from which we obtain the minimum value of $\frac{f_r(N,M)}{\frac{1}{2} f_l(N,M)}$ at $N = \frac{L}{2}$,
$$\frac{f_r\!\left( \frac{L}{2}, \frac{L}{2} \right)}{\frac{1}{2} f_l\!\left( \frac{L}{2}, \frac{L}{2} \right)} = 1,$$
the maximum value of $\frac{f_r(N,M)}{\frac{1}{2} f_l(N,M)}$ at $N = 1$,
$$\frac{f_r(1, L-1)}{\frac{1}{2} f_l(1, L-1)} = 2 \times \frac{L-1}{L} \to 2,$$
the minimum value of $\frac{1}{2}\frac{f_l(N,M)}{L \log L}$ at $N = 1$,
$$\frac{1}{2}\, \frac{f_l(1, L-1)}{L \log L} = \frac{1}{2}\, \frac{\log(L-1)}{\log L} \to \frac{1}{2},$$
the maximum value of $\frac{1}{2}\frac{f_l(N,M)}{L \log L}$ at $N = \frac{L}{2}$,
$$\frac{1}{2}\, \frac{f_l\!\left( \frac{L}{2}, \frac{L}{2} \right)}{L \log L} = \frac{\log L - \log 2}{\log L} \to 1,$$
the minimum value of $\frac{f_r(N,M)}{L \log L}$ at $N = \frac{L}{2}$,
$$\frac{f_r\!\left( \frac{L}{2}, \frac{L}{2} \right)}{L \log L} = \frac{\log L - \log 2}{\log L} \to 1,$$
and the maximum value of $\frac{f_r(N,M)}{L \log L}$ at $N = 1$,
$$\frac{f_r(1, L-1)}{L \log L} = \frac{L-1}{L}\, \frac{\log(L-1)}{\log L} \to 1,$$
with the associated convergence as $L \to \infty$.
with the associated convergence as L . Numerical observation of the convergence between f l ( N , M ) , f r ( N , M ) , and L log L is given in Figure 8a.
As both sides of (A43) converge to the same asymptotic behaviour $O((N+M)\log(N+M))$, we apply the left side of Chebyshev's inequality, $f_l(N, M)$, to (A40), which gives the asymptotic relation
$$\frac{1}{8} f_l(N, M) \le \frac{c}{2d} \log L - \log u, \qquad u \le L^{\frac{c}{2d}}\, N^{-\frac{L}{8}}\, (L-N)^{-\frac{L}{8}},$$
where the coefficient $\frac{1}{8}$ is derived from the relation (A41), including the effect of the transformation $N' = \lceil N/2 \rceil$ and $M' = \lceil M/2 \rceil$. As $L \to \infty$ and taking the sum over $N$, it converges to the theorem:
$$L u \le \sum_{N=1}^{L-1} L^{\frac{c}{2d}}\, N^{-\frac{L}{8}}\, (L-N)^{-\frac{L}{8}}, \qquad u \le f * f(L).$$
Numerical observation of the proof is given in Figure 8b. ☐
Proof of Theorem 9.
We consider the $\Theta$ coordinates of $P_o(X)$ as $\Theta_o$, which constitute the e-flat geodesic $\Theta(w) = \{\theta^i(w)\}$ between $P_s(X)$ and $P_o(X)$ as
$$\Theta(w) := w\,\Theta_s + (1-w)\,\Theta_o.$$
The tangent vector $T_e$ of the e-geodesic is expressed as
$$T_e = \sum_{i=1}^{n} \frac{d\theta^i(w)}{dw}\, \partial_{\theta^i} = \sum_{i=1}^{n} \left\{ \theta^i(P_s(X)) - \theta^i(P_o(X)) \right\} \partial_{\theta^i},$$
and the tangent vector $T_m$ of the m-geodesic $H_o$ as
$$T_m = \sum_{i=1}^{n} \frac{d\eta_i(P_o(X))}{dw}\, \partial_{\eta_i} = \sum_{i=1}^{n} \left\{ \eta_i(P_a(X)) - \eta_i(P_l(X)) \right\} \partial_{\eta_i}.$$
Then the inner product $\langle T_e, T_m \rangle$ of these tangent vectors at $P_o(X)$ is expressed as
$$\langle T_e, T_m \rangle = \sum_{i=1}^{n} \left\{ \theta^i(P_s(X)) - \theta^i(P_o(X)) \right\} \left\{ \eta_i(P_a(X)) - \eta_i(P_l(X)) \right\},$$
since, from the duality of the coordinates in (60),
$$\langle \partial_{\theta^i}, \partial_{\eta_j} \rangle = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{else.} \end{cases}$$
As $P_a(X)$, $P_l(X)$, and $P_o(X)$ are aligned on the m-geodesic, the relation (A60) can be translated to
$$\langle T_e, T_m \rangle = \sum_{i=1}^{n} \left\{ \theta^i(P_s(X)) - \theta^i(P_o(X)) \right\} \left\{ \eta_i(P_a(X)) - \eta_i(P_o(X)) \right\} \cdot C_1,$$
$$\langle T_e, T_m \rangle = \sum_{i=1}^{n} \left\{ \theta^i(P_s(X)) - \theta^i(P_o(X)) \right\} \left\{ \eta_i(P_l(X)) - \eta_i(P_o(X)) \right\} \cdot C_2,$$
with some constants $C_1$ and $C_2$.
Now, from the definition of the $\nabla^{(\alpha)}$-divergence (65) and its dual divergence (66), the Pythagorean relations between Kullback–Leibler divergences are expressed as
$$D_m(P_a : P_o) + D_m(P_o : P_s) - D_m(P_a : P_s) = D_e(P_o : P_a) + D_e(P_s : P_o) - D_e(P_s : P_a) = \sum_{i=1}^{n} \left\{ \theta^i(P_s(X)) - \theta^i(P_o(X)) \right\} \left\{ \eta_i(P_a(X)) - \eta_i(P_o(X)) \right\} \cdot (-1) = -\frac{1}{C_1} \langle T_e, T_m \rangle,$$
$$D_m(P_l : P_o) + D_m(P_o : P_s) - D_m(P_l : P_s) = D_e(P_o : P_l) + D_e(P_s : P_o) - D_e(P_s : P_l) = \sum_{i=1}^{n} \left\{ \theta^i(P_s(X)) - \theta^i(P_o(X)) \right\} \left\{ \eta_i(P_l(X)) - \eta_i(P_o(X)) \right\} \cdot (-1) = -\frac{1}{C_2} \langle T_e, T_m \rangle.$$
When orthogonality holds between the e- and m-geodesics, $\langle T_e, T_m \rangle = 0$ in (A62), which proves the Pythagorean relations from (A63).
Finally, we prove that $P_o(X)$ satisfies the minimum condition (68). Considering $P_{o'}(X)$ with a parameter $w' \neq w$ as
$$H_{o'} := w' H_a + (1 - w') H_l,$$
we obtain the Pythagorean relations
$$D_m(P_{o'} : P_s) = D_m(P_{o'} : P_o) + D_m(P_o : P_s), \qquad D_e(P_s : P_{o'}) = D_e(P_o : P_{o'}) + D_e(P_s : P_o).$$
Since $D_m(P_{o'} : P_o) = D_e(P_o : P_{o'}) \ge 0$ from the definition of divergence, $D_m(P_{o'} : P_s) \ge D_m(P_o : P_s)$ and $D_e(P_s : P_{o'}) \ge D_e(P_s : P_o)$ hold, which means that $P_o(X)$ is the stationary point giving the minimum value with respect to $D_m(\cdot : P_s) = D_e(P_s : \cdot)$ on the m-geodesic between $P_a(X)$ and $P_l(X)$. Note that the theorem also holds when $\sum_X P(X)$ takes an arbitrary finite value other than 1. ☐

References

  1. Schwab, K. The Fourth Industrial Revolution; Crown Business: New York, NY, USA, 2017. [Google Scholar]
  2. Nature’s Notebook. Available online: https://www.usanpn.org/natures_notebook (accessed on 21 April 2017).
  3. Funabashi, M.; Hanappe, P.; Isozaki, T.; Maes, A.M.; Sasaki, T.; Steels, L.; Yoshida, K. Foundation of CS-DC e-Laboratory: Open Systems Exploration for Ecosystems Leveraging. In First Complex Systems Digital Campus World E-Conference 2015, Springer Proceedings in Complexity; Springer International Publishing Switzerland: Cham, Switzerland, 2017; pp. 351–374. [Google Scholar]
  4. Funabashi, M. Open Systems Exploration: An Example with Ecosystems Management. In First Complex Systems Digital Campus World E-Conference 2015, Springer Proceedings in Complexity; Springer International Publishing Switzerland: Cham, Switzerland, 2017; pp. 223–243. [Google Scholar]
  5. Tokoro, M. Open Systems Science: A Challenge to Open Systems Problems. In First Complex Systems Digital Campus World E-Conference 2015, Springer Proceedings in Complexity; Springer International Publishing Switzerland: Cham, Switzerland, 2017; pp. 213–221. [Google Scholar]
  6. Bak, P. How Nature Works: The Science of Self-Organized Criticality; Copernicus: New York, NY, USA, 1996. [Google Scholar]
  7. Jensen, H.J. Self-Organized Criticality; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  8. Takayasu, H.; Sato, A.; Takayasu, A. Stable Infinite Variance Fluctuations in Randomly Amplified Langevin Systems. Phys. Rev. Lett. 1997, 79, 966. [Google Scholar] [CrossRef]
  9. Scanlon, T.M.; Caylor, K.K.; Levin, S.A.; Rodriguez-Iturbe, I. Positive feedbacks promote power-law clustering of Kalahari vegetation. Nature 2007, 449, 209–212. [Google Scholar] [CrossRef] [PubMed]
  10. Gabaix, X. Power Laws in Economics: An Introduction. J. Econ. Perspect. 2016, 30, 185–206. [Google Scholar] [CrossRef]
  11. Alves, L.G.A.; Ribeiroa, H.V.; Lenzi, E.K.; Mendes, R.S. Empirical analysis on the connection between power-law distributions and allometries for urban indicators. Phys. A 2014, 409, 175–182. [Google Scholar] [CrossRef]
  12. Michelucci, P.; Dickinson, J.L. The power of crowds. Science 2016, 351, 32–33. [Google Scholar] [CrossRef] [PubMed]
  13. Hanappe, P.; Dunlop, R.; Maes, A.; Steels, L.; Duval, N. Agroecology: A Fertile Field for Human Computation. Hum. Comput. 2016, 1, 1–9. [Google Scholar] [CrossRef]
  14. Scott, S.L. A modern Bayesian look at the multi-armed bandit. Appl. Stoch. Models Bus. Ind. 2010, 26, 639–658. [Google Scholar] [CrossRef]
  15. Prokopenko, M. Guided Self-Organization: Inception; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  16. Rekimoto, J.; Nagao, K. The World through the Computer: Computer Augmented Interaction with Real World Environments. In Proceedings of the 8th Annual ACM Symposium on User Interface and Software Technology (UIST’95), Pittsburgh, PA, USA, 15–17 November 1995; pp. 29–36. [Google Scholar]
  17. Funabashi, M. IT-Mediated Development of Sustainable Agriculture Systems: Toward a Data-Driven Citizen Science. J. Inf. Technol. Appl. Educ. 2013, 2, 179–182. [Google Scholar] [CrossRef]
  18. Aichi Biodiversity Targets. Available online: https://www.cbd.int/sp/targets/ (accessed on 21 April 2017).
  19. Funabashi, M. Synecological farming: Theoretical foundation on biodiversity responses of plant communities. Plant Biotechnol. 2016, 33, 213–234. [Google Scholar] [CrossRef]
  20. Goodchild, M.F. Citizens as sensors: the world of volunteered geography. GeoJournal 2007, 69, 211–221. [Google Scholar] [CrossRef]
  21. ISC-PIF (Institut des Systèmes Complexes, Paris Île-de-France). French Roadmap for Complex Systems. ISC-PIF, 2009. Available online: http://cnsc.unistra.fr/uploads/media/FeuilleDeRouteNationaleSC09.pdf (accessed on 21 April 2017).
  22. Solomon, R.C. Subjectivity. In Oxford Companion to Philosophy; Honderich, T., Ed.; Oxford University Press: Oxford, UK, 2005; p. 900. [Google Scholar]
  23. Gillespie, A.; Cornish, F. Intersubjectivity: Towards a Dialogical Analysis. J. Theory Soc. Behav. 2009, 40, 19–46. [Google Scholar] [CrossRef]
  24. Galaxy Zoo. Available online: https://www.galaxyzoo.org/ (accessed on 21 April 2017).
  25. iNaturalist. Available online: http://www.inaturalist.org/ (accessed on 21 April 2017).
  26. Rowell, D.L. Soil Science: Methods & Applications; Wiley: New York, NY, USA, 1994. [Google Scholar]
  27. Kitano, H. Artificial Intelligence to Win the Nobel Prize and Beyond: Creating the Engine for Scientific Discovery. AI Mag. 2016, 37, 39–50. [Google Scholar]
  28. Ioannidis, J.P. Why most published research findings are false. PLoS Med. 2005, 2, e124. [Google Scholar] [CrossRef] [PubMed]
  29. Linked Data. Available online: http://linkeddata.org (accessed on 21 April 2017).
  30. Akao, Y. QFD: Quality Function Deployment—Integrating Customer Requirements into Product Design; Productivity Press: New York, NY, USA, 2004. [Google Scholar]
  31. Hawker, G.A.; Mian, S.; Kendzerska, T.; French, M. Measures of adult pain: Visual Analog Scale for Pain (VAS Pain), Numeric Rating Scale for Pain (NRS Pain), McGill Pain Questionnaire (MPQ), Short-Form McGill Pain Questionnaire (SF-MPQ), Chronic Pain Grade Scale (CPGS), Short Form-36 Bodily Pain Scale (SF-36 BPS), and Measure of Intermittent and Constant Osteoarthritis Pain (ICOAP). Arthritis Care Res. 2011, 63, 240–252. [Google Scholar]
  32. Funabashi, M. Network Decomposition and Complexity Measures: An Information Geometrical Approach. Entropy 2014, 16, 4132–4167. [Google Scholar] [CrossRef]
  33. Rudin, W. Fourier Analysis on Groups; Interscience Tracts in Pure and Applied Mathematics, No. 12; Wiley: New York, NY, USA, 1962. [Google Scholar]
  34. Symmetrical 5-Set Venn Diagram. Available online: https://commons.wikimedia.org/wiki/File:Symmetrical_5-set_Venn_diagram.svg (accessed on 21 April 2017).
  35. Funabashi, M. Synthetic Modeling of Autonomous Learning with a Chaotic Neural Network. Int. J. Bifurc. Chaos 2015, 25, 1550054. [Google Scholar] [CrossRef]
  36. Doya, K.; Ishii, S.; Pouget, A.; Rao, R.P.N. Bayesian Brain: Probabilistic Approaches to Neural Coding; The MIT Press: Cambridge, MA, USA, 2007. [Google Scholar]
  37. Yoshimura, J.; Clark, C.W. Individual adaptations in stochastic environments. Evol. Ecol. 1991, 5, 173–192. [Google Scholar] [CrossRef]
  38. Harte, J. Maximum Entropy and Ecology: A Theory of Abundance, Distribution, and Energetics; Oxford University Press: Oxford, UK, 2011. [Google Scholar]
  39. Amari, S.; Nagaoka, H. Method of Information Geometry; American Mathematical Society: Providence, RI, USA, 2007. [Google Scholar]
  40. Rao, C.R. Information and accuracy attainable in the estimation of statistical parameters. Bull. Calcutta Math. Soc. 1945, 37, 81–91. [Google Scholar]
  41. Murota, K. Matrices and Matroids for Systems Analysis; Springer: Berlin, Germany, 2000. [Google Scholar]
  42. Roy, S.S.; Dasgupta, R.; Bagchi, A. A Review on Phylogenetic Analysis: A Journey through Modern Era. Comput. Mol. Biosci. 2014, 4, 39–45. [Google Scholar] [CrossRef]
  43. Brooks, R.A. Intelligence without representation. Artif. Intell. 1991, 47, 139–159. [Google Scholar] [CrossRef]
  44. Gibson, J.J. The Ecological Approach to Visual Perception; Houghton Mifflin: Boston, MA, USA, 1979. [Google Scholar]
Figure 1. Schematic representation of the inter-subjective objectivity model. (a) Relations between two subjectivities A and B, objectivity, inter-subjectivity between A and B, subjective–objective unity for A and B, and inter-subjective objectivity are depicted as inclusion relations between each other set. (b) Development of inter-subjective objectivity as effective measurements of citizen science. As the inter-subjectivity increases along with the training of subjective–objective unity and inter-subjective feedbacks, the accuracy and reproducibility of measurement based on subjectivity can be assured by the convergence to inter-subjective objectivity.
Figure 2. Schematic representation of buoy–anchor–raft model. Buoy, raft, anchor, and connection rope refer to subjectivity, inter-subjectivity, objectivity, and subjective–objective unity, respectively. Concrete real-world examples are given in Table 1.
Figure 3. Schematic representation of complexity measures as non-linear feature space and search function as its inverse functions. (a) Utility characteristics of a complex system, or complexity measure in general terms, is expressed with a complex configuration in parameter space. Parameters can also represent other complexity measures. (b) Complexity measures transform parameter space into non-linear feature space, which provides easier interpretation by sorting the order of a given utility. The inverse functions of complexity measures therefore correspond to search functions with respect to the search condition on utility.
Figure 4. Numerical example of the convolution $\lambda_N(r^N)$. For two kinds of probability measures $\mu_1$ (green distribution) and $\mu_2$ (blue distribution) on $r \subset \mathbb{R}$ (supported by the black rug), the convolutions $\lambda_N(r^N)$ with $N = 2, 4, 8, 16$ are shown in different colors, based on random sampling of $600{,}000 \times N$ points from $N/2$ pairs of $\mu_1$ and $\mu_2$. The case $\Lambda = 1$ is simulated, which shows the canonical convergence towards a normal distribution following the central limit theorem with $\sigma_N \simeq \sqrt{N}\,\sigma$, where $\sigma = \frac{1}{2}(\beta_1 + \beta_2)$ as defined in (33) and (A13). For simplicity, $\nu_N$ is adjusted to 0 by the symmetric selection of $\mu_1$, $\mu_2$, and $r$.
Figure 5. Schematic representation of the triplet order algorithm that calculates the total order of three observations with respect to the complexity defined on the pair-wise commonality between them. Three observations A, B, and C are expressed as vertices of triangle in a two-dimensional surface, whose edge lengths A–B, B–C, and A–C represent the commonality of each vertex pair. For simplicity, the triangles are projected as regular triangles, but the actual edge lengths generally differ, which provides the total order of edges. The six case statements of the algorithm are shown separately. Given the total order between the edges in blue magnitude relation, the corresponding total order of observations are depicted with orange axes at the side of each triangle. Orange axes superimposed with triangles signify that by orthogonally projecting the vertices onto them, the total order of vertices are obtained, whose generalization is developed in the Section 3.4. This holds for arbitrary three positive values of edge length without the constraint of triangular inequality, by considering appropriate projection of the triangles to a non-Euclidian surface.
Figure 6. Integration of two commonality orders. (a) The correspondence between commonality orders I and I I (orange arrows) can be described as the permutation between N observations (black circles), providing the topology of I–I matching (green dotted line) and I–N compromise (blue dotted line); (b) Affine space with respect to the commonality orders I and I I as coordinate system (orange arrows) for the resolution of I–N compromise. The I–N mean commonality order (red solid arrow) can be calculated from the pair-wise order algorithm (Section 3.3) applied on the commonality orders I and I I , which makes the I–I matching identical to the I–I dimension (green arrow) and sets the mean order to I–N compromise. One I-N resolution dimension is required to resolve one I–N compromise (blue arrow). The implicit structure of the integrated commonality order with continuity assumption takes a complex form reflecting I–N compromises (red dotted arrow as an example), which corresponds to the complex utility configuration in Figure 3a; (c) The general case with an arbitrary number of I–N compromises. Total commonality space of N - 1 dimensions is divided between I–N resolution dimensions (blue arrows) and I–I dimensions (green arrow), between which I–N mean commonality order can be defined (red arrow). k < N axes of I–N resolution dimensions are required to resolve k I–N compromises (blue arrows). Taking the I–I dimensions and I–N resolution dimensions as Affine coordinates, the integrated commonality order is projected onto the I–N mean commonality order as a simplest sorted order of utility, which corresponds to Figure 3b.
Figure 7. Topological hierarchy of commonality between observations. For example, five observations A, B, C, D, E are depicted with correspondence to the commonality order of each topological subset. The Venn diagram on the left represents the commonality structure within observation probability database X 5 on variable X ( N = 5 in Section 4.2), where coincident observation is superimposed. The maximum commonality order is the projection between these topological subsets to the natural number N in right axis, describing the number of matching observations. Venn diagram cited from [34].
Figure 8. Numerical observation of the proof of Theorem 8. (a) Chebyshev’s inequality (A41) and asymptotic convergence to O ( L log L ) (A44) with respect to N , M 1 ( N + M = L ) , L = 10 , 10 2 , 10 3 . Y-axis is plotted with log scale. The equality in (A41) is given at N = M = L 2 ; (b) Behaviour of f ( N ) L , f ( M ) L , and f ( N ) f ( M ) L with respect to L = 10 , 10 2 , 10 3 . For visibility, the Y-axis scale is given as log 2 ( Y - 1 ) that represents smaller Y value to the bottom, and Y-axis label shows the value of - log Y . The surface below the solid line f ( N ) f ( M ) L represents the convolution multiplied by L, f * f ( L ) L . The mean value of solid line f ( N ) f ( M ) L therefore corresponds to the upper limit of u that satisfies the polynomial constraint (45) with respect to given L. c = d = 2 were used for the simulation.
Figure 9. Information geometrical optimization of diversity strategy portfolio with respect to actual distribution P a , short-term management objective P s , and long-term sustainability P l . On a dual-flat statistical manifold based on Fisher information metric, each distribution is represented as a point (black circles). The m-geodesic is depicted with a blue line, while the e-geodesic is shown with a red line, which orthogonally cross at the optimized strategy P o . Topological correspondence between complexity measure H (aligned on left orange arrow) and diversity strategy portfolio ( P a , P s , P l and P o ) is shown with dotted lines with respect to the magnitude relation.
Figure 10. Results of inter-subjective and subjective–objective commonality orders in citizen observation of biodiversity. Seven people represented with numerical ID are aligned with commonality orders (a) based on inter-subjectivity; and (b) based on subjective–objective unity, which showed a 3.92% residual error probability regarding the rejection of the random order distribution hypothesis with respect to the binomial test (38).
Table 1. Examples of buoy, raft, and anchor in various social systems and scientific domains. Examples are not comprehensive, but a partial list of typical data from the recently increasing public availability.
 | Economy | Judiciary | Biodiversity Record | Medical Treatment
Buoy | Demand, satisfaction | Sense of justice, guilt | Visual identification of species | Pain, psychological state
Raft | Price, exchange rate | Law, court decision | Identification with voting | Diagnosis, prescription
Anchor | Goods abundance | Evidential matter | DNA sequences | Physiological markers
Table 2. Correspondence between buoy–anchor–raft model and computational variables in this article.
Section Number | 2.2 | 3.1 | 3.2 | 3.3 | 3.4 | 4 | 5
Buoy | $B$ | $\mu(\cdot)$, $I(\cdot)$ | $\mu_i(\cdot)$, $q_i(\cdot)$ | Data contained in vertices $V$ | Com. order $I$ and $II$ between $N$ objects | Observations A, B, C, D, E | $P(\cdot)$, $P_a(\cdot)$, $P_s(\cdot)$, $P_l(\cdot)$, $P_o(\cdot)$, $H$
Anchor | $A$ | | | | | |
Raft | $R$, $E$ | $\mu(\cdot,\cdot)$, $I_2(\cdot,\cdot)$ | $\lambda_N(\cdot)$ | Edge attribute of $E$ | Com. order $I$ and $II$ b/w $N$ observers, TDC, I–I and I–N res. dim. | $O(\cdot)$ | $H_2$, $D_m(\cdot:\cdot)$, $D_e(\cdot:\cdot)$
Buoy–Anchor | $C$ | | | | | |
Raft–Anchor | $E$ | | | | | |
Table 3. Algorithmic complexity for the calculation of commonality orders. With respect to the maximum commonality order in (39), an exhaustive number of combinations with the use of observation probability database X N of size N and the time scale required for the sorting of the commonality measure is shown. Sorting time is based on the worst-case performance of canonical algorithms such as bubble sort and quick sort (polynomial degree d = 2 ). O ( · ) denotes asymptotic notation of Landau. O ( X ) = N 2 and N 2 require the maximum calculation and sorting time. Note that the total computation time is upper-bounded by the sorting process ( d = 2 ) than the combinatorics of commonality ( d = 1 ), though calculation time of each commonality such as convolution should be further considered in actual implementation.
Maximum Commonality Order $O(X)$ | Number of Combinations | Sorting Time ($d = 2$)
2 | ${}_{N}C_2$ | $O(({}_{N}C_2)^2) = O(N^4)$
3 | ${}_{N}C_3$ | $O(({}_{N}C_3)^2) = O(N^6)$
$\lceil N/2 \rceil$ or $\lfloor N/2 \rfloor$ | ${}_{N}C_{\lceil N/2 \rceil} = {}_{N}C_{\lfloor N/2 \rfloor}$ | $O(({}_{N}C_{\lceil N/2 \rceil})^2) = O(({}_{N}C_{\lfloor N/2 \rfloor})^2) = O(N^{2 \lceil N/2 \rceil})$
$N$ | ${}_{N}C_N = 1$ | $O(({}_{N}C_N)^2) = O(1)$
