1. Introduction
There are opinions, expressed, e.g., in [1], about the difficulties of reductionist theory in treating consciousness and about the possibility of using a phenomenological approach instead. In contrast to these statements, we intend to develop a reductive, constructive approach, believing that consciousness can manifest itself when the complexity of a system crosses a certain threshold. The purpose of this work is to describe an initial model of neural systems based on kinetic-statistical methods. The general research methodology and the basic concepts that can be formalized are presented.
The goal of the technological race between countries and corporations is to approach the size of the brain and a billion biosimilar neurons. At the same time, even with a good hardware base, there is a worldwide lag in the principles of functioning of biosimilar neural networks. Evidently, learning should be local and based on a neuron and its nearest neighbors. In this regard, the question arises: how should the principles of plasticity be organized for the transition from a chaotic state of neurons to states with a given function and to more symmetrical structures? The authors provide a partial answer to this question in this paper, drawing an analogy with a second-order phase transition (such as the transition from a paramagnetic to a ferromagnetic state).
Neural network algorithms have entered everyday life; they mainly solve problems of recognition, control, or decision support. However, research on artificial consciousness is now becoming relevant. From this point of view, we can briefly discuss the common and distinct features of animal and human brains and consciousness. Instantaneous fast reactions, correlated with a set of instincts, are approximately the same in animals and humans, or at least their reaction times are comparable. Human consciousness, however, is characterized more by reflection than by direct “pathways” from receptors to effectors. Thus, we can assume that neural networks can be more complex and contain not only graph trees (from receptors to effectors) but also cyclic, vortex, and cluster structures in neuron-like networks.
We can mention some ideas and methods that use more or less similar approaches, first of all the conditioned reflex arc of I.P. Pavlov. The next possibility is related to P.K. Anokhin’s theory of functional systems. A loop of efferent excitation with feedback can serve as a basic elementary model for studying complex brain structures.
Many works are devoted to the critical problems of artificial neural networks and to algorithms for their synthesis. At the same time, the issue of evaluating cyclic structures in neuron-like graphs that grow according to certain laws has not been considered, although some trend towards such studies can be traced. The principal point is as follows: finding complex clusters and cycles implies structures intrinsic to consciousness, with a special autonomy in cognitive operations.
Conducting a literature review on the topic, we found many works of various scientific orientations. At the same time, in modern publications, three main areas can be distinguished: graph theory, neural network technologies, and the junction of these two areas.
Recent research into large networks, such as the brain, has focused on the three-dimensional layout of the network, which affects the structure and function of the system. The results of [2] allow formulating a statistical model for the formation of tangles in physical networks, with the finding that the mouse connectome is more entangled than expected based on optimal wiring. In [3], general hierarchical models of consciousness were constructed, including equations governing the cooperative variables for several cognitive modalities: semantic and working memory, attention, emotion, perception, and their sequential interaction.
It was shown in [4] that network switching in the human brain decreases with an increasing timescale, opposite to the behavior of random systems. In [5], the authors propose fractal shapes as a measure of proximity to critical points, based on the hypothesis that the consciousness level and the complexity of the neural network system are positively related, consistent with previous EEG (electroencephalography), MEG (magnetoencephalography), and fMRI (functional magnetic resonance imaging) studies. In the paper [6], the authors constructed the underlying graph based on the geometrical distances between each pair of electrodes.
We used our earlier elaborated methods of constructing graphs [7]. On the other hand, the approach of statistical physics and kinetic theory is applied. It is capable of describing open nonequilibrium systems for various equations and formulations of problems, in particular, to simulate some properties of biostructures [8]. The directly applied apparatus is based on the kinetic-statistical approach related to the theory of complex networks, see [9,10]. This approach corresponds to the method of construction of random ER (Erdős–Rényi) graphs. In the present paper, we attempt to combine these conceptions and to search for large graph structures, keeping in mind the modeling of the mentioned neural systems.
We will also mention some modern works concerning aspects of neural nets that use random graph models and biosimilar structures [11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28]. The latest technology and a conceptual approach to building neural nets have resulted in network structures that capture the basic properties of cortical organization [11,12,13].
In this paper, we first present methods for neuro-simulation. Then we consider the basic mathematical apparatus of random graphs. Algorithms are then constructed to simulate complex graph networks, and simulations showing the transitions from tree structures to clusters are presented. The methodology includes a summary of the computational technique, the findings, and a commentary with a comparative analysis of data from neurobiological research. Possible comparisons with neuropsychological and multidisciplinary studies are discussed in the conclusion.
Stochastic models are discussed in [15,16], where the researchers offer a technique based on an exponentially distributed network modeling approach and compare its performance to that of the basic approach. The authors of [17] use Bayesian exponential random graph modeling (ERGM) to describe an observable structure based on the combined contribution of local network topologies or characteristics. A cascading random graph system based on the maximum likelihood method is proposed in [18]. A trajectory planning approach based on the generation of random pathways between two places is recommended in [19]; the developed paths were categorized by supplied parameters such as the longest and shortest pathways. The researchers of [20] demonstrated results on hypergraphs and introduced the concepts of protected edge stable subsets. The article [21] describes a network transfer function model called a resource network.
The study [22] proposes an implicit enumeration strategy for detecting the largest cluster in a random graph, relying on the triangular decomposition of topologies. The study [23] considers how graphs can be used to describe data. The study [24] focuses on the concept of network dominance.
The study [25] presents a generalized training system based on extended stochastic topologies and neural nets. The authors of [26] focus on a parametric problem for a class of random networks with a dynamic arrangement. In [27], the authors provide a methodological guide to the visual analysis of brain signals.
Optimization neural techniques have been shown to be highly efficient for a variety of engineering optimization challenges in current publications. The goal of the work [28] is to apply a new concept in stochastic optimization methods called “the evaporation rate-based water cycle algorithm”. It is a tweaked version of the water cycle algorithm that is applied to adapt an artificial neural network computationally. Leaving aside the evaluation of cyclic structures, however, this is just one of the optimization methods for training the synaptic structures of neural networks.
In [29], the authors studied cyclic processes in neural structures by watching nerve cells over several days to months and observing the kinetics of their activation in freely moving mice. Such processes can be tied to both daily activity and brain activity in various conditions.
More often than cyclic algorithms, chaotic methods are used in practice in neural network structures. Chaos-based neural synchronization is proposed for the development of a public key exchange protocol in the article [30]. A chaotic logistic map is used in the work [31] to construct a key double logistic sequence in a chaotic encryption system, and deoxyribonucleic acid (DNA) matrices are created using DNA coding; the denoiser is a convolutional neural network (CNN). Although these studies are purely applied work in the field of cryptography, they demonstrate the importance of applying chaotic processes in artificial neural structures.
The problem in [32] is approached as a combinatorial optimization problem: to identify formulations for computing the Euler number, a neural algorithm based on simulated annealing is designed.
The authors of [33] study dynamical systems connected with a large family of groups that includes several well-known examples of intermediate-growth groups. Orbital graphs for group actions on d-regular rooted trees and their boundaries, considered as topological spaces or spaces with measure, were presented. They form finitely ramified graph families, and the researchers study their combinatorics, “isomorphism classes,” and geometric properties such as growth and the number of ends.
Graph neural networks (GNNs) have sprung onto the machine learning scene in recent years. GNN algorithms have applications in electronics, neuroscience, and social networks [34]. The authors of the research [35] suggest a temporal model for constructing customized random graphs with a set of “real-world graph attributes”. This line of research seems promising due to the presence of network structures associated with biological and social processes. The morphological and architectural features of these networks can be determined by comparison with stochastic models, as well as with graphs derived from many other dynamic structures or neurological data. In [36], the researchers used a pooled exponentially randomized model that represents brain functional networking as a mathematical linkage system for studying functional brain networks with MEG and fMRI measurements. According to them, modeling cerebral connection networks is crucial for better understanding the mechanism of brain function and for inferring specific neural features.
As follows from the literature review, the current state of the field indicates the widespread use of various algorithms in the theory of neural networks, growing graphs, chaos, and cyclic structures. At the same time, the problem of neuromodeling still requires a solution that combines different approaches. To solve this problem, we propose specialized algorithms for the growth of neuron-like structures.
The paper is organized as follows.
Section 1 contains the literature review.
Section 2 contains the methodological problems.
Section 3 contains basic concepts and methods.
Section 4 contains the results.
Section 5 contains discussion.
Section 6 contains the conclusions.
2. Methodological Problems of Neurosimulation
The following problems, presented in Table 1, are fundamental in nature and are being addressed by thousands of laboratories around the world. The use of the biocybernetic approach will allow us to overcome the above methodological difficulties and to make a breakthrough in creating a working model of the brain (which is now being successfully pursued by technological giants such as Ascent, Boston Dynamics, Numenta, etc.).
Hence, the following conclusions can be drawn:
Modeling just one subsystem of the visual analyzer can take years, but more subsystems can take decades. At the same time, the probability of achieving a positive result, according to experts, ranges from 0.01 to 50%.
The volume of work will be constantly growing due to the replenishment and refinement of data, a significant part of which is phenomenological in nature.
To minimize risks when constructing neurobiological models, it is necessary to use methodological developments in the field of neuroinformatics and neuromorphic computing, which have been successfully applied in practice since the 1960s.
Here, the emergence of neural networks is interpreted using cycles (clusters). It is believed that the elements of consciousness and self-awareness are associated with the interaction of such structures. A cycle means the circulation of electrical and/or chemical signals within it. Thus, the statistical nature of collective interaction in the structures of large cycles can be attributed to manifestations of elements of the phenomena of consciousness and self-awareness. In principle, consciousness already presupposes a kind of “reflection on reflection”, but this is especially evident for self-awareness, which can be modeled by at least a pair of giant interacting cycles in which a part is distinguished from the whole: one part “contemplates” the other.
3. Basic Concepts and Methods
As the number of graph edges increases above the so-called percolation threshold, the number of loops grows, and a giant component (cluster) is formed in the graph. Moreover, a kinetic equation is used to describe the change in the distribution of node degrees in the ER graph. This cluster corresponds to the circulation, perhaps partial, of the signal (electrical and/or chemical) in the corresponding network of neurons.
A mosaic genetic structure can be imposed on a neural-graph network (or appear as a result of further formalization). This presents a less formal, visual system of images that is amenable to verification by tools. Thus, it seems possible to simulate a network of random graphs with the selection of some elements of the structure of consciousness. According to the classification adopted in [9,10], three types of clusters are distinguished: trees, unicyclic clusters, and complex clusters. By definition, there are no closed paths (cycles) in a tree, a unicyclic cluster contains one cycle, and a complex cluster contains at least two cycles.
The Euler characteristic can be used to identify different kinds of clusters. For a graph, it is defined as E = Nlinks − Nnodes, where Nlinks is the total number of links (edges) and Nnodes is the total number of nodes; for any tree, E = −1.
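As an illustration of this classification, the following minimal Python sketch (assuming the networkx library; the helper names are chosen only for this example) computes the Euler characteristic of a connected cluster and labels it as a tree, a unicyclic cluster, or a complex cluster.

```python
import networkx as nx

def euler_characteristic(cluster: nx.Graph) -> int:
    # E = Nlinks - Nnodes, as defined above; any tree gives E = -1
    return cluster.number_of_edges() - cluster.number_of_nodes()

def classify_cluster(cluster: nx.Graph) -> str:
    e = euler_characteristic(cluster)
    if e == -1:
        return "tree"        # no cycles
    if e == 0:
        return "unicyclic"   # exactly one cycle
    return "complex"         # at least two cycles

# small check: a path (a tree), a pure cycle, and a cycle with a chord
tree = nx.path_graph(5)                        # 5 nodes, 4 edges -> E = -1
ring = nx.cycle_graph(4)                       # 4 nodes, 4 edges -> E = 0
chorded = ring.copy(); chorded.add_edge(0, 2)  # 4 nodes, 5 edges -> E = +1
for g in (tree, ring, chorded):
    print(classify_cluster(g), euler_characteristic(g))
```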
The development of an ER graph can begin with N isolated nodes, i.e., N single-node trees. After a connection is made between two trees, a larger tree results. At the start of the expansion, the random network is a forest, which means that all clusters are trees. When a link is added inside a tree, it becomes a unicyclic cluster. A unicyclic cluster is also formed when a link is added between a tree and a unicyclic cluster, while a complex cluster is formed when a link is added between unicyclic clusters.
It may seem that a classification based on distinguishing trees, unicyclic, and complex clusters is insufficient, since a whole range of clusters of varying complexity arises. In that case, one would have to look at complex clusters in more detail and classify them according to their Euler characteristics. With a further increase in the number of links, a situation arises in which the giant component can contain all nodes.
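A minimal sketch of this growth process (again assuming the networkx library, with illustrative parameter values) starts from N isolated nodes, adds random links one at a time, and tracks the size of the largest cluster, showing the emergence of the giant component near the percolation threshold.

```python
import random
import networkx as nx

def grow_er_graph(n_nodes: int, n_links: int, seed: int = 0):
    """Start from n_nodes isolated nodes and add random links one by one,
    recording the size of the largest cluster after each added link."""
    rng = random.Random(seed)
    g = nx.Graph()
    g.add_nodes_from(range(n_nodes))
    giant_sizes = []
    for _ in range(n_links):
        u, v = rng.sample(range(n_nodes), 2)   # random pair of distinct nodes
        g.add_edge(u, v)
        giant_sizes.append(max(len(c) for c in nx.connected_components(g)))
    return g, giant_sizes

# the giant component emerges around n_links ~ n_nodes / 2
graph, sizes = grow_er_graph(n_nodes=1000, n_links=1000)
print(sizes[400], sizes[999])  # largest cluster before and after the threshold
```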
The kinetic approach leads to mathematical formalism, first based on equations. We will briefly consider this approach, but we will deal mainly with the geometric approach.
Now we will look at a geometric variation of the method. We will work here on building complex clusters, with the emergence of a giant component, while constructing a randomized ER network, because we believe that the development of this kind of large cycle in a network is the most significant feature of consciousness, as described above. The article’s idea is that when randomized neural-like networks are generated, large cyclical patterns will develop with a specific mathematical probability.
The topic of the presented article relates to graph theory. In this paper, we model the quasi-chaotic growth of a graph. The algorithm parameters allow one to choose different growth strategies for a random graph. In this case, the spread of cyclic and other parameters in the structure of the resulting graphs is estimated. The research strategy is based on repeated series of experiments under equal initial conditions. We have found that the statistical spread of the output parameters of the synthesized graphs has a characteristic distribution.
Speaking about cyclic structures in a graph, we note the connection of such structures with the phenomena of symmetries and asymmetries of the brain, as well as the phenomena of reflexes, higher nervous activity, preconsciousness, and consciousness.
We offer two strategies for creating expanding randomized networks with a neural-like topology.
3.1. Method One
The algorithm is essentially a step-by-step growth of the graph G. At each step, one vertex is added and connected by an edge to a random vertex of the graph, until the maximum number of steps is reached. The algorithm for simulating the growing randomized networks is shown in Figure 1.
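The following Python sketch (an illustration under our reading of the description above, not the exact code behind Figure 1; the networkx library is assumed) implements this step-by-step growth: each new vertex is attached by a single edge to a uniformly chosen existing vertex. Taken literally, this rule yields a tree; any cycle formation would require additional random links, as foreseen by the general scheme in Figure 1.

```python
import random
import networkx as nx

def method_one(max_steps: int, seed: int = 0) -> nx.Graph:
    """Step-by-step growth of a graph G: at every step one new vertex is
    added and connected by an edge to a randomly chosen existing vertex."""
    rng = random.Random(seed)
    g = nx.Graph()
    g.add_node(0)                           # initial vertex
    for step in range(1, max_steps + 1):
        target = rng.choice(list(g.nodes))  # random existing vertex
        g.add_node(step)
        g.add_edge(step, target)
    return g

g = method_one(max_steps=200)
# Euler characteristic edges - nodes = -1, i.e., the literal rule gives a tree
print(g.number_of_nodes(), g.number_of_edges(),
      g.number_of_edges() - g.number_of_nodes())
```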
3.2. Method Two
The second method is more neuro-like because it is based on random tree structures. These tree structures have such parameters as the number of tiers of the tree and a range of numbers corresponding to the random branches of each neuron. Moreover, a parameter of the algorithm is the number of feedbacks between the trees that combine them into a single structure.
Method two is as follows. The first step is to create random trees. The depth and branching of the trees are algorithm parameters: the number of outgoing synapses from each graph vertex is determined by a range of integer values, and the number of branches at each phase is a parameter. The second step is the use of multiple feedbacks to combine these trees. Links can be formed only between the outputs of some trees and the inputs of others.
The tracked parameters are the number of simple paths, the number of simple cycles, and the Euler characteristic.
Method two generates graph structures that to some extent reproduce the neural connections in the brain. In the structure of the graph, ganglion-like formations are reproduced in the form of bundles or clusters. The simulated formations have a tree-like structure. In this regard, the free parameters of the algorithm for generating a neural-like graph are the number of such trees, their depth, their branching, and the number of feedbacks between tree structures. All free parameters are selected with equal probability from a given range of values (a code sketch follows the list below). The following parameters were used in the experiments:
Number of trees, 100–1000
Depth of trees, 4–12
The number of outgoing edges from each vertex, 2–8
Number of feedbacks, 10–20
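The sketch below (in Python with the networkx library; the function names and the toy parameter values are illustrative assumptions, much smaller than the experimental ranges listed above) generates several random directed trees, combines them with feedbacks from the outputs (leaves) of some trees to the inputs (roots) of others, and computes the tracked parameters.

```python
import random
import networkx as nx

def grow_random_tree(g, root, depth, branch_range, rng):
    """Grow a directed random tree from the root: each vertex gets a random
    number of children (outgoing synapses) drawn from branch_range."""
    frontier = [root]
    for _ in range(depth):
        next_frontier = []
        for v in frontier:
            for _ in range(rng.randint(*branch_range)):
                child = g.number_of_nodes()
                g.add_edge(v, child)
                next_frontier.append(child)
        frontier = next_frontier
    return frontier                                 # leaves = tree "outputs"

def method_two(n_trees, depth, branch_range, n_feedbacks, seed=0):
    rng = random.Random(seed)
    g = nx.DiGraph()
    roots, leaves = [], []
    for _ in range(n_trees):
        root = g.number_of_nodes()
        g.add_node(root)
        roots.append(root)
        leaves.append(grow_random_tree(g, root, depth, branch_range, rng))
    # feedbacks: only from the outputs of one tree to the input of another
    for _ in range(n_feedbacks):
        i, j = rng.sample(range(n_trees), 2)
        g.add_edge(rng.choice(leaves[i]), roots[j])
    return g

# toy run (the experiments use 100-1000 trees, depth 4-12, 2-8 branches, 10-20 feedbacks)
g = method_two(n_trees=5, depth=3, branch_range=(2, 3), n_feedbacks=6)
euler = g.number_of_edges() - g.number_of_nodes()   # Euler characteristic
n_cycles = sum(1 for _ in nx.simple_cycles(g))      # number of simple cycles
print(g.number_of_nodes(), g.number_of_edges(), euler, n_cycles)
```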
The algorithm is designed to generate a wide range of neural-like structures with various properties, such as the intensity of feedbacks, network structure, cluster organization.
Since the algorithm is step-by-step (dynamic), it can to some extent simulate the process of evolution, the result of which is the emergence of higher nervous activity and consciousness.
In this study, the authors did not consider the functions of individual neurons, but studied only the structures of neural-like networks. Classical artificial neural networks can act as neurons. Multiple feedbacks, which are reproduced by the above algorithm, are known to be responsible for the processing of dynamic information. However, the practical application of generated neural networks is beyond the scope of this study.
4. Results of Simulation of the Growing Graphs
Figure 2 and Figure 3 demonstrate the results of simulation experiments with network layouts. As can be seen from Figure 3, all processes are explosive-like or of a similar nature. Figure 2a,b shows a sharp exponential growth in the graph dynamics. This growth is due to the growth in the number of simple paths and, at the same time, of simple cycles. In Figure 3c, the growth of the Euler number has a different character. In the bottom row of Figure 2, one can also note the explosive growth of the dynamics. Here we display the relationships between the variables: the Euler number, the number of simple cycles, and the number of simple paths. Thus, almost any described algorithmic process is associated with explosive dynamics. We consider this dynamics as a complication of the structure of neural networks in the process of natural evolution.
To estimate the numerical parameters obtained during the growth of quasi-chaotic structures according to certain given rules, it is expedient to use the method of multiple repeated experiments. Thus, fixed parameters of the algorithm (initial conditions) are set, and the growth of the graphs is repeated N times, for example, N = 100. The idea is to generate some percentage of complex structures with a large number of neural network cycles. We believe that with neural network modeling, these networks will have preconsciousness. Thus, finding a structure with giant cyclic clusters will facilitate the transition from artificial intelligence to artificial consciousness.
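A minimal sketch of this repeated-experiment protocol, reusing the hypothetical method_two generator from the sketch above with toy parameters (the networkx and statistics modules are assumed), fixes the algorithm parameters, repeats the growth N = 100 times, and reports the spread of the number of simple cycles.

```python
import statistics
import networkx as nx

N = 100
cycle_counts = []
for run in range(N):
    # identical initial conditions; only the random seed changes between runs
    g = method_two(n_trees=5, depth=3, branch_range=(2, 3),
                   n_feedbacks=8, seed=run)
    cycle_counts.append(sum(1 for _ in nx.simple_cycles(g)))

mean = statistics.mean(cycle_counts)
print("mean:", mean, "max:", max(cycle_counts),
      "max/mean:", max(cycle_counts) / mean if mean else float("inf"))
```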
To analyze the results, we used data from a random experiment in which we constructed networks with a configuration comparable to the columnar arrangement of the neocortex, with natural cycles (Figure 4). Feedback can only be formed between the input and output vertices of different trees.
Table 2 and Table 3 show the statistical data values.
Numerical tests have shown that Method two, like Method one, produces a quasi-distribution of the number of generated simple cycles. In 1% of situations, there is a large increase in the number of simple paths, which is 1400 times greater than the statistical average.
Table 3 shows that even with a limited number of experiments (150 experiments), the differences can reach hundreds of thousands of times. Trees in the structure of the graph, in accordance with the hypothesis of the study, are used as models of columnar neocortex structures. The results of the quantitative tests are compared with neuropsychological parameters, revealing that the fMRI data of the normal and pathological cortex differ [37,38].
5. Discussion
Speaking about the network and cyclical structures that model consciousness, the contribution of the Russian neurophysiological school should be noted: Pavlov’s theory of the conditioned reflex and academician Anokhin’s theory of functional systems. A development of the theory of functional systems is the neurocomputer model of the brain [39]. This is a technical model of the brain, which is interesting because it is suitable for studying cyclic structures in neural network processes and, at the same time, is a combinatorial model defined in a discrete space.
We will try to apply this approach to the Embryo neurocomputer (where, in fact, all the vertices of the network are nested in a space-time hypercube) or to Zhdanov’s models (Neurox), where we can run the emulation on an arbitrary graph with the construction of neurograms of neuron activity over time [40].
Some artificial neural nets, such as spiking neural networks [41], may also prove valuable in future simulations. Recommendations for conducting real experiments can be generated based on the results of proper network tests. There have been new studies on fast neuronal signaling patterns; the brain’s global properties are explored, for example, in [42]. It has become possible to record the activity of huge populations of neurons and of individual cells [43]. Since new tomographs can record blood flows in the brain, it will be possible to determine the circulation of neuron messages indirectly using these methods. Neurogenetic approaches can be used to encode the registered sections of neurons. When analyzing neural activity in different environments, the goal may be to detect the appearance of signal complication in neuronal systems, up to the detection of cyclic structures (as aspects of the expression of consciousness).
The results obtained in this work show the self-organizing properties of growing graphs. At the same time, graphs need not necessarily model only neural networks. They can also model complex social and eco-biological systems. In this regard, the authors propose to expand the principles of connectionism, which are widely used in neuroinformatics, with the help of data obtained on the self-organization of complex network systems. This can be useful in neurosciences and network modeling of complex objects, processes, and phenomena.
We support a view on the theory of consciousness similar to [44] as a general neuro-model. Some theories of consciousness are mentioned by Anokhin, namely, the theory of neural group selection [45], the theory of neuron coalitions [46], the global neural workspace theory [47], and the integrated information theory [48]. The author of [16] suggests that to understand consciousness, it is necessary to consider the brain not as a connectome-neural network, but as a cognitome, i.e., a neural hypernetwork consisting of neuronal groups with specific cognitive properties. The structure of the cognitome is the structure of the mind, and consciousness is a specific process of large-scale integration of cognitive elements in this neural hypernetwork. Our concept partially overlaps with Anokhin’s, but our model is more specific and considers this “hyperstructure” as a cluster (cycle) or a system of clusters. Some conditions for adequate theories of consciousness are formulated in [44]; in particular, a theory should reflect the features of evolution. In our model, due to the peculiarities of the algorithms for growing graphs, evolutionary features can be reflected (such structures are also reproduced in the embryo). The problem is how to replicate these complex cluster systems in the training process. Perhaps the structure of a random graph is insufficient for a complete description, but some necessary conditions for an adequate reflection of important properties of consciousness have been confirmed.
The purpose of our article was to construct a theoretical justification of the grapho-neuron model of the hypothetical structure of consciousness and to try to distinguish it among other theoretical concepts centered around the problem of the neural basis of consciousness. There is a general statement formulated, for example, in [46]: “You do not need to invoke a homunculus, a little person living in the brain, to interpret the meaning of perception.” In our approach of random graph clusters, the neurosystem as a whole can be divided into parts of cycles (clusters), and one part can “contemplate” the other, creating a subjective impression.
We emphasize the fundamental “qualitative leap”, a “phase transition” in the complex tree structure of the graph, which can lead to the manifestation of “elements of consciousness” and even to self-awareness in pairs of clusters. This is a necessary condition, not a sufficient one, but it provides specific information that one can attempt to recognize in experimental studies such as EEG, tomography, etc., and which is referred to in the findings as prospective. On the other hand, this concept can be compared with other theoretical approaches, in particular with the mentioned global structure of Anokhin’s cognitome. Note also that the percolation transition (in a different sense) is the subject of some studies of brain oscillations during criticality in [16]. Topographic representations may be relevant to neural networks, but this is questionable.
It should be noted that the presented results are promising for neurocomputer interfaces, man-machine systems, and artificial intelligence systems. This statement is based on the fact that so-called impulse (spiking) neural networks have become widespread today. In such networks, it is not the coefficients of the synaptic connections that are fundamentally important but the structure of the synaptic connections between neurons. The corresponding area of algorithmic mathematics, related to optimizing or finding the form of giant neural networks, is undoubtedly significant. Networks with a dynamic, tunable structure are much more flexible and reliable compared to networks with a fixed structure. In particular, classical neural networks with a fixed synaptic structure can fall into a local optimum of the control functions during optimization procedures. This can lead to a result that is inadequate to the real situation and, as a result, to unjustified risk in man-machine systems.
6. Concluding Remarks
In the course of simulations, the properties associated with cyclic structures were estimated. The described approach allows us to investigate clusters.
The human brain, as well as all its organs, suborganic structures, and tissues, has pronounced fractal properties with its own symmetries and asymmetries. The same applies to the DNA structure. This is especially clearly demonstrated in the article [14], which describes a DNA visualization algorithm and gives examples of displayed fractal structures. These structures have symmetrological properties. It is logical to assume that the inherited fractal structure of DNA is realized in the form of the fractal structure of the brain and the whole organism, with preservation of fractal and symmetrological properties.
In this regard, our proposed theory of the origin of consciousness certainly has a connection with evolutionary genetics. From the point of view of symmetry, the fact of symmetrological properties of human brain and DNA is of great evolutionary importance, which is caused by interference properties of multilevel organization of complex fractal structures of living matter.
Recently, there has been a trend toward the use of artificial intelligence systems for assessing and monitoring the state of operators of complex technical systems (airplanes, high-speed trains, etc.). This is an important area of research because the accuracy of actions, the efficiency, and the safety of large technical systems depend on the health of the operator, their motor control, and their mental functions. In this regard, the study of the phenomenon of consciousness from the standpoint of complex cyclic structures, symmetries, and neuro-modeling can be useful for the design of neuro-interfaces in critical control systems. The relevance of assessing the condition of operators of complex technical facilities will keep increasing.
It seems promising to refine the presented algorithms so that they allow modeling the symmetry and asymmetry of large cerebral structures and the whole brain, including the cerebellum, hypothalamus, right and left hemispheres, each of which is responsible for the corresponding higher mental and cognitive functions.
The algorithms presented in this paper can improve the understanding of methods for developing machine vision systems, decision support systems, and intelligent agents, and for applications in medicine, rehabilitation, and manufacturing. Overall, the presented concept, covering the algorithms for the synthesis of cortical networks and symmetry in DNA and the brain, seems promising to the authors and requires further research. We see the improvement of algorithms for the generation of neural structures based on genetic and molecular-genetic algorithms, as well as by introducing symmetry/asymmetry coefficients into the generated large neural-like structures, as promising.
The possibility to modify the algorithm for the synthesis of neurostructures with generalization to two hemispheres at once will make it possible to construct biosimilar objects in which, as we believe, a highly organized model of consciousness can emerge. However, this requires a series of experiments using high-performance supercomputers.
This approach to the construction of giant neural networks differs from the classical one, in which the network uses a relatively small number of neurons, only as many as are needed for the problem to be solved. On the other hand, the approach based on percolation theory allows us to take a fresh look not only at the phenomenon of consciousness but also at neuroinformatics in general. Obviously, the future lies in large neural networks modeling complex processes in the brain, including complex cyclical processes and structures. A step in this direction is recurrent neural networks, which are widely used for the analysis and prediction of dynamic processes.
It should be noted that this study can be compared with transitions to new types of symmetry, for example, in phase transitions of the second order: ferromagnetic-paramagnetic, etc. In general, the chaos-order transition can be accompanied by the appearance of complex structures with symmetric properties (Benard cells, etc.).
Modeling of consciousness is promising in the gaming industry and social networks. Social connections can be displayed in the form of graph structures, and the identification and modeling (design) of closed cycles of different scales, fractal structures, and symmetries in social networks is an important and urgent task. It is necessary for improving the efficiency of socio-economic relations and social engineering, with the ultimate goal of building highly developed social structures.
Control of large clusters based on the theory of percolations in both large neural-like and real socio-economic systems will make it possible to identify new types of processes and phenomena at the intersection of various subject areas, the study of which may be of significant interest for science and technology. This becomes especially relevant in connection with the rapid development of bioinformatics, where new tools for research and genome editing are emerging. In this regard, it is advisable to apply the methods of visualization of large genetic data in conjunction with neuro-modeling of large networks to search for giant clusters in biological and ecological systems.
The use of the percolation approach is also promising for the study of mental processes, in particular, speech and other impairments caused by mental disorders or lesions of cerebral structures. Devices that stimulate different parts of the brain or individual neurons by electrical or electroacoustic influence for preventive and therapeutic purposes are currently being developed and implemented. These devices should take into account the structure of the large cycles in the brain responsible for the corresponding pathologies. This will make it possible to fine-tune the operating modes of the device to enhance the therapeutic effects. This refers to increasing the efficiency of the brain based on the method of transcranial stimulation. Complex configurations of cyclic structures can set the parameters of the electromagnetic field acting on the brain of a human operator in the process of solving a problem or controlling a technical object, in order to increase the productivity of mental work and the safety and efficiency of the operator’s activity.