1. Introduction
Since the new millennium, biomimetics has gradually been undergoing a paradigm shift toward simulating consciousness, as evidenced by the growing number of consciousness-related technologies that are emerging or being forecast. For example, Jangsun Hwang et al. expect future biomimetic technologies, including aircraft, automobiles, and robots [1], to involve brain-reading robots [2]. These technologies are not intended to simulate parts of biological organs, as in traditional biomimetics, but to simulate, or physically duplicate, recognize, and understand the will of a whole body. Beyond these anticipated consciousness-related biomimetic technologies, current biomimetic technologies already seek to characterize consciousness to a large extent. For example, Kyle Mizokami states that “The drone is an example of biomimetics, in which machinery, particularly drones, are designed to resemble living creatures in order to take advantage of the animal’s physical advantages” [3]. Accordingly, “biomimetics” has been re-defined as follows:
Biomimetics is an interdisciplinary field in which principles from engineering, chemistry and biology are applied to the synthesis of materials, synthetic systems or machines that have functions that mimic biological processes.
According to this definition, consciousness-simulating technologies such as AI, or at least will-simulating technologies, should be classified within this interdisciplinary field, since consciousness is a natural, biological process. Therefore, we may accept that consciousness-simulating biomimetic technologies, such as drones, are emerging.
Consciousness is a complex object of philosophy, science, and common sense. However, science sometimes leaves the definition of consciousness to philosophy and considers the object only in relation to, or as the biological basis of, consciousness [5], despite the remarkable advances in the study of consciousness. Meanwhile, biomimetics and AI essentially seem to emulate consciousness, which calls for not only a philosophical but also a scientific definition of consciousness; there should be a clear and tangible account of what consciousness refers to, both physically and in everyday terms. Inasmuch as consciousness is empirical and appeals to science, neither philosophy (conservatively, as a part) nor common sense refutes consciousness as a scientific object. That is, in contrast to metaphysics, science should be “philosophy-inclusive” and at least provide an answer to how the meanings of consciousness in philosophy and in science are related. In fact, the paradigm of studying consciousness is evolving such that science addresses not only the question “what is the basis of consciousness?” [5] but also “what is consciousness?”. Since science studies nature, consciousness as a natural activity is spontaneously included, for nothing exists outside nature, and it should not be treated only as a basis for the concept discussed by philosophy, as supervenient upon philosophy, as an epiphenomenon, or merely as an explanandum of the concept. Hence, it is indispensable to define consciousness scientifically and in detail, whether mathematically, physically, biologically, or even technically for biomimetics, AI, and related fields, in a way that remains broadly compatible with the deep-rooted concept regardless of the discipline in which it is used.
The relationship between philosophy and science regarding consciousness was described by Uriah Kriegel as follows [6]:
Philosophy may have a more significant role to play in shaping our understanding of consciousness; that even a complete science of consciousness may involve certain lacunae calling for philosophical supplementation.
This paradigm presents an architectural scheme in which phenomena corresponding to philosophical ideas are analyzed naturally (physically, chemically, biologically, etc.). It lets philosophy explain consciousness on the hypothesis that the philosophical ideas about consciousness are relatively clear and uncontroversial for an integrated, objective world. However, since philosophy is highly speculative and perennially controversial, the paradigm calls for an improvement, namely the acceptance of natural analysis within philosophy. The improvement proposed here is a mathematical assistant: a mathematical framework whose methods are comprehensive and run parallel both to natural analysis and to philosophy. The present study attempts to establish an algebraic model that serves as such a framework and as a set of methods for describing and analyzing consciousness mathematically, and therefore epistemologically, including the relevant philosophical ideas. Against its controversial definitions and interpretations, mathematical methods may achieve a “greatest common divisor”, that is, the widest consensus that can be drawn from various statements on consciousness, by modeling the common, intrinsic features inside those statements and descriptions with greater precision.
D. M. Armstrong and Norman Malcolm [7] (pp. 3–45) abstract two senses of consciousness that capture its essence well. One is the transitive sense, meaning sensing or being aware of objects; the other is the intransitive sense, meaning a mental state that is not unconscious. If we regard the intransitive sense as the capability of performing the transitive one, we can simplify consciousness as transitivity toward experiential objects or certain mental states. Could we then obtain a mathematical description of consciousness in the sense of transitivity? Fortunately, such mathematical modeling of consciousness has been attempted before. Piaget, a psychologist influenced by the Bourbaki school of mathematics, established a mathematical model to depict the consciousness generated during a young child’s development. Piaget’s model properly reflects the essence of consciousness, namely transitivity, with an established psychological logic [8,9,10].
Piaget defined consciousness in terms of “cognizance”, a type of behavior constituting a psychological structure that holds a logic in which the cognized object is homomorphic to that structure:
In general, when a psychologist speaks of a subject being conscious of a situation, he means that the subject is fully aware of it.
Cognizance (or the act of becoming conscious) of an action scheme transforms it into a concept and therefore that cognizance consists basically in a conceptualization.
Thus, cognizance, starting from the periphery (goals or results) moves in the direction of the central regions of the action in order to reach its internal mechanism: recognition of the means employed, reasons for their selection or their modification en route, and the like.
Piaget explained the mechanism of cognizance as depicted in Figure 1 [8] (pp. 332–352), where the interaction (indicated by the arrows) between the subject and the object is given, together with the double centers as the cores of the corresponding classifications and the abstracted environment.
Concerning the homomorphism between the subject and the object, each with its own operations or movements, Piaget gave an example of establishing a causality of consciousness [11]:
Returning to the problems with which we began, one can wonder what the relationships are between the correspondences or transformations that the child discovers in reality and those that he discovers in his own operations or actions. In particular, one can ask whether there exists, as we suppose, a correspondence between the causality attributed to objects and the subject’s operatory structure.
Correspondences of this sort between causality and the subject’s deductive productions become conscious only tardily, of course.
Just as Malcolm refers the essence of consciousness to the transitivity (“being aware of”) between a psychological state and the corresponding objects, the cognizance interaction between the two sides of Piaget’s periphery is a transitivity; moreover, the interacting pair, Center and Center’, each maintain their own transitivity on their own level, and the double transitivity on the two levels constructs a homomorphism. Thus, we can express Piaget’s categorical idea about consciousness [12] as (1), with the corresponding diagram shown in Figure 2,
where Csc denotes consciousness, the objects of Rft denote referents, and the morphisms are l(A, B) and m(C, D), with A, B ∈ Csc and C, D ∈ Rft (l represents a psychological operation like “logic” as Piaget described, and m represents the objective “movement” or behavior of the subject). The objects and morphisms satisfy the following: f(A, B) × f(B, C) → f(A, C) (transitivity; Piaget typically instantiates the transitivity as operations which have identical or equivalent effects to the composition) and Iden(A, A) (identity).
F is a natural transformation, which resembles sensing objects (like a projection of, say, a ball) and generates the corresponding mental content; that is, it maps objects onto psychological states. F represents the transitivity mentioned by Malcolm. The morphisms l and m and the transformation F can all compose, or be composed, and expand, or be expanded, such that the image is flexible in its functional selves, within levels, and across levels.
Equation (1) exhibits a homomorphism between referents and their subjects, or between consciousness and its reflected objects, in a categorical scheme, as stated in (2).
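Read in standard categorical terms, the homomorphism asserted in (2) amounts to F preserving morphisms, their composition, and the identities. In the notation of (1), our hedged reading (not a reproduction of (2) itself) is F(m(C, D)) = l(F(C), F(D)), F(m(C, D) × m(D, E)) = F(m(C, D)) × F(m(D, E)), and F(Iden(C, C)) = Iden(F(C), F(C)).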
In other words, although (1) and (2) do not describe an inner characteristic by which something is determined to be consciousness, such as features or structures, they give a composite relational structure by which consciousness can be determined through these outer relationships (like l and F in Figure 2). Thus, it is through the relationship, varying mutually, between the information F(A), as the reflex along the line of logic l, and the peripheral world (an object A referred to by a mind event F(A)), confirmed by a function or effect, that consciousness is defined in languages and thoughts, whether philosophical or scientific. In particular, Piaget described consciousness as realizing certain logical operations, such as inferences, by this process: a reflex (like F(A)) in the brain, resulting from an input, gives rise to, say, the prediction of a position of the referent object (say, C in Figure 2 after a movement of A). Thus, a simple stimulus–reaction can be mapped by F, but it lacks the relevant operational logic l needed to predict a movement of the stimulus (m), so such an excessively simple reaction is not always determined to be a conscious event. Similarly, unconsciousness might pass through F but lack a sufficient {l}, causing F(A) to fail to become a conscious event made up of {F(B), F(C)}. The transitivity on one layer, like F(A) → F(C), guarantees the equivalency arising in the operations on the layers, so logic can be generated; it is this generation of logic, going beyond a simple F that may not be linked to {l}, that matters.
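To make the roles of F, l, and m concrete, the following toy sketch (an illustrative assumption of ours, not the paper’s model) encodes referent movements and psychological operations as functions and shows that mapping through F preserves the composite path, so the upper-layer transitivity F(A) → F(C) is available whenever A → B → C holds below:

```python
# Toy sketch of Figure 2: referent movements m (lower layer, Rft) are mapped by F
# to psychological operations l (upper layer, Csc), and the composite path is
# preserved, so the transitivity F(A) -> F(C) accompanies A -> B -> C.
# Purely illustrative; names and encodings are our assumptions.

# Lower layer (Rft): objects are positions, movements are functions m
m_AB = lambda pos: pos + 1                  # movement A -> B
m_BC = lambda pos: pos + 2                  # movement B -> C
m_AC = lambda pos: m_BC(m_AB(pos))          # composite movement A -> C

# F maps a referent movement to the corresponding psychological operation l,
# acting on mind events represented as (label, position) pairs.
def F(movement):
    return lambda mind_event: ("expects", movement(mind_event[1]))

l_AC = F(m_AC)                              # the operation l(F(A), F(C)) obtained through F
l_BC = F(m_BC)

mind_A = ("perceived", 0)                   # F(A): the mind event for object A at position 0
print(l_AC(mind_A))                         # ('expects', 3): direct prediction A -> C
print(l_BC(("perceived", m_AB(0))))         # ('expects', 3): same prediction via B
```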
Equations (1) and (2) show a paradigm for the mathematical modeling of consciousness; in parallel, there exists Kriegel’s paradigm [6], in which mathematics plays a weaker role than in Piaget’s model and philosophy, through its notions and questions, inspires science (including the science of consciousness) to confirm and resolve them. The two schemes of Piaget and Kriegel, visually represented by (a) and (b) in Figure 3, respectively, call for an evolved paradigm in which mathematical and philosophical approaches are both applied sufficiently in defining and explaining consciousness, as proposed in Figure 3c.
3. The Construct of Consciousness in View of 3-CMGC
3-CMGC raises the question of whether it can develop, or already contains, a construct of consciousness.
Constructs and compositions of consciousness have been uncovered by many philosophers and psychologists, most notably Kant, who regarded the mind as a cognitive process that conforms to objectivity under physical laws, the paradigm of the Cognitive Theory of Consciousness (CTC). In addition, he referred to his thought as a “future scientific metaphysics”, which encompasses the contemporary cognitive-science cross-disciplines of philosophy, psychology, AI, neuroscience, and so on, as remarked by Matt McCormick [31]: Kitcher, Brook, Sellars, and others have argued that Kant’s philosophy of mind makes valuable contributions to contemporary cognitive science and artificial intelligence.
Kant viewed consciousness as a complex process that follows receptivity and understanding to generate knowledge and its synthesis. He regarded consciousness as only the matter of thoughts, or a form of knowledge resulting from appearance [32]. These ideas interpret the process of consciousness in line with the description in Figure 5.
To explain the meanings of the functions of the mind (represented as modules in Figure 5), Kant gave definitions and statements, among which some key notions are as follows.
Concerning Module 1 [30] (pp. 59–61):
Whatever the process and the means may be by which knowledge refers to its objects, intuition is that through which it refers to them immediately, and at which all thought aims as a means. But intuition takes place only insofar as the object is given to us.
The capability (receptivity) to obtain representation through the way in which we are affected by objects is called sensibility. Objects are therefore given to us by means of our sensibility. Sensibility alone supplies us with intuitions.
The effect produced by an object upon the capacity for representation, insofar as we are affected by the object, is sensation.
I call all representation pure (in a transcendental sense) in which there is nothing that belongs to sensation. This pure form of sensibility may itself be called pure intuition. These belong to pure intuition, which, even without an actual object of the senses, exists a priori in the mind as a mere form of sensibility.
There are two forms of sensible intuition, which are principles of a priori knowledge, namely space and time.
Concerning Module 2:
We call sensibility the receptivity of our mind to receive representation insofar as it is in some wise affected, while the understanding, on the other hand, is our faculty of producing representations by ourselves, or the spontaneity of knowledge.
The thought that consciousness appeals to reason, as analyzed in Section 2.3, indicates that sensation, which contains qualia, does not wholly serve as consciousness but rather as an intermediary extracted for generating consciousness. However, in the preceding process of receptivity and understanding, a priori functions are infused, namely a priori intuition and a priori logic; these two should be classified as consciousness. This is because a priori intuition, referred to by Kant as the time and space that order intuition, has been confirmed to participate in the generation of consciousness by means of a time-intermediary “script” [33], as the cognition of the space feature, or as remaining what was introduced in Section 2.2. In addition, the logic that a human holds was confirmed as consciousness by Piaget [11]. Therefore, the developed Kantian model of mind should be conceivable as in (3) and (4), such that (3) and (4) obtain a more detailed categorical form as in (6),
where → denotes logical implication; (→, →) denotes morphisms like A→B, B→C, A→C, or a linear process of objects under natural laws; F denotes the natural laws, which support the functions in F1,2,…,n; and {F1,2,…,n: x→y} are the individual psychological contents, basically long-term memories, as the factors in Global Workspace Theory (GWT [34]), such as logic, knowledge bases, or culture in terms of Kant’s a priori generating or appealing to reason and science, such that a conscious event Fi calls {F1,2,…,n}, as (7) shows,
where {Fi} are specific experiences, originating from the object with its features in Figure 5, which accompany their individual long-term psychological contents, like logic, knowledge, or culture, as the a priori function.
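As an illustration only (the names, contents, and composition order below are our assumptions, not the formal content of (7)), the “calling” relation can be sketched as a conscious event that elaborates a specific experience through the family of long-term contents it calls:

```python
# Illustrative sketch of the calling relation in (7): a conscious event F_i
# elaborates a specific experience by calling long-term psychological contents
# {F_1,...,F_n} such as logic, knowledge, or culture. Names, contents, and the
# composition order are our assumptions, not the paper's formal definition.

from typing import Callable, Dict

LongTermContent = Callable[[str], str]

long_term: Dict[str, LongTermContent] = {
    "logic":     lambda x: f"infer({x})",
    "knowledge": lambda x: f"recall({x})",
    "culture":   lambda x: f"frame({x})",
}

def conscious_event(experience: str, contents: Dict[str, LongTermContent]) -> str:
    """F_i: the experience is passed through every long-term content it calls."""
    result = experience
    for F in contents.values():
        result = F(result)          # composition of the called contents
    return result

print(conscious_event("ball-moving-left", long_term))
# frame(recall(infer(ball-moving-left)))
```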
Consequently, Kant’s model of consciousness can be represented as a homomorphism, as in Figure 6, which is a simplified version of Figure 4 that simplifies the transitivity within a layer.
After converging the (Neo-)Kantian model with Piaget’s correction, which turns the a priori into generation in constructing consciousness, 3-CMGC begins to delineate consciousness, even with a static framework of a consciousness construct as stated in (6) and (7). This framework should be an addition to the models of consciousness based on the Cognitive Theory of Consciousness (CTC) and GWT, serving as an inner construct that supplements (3) and (4).
4. The Computability of Consciousness in View of 3-CMGC
4.1. Current Computability Models of Consciousness
Since consciousness is regarded as a cognitive activity according to the CTC, it should be computable, as explained by Charles Wallis, for whom the exact nature of cognition lies in dynamic computationalism [35].
Consciousness has in fact been viewed as computable for a long time, ever since the proposal of the Turing machine (TM). Strong AI, at least, favors this proposition, as stated by David J. Chalmers:
The strong AI thesis is cast in terms of computation, telling us that implementation of the appropriate computation suffices for consciousness. To evaluate this claim, we need to know just what it is for a physical system to implement a computation.
In addition, he predicted that a computation implementing consciousness would be a TM in the form of combinatorial-state automata (CSA).
Recently, consciousness-representing brain neurons have been modeled in more detail by Lenore Blum and Manuel Blum as a Conscious Turing Machine (CTM) [37]. This model proposes that conscious neurons, following GWT, are represented by neuron networks, and the connectivity of the networks is recursive, that is, Turing-computable. The CTM describes consciousness-related neurons as a six-tuple:
where
STM: Short-term memory that, at any moment in time, contains the CTM’s conscious content;
LTM: Long-term memory, i.e., the collection of processors, each constituting its own expertise;
Env → LTM: Edges directed from the environment, via sensors, to the processors of the sensory data, where Env is the environment of the neurons;
LTM → STM: Via the Up Tree;
STM → LTM: Via the Down Tree;
LTM → LTM: Bidirectional edges (links) between processors;
LTM → Env: Edges directed from specific processors (like those that generate instructions for finger movement) to the environment via actuators (like the fingers that receive instructions from these processors) that act on the environment.
According to the theory of the CTM, the stage is represented by the STM that, at any moment in time, contains the CTM’s conscious content. The audience is represented by an enormous collection of powerful processors, each with its own expertise, that make up the LTM; these processors make predictions and receive feedback from the CTM’s world. Based on this feedback, learning algorithms internal to each processor improve that processor’s behavior. Thus, LTM processors, each with its own specialty, compete to place their questions, answers, and information, in the form of chunks, on the stage for immediate broadcast to the audience.
Conscious awareness is defined formally in the CTM as the reception by the LTM processors of the broadcast of conscious content. In time, some of these processors become connected by links turning conscious communication (through STM) into unconscious communication (through links) between LTM processors. Communication via links about a broadcasted chunk reinforces its conscious awareness, a process known as ignition.
A conscious neural phenomenon is explained as a chunk of linked nodes in the Up Tree, an up-directed binary tree, when the chunk wins the competition among neurons through a function f that holds, expands, or curtails its nodes such that the chunk acquires sufficient |weight| and weight, respectively called the intensity (the sum of the sub-node weights over a duration) and the mood (a node’s weight assigned by a probabilistic, coin-flip neuron node). Therefore, f is recursive and, thus, Turing-computable.
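For readability, the six-tuple and the Up-Tree competition described above can be sketched as a data structure; the field names, types, and the max-|weight| competition rule below are simplifying assumptions of ours, not the formal CTM definition of Blum and Blum:

```python
# Minimal sketch of the CTM six-tuple and the Up-Tree competition described
# above. Field names, types, and the max-|weight| rule are simplifying
# assumptions, not the formal definition given by Blum and Blum.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional, Tuple

Chunk = Tuple[str, float]  # (content, weight)

@dataclass
class CTM:
    stm: Optional[Chunk] = None                                              # STM: current conscious content
    ltm: Dict[str, Callable[[Chunk], None]] = field(default_factory=dict)    # LTM processors
    env_to_ltm: Callable[[str], Chunk] = lambda stimulus: (stimulus, 1.0)    # sensors
    ltm_to_env: Callable[[Chunk], None] = lambda chunk: None                 # actuators

    def compete_and_broadcast(self, submissions: List[Chunk]) -> Chunk:
        """Up-Tree competition: the chunk with the largest |weight| reaches the STM
        (the stage) and is broadcast to every LTM processor (conscious awareness)."""
        winner = max(submissions, key=lambda chunk: abs(chunk[1]))
        self.stm = winner
        for processor in self.ltm.values():
            processor(winner)        # broadcast to the audience of LTM processors
        return winner

ctm = CTM(ltm={"vision": lambda c: None, "motor": lambda c: None})
print(ctm.compete_and_broadcast([("face?", 0.4), ("house!", -0.9)]))   # ('house!', -0.9)
```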
The CTM neuron dynamics of a conscious event thus show a more competitive neuron network within GWT. This spontaneously raises two points, as follows:
- i. Whether a (pure) mathematical representation of the CTM can be made; note that the TM is an abstracted mechanism model rather than a (pure) mathematical one (compare transition-function notions like “turn left”, “turn right”, “tape writer”, and “tape reader” in the TM with “prime number” and “triangle” in pure mathematics);
- ii. Whether there exists a common model depicting the consciousness of both brains and machines (if machine or artificial consciousness is admitted); note that the CTM explains only biological brains, or a similar mechanism model, rather than providing a mathematical model.
4.2. A Proposed Computability Model of General Consciousness of 3-CMGC
In this section, a computability model of general consciousness is proposed to respond to points (i) and (ii).
The Church–Turing thesis was proved as a theorem by Yuri Gurevich [38], with the help of Marvin Lee Minsky’s work [39]; it establishes that a TM is equivalent to a general recursive function [40,41,42], as defined by Gödel [43], built up from primitive recursions along a recursion or composition pathway, as expressed by (9) and (10), wherein the general recursive function f is obtained by means of recursion from h and g, and x is a vector.
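For readers less familiar with the Gödel scheme behind (9) and (10), the standard primitive recursion pattern, f(x, 0) = g(x) and f(x, n + 1) = h(x, n, f(x, n)), can be sketched as follows; the concrete g and h (which yield addition) are example choices of ours, not taken from the paper:

```python
# Sketch of the primitive recursion scheme underlying (9) and (10):
#   f(x, 0)     = g(x)
#   f(x, n + 1) = h(x, n, f(x, n))
# The concrete g and h below (yielding addition) are example choices.

def primitive_recursion(g, h):
    """Build the recursive function f from the base case g and the step function h."""
    def f(x, n):
        acc = g(x)                  # f(x, 0)
        for k in range(n):
            acc = h(x, k, acc)      # f(x, k + 1) = h(x, k, f(x, k))
        return acc
    return f

# Example: addition, add(x, n) = x + n
add = primitive_recursion(g=lambda x: x,                   # f(x, 0) = x
                          h=lambda x, k, prev: prev + 1)   # successor at each step

assert add(3, 4) == 7
```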
Equations (9) and (10) describe a definite decision and differ from rough or flexible values, such as {l} and {u} at an interval stage, whether as the logic in generation described by Piaget, as a script for the final version of a conscious event described by Daniel Dennett, or as a moving scheme in a competition for winning as the CTM predicts. Moreover, swarm computation is not targeted by Equations (9) and (10), which is one reason for the emergence of AI’s newer strand, connectionism.
Given this insufficiency in depicting rough or flexible values in network computation, a mathematical model of neuron-network computation is proposed here. Consider a kind of machine learning, the artificial neural network (ANN), which simulates natural neurons. Common formulas of an ANN can be expressed as (11) and (12),
where x are certain elements in Rft; g initializes the informatization of x for input as nodes of neurons in layer 0; {g(x)} indicates inputting examples of g(x) many times during training; f: x → f(x) is a recursion function, for example, the reflex function of the nodes; f0(x, 0) obtains the connection weights of the elements of x from layer 0 of the ANN; {f0(x, 0)} indicates inputting f0(x, 0) many times during training; f1 = w^T x + ε + y obtains the connection weights of the elements of x with bias ε based on y; y is the regression vector of the neurons next to x; f2 is the recursion function for the layer after f1; {f2} refers to f2 over multiple training passes; ε is a bias for adjusting the node weights w; and h is a threshold function, with ε iterated many times during training, {h}.
Equations (11) and (12) give the result of a neuron network {x, y} in a recursion satisfying (9) and (10), constituting a general recursion over multiple training passes. Considering the neuron network given by (8), it is plausible that the CTM can be expressed by (11) and (12).
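A minimal sketch of (11) and (12), read as a single-layer network with weights w, bias ε, and threshold h adjusted over many training passes, might look as follows; the perceptron-style update rule and the toy data are illustrative assumptions, not the paper’s exact formulas:

```python
# Minimal sketch of the ANN reading of (11) and (12): f1 = w^T x + ε followed by a
# threshold h, with weights adjusted over many training passes ({f0}, {f2}, {h}).
# The perceptron-style update and the toy data are illustrative assumptions.

import numpy as np

def h(z: float) -> int:
    """Threshold function."""
    return 1 if z >= 0 else 0

def train(X: np.ndarray, y: np.ndarray, epochs: int = 50, lr: float = 0.1):
    w = np.zeros(X.shape[1])        # connection weights
    eps = 0.0                       # bias ε
    for _ in range(epochs):         # many training passes
        for x_i, y_i in zip(X, y):
            pred = h(w @ x_i + eps)           # f1 = w^T x + ε, then threshold h
            w = w + lr * (y_i - pred) * x_i   # adjust node weights w
            eps = eps + lr * (y_i - pred)     # adjust bias ε
    return w, eps

# Toy example: learn logical OR over {x, y} pairs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, eps = train(X, y)
print([h(w @ x + eps) for x in X])  # expected: [0, 1, 1, 1]
```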
It follows from (11) and (12) that a learned outcome {f} on the primary structure {g} emerges in a similarity relation, as indicated by (13),
where {f} learns, viz. L, from {g}, which is a proposed relation beyond the Gödel recursion model (Equations (9) and (10)).
Anatomical evidence that the human (brain) neuron network (HNN) holds a structure identical to that of the ANN and shows the same learning mechanism has been given and modeled by Kali and Dayan [44].
More detailed evidence that the HNN is identical in construction to the ANN was recently provided by Jane X. Wang et al., who claimed that there exists a learning system, a meta-reinforcement learning system, in the prefrontal cortex of the human brain [45], in which even the living counterpart of the weights in the HNN is narrowed down to the participation of dopamine (DA). This is molecular-level evidence mapping the molecular Unt onto Csc, which can be formalized as (14) and (15).
Equations (14) and (15) are instances of HNN–ANN learning, which contain a core of recursion in swarm computation (second-order variables over sets, in contrast to (11) and (12)). Corresponding to 3-CMGC, (14) and (15) implement the following diagram from a learning unit to the classified result:
or a diagram with three layers, as in (16).
Equations (13)–(16) describe sets of recursion variables in place of the individual variables in (9) and (10), for brains or for machine learning, toward a general consciousness of species of natural and artificial units. Hence, points (i) and (ii) should be considered resolved. This implies that swarm-characteristic variables have been incorporated into a TM model, such that cognitive theories based on swarm-characteristic variables are mathematically represented in terms of the TM. For example, the “multiple scripts” [34] to be edited into consciousness can be mathematically represented as the type {f} in (11)–(13).
4.3. The Biological Evidence of Recursion for 3-CMGC
Empirical arguments confirming that neurons carry out natural computation to perform recursion have been authenticated. Here, we cite a study by H. R. Heekeren, S. Marrett, et al., who conducted an experiment in which subjects decided whether a noisy image showed a face or a house [46].
As the noise in images showing a face or a house decreases, the subjects’ fMRI signals in an area of the dorsolateral prefrontal cortex (DLPFC) are proportional to the gap between the responses of the cognition cells corresponding to a face and to a house, which shows that the neurons’ decision is a subtraction that maximizes a characteristic difference. If “intensity” denotes the intensity of the fMRI signal for a specific object (face or house) and “area” denotes the responding region in the subject’s brain, the experiment demonstrates a homomorphism as in (17).
The quantitative relation in (17) (the upper formula) is a mathematical expression, a primitive recursive function implemented by the brain neurons (depicted by the lower expression). This shows an instance of the material layer Unt (neurons and their properties) running computations {u} and being homomorphic to Csc (such as making a decision).
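As a purely illustrative sketch of the subtraction reading of (17) (the numbers and names are invented for the example and are not taken from the cited study):

```python
# Illustrative sketch of the decision-by-subtraction reading of (17): the
# DLPFC-like decision signal is modeled as the difference between face-selective
# and house-selective responses. Values and names are invented for illustration.

def decision_signal(face_response: float, house_response: float) -> float:
    """Primitive-recursive core of the decision: subtraction of the two responses."""
    return face_response - house_response

def decide(face_response: float, house_response: float) -> str:
    return "face" if decision_signal(face_response, house_response) > 0 else "house"

# Less image noise -> larger response gap -> stronger decision signal
print(decide(0.9, 0.3), decision_signal(0.9, 0.3))   # e.g. face, gap ≈ 0.6 (low noise)
print(decide(0.6, 0.5), decision_signal(0.6, 0.5))   # e.g. face, gap ≈ 0.1 (high noise)
```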
Recently, more and more discoveries about functional neurons have been made, in which mathematical features are verified to be gradient-characteristic and gradient-relative for functional neurons [47], implying natural computations belonging to {u}. The relation between the two gradients is a primitive recursion (projection), as (18) shows,
where the recursion function is a regression, that is, it runs in two steps. If an input α in Rft is assumed, the mappings between layers are given as in (5), showing that the Unt operation is a composite.
5. The Unity of Natural and Artificial Consciousness in 3-CMGC
Functionalism and computationalism, and therefore physicalism in the view of the present study, all argue that the mind, particularly consciousness, is computable, according to which there exists a common algorithm, like (11) and (12), depicting a machine or artificial system functionally equivalent to a natural consciousness. Recent advances include an attempt to establish an artificial consciousness model with the extracted pillars of data, information, knowledge, wisdom, and purposes (DIKWP) [48,49], which is similar in essence to physicalism regarding the unity of artificial and natural consciousness, much like 3-CMGC with its idea of homomorphism. The viewpoint of unity seems reasonable; however, there is still a “conservative” position insisting that the gap between natural consciousness and technologies claimed to “possess consciousness” cannot be filled, as held by John Searle [16] and Stephen Rothman [50]; for example, the first-person characteristic of privacy is treated as one of the essential factors of consciousness, which does not concede physicalism of the kind in models (3) and (4). Here, we give an argument that the “first person” can be accessed and is not private.
Consider the case of a brain–computer interface (BCI). An experiment was designed to read mental contents from brain activity through fMRI [51]. In the experiment, brain fMRI data were recorded while the subject viewed a large collection of natural images I. The data were used to estimate a quantitative receptive-field model F2 for each voxel, covering space, orientation, and spatial frequency, from the responses evoked by the natural-image stimuli. The model F3 predicts the brain activity evoked by potential images I’, which match the stimulus images I with a maximum correct ratio of 92%. In other words, the images in Rft were coded in Unt via brain fMRI, as F1, to generate the image variables I in Csc through the brain receptive-field models, say, F2; F2 was then estimated by the receptive-field models F3, giving the potential (objective) images I’ in Csc’ (machine consciousness). This result confirms that I ≈ I’ with a maximum correct ratio of 92%, as if Csc ≈ Csc’, as shown in Figure 7.
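The identification step of such an experiment can be sketched roughly as follows; the linear feature model and the random toy data are stand-ins for the study’s actual receptive-field model, used only to show the predict-and-correlate logic:

```python
# Rough sketch of receptive-field-based image identification as described above:
# an encoding model predicts the voxel response pattern for each candidate image,
# and the image whose prediction best correlates with the measured pattern is
# selected. The linear feature model and toy data are stand-ins, not the study's.

import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_features, n_images = 50, 20, 10

W = rng.normal(size=(n_voxels, n_features))        # receptive-field model (F2-like)
images = rng.normal(size=(n_images, n_features))   # candidate images I' as feature vectors

def predict_response(image_features: np.ndarray) -> np.ndarray:
    """Predicted voxel pattern for one image under the encoding model."""
    return W @ image_features

def identify(measured: np.ndarray) -> int:
    """Pick the candidate image whose predicted pattern best matches the measurement."""
    scores = [np.corrcoef(measured, predict_response(img))[0, 1] for img in images]
    return int(np.argmax(scores))

true_idx = 3
measured = predict_response(images[true_idx]) + 0.1 * rng.normal(size=n_voxels)  # noisy fMRI
print(identify(measured) == true_idx)   # usually True for modest noise
```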
As a result, the private Csc becomes open to a large extent (92%) through the Csc-simulating Csc’. In other words, the experiment accessed the subject’s brain by means of fMRI, and its signals were verified as consciousness of space. Thus, the first-person characteristic of consciousness is broken, becoming open to a large extent.
Similarly, collecting signals from the brain, whether through invasive, semi-invasive, or non-invasive BCIs, may be regarded as breaking the “first person” by breaching its privacy and making the brain’s function common. In other words, a BCI authorizes full first-person access, and its outputs, mostly with applicable outcomes, show a conscious result such as an everyday action. This undermines the biological monopoly on the “first person” and delivers pursuable evidence for the unity of general consciousness by filling the gap between artificial and natural consciousness.