Review

Mathematics of a Process Algebra Inspired by Whitehead’s Process and Reality: A Review

Collective Intelligence Laboratory, Department of Psychiatry and Behavioural Neuroscience, McMaster University, 255 Townline Rd, E., Cayuga, ON N0A 1E0, Canada
Mathematics 2024, 12(13), 1988; https://doi.org/10.3390/math12131988
Submission received: 2 May 2024 / Revised: 25 June 2024 / Accepted: 26 June 2024 / Published: 27 June 2024
(This article belongs to the Special Issue Theories of Process and Process Algebras)

Abstract

Process algebras have been developed within computer science and engineering to address complicated computational and manufacturing problems. The process algebra described herein was inspired by the Process Theory of Whitehead and the theory of combinatorial games, and it was developed to explicitly address issues particular to organisms, which exhibit generativity, becoming, emergence, transience, openness, contextuality, locality, and non-Kolmogorov probability as fundamental characteristics. These features are expressed by neurobehavioural regulatory systems, collective intelligence systems (social insect colonies), and quantum systems as well. The process algebra has been utilized to provide an ontological model of non-relativistic quantum mechanics with locally causal information flow. This paper provides a pedagogical review of the mathematics of the process algebra.

1. Introduction: Why Process?

The goal of this paper is to present a pedagogical overview of the process algebra approach, as inspired by the writings of the process philosopher Alfred North Whitehead, which has particular application to the study of living systems. It is expected that the reader will be unfamiliar with the ideas of Whitehead, and certainly with this version of a process algebra; many other process algebras exist in the literature, all designed for purposes and problems different from those considered here [1,2,3]. This paper provides an overview of the core concepts and constructions. It is not meant to be deep or exhaustive. More details may be found in [4,5,6,7,8,9,10] or, if the reader is interested, in the forthcoming book [11].
Any field of endeavour, no matter how sophisticated, risks becoming a victim of its own success. Science is no exception. Success in one domain of application can lead to overconfidence and a tendency toward overgeneralization. That approach is then imposed upon other domains, even when the fit may not be good. The consequences range from misunderstanding to catastrophe. Examples of catastrophic failures to accurately comprehend a situation abound, especially in medicine and structural engineering.
For more than two millennia, our theoretical understanding of the world around us has been grounded in the concept of ‘object’. The concept of an (ideal) object presumes the following characteristics:
  • It exists independently from any other entity—it can be isolated and treated as a whole unto itself.
  • It is eternal—it does not become, it merely is.
  • It is passive—it reacts, it does not act; it lacks intentions.
  • Its behavior is rational, that is, conforming to the usual rules of logic.
  • Its properties are non-contextual.
  • History is irrelevant—the future of an object depends at most upon its present state.
Mathematical entities constitute ideal objects. This gives them power. The closest entities to ideal in the natural world consist of inanimate matter, the main subject for physics, so that the fruitful interplay between mathematics and physics is not that surprising.
Biological organisms and their subsystems, unfortunately, did not get the memo. They depart from this ideal of an object in almost every way imaginable. Consider some of the features observed in neural systems:
  • Neural systems are born, develop, age, and die.
  • Neural activity and connectivity are transient, and any internal representations are metastable; their participants are fungible [12,13,14,15,16,17,18,19].
  • Large-scale behavior is generated [12].
  • Neural activity is stochastic [20,21].
  • Neural activity is contextual [22,23,24,25,26].
  • Neural systems are open systems [23,24,25,27,28,29,30,31].
  • Meaning-laden information is more important than energy or entropy [28,29,30,31].
  • Behavior is grounded in finite duration transients, not instantaneous states [28,29,30,31,32,33,34].
  • Neural systems anticipate [27].
  • History is fundamental to understanding their activity.
Collective intelligence systems such as social insect colonies exhibit many similar characteristics at a macroscopic level [35].
Physics has acquired great mastery over the realm of inanimate matter, which is governed mostly by simple laws such as the conservation of mass–energy, linear momentum, and angular momentum, the non-decrease in entropy, and action–reaction. Biological organisms do not care so much for those laws. They generally possess adequate resources. Reactions are not proportional to actions. Anyone with a cat knows that it can sleep through all sorts of noise—music, kids, housekeeping—but crack open a tin of food and it arrives in the blink of an eye. Organisms behave according to goals, responding to signals based upon their salience and meaning. These ideas are foreign to mathematics and physics. Behavior can be determined by the forms of things, not merely their motion or substance. Moreover, randomness is everywhere.
One of the great challenges of biology is to understand how stable patterns of form, function, and behavior emerge from this flux. It is a challenge to mathematicians to develop mathematical entities and theories that can address situations in which entities come into being, persist for a finite duration, and then fade away; when their constituents are in constant flux, so that their phase spaces are themselves dynamic entities; when interactions are in continual flux; when properties are generated and contextual, and thus also in flux; when probabilities are generated and non-stationary; when the presence of an environment is essential to their function; when meaning and form play a greater role than structure and geometry; when the fundamental objects of study are not instantaneous states but rather behavioral acts, which are finite duration transients.
Reductionism as an explanatory paradigm fails—the presence of “on the fly” generativity, fungibility, emergence, downward and horizontal causation, and contextuality all point to the necessity to approach the study of nervous systems (and organisms generally) simultaneously across multiple spatiotemporal scales ranging from neurotransmitters and receptors, cellular membranes, individual neural and glial cells, local cell assemblies, global cellular networks, somatic states, observable individual behavior, and individual subjective experience to dyadic relationships, familial relationships, social groups, communities, societies, and cultures. The belief that this complexity can be captured by a single partial differential equation (or some stochastic variant thereof), restricted to a single level, seems naïve at best. A new set of concepts and mathematics seems required if we are to deal with the nature of organisms and of biological systems beyond cartoon-like engineering stereotypes. One such concept is that of process.
Process is a term that possesses many meanings. In biology, “a process means any of the various biological activities occurring within an organism” [36]. In physics, “a process is a series of progressive and interdependent steps by which an end is attained” [36]. In engineering, a process is a set of transformations that take input elements and create products while respecting constraints, requiring resources, within an environment and satisfying a specific mission. The CPRET acronym [37] refers to Constraints, Products, Resources, Input Elements, and Transformations. These are (taken from [37]; a minimal data-structure sketch follows the list):
  • Constraints: imposed conditions, rules or regulations.
  • Products: everything generated by transformations. The products can be of the desired or not desired type.
  • Resources: human resources, energy, time and other means required to carry out the transformations.
  • Input elements: whatever is submitted to transformations for producing the products.
  • Transformations: operations organized according to a logic aimed at optimizing the attainment of specific products from the input elements with the allocated resources and in compliance with the imposed constraints.
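To make the CPRET scheme concrete, the sketch below renders it as a plain data structure in Python. This is a minimal illustration only; the class name, fields, and the toy 'doubler' transformation are ours, not part of [37].

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CPRETProcess:
    """Hypothetical sketch of the CPRET engineering view of a process [37]."""
    constraints: List[str]   # imposed conditions, rules, or regulations
    resources: List[str]     # energy, time, and other means
    transformation: Callable # operation applied to the input elements

    def run(self, input_elements: list) -> list:
        # Products are everything generated by the transformations,
        # desired or otherwise.
        return [self.transformation(x) for x in input_elements]

# Example: a trivial 'manufacturing' process that doubles its inputs.
doubler = CPRETProcess(
    constraints=["output must be an integer"],
    resources=["CPU time"],
    transformation=lambda x: 2 * x,
)
print(doubler.run([1, 2, 3]))  # products: [2, 4, 6]
```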
There is an abundant literature dealing with the above interpretations of process, all of them grounded upon the concept of an object. There is an alternative approach which takes process itself as the ground. This is the Process Theory of Alfred North Whitehead, which is described in detail in his book, Process and Reality. In Whitehead’s Process Theory, a process is a generator of events, which causes them to come into being, and eventually to fade away. A process is a generator of space and time, not an entity within space and time. Whitehead referred to his theory as a philosophy of organisms because organisms are the quintessential exemplars of process. Organisms act—they have intention, they do something. A process too does something—which makes it wholly distinct from mathematical structures, which simply are. Social insect colonies provide another example of process. A social insect colony is capable of complex decision making in the service of salient ethological needs—nest selection, foraging, mate selection, colony-level reproduction. Nest selection takes the form of a plebiscite, moving to a site once a quorum of workers choosing that site exceeds a certain threshold [38,39]. There is no central authority, and the individual workers forming the quorum change from time to time. Each decision is made ‘on the fly’, never to be repeated exactly.
A process is in most respects the antithesis of an object. It is presumptuous to assume that a formal framework grounded in the concept of object will be adequate to describe its antithesis—rather a new kind of framework is needed. The process algebra described here is an attempt to provide such a framework. It draws upon some innovative approaches in mathematics which sadly remain out of the mainstream.
Fortunately, the past century has seen a fruitful interplay between mathematics, logic, and computer science, resulting in the concept of the combinatorial game, which is used in generating models in mathematical logic [40,41] and in the analysis of real-world games [42,43,44]; the concept of Lindenmayer systems [45] to model aspects of development; the development of process algebras in computer science [1,2,3]; and the development of methods to deal with dynamical transients [29,30,31,32,33,34,46]. It also witnessed the appearance of innovations in philosophy, which attempted to move beyond object-based mechanistic constructs. The central figures include Bergson [47], Heidegger [48], and most importantly, Alfred North Whitehead [49]. These ideas were later taken up by several researchers, most notably Prigogine and Stengers [50], Shimony [51], Hansen [52], Stapp [53], Chew [54], Finkelstein [55], Cahill [56], Hiley [57], and others [58]. The present work develops a process algebra which is specifically inspired by Whitehead’s philosophy and the concept of combinatorial game as developed by Conway [43].
Unfortunately, Whitehead’s writing is dense, turgid, and full of off-putting terminology. It is best to read scholars such as Hansen [52], Epperson [59], and Stengers [60,61] before tackling Whitehead directly. Nevertheless, Whitehead’s philosophy is rich, profound, masterfully reasoned, and exhaustive (if not exhausting).
For Whitehead, becoming is logically prior to being, and the study of becoming forms the core of his ideas. Although the entities which become may have spatial and temporal extension, the act of becoming forms a complete, indivisible whole. Any partitioning of becoming is purely heuristic. The primitive entities that arise because of this becoming are termed actual occasions, and these too form complete wholes. Divisions of actual occasions into smaller units are again merely heuristic.
Hansen describes the core of Whitehead’s idea as follows:
So, one central point in Whitehead’s system is that processes are themselves active—the temporal modalities are a function of the happening of processes themselves—the basic realities in the world pass from potentiality through actuality into pastness. Each process is a unit of becoming, and according to Whitehead, it “becomes in solido”; it is not some temporally extended entity placed on an axis of time along which it could be played, like an audiotape, one stage or movement after the other. The idea is not that it is of zero extension like a point. It is the more radical idea that it is something different from extension—something out of which extension is manifested. Within this relational system, according to its properties, it may become meaningful to assign the process a finite extension. … The other equally important element in Whitehead’s processual and relational account of time is the relationships with other processes. They are deeply involved in the very sense of the modalities. … These include relations between “parent” and “child” processes—causal relations in which the outcome of a parent process adds to the beginning conditions of each of its child processes. Each child process can have many parent processes and vice versa.
Whitehead’s picture implies that such branching and rejoining is generally the case. What it means for something to have already happened, Whitehead says, is that it has produced a determinate result—to terminate is to be determinate—but this result should not be understood substantially. “Result” means it is available to be part of the initial conditions for new processes. Or perhaps better, it is taken in by these processes—the terminated parent processes are indeed said to be there, in some sense and to some degree “repeated” within the actively new process. It is really all that the new process can “be” apart from its own core of creativity [52] (pp. 153–154).
Whitehead posits that this generative activity of process serves as the ground for extension, for the origins of space and time. The “modalities are not really situated in space and time at all, but in the concrete processes whose web of relations gives rise to space and time [52] (p. 154)”. Other writers have argued for the same [10,62].
Each process receives something from all the processes within its universe, meaning all those processes which serve as its parents (i.e., lie in its direct causal past), and it subsequently “leaves its mark” on all its children (i.e., those processes in its direct causal future). Whitehead makes the point that no two processes may share their entire universes (i.e., possess the exact same past), for that would make them the same process. Additionally, no process ever exactly repeats itself—there is a universal irreversibility and no backward causation. This failure of exact repetition is a feature of organisms [12,26,29,30,31] and is a core feature of functional constructivism [29,30,31].
Actual occasions or actual entities constitute the facts, made determinate in their becoming by means of process, which serve as the basis for the next generation of actual occasions.
Prehension is the act of taking up the data inherent in an actual occasion, interpreting and analyzing it, and incorporating the results into the actualization or formation or concrescence of a new actual occasion.
A nexus is an interaction conglomerate of actual entities.
Whitehead assumes that process generates actual entities which become objectified, concrete (concrescence being the process of being made concrete), and thus provide data in the next instance of becoming. He also assumes the existence of a continuum structure for the collection of all potentialities. Note that this continuum is not real in the sense of the entities represented all being actual—it is not the atemporal space–time continuum of events. It is instead more akin to a mathematical collection of possibilities and their interrelationships. Whitehead points out that this continuum is not a ‘fact’ prior to the world, as it would be if it were an atemporal listing of all its events. It is derivative, arising from the concrete nature of the world which then sets the limits on what is possible.
Actual entities atomize the extensive continuum. This continuum is merely the potentiality for division; an actual entity effects this division…With the becoming of any actual entity what was previously potential in the space–time continuum is now the primary real phase in something actual. For each process of concrescence, a regional standpoint in the world, defining a limited potentiality for objectifications, has been adopted. In the mere extensive continuum, there is no principle to determine what regional quanta shall be atomized so as to form the real perspective standpoint for the primary data constituting the basic phase in the concrescence of an actual entity [49] (p. 104).
Whitehead’s entities are fundamentally open systems and must be understood as such from the get-go. The study of process is thus the study of relations among processes. A process must always be studied in relation to its environment of processes.
For Whitehead, becoming involves two types of fluency: concrescence, which leads to the creation of an actual occasion, and transition, which converts an occasion into a kind of datum which can be used in other instances of becoming.
Concrescence is a concept of fundamental importance in Whitehead’s metaphysics. “‘Concrescence’ is the name for the process in which the universe of many things acquires an individual unity in a determinate relegation of each item of the ‘many’ to its subordination in the constitution of the novel ‘one’” [49] (p. 321). The following point is very important, since Whitehead does not separate out things from concrescences. There is no thing which undergoes concrescence; rather, it is a concrescence alone which is a thing. And by making a concrescence the prototype for an entity, he marks a fundamental departure from the notion of entity as object toward that of entity as process.
For Whitehead,
The term ‘event’ is used in a more general sense. An event is a nexus of actual occasions inter-related in some determinate fashion in some extensive quantum: it is either a nexus in its formal completeness or it is an objectified nexus. One actual occasion is a limiting type of event. The most general sense of the meaning of change is ‘the differences between actual occasions in one event’. For example, a molecule is a historic route of actual occasions, and such a route is an ‘event’. Now, the motion of the molecule is nothing other than the differences between the successive occasions of its life-history with respect to the extensive quanta from which they arise; and the changes in the molecule are the consequential differences in the actual occasions [49] (pp. 124–125).
Actual occasions do not do anything. They do not move, they do not change state, they do not interact. Processes do those things. Actual occasions are more akin to the tokens on a scratchpad. They come into being, their information is utilized by some processes, then they fade away. Period. Actual occasions may be likened to the bits in a computer register, the difference being that in a register, the physical locations of these bits are fixed in advance, whereas the becoming of actual occasions generates the physical locations at which they appear. In the process algebra, space–time is generated by process, it is not a pre-existing container in which events take place.

2. Metaphor of the Tossed Coin

The concept of process described above is quite abstract, and its depiction in the writings of Whitehead even more so. Nevertheless, a simple concrete example can serve as an illustration of some of these ideas, bearing in mind that this example is not meant to be representative or canonical but rather merely illustrative.
The example to be considered is that of a tossed coin. An outcome measurement for a tossed coin is either a head (H) or a tail (T). The criterion for such a measurement is the presence of a static coin resting on the surface of a measuring platform such that the upward-facing side of the coin contains either a head or a tail. Now imagine a device that consists of three components. There is a tall vertical post along which a platform can be moved up or down. The upward surface of the platform has a dampening pad so that if a falling coin lands upon it, its motion will be rapidly dampened, and the coin will fall over with one or the other side facing upwards. The third component resides at the top of the post and tosses a coin vertically into the air above the platform, following which the coin freely falls, engaging in random (or at least chaotic) multi-axial rotations, until it reaches the platform. The platform height can be adjusted to intersect the trajectory of the coin at any point. The angle of the platform with the vertical can also be adjusted in the range of, say, ±45 degrees. A coin landing on the angled pad will still come to rest with one face up.
The motion of the coin prior to contacting the surface of the platform is presumed to be effectively random (chaotic), or random, and the coin is presumed to be fair. In these circumstances, the measurements obtained will either be H or T, and they will occur with equal frequency, 1/2. Note that for any angle chosen for the platform, the coin will always come to rest with either a head or tail facing away from the platform surface, so that the measurements obtained will also be either H or T, with equal probability 1/2.
The choice of the height and angle of the platform rests entirely with the observer. In all cases, measurement values will be either H or T with equal probability 1/2.
These measurement values can be predicted with high accuracy by a simple Bernoulli model. If the only thing of interest is the probabilities of these measurement values, then this Bernoulli theory can be considered to be complete. The addition of ‘hidden variables’ will not improve the calculation of these probabilities in any way, which again supports the conclusion that this theory is complete.
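As a minimal illustration of the claim that a Bernoulli model completely describes these measurement values, the following Python sketch separates the in-flight coin (a process with no H/T value) from the measurement interaction that creates one. All names are hypothetical; the platform parameter is deliberately unused, mirroring the observation that platform height and angle do not affect the outcome probabilities.

```python
import random

def tossed_coin_process():
    """The in-flight coin: a process, not a value. It has no H/T until measured."""
    while True:
        yield random.uniform(0.0, 360.0)  # instantaneous (unobserved) orientation

def measure(coin, platform_height: float) -> str:
    """Interaction with the platform terminates the process and creates a value.
    For a fair coin the outcome is Bernoulli(1/2) regardless of platform height
    or angle, as described in the text (hence platform_height is unused)."""
    next(coin)  # the coin is in motion prior to the measurement
    return "H" if random.random() < 0.5 else "T"

trials = [measure(tossed_coin_process(), 0.3) for _ in range(10_000)]
print(trials.count("H") / len(trials))  # ≈ 0.5
```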
However, if we ask a different question, such as the direction, while the coin is in motion, of a normal vector pointing outward from the face bearing the head, then the Bernoulli theory tells us nothing at all. The idea of completeness is entirely relative to the specific measurement being described.
If we ask what is the value, H or T, while the coin is in motion, we find that there is no answer, since a measurement of head or tail has meaning only under the condition that the coin is at rest with one side facing upwards on the platform. In no other case is the idea of head or tail meaningful. Does that mean that the coin does not exist between measurements? Not at all. It only means that a measurement value need not exist until a measurement has taken place. In this case, we see that a measurement is not something attributable to the tossed coin itself but rather to the specific measurement procedure or process being imposed upon the tossed coin. The tossed coin is a disposer or generator of measurements but only when the tossed coin interacts with the platform during the measurement process. Prior to a measurement, the tossed coin has physical existence and indeed is undergoing extremely complicated motion. In its condition of tossed motion, the coin is a process.
The tossed coin nicely expresses Bohr’s idea of the contextuality of measurement without recourse to any form of mystical thinking. A measurement is not the thing measured. A measurement is simply the outcome of an interaction between some entity and some measurement entity according to some predetermined set of conditions.
A coin may be an ideal classical object, but a tossed coin is not. A tossed coin is a process that is able to generate actual occasions corresponding to instances in the motion of the coin. The interaction between the tossed coin process and the measuring platform (itself a process) forms a new process which generates actual occasions corresponding to measurement values. The static coin may be said to possess definite values of H or T, but the tossed coin does not. Nevertheless, the tossed coin generates measurement values when coupled with a measurement process. The idea of a generator of measurements no longer appears so mysterious. Neither does the appearance of contextuality nor the idea of non-commutativity (successive measurements of the very same system will not necessarily yield the same answer as that of the first measurement or might not be possible at all). In the case of the tossed coin, angle measurements commute neither among themselves nor with any dynamical measurements, since an angle measurement terminates the motion of the coin (thus inactivating the tossed coin process), so subsequent measurements cannot be carried out. The coin must be tossed again, reactivating the process. Thinking of the tossed coin as a process rather than as an object eliminates the mysticism which seems to pervade much of the thinking in quantum mechanics and puts reality back into the picture.
Natural processes are of course much more complex than a tossed coin. Nevertheless, treating the tossed coin as an example of process makes the idea of generated states and measurements no longer perplexing. The idea of an actual occasion too is much broader and deeper than merely the instantaneous states of a tossed coin or measurement values, but the tossed coin provides a concrete starting point for generalizing to more complex and abstract situations.
Five things can be noted from the example of the tossed coin.
  • Probability is meaningless unless something actually happens.
  • A theory can provide a complete description of measurements without providing a description of the entity being measured.
  • Theories vary according to that which they are meant to describe—theories are contextual.
  • Measurements are not ‘states’. Measurements do not reveal measurement values. Entities and interactions create measurement values.
  • The act of measuring is an interrogation of an entity—it does not determine the entity. Measuring is an act, a construct, a practice, and thus affected by worldview.
These insights are important for understanding the concept of process and for clearly distinguishing process from the concept of object. Measurement is important, but the choice of measurement determines what can and cannot be said about an entity. Think of the proverbial five blind people and an elephant. If they are all forced to wear gloves, might that not influence the observations that they can make? The conceit of quantum mechanics (and to some extent physics more generally) is that the particular choices of measurements that have been made provide a complete set of all that we need and shall ever need. A physicist might care only about the motion of the center of gravity of an entity, but if the observed entity is a falling cat, and the observer is a veterinarian, the vet might care more about the location of its claws than its center of gravity!

3. Process in the Process Algebra

The process algebra presented here was originally developed in the context of non-relativistic quantum mechanics (NRQM) to provide a realist or ontological model of quantum mechanics (i.e., a model of quantum mechanics in which something actually takes place, as opposed to merely being a description of probabilities), in which information flows between entities (or events) in a purely locally causal manner [4,5,6,7], and its concepts have been used to tackle a variety of subjects in the foundations of physics [8,9,10] as well as in the study of contextuality in decision making by social insect colonies [63,64]. It was inspired by empirical observations of process in neurobiology, social insect colonies, and psychiatric illness, by the mathematics of combinatorial games, which can generate mathematical structures, by interpolation, which can create continuity from discreteness, and by Whitehead’s theory of process, which provides the philosophical ground upon which all of this is based. In the process algebra, Whitehead’s actual occasions are represented by informons. The change in term is intentional and emphasizes that informons are mathematical entities inspired by Whitehead’s theory, but they are not actual occasions themselves nor do they represent all the characteristics of actual occasions as described in Process and Reality. To do otherwise would be a formidable undertaking well beyond the scope of this more modest model. The choice of the term informon is to emphasize the important role that information plays in the process algebra, and that actual occasions are to be viewed as tokens of meaning-laden information rather than merely physical entities.
Processes are generators of actual occasions—they actually do something, which makes them challenging to represent mathematically. Nevertheless, if we shift our attention from the process itself, and focus instead on the history of consecutive actions of a process, and later to the set of all possible histories which can be generated by a process (given certain conditions being held fixed—generalized boundary conditions), then we have shifted from process to a mathematical object upon which we may begin to carry out an analysis. Conway made a similar shift in perspective when studying combinatorial games, but there he identified a game with its history, while here we wish to keep the distinction clear. Processes are not objects, although histories are. Histories of the consecutive actions of a process are called causal tapestries, while complete sets of such histories form process-covering graphs, and configuration space-covering graphs when interactions are involved. These will be defined more formally below.
In Whitehead’s process theory, an actual occasion simply is not until concrescence is complete and it is. Only then does it acquire actuality, fact-ness. Once it has passed on its information, an actual occasion no longer exists and merely was. The moment at which concrescence is complete forms an instant of time. The act of concrescence, however, possesses a duration, representing an instance of time, the duration of the actual occasion. Each process creates a generation of informons. Each informon is created during a round, and the act of concrescence takes place over a series of short rounds, during each of which information from an informon of the prior generation is incorporated into a nascent informon. At any moment, reality consists of a collection of prior informons, a process, and a collection of nascent informons which are in the process of being created. With each round, the set of nascent informons increases. Once the nascent generation is complete, the set of prior informons is eradicated, the set of nascent informons becomes the next set of prior informons, and the process begins creating the next set of nascent informons. The moment of completion again forms an instant of time, while the entire act of creation forms a duration of time: the duration of the generation. If two informons $I_1, I_2$ lie in different generations, the gap is the sum of the durations of the generations lying between $I_1$ and $I_2$. Note that the gap between a prior and a nascent generation is 0. A causal sequence of informons $n_1, n_2, n_3, n_4$ corresponds to a sequence of durations $d_1, d_2, d_3, d_4$. The gap between $n_1$ and $n_4$ is $d_2 + d_3$, which gives the duration between the completion of $n_1$ and the initiation of $n_4$.
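The gap computation can be made concrete with a small sketch (ours, not from the source): given the durations of consecutive generations, the gap between two informons is the sum of the durations strictly between their generations.

```python
def gap(durations, i, j):
    """Gap between informons in generations i < j (0-indexed): the sum of the
    durations of the generations strictly between them; 0 for adjacent ones."""
    return sum(durations[i + 1 : j])

d = [1.0, 2.0, 3.0, 4.0]   # durations d1..d4 of four causal generations
print(gap(d, 0, 3))        # gap between n1 and n4 = d2 + d3 = 5.0
print(gap(d, 0, 1))        # adjacent generations: gap = 0
```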
Processes can exist in one of two existential states: active, meaning they can generate informons, or inactive, meaning that they cannot generate informons. If $P$ denotes an active process, then $\bar{P}$ denotes its inactive form.
Processes are also understood to be generators of properties (think again of the tossed coin and its relationship to measurement), which become expressed as a result of an interaction with a measurement process. Since this is a mathematical model, we assume that each property $P$ (or at least its value) can be represented as an element of some mathematical structure, $\mathcal{C}_P$. For convenience, we may group these properties as a vector $\mathbf{p} = (p_1, p_2, \ldots, p_n)$, where each $p_i \in \mathcal{C}_i$ represents a value of some property $P_i$.
A process is thus characterized by several parameters (a schematic sketch in code follows the list).
  • $N$, the total number of informons in each generation.
  • $\bar{R}$, the total number of rounds per generation cycle.
  • $R$, the number of informons generated in each round. Hence, $N = R\bar{R}$.
  • $s$, the number of short rounds per round.
  • $r$, the number of prior informons which may condition the creation of a single actual occasion. Often, we will assume that $s = Rr$, so $s$ can be dropped.
  • $r'$, the number of nascent informons to which each prior informon may contribute as a condition (in many cases, $r' = r$ or $r' = r = N$).
  • $\mathbf{p}$, the set of properties disposed by the process—often $\mathbf{p}$ will consist of a set of subsets, each subset corresponding to a specific property and the elements of each subset corresponding to the measured values so disposed.
  • $\mathcal{S}$, the strategy utilized by the process in its acts of prehension and concrescence.
  • $t_\Pi$, which sets the temporal scale between generation cycles.
  • $l_\Pi$, which sets the spatial scale between informons within a causal tapestry.
A process may be denoted as $\Pi^{rRr'}_{N}(\mathbf{p}, \mathcal{S}, t_\Pi, l_\Pi)$.
In general, the parameters $N, R, \bar{R}, r, r', s, \mathcal{S}, t_\Pi, l_\Pi$ will be fixed for a given process and only the properties will vary. In that case, we will refer to a process as $P_{\mathbf{p}}$ for simplicity.
A process is said to be primitive if $R = 1$, that is, only a single informon is generated per round; otherwise, it is complex.
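The following Python sketch collects these parameters in one place and enforces the stated relation $N = R\bar{R}$; the field names are ours and purely illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessParameters:
    """Hypothetical container for the parameters of a process (names ours)."""
    N: int        # informons per generation
    R: int        # informons generated per round
    R_bar: int    # rounds per generation cycle
    r: int        # prior informons conditioning one nascent informon
    r_prime: int  # nascent informons each prior informon may condition
    t_pi: float   # temporal scale between generation cycles
    l_pi: float   # spatial scale between informons in a tapestry

    def __post_init__(self):
        assert self.N == self.R * self.R_bar, "text requires N = R * R-bar"

    @property
    def primitive(self) -> bool:
        # A process is primitive when only one informon is generated per round.
        return self.R == 1

p = ProcessParameters(N=8, R=1, R_bar=8, r=4, r_prime=4, t_pi=1.0, l_pi=1.0)
print(p.primitive)  # True
```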
The propagation of information in the creation of informons is held to be strictly causal and local, meaning that any speed which might be attributed to this propagation never exceeds that of light. The actions of processes are always held to be consistent with special relativity.
The simplest process is the zero process, $O = \bar{O}$, which, as one might imagine, does nothing at all.

3.1. Process Algebra

Whitehead argued that interactions between processes play a fundamental role in their actions. The process algebra provides a general framework within which these interactions can be described and analyzed. Broadly speaking, there are three different kinds of interactions: couplings, entanglements, and proper interactions. Coupling occurs when two processes generate informons within the same spatial region and must avoid one another, but no exchange of information or interaction takes place. Entanglement occurs when the generation of informons by two processes becomes correlated, but no exchange of information or interaction takes place. Interaction occurs when two processes alter each other’s properties.
The process algebra was inspired by combinatorial game theory and the different ways in which games can be played [42,43,44]. A play of a game is akin to an instance in the creation of an informon. The process algebra accounts for three fundamental aspects of generation:
  • The relative timing of the generation of informons: Sequential meaning alternating, possibly randomly, from one process to another, or Concurrent meaning informons corresponding to each process are generated together.
  • The flow of information from prior to nascent informons: this may be Exclusive, meaning information propagates only among informons corresponding to the same process, or Free, meaning information may propagate among informons corresponding to different processes.
  • The relationship between processes: Independent, Coupled, Entangled, or Interacting.
The process algebra has 11 operators reflecting the nuances of these different modes of interaction. They are outlined below:
  • Independent: ,
  • Coupled: ∗
  • Succession (concatenation): ·
  • Free sequential (free sum): $\hat{\oplus}$
  • Exclusive sequential (exclusive sum): $\oplus$
  • Free concurrent (free product): $\hat{\otimes}$
  • Exclusive concurrent (exclusive product): $\otimes$
  • Free sequential interactive (free interactive sum): $\hat{\boxplus}$
  • Exclusive sequential interactive (exclusive interactive sum): $\boxplus$
  • Free concurrent interactive (free interactive product): $\hat{\boxtimes}$
  • Exclusive concurrent interactive (exclusive interactive product): $\boxtimes$
The basic rules for applying these operations in combining processes are the following:
  • The independent operator ‘,’ is used when two or more processes act completely independently of one another.
  • ∗ is used when two independent processes begin generating informons within the same spatial region and must ensure that they do not attempt to generate informons at the same location.
  • Concatenation is used to designate the progression of processes generation by generation (thus marking changes in state and progression in time).
  • The free sum is only used for single systems and combining states which possess identical property sets (pure states).
  • The exclusive sum is used for single systems and combining states which possess distinct property sets (mixed states).
  • The free product is used for multiple systems which possess distinct character (scalar, spinorial, vectorial, and so on) such as coupling a boson and a fermion. It is unclear whether two bosons might couple via a free product.
  • The exclusive product is used for multiple systems which possess the same character such as coupling two bosons or two fermions.
The most accurate representation of a nexus of interacting processes is by means of a connection or nexus graph in which vertices are labeled by processes, and the edges between any two vertices are labeled by the connector linking the represented processes. This is really the only way to properly represent the connections between any but a trivial collection of processes.
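A nexus graph can be sketched minimally as a dictionary mapping unordered pairs of processes to connector labels; this representation and all names in it are ours, chosen only to illustrate the idea.

```python
# A nexus graph: vertices are processes, edges are labeled by the connector
# linking them. Connector symbols follow the operator list above.
nexus = {
    ("P1", "P2"): "*",   # coupled: share a region, must avoid collisions
    ("P2", "P3"): "⊕",   # exclusive sequential (exclusive sum)
    ("P1", "P3"): ",",   # fully independent
}

def connector(a: str, b: str) -> str:
    """Look up the connector between two processes (edges are unordered)."""
    return nexus.get((a, b)) or nexus.get((b, a)) or ","

print(connector("P3", "P2"))  # ⊕
```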
The relationships among these various operations are subtle. The following list is illustrative rather than exhaustive [4,11]. In the list, + refers to any sum, but of the same type, while × refers to any product, but again of the same type.
The two connectors, ∗ and ·, do not describe any information flow between processes (although · reflects a causal relationship between antecedent and subsequent processes) and serve more as logical connectives. Nevertheless, we can assume the following as reasonable:
  • $\phi, \phi = \phi * \phi = \phi$;
  • $\phi, \psi = \psi, \phi$;
  • $\phi * \psi = \psi * \phi$;
  • $\phi, (\psi, \rho) = (\phi, \psi), \rho$;
  • $\phi * (\psi * \rho) = (\phi * \psi) * \rho$;
  • $\phi \cdot (\psi \cdot \rho) = (\phi \cdot \psi) \cdot \rho$;
  • $\phi + (\psi * \rho) = (\phi + \psi) * (\phi + \rho)$ and $\phi \times (\psi * \rho) = (\phi \times \psi) * (\phi \times \rho)$, but more precisely these refer to triples in the graphical representation.
The connectives $\oplus$, $\hat{\oplus}$, $\otimes$, and $\hat{\otimes}$ all refer to information flow between the informons generated by processes. The following identities follow easily from their definitions.
  • $\phi + \psi = \psi + \phi$;
  • $\phi \times \psi = \psi \times \phi$;
  • $\phi \cdot \psi \neq \psi \cdot \phi$;
  • $\phi + (\psi + \rho) = (\phi + \psi) + \rho$;
  • $\phi \times (\psi \times \rho) = (\phi \times \psi) \times \rho$;
  • $\phi \times (\psi + \rho) = (\phi \times \psi) + (\phi \times \rho)$;
  • $\phi \oplus \psi \neq \phi \mathbin{\hat{\oplus}} \psi$;
  • $\phi \otimes \psi \neq \phi \mathbin{\hat{\otimes}} \psi$;
  • $\phi + (\psi, \rho) = (\phi + \psi) * (\phi + \rho)$;
  • $\phi \times (\psi, \rho) = (\phi \times \psi) * (\phi \times \rho)$;
  • $\phi \mathbin{\hat{\oplus}} (\psi \otimes \rho) = (\phi \mathbin{\hat{\oplus}} \psi) \otimes (\phi \mathbin{\hat{\oplus}} \rho)$;
  • $\phi \oplus (\psi \mathbin{\hat{\otimes}} \rho) \neq (\phi \oplus \psi) \mathbin{\hat{\otimes}} (\phi \oplus \rho)$;
  • Neither $\phi \mathbin{\hat{\otimes}} (\psi \oplus \rho)$ nor $\phi \otimes (\psi \mathbin{\hat{\oplus}} \rho)$ simplifies or expands. They refer to distinct triples in the graphical representation.
Changes in the states of processes from one generation to the next are represented by concatenation. For a single process, there are several possibilities:
  • $P \to P$ ($P \cdot P$), persistence.
  • $P \to \bar{P}$ ($P \cdot \bar{P}$) or $\bar{P} \to P$ ($\bar{P} \cdot P$), change of status.
  • $P_{\mathbf{p}} \to P_{\mathbf{p}'}$ ($P_{\mathbf{p}} \cdot P_{\mathbf{p}'}$), change of state.
  • $P \to P'$ ($P \cdot P'$), replacement by a different process.
The latter three require P to be participating in an interaction.

3.2. Informons

Informons are the process algebra representation of Whitehead’s actual occasions. Intrinsic components represent the physical pole of actual occasions, while extrinsic components represent the mental pole. These are simplifications of Whitehead’s concepts but suffice here. An informon has the following form:
$[n]\langle m_n, \phi_n; \mathbf{p}_n, \Gamma_n \rangle \{G_n\}$, where
  • $n$ is a heuristic label.
  • $\mathbf{p}_n, \Gamma_n$ are intrinsic components. $\mathbf{p}_n$ is a vector of properties which links the informon to its generating process. It too is heuristic. $\Gamma_n$ is the information associated with the informon, which is propagated and incorporated into nascent informons by the generating process. $\Gamma_n$ is a feature of the informon itself and is represented by an element in some mathematical structure (think of the representation of various fundamental particles: scalar (Higgs), vector (photon), spinor (electron), or tensor (graviton), for example).
  • $m_n, \phi_n$ are the extrinsic components imposed by an external observer or process (real or imagined for heuristic purposes). $m_n$ is a map from each informon to some formal causal space $\mathcal{M}$ (for example, a manifold with a pseudo-metric $\rho$ and compatible partial order). $\phi_n$ is a mapping on $\mathcal{M}$ belonging to a Banach, Hilbert, or other function space, denoted $\mathbb{H}(\mathcal{M})$. It provides an interpolation from, or interpretation of, the information of the informons [65].
  • $G_n$ is a causally ordered set of prior informons called the content of the informon $n$. It consists of all prior informons which propagated information that was (or will be) incorporated into $n$. It is also introduced for heuristic purposes, but it does convey important information regarding the process.
A bare informon is an informon minus the extrinsic components.
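A minimal sketch of this structure in code, with the intrinsic/extrinsic split and the notion of a bare informon, follows; the field names are ours and the types are placeholders.

```python
from dataclasses import dataclass
from typing import Any, Callable, FrozenSet, Optional, Tuple

@dataclass
class Informon:
    """Sketch of [n]<m_n, φ_n; p_n, Γ_n>{G_n}; field names are ours."""
    label: str                          # n: heuristic label
    # intrinsic components (the 'physical pole')
    properties: Tuple[Any, ...]         # p_n: links informon to its process
    content_info: Any                   # Γ_n: information carried and propagated
    prior_content: FrozenSet[str]       # G_n: labels of conditioning priors
    # extrinsic components (the 'mental pole'), imposed by an observer
    embedding: Optional[Tuple[float, ...]] = None   # m_n: point in M
    local_interp: Optional[Callable] = None         # φ_n: function in H(M)

    @property
    def bare(self) -> bool:
        # A bare informon is an informon minus its extrinsic components.
        return self.embedding is None and self.local_interp is None

n1 = Informon("n1", properties=("spin-1/2",), content_info=0.7,
              prior_content=frozenset())
print(n1.bare)  # True
```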

3.3. Causal Tapestry

Histories of a process are represented by sets of informons called causal tapestries. A causal tapestry is a set of informons which can represent a round, a generation, or a history of generations.
Definition 1. 
A causal tapestry $\mathcal{C}$ is a tuple $(\mathcal{I}, F, G, P, p, d, \mathcal{M}, I, \Phi, \rho)$, where
  • $\mathcal{I}$ is a set of informons.
  • $F$ is a directed graph with vertex set $\mathcal{I}$ called the information flow.
  • $G$ is a non-directed graph with vertex set $\mathcal{I}$ called the skeleton.
  • $P$ is a set of processes.
  • $p$ is a map from $F \cup G \to P$ such that if $(x, y) \in F$ and either $\{x, z\}$ or $\{y, z\}$ lies in $G$, then $p((x, y)) = p(\{x, z\}) = p(\{z, y\})$.
  • $d$ is a signed metric on $\mathcal{I}$ such that if $(x, y) \in F$ then $d(x, y) \geq 0$, and if $\{x, y\} \in G$ then $d(x, y) < 0$.
  • $\mathcal{M}$ is a causal structure with causal order $\preceq$, pseudo-metric $\rho$, and value space $I$.
  • $m: \mathcal{I} \to \mathcal{M}$, which is called the $\mathcal{M}$ interpretation.
  • The mapping $\Phi(m) = \sum_{n \in \mathcal{I}} \phi_n(m)$ is called the global $\mathbb{H}(\mathcal{M})$ interpretation. It is thus an interpolation lying within a function space whose base space is $\mathcal{M}$.
  • $(n, m) \in F$ implies $m_m \succeq m_n$.
  • For all $n, m \in \mathcal{I}$, $d(n, m) = \rho(m_n, m_m)$.
A bare, unadorned, or uninterpreted causal tapestry is a causal tapestry composed of bare informons. Bare informons and bare causal tapestries represent reality ‘as it is’ devoid of any observer’s interpretation of same.
F represents the flow of information induced by the processes that generated the causal tapestry, while G associates the informons that comprise a single generation. G conveys topological or metric, but not causal, relations between the informons of a generation.
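The sign convention of Definition 1 can be illustrated with a toy tapestry fragment (all data invented for illustration): causal links in $F$ carry non-negative distances, while same-generation links in $G$ carry negative ones.

```python
# Minimal sketch of a causal tapestry's graph data (Definition 1), with the
# sign convention: d >= 0 across the information-flow graph F (causal links)
# and d < 0 across the skeleton G (same-generation links). Names are ours.
F = {("a", "c"), ("b", "c")}      # directed: priors a, b feed nascent c
G = {frozenset({"a", "b"})}       # undirected: a and b share a generation
d = {("a", "c"): 1.0, ("b", "c"): 1.0, frozenset({"a", "b"}): -0.5}

def consistent(F, G, d) -> bool:
    """Check the signed-metric condition of Definition 1."""
    return all(d[e] >= 0 for e in F) and all(d[e] < 0 for e in G)

print(consistent(F, G, d))  # True
```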
A morphism between (uninterpreted) causal tapestries $(\mathcal{I}_1, F_1, G_1, p_1, P_1, d_1)$ and $(\mathcal{I}_2, F_2, G_2, p_2, P_2, d_2)$ is a tuple $(\Theta, \Pi, \Lambda)$ of maps
  • $\Theta: \mathcal{I}_1 \to \mathcal{I}_2$,
  • $\Pi: P_1 \to P_2$,
  • $\Lambda: \mathbb{R} \to \mathbb{R}$,
such that
  • $(x, y) \in F_1 \Rightarrow (\Theta(x), \Theta(y)) \in F_2$.
  • $\{x, y\} \in G_1 \Rightarrow \{\Theta(x), \Theta(y)\} \in G_2$.
  • There is an induced map $\hat{\Theta}$ on $F_1 \cup G_1$ given by $\hat{\Theta}((x, y)) = (\Theta(x), \Theta(y))$ and $\hat{\Theta}(\{x, y\}) = \{\Theta(x), \Theta(y)\}$.
  • For $x \in F_1 \cup G_1$, $\Pi(p_1(x)) = p_2(\hat{\Theta}(x))$.
  • For $x \in F_1 \cup G_1$, $\Lambda(d_1(x)) = d_2(\hat{\Theta}(x))$.
If the tapestries share the same interpretation manifold, the morphism has a natural extension.
The usual notions of embedding, quotient, and isomorphism apply.
Given two causal tapestries $\mathcal{C}$ and $\mathcal{C}_1$, we may form a sum of the two, $\mathcal{C} \oplus \mathcal{C}_1$, whose set of informons is the union of the two sets, whose order is the order-theoretic sum, and whose metric $\rho^{\oplus}$ equals $\rho$ and $\rho_1$ when restricted to the appropriate subsets of informons. However, in general, there is no unique way to extend the metric to the entire sum. If we restrict ourselves to topological causal tapestries, then the sum is well defined. This again suggests that metricity is not a fundamental intrinsic characteristic of causal tapestries. We may also define a product of two causal tapestries $\mathcal{C} \otimes \mathcal{C}_1$, which is the Cartesian product of the sets of informons, with order given as the order product, and metric given as a vector quantity, each component applying to only one causal tapestry. That is, $\rho^{\otimes}((n, n_1), (m, m_1)) = (\rho(n, m), \rho_1(n_1, m_1))$.
In the setting of causal tapestries with interpretations, two additional parameters become important as they determine the effectiveness of the interpolation procedure involved in creating the global $\mathbb{H}(\mathcal{M})$ interpretation. In particular, they determine whether or not the set of informons can serve as an interpolation set for a class of global interpretation functions [66,67].
Definition 2. 
Suppose one has a causal tapestry $\mathcal{C}$ and consider a single generation $\mathcal{I}$ of informons.
  • Local density $\delta_L$: The embedding $\{m_n \mid n \in \mathcal{C}\}$ of informons of $\mathcal{C}$ in the structure $\mathcal{M}$, in addition to being discrete, will occupy a collection of sites which are distributed in $\mathcal{M}$. Surround these embedding points by some hypervolume. This may be a hypercube or hypersphere or some other standard form. Keep the form fixed and find the form with the smallest volume $V$ containing these embedding points. Define the local density as $\delta_L = N/V$. This may be performed for each generation within a causal tapestry. One can average over these local densities, $\hat{\delta}_L = \sum_n \delta_L^n / \hat{N}$, where $\delta_L^n$ is the local density of generation $n$ and $\hat{N}$ is the total number of generations within the tapestry.
  • Global density $\delta_G$: If we wish to form the global $\mathbb{H}(\mathcal{M})$ interpretation for the causal tapestry as a whole, then we need to determine the density over the entirety of the tapestry. Again, one surrounds the embedding points of the entirety of the tapestry informons with some hypervolume of minimal volume $V_T$ containing all of these points and then calculates the global density $\delta_G = N/V_T$, where $N$ is the total number of informons. This need not equal the average density since local hypervolumes might overlap, thus overestimating the global density. Note that this also implies that the total number of informons scales with time, not volume.
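Both densities reduce to counting informons inside a minimal bounding form. The sketch below computes the local density of one generation using an axis-aligned bounding hypercube as the fixed standard form; the implementation choices (NumPy, Euclidean embedding) are ours.

```python
import numpy as np

def local_density(points: np.ndarray) -> float:
    """Local density δ_L = N / V, with V the volume of the smallest
    axis-aligned bounding hypercube containing the embedded informons.
    (The text allows any fixed standard form; a hypercube is used here.)"""
    n, dim = points.shape
    side = float((points.max(axis=0) - points.min(axis=0)).max())
    return n / side**dim if side > 0 else float("inf")

gen = np.random.default_rng(0).uniform(0, 1, size=(100, 3))  # one generation
print(local_density(gen))
```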

3.4. Separability

An important question is to determine when two processes can be considered to interact. Leibniz’s idea of the identity of indiscernibles is taken as a basic principle, so that in general, two processes cannot contribute information to the same informon. Thus, if two processes are generating informons at a sufficient distance from one another, no such possibility can arise. Such processes are considered to be separable. If, however, two processes are generating informons within the same region so that they might abut one another or interleave amongst one another, then they are inseparable. Separable processes might interact, but inseparable processes always interact, even if only by coupling. To make this idea more formal, we need some definitions. There are two central considerations: how far do informons disperse in space–time, and how far can information propagated by a process travel in the course of a round. This is easiest to deal with in the context of an interpretation. An interpretation-free version can be defined but would take us beyond the scope of this basic review.
Assume that we have an interpretation space $\mathcal{M}$ with metric $\rho$.
Definition 3. 
Suppose one has a causal tapestry $\mathcal{C}$ and consider a single generation $\mathcal{I}$ of informons.
  • Dispersal: Let $P$ denote a process. Assume one is given an initial causal tapestry $\mathcal{C}$ which embeds into $\mathcal{M}$. Consider first of all the set of all potential embedding points for informons created by propagating information from informons of $\mathcal{C}$. These will occupy a region of $\mathcal{M}$ surrounding the embedding points of the $\mathcal{C}$ informons. Since we are considering all potentialities, rather than merely realized informons, this region can be expected to be expansive rather than discrete. This subspace of $\mathcal{M}$ is called the domain of dispersal from the initial causal tapestry, denoted $D_{\mathcal{C}}(P)$. In most cases, we shall assume that the spread of informons, topologically and geometrically, does not depend upon the specifics of the initial causal tapestry, essentially assuming spatial invariance. In that case, we can reasonably assume that if we determine the domain of dispersal $D_{[n]}$ from a single initial informon $[n]$, then we can find that for the entire initial tapestry by suitable transformations of $D_{[n]}$. That is, there will exist some transformation $g$ on $\mathcal{M}$ so that for any point $m \in \mathcal{M}$, we find a related domain of dispersal $g_m(D_{[n]})$. Then, $D_{\mathcal{C}} = \bigcup_{n \in \mathcal{C}} g_{m_n}(D_{[n]})$. The range of dispersal $R_{D_{\mathcal{C}}}(P)$ of $P$ is defined as $R_D = \sup_{[n] \in \mathcal{C}} \{\sup_{m \in D_{[n]}} \rho(m_n, m)\}$. The range of dispersal is the greatest distance from the initial causal tapestry at which an informon may be created.
  • Influence: Assume the same conditions as for dispersal. This time, we consider all potential locations to which the process may propagate information from an initial causal tapestry. This new set for the initial causal tapestry is called the domain of influence, denoted $I_{\mathcal{C}}(P)$. This set may be constructed analogously to the domain of dispersal above, together with an analogous range of influence, $R_{I_{\mathcal{C}}}(P)$.
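Assuming a Euclidean metric on $\mathcal{M}$ and discretely sampled domains of dispersal (both simplifications of ours), the range of dispersal is a straightforward double supremum, as sketched here.

```python
import numpy as np

def range_of_dispersal(embeddings: np.ndarray, dispersal_sets: list) -> float:
    """R_D = sup over initial informons [n] of sup over m in D_[n] of ρ(m_n, m),
    here with ρ the Euclidean metric and each D_[n] sampled discretely."""
    return max(
        float(np.linalg.norm(D - m_n, axis=1).max())
        for m_n, D in zip(embeddings, dispersal_sets)
    )

rng = np.random.default_rng(1)
m = rng.normal(size=(5, 3))                            # embedded initial informons
D = [m_i + rng.normal(scale=0.1, size=(50, 3)) for m_i in m]  # sampled D_[n]
print(range_of_dispersal(m, D))
```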
Definition 4. 
Two processes $P_1$ and $P_2$ are said to be absolutely independent if $D_{\mathcal{C}}(P_1) \cap D_{\mathcal{C}}(P_2) = \emptyset$. Under such a condition, it is impossible for the two processes to enter into any interaction. We denote them as $P_1, P_2$.
Definition 5. 
Two processes $P_1$ and $P_2$ are said to be separable or informationally independent if $I_{\mathcal{C}}(P_1) \cap I_{\mathcal{C}}(P_2) = \emptyset$. Under such a condition, it is impossible for the two processes to enter into any information exchange, although they still might interact.
Theorem 1. 
Let the two processes $P_1$ and $P_2$ be separable and not subprocesses of a single state. Suppose that in generation $n$, $D_{\mathcal{C}}(P_1) \cap D_{\mathcal{C}}(P_2) \neq \emptyset$. Then, during that generation, the two processes must become non-separable. We denote this by $P_1 * P_2$. Such processes are called weakly entangled.
Proof. 
If at generation $n$, $D_{\mathcal{C}}(P_1) \cap D_{\mathcal{C}}(P_2) \neq \emptyset$, then it is possible for either process to generate an informon within the range of dispersal of the other. If that is the case, and the processes do not represent subprocesses of a single state of a process, then it is not permitted for them to generate actual occasions and attribute them to the same space–time location. They must remain distinct. Since the two processes are separable, a priori there need be no relationship between the timings of their generations, only the requirement that they cannot generate to the same location. Thus, they may generate synchronously or sequentially, some of the time or all of the time. This is represented in the process algebra by $P_1 * P_2$. □
Note that $I_{\mathcal{C}}(P_1) \subseteq D_{\mathcal{C}}(P_1)$. The flow of information is bound by special relativity, but this is not a constraint on the location of the creation of informons, since they are not propagated; they simply become.
Now, if $I_{\mathcal{C}}(P_1) \cap I_{\mathcal{C}}(P_2) \neq \emptyset$, then it becomes possible for one process to send information to actual occasions of the other process. While information need not be transferred, there is a sense in which the two processes must ‘sense’ one another. This means that there is at least a primitive awareness of ‘two-ness’. This means that these processes may become coupled, either $P_1 \oplus P_2$ or $P_1 \otimes P_2$. Of course, it is possible that they remain $P_1 * P_2$, but these new possibilities emerge.
This proves the following theorem.
Theorem 2. 
Let the two processes $P_1$ and $P_2$ be coupled, that is, $P_1 * P_2$. Suppose that in generation $n$, $I_{\mathcal{C}}(P_1) \cap I_{\mathcal{C}}(P_2) \neq \emptyset$. Then, they become strongly entangled, denoted as $P_1 \, O \, P_2$, where $O$ could represent $\oplus, \hat{\oplus}, \otimes, \hat{\otimes}$ depending upon the circumstances.
Two processes start out spatially separated and independent, $P_1, P_2$. As they approach, they eventually encroach upon each other’s region of dispersal, becoming $P_1 * P_2$. If they continue to approach one another, their timings become organized either sequentially ($P_1 \oplus P_2$ or $P_1 \mathbin{\hat{\oplus}} P_2$) (two states) or concurrently ($P_1 \otimes P_2$ or $P_1 \mathbin{\hat{\otimes}} P_2$) (two separate processes). Finally, they may interact ($P_1 \boxplus P_2$ or $P_1 \mathbin{\hat{\boxplus}} P_2$ or $P_1 \boxtimes P_2$ or $P_1 \mathbin{\hat{\boxtimes}} P_2$).
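This progression can be summarized as a small decision procedure; the sketch below is a coarse paraphrase of Theorems 1 and 2 and the surrounding discussion, not an implementation of the process algebra itself.

```python
def relation(disp_overlap: bool, infl_overlap: bool, interacting: bool) -> str:
    """Classify the connector between two processes from the overlaps of their
    domains of dispersal and influence; a coarse sketch (labels ours)."""
    if not disp_overlap:
        return "P1 , P2"              # absolutely independent
    if not infl_overlap:
        return "P1 * P2"              # weakly entangled: must avoid collisions
    if not interacting:
        return "P1 (⊕|⊕̂|⊗|⊗̂) P2"      # strongly entangled
    return "P1 (⊞|⊞̂|⊠|⊠̂) P2"          # proper interaction

# Two processes approaching one another:
for step in [(False, False), (True, False), (True, True)]:
    print(relation(*step, interacting=False))
```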

3.5. Rules and Strategies

Processes do something—they generate actual occasions. Representing this in the process algebra requires representing this act of generation. To do so, we must describe exactly how the process goes about carrying out such an action. Continuing the analogy with combinatorial games, we must specify the strategy that the process uses to create its informons. Strategies are purely heuristic tools to enable modeling the actions of processes. From an observational standpoint, the main entity of interest is the global interpretation of the causal tapestry. The usefulness of the concept of strategy arises from the concept of epistemological equivalence. If our principal concern is the global interpretation, then any strategies that give rise to the same global interpretation can be utilized. The notion of epistemological equivalence is similar to that of gauge invariance. In studying process, we study classes of epistemologically equivalent strategies.
Strategies, while heuristic in terms of modeling, are nevertheless fundamental in terms of the actions undertaken by any given process. Without a strategy, there is no way to determine what it is that a process does in creating informons nor in determining the flow of information among informons. Two general statements can be made regarding strategies. First of all, the strategy associated with a process remains constant unless there is an interaction with one or more other processes. This is a straightforward generalization of Newton’s First Law of Motion. Related to it is the suggestion that if two or more processes activate from the same history, then their local interpretations extend to the same global interpretation. Another way of expressing this is that each local frame of reference extends to the same global frame. This is a generalization of the concept of an inertial frame. If two global frames give rise to two distinct frames of reference, then the associated processes must have different histories. This is essentially saying that if two systems are moving relative to one another, then they must possess different histories. The emphasis here shifts from motion per se to history as the more fundamental concept. Two frames are said to be inertial relative to one another if there is an inertial transformation mapping one to the other. Inertial here means Galilean or Lorentzian. If no such transformation exists, then they are said to be non-inertial relative to one another.
Given a fixed prior causal tapestry $\mathcal{C}$, a process $P$ may generate different causal tapestries having different local interpretations with each activation of the process. This set of causal tapestries generated by the process $P$ with initial causal tapestry $\mathcal{C}$ is denoted $\mathcal{H}_P(\mathcal{C})$. To this set, there corresponds a set of global interpretation functions, $\Sigma(P) = \{\Phi_{\mathcal{C}'} \mid \mathcal{C}' \in \mathcal{H}_P(\mathcal{C})\}$.
It is possible that these local interpretations give rise to different global interpretations. That results in a degree of inconsistency or incoherence in the generation of informons by a given process. Thus, we will be interested only in coherent processes, namely the following:
Definition 6. 
A process $P$ is said to be coherent if, given any initial causal tapestry $\mathcal{C}$ and any two causal tapestries $\mathcal{C}_1, \mathcal{C}_2 \in \mathcal{H}_P(\mathcal{C})$, we have $\Phi_{\mathcal{C}_1}(z) = \Phi_{\mathcal{C}_2}(z)$ for all $z$.
Definition 7. 
(Deterministic process) A process P is said to be deterministic if, given any initial causal tapestry $\mathcal{C}$, the set $\mathcal{H}_P(\mathcal{C}) = \{\mathcal{C}'\}$ consists of a single causal tapestry. Otherwise, the process is said to be non-deterministic.
Definition 8. 
(Ontic equivalence [ O E ]): Two processes P 1 and P 2 are said to be ontic-equivalent if they generate the same collections of causal tapestries, that is, for any initial causal tapestry C , H P 1 ( C ) = H P 2 ( C ) .
Definition 9. 
(Strong epistemic equivalence [ S E E ]): Two processes P 1 and P 2 are said to be strong epistemic equivalent if for every causal tapestry C , the corresponding process covering maps (defined in a later section) are identical, i.e., P C ( P 1 ) = P C ( P 2 ) . In other words, the two games generate the same set of wave functions on the subspace of M .
Definition 10. 
(Weak epistemic equivalence [$WEE$]) (also Ψ-epistemic equivalent): Two processes $P_1$ and $P_2$ are said to be weak epistemic equivalent if, for every causal tapestry $\mathcal{C}$, in the asymptotic limit $N, r \to \infty$ (and sometimes $t_P, l_P \to 0$), we have $\mathcal{P}_{\mathcal{C}}(P_1) = \mathcal{P}_{\mathcal{C}}(P_2)$.
Clearly, if two processes $P_1$ and $P_2$ are strong epistemic equivalent, then they are weak epistemic equivalent. It is also easy to see that if two processes are ontic equivalent, then they are also strong epistemic equivalent.
Thus, ontic equivalence implies strong epistemic equivalence, which in turn implies weak epistemic equivalence, or symbolically,
$$OE \Rightarrow SEE \Rightarrow WEE$$
Two strategies are said to be epistemologically equivalent if the two processes implemented utilizing these strategies are themselves epistemologically equivalent. We can model any process by utilizing an epistemologically equivalent process.
As an example, a basic strategy which has been shown to give rise to non-relativistic quantum mechanics is that of the Bounded Radiative Uniform Sinc Path Integral Strategy ( P I ) [4,5,10]. The path integral strategy is specified by the parameters R , r , N , d , t P , l P and by
  • Δ ^ (distance bound): arbitrary, determines maximum causal distance of information transmission.
  • ρ ^ (approximation measure): arbitrary, set by the observer according to mathematical or experimental considerations.
  • δ ^ (approximation accuracy): arbitrary but bounded by experimental measurements.
  • ω ^ (band limit frequency): bounded by upper limits of energy and momentum of the quantum system.
  • L (Lagrangian): determined by the particulars of the quantum system.
  • p (set of properties): here energy, momentum.
The informons of the causal tapestry will be embedded into a sub-lattice of a space-like hyper-surface (or time slice) $\{n t_P\} \times \mathbb{R}^3$ in M, where $t_P$ is the Planck time and M is Minkowski four-space. The embedding lattice in M thus takes the general form $(n t_P, i l_P, j l_P, k l_P)$, where $l_P$ is the Planck length, for integers $n, i, j, k$. The embedding point in M of an informon n will be denoted $m_n$.
The path integral strategy for a single short round proceeds as follows:
  • Player I moves first. Player I non-deterministically chooses any informon [ n ] from the current tapestry C m which has not previously been played in this round.
  • If there is an informon $[n']$ currently in play in the new tapestry $\mathcal{C}_{m+1}$, then Player II tests whether $d(n, n') < \hat{\Delta}$. If the bound is exceeded, then play reverts to step 1 and Player I begins again; otherwise, it proceeds. If there is no current informon, then Player II chooses a label $n'$ not previously used and selects a lattice site $\alpha = ((m+1) t_P, i l_P, j l_P, k l_P)$, not previously used, such that $d(m_n, \alpha) < \hat{\Delta}$.
  • Player I next updates the content set. If the new informon already possesses a content set $G_{n'}$, then Player I replaces $G_{n'}$ with $G_{n'} \cup \hat{G}_n \cup \{[n]\}$ (where $\hat{G}_n$ is an order-theoretic up-set of $G_n$) and checks to ensure that all necessary order conditions are satisfied. If the new informon is nascent, then Player I simply sets $G_{n'} = \hat{G}_n \cup \{[n]\}$. The content set determines what prior information is permitted in constructing tokens; it includes only informons from the past causal cone of the informon. In the case of NRQM, it turns out that only informons from the current tapestry are needed, since the relevant information is already incorporated into their H(M) interpretations. Thus, it suffices if $G_{n'}$ or $\emptyset$ is replaced with $G_{n'} \cup \{[n]\}$ or $\{[n]\}$, respectively. Note that in this case, the causal consistency criteria are trivially satisfied.
  • Player II next determines the causal manifold embedding. If the nascent informon $n'$ already possesses a causal manifold embedding, then Player II does nothing. Otherwise, Player II sets $m_{n'} = \alpha$.
  • Player I next constructs a token representing the information passing from n to $n'$, to be used to form the local Hilbert space contribution at $n'$. Denote this token as $T_{nn'}$. Let $\tilde{S}[n, n'] = \left( \frac{d(n, n')^2}{2m} + V(n) \right) t_P$. Let $\mathcal{T}_n$ denote the set of tokens on n and let $\Gamma_n$ denote the sum of the tokens on n, that is, $\Gamma_n = \sum \{ T \mid T \in \mathcal{T}_n \}$. The relationship between these two is $\Gamma_n = (1/A^3) \Phi_n(m_n)$, where A is the path integral normalization factor described by Feynman and Hibbs [68], appropriate to the current Lagrangian and the initial and boundary conditions. The reason for this will become apparent later. Define the propagator $P_{nn'} = (l_P^3 / A^3) e^{i \tilde{S}[m_n, m_{n'}] / \hbar}$. Then, Player I places the token $T_{nn'} = P_{nn'} \Gamma_n$ on the site $m_{n'}$. If there already is a set $\mathcal{T}_{n'}$ of tokens on informon $n'$, then it is replaced by $\mathcal{T}_{n'} \cup \{ T_{nn'} \}$.
  • Finally, Player II must determine the H(M) interpretation. If $z = (t, x, y, z)$ and $m_{n'} = (n t_P, m l_P, r l_P, s l_P)$, then define
$$T_{m_{n'}} \operatorname{sinc}_{t_P, l_P}(z) = \operatorname{sinc}\left[\frac{\pi(t - n t_P)}{t_P}\right] \operatorname{sinc}\left[\frac{\pi(x - m l_P)}{l_P}\right] \operatorname{sinc}\left[\frac{\pi(y - r l_P)}{l_P}\right] \operatorname{sinc}\left[\frac{\pi(z - s l_P)}{l_P}\right]$$
    Player II constructs the H(M) interpretation by coupling the tokens on the site to a suitable interpolation function, which in the current strategy is the sinc wavelet above, scaled as $A^3 T_{m_{n'}} \operatorname{sinc}_{t_P, l_P}(z)$. If the new informon has just been formed, then the H(M) interpretation is given as $\Phi_{n'}(z) = T_{nn'} A^3 T_{m_{n'}} \operatorname{sinc}_{t_P, l_P}(z)$. If the informon already possesses an H(M) interpretation $\Phi_{n'}(z)$, then it is replaced by the new H(M) interpretation $\Phi_{n'}(z) + T_{nn'} A^3 T_{m_{n'}} \operatorname{sinc}_{t_P, l_P}(z)$.
    In other words, add the new token to the collection, sum the token values, and couple the sum to the interpolation wavelet.
  • If no further tokens can be added (either no other contributing sites exist or an external limit has been reached), then the round ends and a new round begins.
Play continues until the allotted number of game steps has been reached. At the end of play, a new causal tapestry $\mathcal{C}_{m+1}$ has been created, and the old causal tapestry $\mathcal{C}_m$ is eliminated, formally becoming part of $\mathcal{C}_{m+1}^p$, the collection of prior tapestries. Any relevant information from $\mathcal{C}_m$ now resides within the content sets of the informons of $\mathcal{C}_{m+1}$. Let $n'$ denote an informon of $\mathcal{C}_{m+1}$, and let $L_{n'}$ denote the set of all informons from $\mathcal{C}_m$ that contribute tokens to the formation of $n'$; equally, $L_{n'}$ is the set of all informons from $\mathcal{C}_m$ that form vertices in $G_{n'}$. The local H(M) interpretation of $n'$ may now be written as $\Phi_{n'}(z) = \sum_{n \in L_{n'}} T_{nn'} A^3 T_{m_{n'}} \operatorname{sinc}_{t_P, l_P}(z)$.
The global H(M) interpretation on M is formed by summing the local contributions over all of $\mathcal{C}_{m+1}$, that is, $\Phi^{m+1}(z) = \sum_{n' \in \mathcal{C}_{m+1}} \Phi_{n'}(z)$. One may restrict this to the $t = (m+1) t_P$ hyper-surface, obtaining, as will be shown below, a highly accurate approximation to the standard quantum mechanical wave function on the hyper-surface. Note that fixing $t = (m+1) t_P$ causes the time-based sinc term to take the value 1, so one indeed obtains a function on the hyper-surface. This approximation becomes less accurate when extended to the entirety of M. Achieving greater accuracy requires summing either over the content sets of $\mathcal{C}_{m+1}$, i.e., $\Phi^{c, m+1}(z) = \sum_{n' \in \mathcal{C}_{m+1}} \sum_{n \in G_{n'}} \Phi_n(z)$, or over all of $\mathcal{C}_{m+1} \cup \mathcal{C}^p$, $\Phi^{p, m+1}(z) = \sum_{n \in \mathcal{C}_{m+1} \cup \mathcal{C}^p} \Phi_n(z)$.
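To make the preceding round concrete, the following is a minimal numerical sketch, restricted to one spatial dimension for readability. The variable names, unit choices, and toy prior tapestry are illustrative assumptions rather than the author's code; the discrete action follows the formula for $\tilde{S}$ quoted above, and in one dimension the normalization exponents of 3 reduce to 1.

```python
import numpy as np

# Minimal 1D sketch of one short round of the sinc path integral strategy.
# All names and parameter values are illustrative assumptions.
hbar, mass = 1.0, 1.0
t_P, l_P = 0.1, 0.1                            # stand-ins for the Planck-scale steps
A = (2j * np.pi * hbar * t_P / mass) ** 0.5    # Feynman-Hibbs normalization (1D)

def action(x_prior, x_new, V=lambda x: 0.0):
    # Discrete action S~[n, n'] = (d(n, n')^2 / (2 m) + V(n)) t_P, as quoted above
    d = x_new - x_prior
    return (d**2 / (2 * mass) + V(x_prior)) * t_P

def propagator(x_prior, x_new):
    # Token transfer factor P_{nn'} = (l_P / A) exp(i S~ / hbar), 1D analogue
    return (l_P / A) * np.exp(1j * action(x_prior, x_new) / hbar)

def sinc_wavelet(x_center, x):
    # Interpolation wavelet sinc[pi (x - x_center) / l_P]; np.sinc includes pi
    return np.sinc((x - x_center) / l_P)

# Prior tapestry: lattice sites carrying token sums Gamma_n
prior = {-1 * l_P: 0.3, 0.0: 1.0, 1 * l_P: 0.3}

# One nascent informon at lattice site alpha receives a token from each prior
alpha = 2 * l_P
tokens = [propagator(x, alpha) * gamma for x, gamma in prior.items()]

# Local interpretation phi_{n'}(x) = (sum of tokens) * A * (translated sinc)
phi = lambda x: sum(tokens) * A * sinc_wavelet(alpha, x)
print(abs(phi(alpha)))
```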
Three broad categories of strategy can be considered:
  • Prior driven: In a prior-driven process, the focus during each round is on the distribution of information by a prior informon at its dissolution. During such a round, a prior informon is selected and suitable target nascent informons are then selected to receive information. At the end of the round, the chosen prior is eliminated.
  • Posterior driven: In a posterior-driven process, the focus during each round is on the nascent informon. At the end of the round, the nascent informon is complete and receives no further information.
  • Incremental: In an incremental process, during each round, a packet of information is transmitted from one prior informon and incorporated into a nascent informon.

3.6. Process Covering Graph

The determination of probabilities and statistics for various events requires an accounting of all possible histories that can be generated by a given process. This is achieved by means of various covering graphs. Let us first consider a single primitive process. Three different levels of analysis may be considered: micro, the level of short rounds; meso, the level of rounds; and macro, the level of generations. We expand the label of each informon to indicate in which short round, round, or generation the informon arises. An informon with this descriptor is denoted $[n; k] <m_n, \phi_{n;k}; p_n, \Gamma_{n;k}> \{G_{n;k}\}$.
The micro-process covering graph (denoted $mP(P, \mathcal{C})$), where P is the process and $\mathcal{C}$ is the prior causal tapestry, consists of chains of pairs
$$([n; 0] <m_n, \phi_{n;0}; p_n, \Gamma_{n;0}> \{G_{n;0}\},\ [m; ik] <m_m, \phi_{m;ik}; p_m, \Gamma_{m;ik}> \{G_{m;ik}\})$$
where $[n; 0]$ is an informon of the prior tapestry and $[m; ik]$ is a partial informon to which $[n; 0]$ contributes information in the k-th short round of the i-th round. Adjacency of pairs means that they were created in successive short rounds. One such sequence is included for every possible generation.
The meso-process covering graph (denoted $MP(P, \mathcal{C})$) consists of a chain of informons as they appear in each round during the construction of a generation. An arrow $[n; i] <m_n, \phi_{n;i}; p_n, \Gamma_{n;i}> \{G_{n;i}\} \to [m; i+1] <m_m, \phi_{m;i+1}; p_m, \Gamma_{m;i+1}> \{G_{m;i+1}\}$ means that these two informons were generated in successive rounds.
An active primitive process P acting on a tapestry $\mathcal{C}$ generates a sequence (corresponding to the succession of rounds) of partial causal tapestries $\mathcal{C}_1, \mathcal{C}_2, \mathcal{C}_3, \ldots$, each formed from the previous tapestry by the inclusion of one informon, $\mathcal{C}_i = \mathcal{C}_{i-1} \cup \{n_i\}$. The sequence of partial tapestries forms an ordered set with edges labeled by informons $n_1, n_2, \ldots, n_k, \ldots$ (where $\mathcal{C}_{i-1} \xrightarrow{n_i} \mathcal{C}_i$), having a maximal element, the final causal tapestry. Letting P act upon $\mathcal{C}$ again will generate a different ordered set of tapestries with edges $n'_1, n'_2, \ldots, n'_k, \ldots$. Two distinct global interpretations will be generated: $\Phi^1(z) = \sum_{n_i} \phi_{n_i}(z)$ and $\Phi^2(z) = \sum_{n'_i} \phi_{n'_i}(z)$.
The union of all possible tapestry sequences forms the process sequence tree of P with initial tapestry C , which is denoted Σ ( P , C ) . This is the analogue of the game tree in combinatorial game theory. Associate to P a set H P of elements of H ( M ) consisting of all global interpretations constructed from every maximal tapestry in the sequence tree.
For a fixed initial causal tapestry $\mathcal{C}$ and some primitive process $P \in \mathbb{P}$, where $\mathbb{P}$ is the set of primitive processes, define the Process-Covering Map (PCM) $\mathcal{P}_{\mathcal{C}}: \mathbb{P} \to H(M)$ by $\mathcal{P}_{\mathcal{C}}(P) = H_P$. Technically, $\mathcal{P}_{\mathcal{C}}$ is a set-valued map, so one should write $\mathcal{P}_{\mathcal{C}}: \mathbb{P} \to \mathcal{P}(H(M))$, where $\mathcal{P}(H(M))$ is the power set of H(M), but H(M) is simpler. By allowing $\mathcal{C}$ to vary, one obtains an operator interpretation for the PCM, which is discussed below.
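As an informal illustration of the set-valued character of the PCM, the following sketch enumerates the maximal paths of a toy nondeterministic process and collects one "global interpretation" per path. The tuple representation and all names are illustrative assumptions.

```python
from itertools import permutations

# Toy nondeterministic process: starting from a prior tapestry, it adds
# the informons 'a' and 'b' in either order. Each ordering is one maximal
# path through the sequence tree; the resulting tapestry (a tuple of
# labels, standing in for a global interpretation) is one element of the
# covering map's value.

def maximal_paths(initial, new_informons):
    for order in permutations(new_informons):
        yield tuple(initial) + order

def pcm(initial, new_informons):
    # The PCM value: the set of all global interpretations the process
    # can generate from this initial tapestry
    return set(maximal_paths(initial, new_informons))

print(pcm(("n0",), ("a", "b")))
# {('n0', 'a', 'b'), ('n0', 'b', 'a')} -- a set-valued image
```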
The PCM is now extended to include sums and products.
Consider an independent exclusive sum $\oplus_i w_i P_i$ of primitive processes $P_i$ acting on the causal tapestry $\mathcal{C}$. Since this is an exclusive sum, the informons of the nascent causal tapestry $\mathcal{C}_1$ will lie in distinct subsets $\mathcal{C}_1^i$, each corresponding to a specific subprocess $P_i$. Let $j_n = i$ if $n \in \mathcal{C}_1^i$. Since this is an independent sum, the actions of each subprocess can be considered independently of the others. Then, the global interpretation is $\Phi(z) = \sum_{n \in \mathcal{C}_1} w_{j_n} \phi_n(z) = \sum_i w_i \left\{ \sum_{n \in \mathcal{C}_1^i} \phi_n(z) \right\} = \sum_i w_i \Phi^i(z)$, where $\Phi^i(z)$ is the global interpretation corresponding to the process $P_i$. Applying this to each path of the sequence tree, it follows easily that
$$\mathcal{P}(\oplus_i w_i P_i) = \sum_i w_i \mathcal{P}(P_i)$$
where, for two sets of functions A, B, the sum $A + B = \{ f + g \mid f \in A, g \in B \}$ and $wA = \{ wf \mid f \in A \}$.
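The following sketch spells out these setwise operations on finite samples; representing wave functions as tuples sampled on a grid is an illustrative assumption.

```python
# Setwise operations behind the covering-map identities: A + B adds one
# function from each set pointwise, and wA rescales each member. Wave
# functions are represented as tuples sampled on a 2-point grid.

def set_sum(A, B):
    return {tuple(a + b for a, b in zip(f, g)) for f in A for g in B}

def set_scale(w, A):
    return {tuple(w * a for a in f) for f in A}

A = {(1.0, 0.0), (0.0, 1.0)}
B = {(1.0, 1.0)}
print(set_sum(set_scale(0.5, A), set_scale(0.5, B)))
# {(1.0, 0.5), (0.5, 1.0)}
```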
It is not difficult to show that this also holds true for free sums, but in this case, these subsets may overlap and informons must be artificially divided to reflect the contributions from each subprocess. Nevertheless,
$$\mathcal{P}(\hat{\oplus}_i w_i P_i) = \sum_i w_i \mathcal{P}(P_i).$$
Note that although $\oplus_i w_i P_i$ and $\hat{\oplus}_i w_i P_i$ are Ψ-epistemic equivalent, they possess very different interpretations on the causal manifold M and may possess quite distinct sequence trees. The local interpretations will also differ even though the asymptotic global interpretations are the same.
There is, in general, no such formula for interactive sums; their covering maps must be determined directly from their sequence trees.
The case of products is more complicated. First, consider an independent exclusive product $\otimes_i P_i$ of primitive processes $P_i$. During each round, a set of informons $\{n_i\}$ will be generated together with the set of their local interpretations $\{\phi_{n_i}(z)\}$. The most natural phenomenological representation is to consider the co-product of the spaces H(M) corresponding to each subprocess, which maintains the point of view of individual entities. The usual approach would be to consider a product of the spaces, but this approach may fail in the process algebra, and so a more subtle version of the PCM must be employed.
Each edge in the sequence tree may be given by a tuple $(n_1, n_2, \ldots, n_j)$ of informons. There will be corresponding tuples of causal manifold points $(m_{n_1}, m_{n_2}, \ldots, m_{n_j})$ and of local Hilbert space contributions $(\phi_{n_1}(z), \phi_{n_2}(z), \ldots, \phi_{n_j}(z))$. The vertices of the sequence tree may be recursively defined, in the co-product case, as $\mathcal{C}_n \to \mathcal{C}_n \cup_i \{n_k^i\}$ and, in the product case, as $\mathcal{C}_{n_1} \times \cdots \times \mathcal{C}_{n_n} \to (\mathcal{C}_{n_1} \cup \{n_k^1\}) \times \cdots \times (\mathcal{C}_{n_n} \cup \{n_k^j\})$. Since this product is independent and exclusive, it follows easily that
$$\mathcal{P}(\otimes_i P_i) = \mathcal{P}(P_1) \oplus \mathcal{P}(P_2) \oplus \cdots \oplus \mathcal{P}(P_j)$$
in the co-product case (where ⊕ means a formal sum of functions, not a pointwise sum) and
$$\mathcal{P}(\otimes_i P_i) = \{ (\Phi_{n_1}(z), \Phi_{n_2}(z), \ldots, \Phi_{n_j}(z)) \mid \text{over all instances of } P \} = \mathcal{P}(P_1) \times \mathcal{P}(P_2) \times \cdots \times \mathcal{P}(P_j)$$
in the product case, where × means a set product, not a pointwise product. If the subprocesses represent different states of a single process, then the co-product can be replaced by sums.
It can also be shown that, as for sums, the free product satisfies
$$\mathcal{P}(\hat{\otimes}_i P_i) = \mathcal{P}(P_1) \times \mathcal{P}(P_2) \times \cdots \times \mathcal{P}(P_j)$$
so that $\otimes_i P_i$ and $\hat{\otimes}_i P_i$ are also Ψ-epistemic equivalent.
Again, there is no general formulation in the case of interactions.

3.7. Process and Operators

Recall that a causal tapestry $\mathcal{C}$ consists of informons, each of which has the form $[n] <m_n, \phi_n(z); p_n, \Gamma_n> \{G_n\}$. The function $\phi_n(z)$ provides a local contribution from n to a global function $\Phi_{\mathcal{C}}(z)$ defined on the causal manifold M by $\Phi_{\mathcal{C}}(z) = \sum_{n \in \mathcal{C}} \phi_n(z)$. This defines a mapping T from the space of causal tapestries $\mathbb{C}$ to the Hilbert space H(M) by
$$T(\mathcal{C}) = \Phi_{\mathcal{C}}(z).$$
The PCM was defined as a map $\mathcal{P}: \mathbb{P}_p \times \mathbb{C} \to \mathcal{P}(H(M))$ (the power set of H(M)). If we fix some process $P \in \mathbb{P}_p$, then we can define a tapestry covering map (TCM) $\mathcal{P}_P: \mathbb{C} \to \mathcal{P}(H(M))$ in the obvious manner.
Define a generalized operator G on H(M) as a mapping $G: H(M) \to \mathcal{P}(H(M))$ such that $G(f + g) \subseteq G(f) + G(g)$, and let $\mathcal{G}(H(M))$ denote the set of generalized operators on H(M).
For a fixed primitive process P, define a generalized operator $G_P$ on H(M) such that for every $f \in H(M)$, $G_P(f) = \bigcup_{\mathcal{C} \in T^{-1}(f)} \mathcal{P}_P(\mathcal{C})$.
One thus obtains the following diagram
$$\begin{array}{ccc} \mathbb{C} & \xrightarrow{\ \mathcal{P}_P\ } & \mathcal{P}(H(M)) \\ T \downarrow & & \downarrow e \\ H(M) & \xrightarrow{\ G_P\ } & \mathcal{P}(H(M)) \end{array}$$
where e is a map such that $h \in e(h)$.
One cannot guarantee that if two causal tapestries $\mathcal{C}, \mathcal{C}'$ satisfy $T(\mathcal{C}) = T(\mathcal{C}')$, then $\mathcal{P}_P(\mathcal{C}) = \mathcal{P}_P(\mathcal{C}')$. A primitive process P is said to be Ψ-faithful if $T(\mathcal{C}) = T(\mathcal{C}')$ implies that $\mathcal{P}_P(\mathcal{C}) = \mathcal{P}_P(\mathcal{C}')$ for all $\mathcal{C}, \mathcal{C}'$. In the case of a Ψ-faithful process, the diagram reduces to the simpler form
$$\begin{array}{ccc} \mathbb{C} & \xrightarrow{\ \mathcal{P}_P\ } & \mathcal{P}(H(M)) \\ T \downarrow & & \downarrow id \\ H(M) & \xrightarrow{\ G_P\ } & \mathcal{P}(H(M)) \end{array}$$
where id is the identity.
In either case, one can associate each process P with a generalized operator G P on H ( M ) .
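A toy rendering of this association may clarify why Ψ-faithfulness matters; tapestries and wave functions are represented by hashable labels, and all names are illustrative assumptions.

```python
# Toy tapestry-to-wavefunction map T and tapestry covering map P_P,
# from which the generalized operator G_P is assembled.

T = {"C1": "f", "C2": "f", "C3": "g"}              # T(C) = global interpretation
P_P = {"C1": {"h1"}, "C2": {"h2"}, "C3": {"h3"}}   # tapestry covering map

def G_P(f):
    # G_P(f) = union of P_P(C) over all C in the preimage T^{-1}(f)
    out = set()
    for C, image in T.items():
        if image == f:
            out |= P_P[C]
    return out

print(G_P("f"))  # {'h1', 'h2'}: two tapestries share T(C) = f, so this toy
                 # process is not Psi-faithful and G_P("f") is not a singleton
```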
Consider now the situation in which the asymptotic limit has been taken. This corresponds to restricting attention to only those processes corresponding to the asymptote, that is (at least), those for which $N, r = \infty$. If we restrict to the subset $\Pi_p$ of such asymptotic primitive processes, then one must also restrict the space of causal tapestries to $\mathbb{C}'$, the subset of causal tapestries that are generated by processes within $\Pi_p$. Hence, for $P \in \Pi_p$ and $\mathcal{C} \in \mathbb{C}'$,
$$\mathcal{P}_{\mathcal{C}}(P) = \{ \Phi^{t_P l_P}(z) \},$$
a singleton set.
It follows that the previous diagrams simplify to
$$\begin{array}{ccc} \mathbb{C}' & \xrightarrow{\ \mathcal{P}_P\ } & H(M) \\ T \downarrow & & \downarrow e \\ H(M) & \xrightarrow{\ G_P\ } & \mathcal{P}(H(M)) \end{array}$$
for general processes, and for Ψ-faithful processes to
$$\begin{array}{ccc} \mathbb{C}' & \xrightarrow{\ \mathcal{P}_P\ } & H(M) \\ T \downarrow & & \downarrow id \\ H(M) & \xrightarrow{\ G_P\ } & H(M) \end{array}$$
In this situation, the generalized operator $G_P$ becomes a standard operator on H(M). This is the main value of considering Ψ-faithful processes. The extension of these ideas to sums and simple products is straightforward.

3.8. Process Approach to the Configuration Space

The PCM defined in the previous section essentially provides a space–time representation, especially in the case where the co-product representation is used for process products. The problem with the PCM lies in how the global interpretations are generated. During each round, a product process $\otimes_{i=1}^n P_i$ will generate a correlated set of informons $A_i = (n_{i1}, \ldots, n_{in})$. If $\mathcal{C}$ is the causal tapestry consisting of these tuples and formed by a complete action of $\otimes_{i=1}^n P_i$, then the global H(M) interpretation for the product process on $M^n$ should reasonably be defined as
$$\hat{\Psi}_p(z_1, \ldots, z_n) = \sum_{(n_{i1}, \ldots, n_{in}) \in \mathcal{C}} \Gamma_{n_{i1}} \cdots \Gamma_{n_{in}}\, T_{m_{n_{i1}}} g(z_1) \cdots T_{m_{n_{in}}} g(z_n)$$
which is consistent with the formulation of the global interpretation for primitive processes and where g is the local interpretation function.
Unfortunately, $\hat{\Psi}_p$ is inadequate for determining correlations, because it is based upon a single complete action of the product process and so cannot take into account the effects of all possible actions of the process. The proper way to generate a function expressing these correlations is through the configuration space sequence tree, which generalizes the construction of the sequence tree described in the previous section. For n subprocesses, each vertex of the sequence tree will be a causal tapestry $\mathcal{C}^j$ consisting of a set of ordered tuples of n informons of the form $(n_{i1}, \ldots, n_{in})$, the tuple formed from the informons generated at round i by the generating process. An edge will consist of an n-tuple of informons $(n_{j1}, \ldots, n_{jn})$ such that if this edge takes $\mathcal{C}^i \to \mathcal{C}^{i+1}$, then $\mathcal{C}^{i+1} = \mathcal{C}^i \cup \{(n_{j1}, \ldots, n_{jn})\}$. Let $\mathcal{C}^i$ denote a causal tapestry formed by completely traversing a path in the tree. Note that each informon generated by a subprocess $P_i$ in the sequence tree is generated independently of all the others generated by $P_i$, and each edge set is generated independently of every other edge set. The causal tapestry $\mathcal{C}^i$ may be artificially extended by adding informons from $\mathcal{C}^j$, provided that if we wish to add an informon $(n_{j1}, \ldots, n_{jn}) \in \mathcal{C}^j$, then for each component informon $n_{jk}$ for which there exists an informon $(n_{g1}, \ldots, n_{gn}) \in \mathcal{C}^i$ with $p_{n_{jk}} = p_{n_{gk}}$ and $m_{n_{jk}} = m_{n_{gk}}$, we must have $\Gamma_{n_{jk}} = \Gamma_{n_{gk}}$. That is, if we form the projection $\mathcal{C}^i \to \mathcal{C}^i_k$ by mapping each informon $(n_{j1}, \ldots, n_{jn}) \to n_{jk}$, then $\mathcal{C}^i_k$ forms a consistent causal tapestry in its own right. An informon of $\mathcal{C}^j$ which meets this condition is said to be admissible for $\mathcal{C}^i$.
Define the consistent union $\mathcal{C}^i \sqcup \mathcal{C}^j$ to be the set $\mathcal{C}^i \cup \{ n \in \mathcal{C}^j \mid n \text{ is admissible in } \mathcal{C}^i \}$. A causal tapestry $\mathcal{C}$ is said to be maximal for a sequence tree if there is no path, and no causal tapestry $\mathcal{C}'$ generated by such a path, for which $\mathcal{C} \subsetneq \mathcal{C} \sqcup \mathcal{C}'$.
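A small sketch of the admissibility test and the consistent union follows; representing a component informon as a (property, embedding point, token sum) triple, and all names used, are illustrative assumptions.

```python
# Toy admissibility test and consistent union for configuration-space
# tapestries. An informon tuple is a tuple of components, each a
# (property, embedding_point, token_sum) triple.

def admissible(candidate, tapestry):
    # candidate may be added only if every component that matches an
    # existing component in property and embedding also matches in tokens
    for k, (p, m, gamma) in enumerate(candidate):
        for tup in tapestry:
            p2, m2, gamma2 = tup[k]
            if p == p2 and m == m2 and gamma != gamma2:
                return False
    return True

def consistent_union(Ci, Cj):
    return Ci | {n for n in Cj if admissible(n, Ci)}

Ci = {(("E", 0, 1.0), ("E", 1, 0.5))}
Cj = {(("E", 0, 1.0), ("E", 2, 0.7)),   # admissible: shared component agrees
      (("E", 0, 2.0), ("E", 3, 0.7))}   # inadmissible: token mismatch at site 0
print(len(consistent_union(Ci, Cj)))    # 2
```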
The configuration space sequence tree is denoted $\Sigma_C(\mathcal{C}, \otimes_i P_i)$. A similar construction holds for the free product as well and is denoted $\Sigma_C(\mathcal{C}, \hat{\otimes}_i P_i)$.
Given a configuration space sequence tree $\Sigma_C(\mathcal{C}, P)$, let $\Sigma_C(\mathcal{C}, P)^M$ denote the set of all of its maximal causal tapestries. We define the configuration space process covering map, or PCMC, denoted $\mathcal{P}_C(\otimes_i P_i, \mathcal{C})$ (or sometimes $\mathcal{P}_C(\otimes_i P_i)$), to be
$$\mathcal{P}_C(\otimes_i P_i) = \left\{ \Phi^{\mathcal{K}}(z_1, \ldots, z_n) = \sum_{(n_{k1}, \ldots, n_{kn}) \in \mathcal{K}} \Gamma_{n_{k1}} \cdots \Gamma_{n_{kn}}\, T_{m_{n_{k1}}} g(z_1) \cdots T_{m_{n_{kn}}} g(z_n) \ \middle|\ \mathcal{K} \in \Sigma_C(\mathcal{C}, \otimes_i P_i)^M \right\}$$
It may be the case that the maximal causal tapestries are full product causal tapestries. In such a case, the asymptotic limit takes the form
$$\mathcal{P}_C(\otimes_i P_i) = \{ \Psi_1^{t_P l_P}(z_1) \times \cdots \times \Psi_n^{t_P l_P}(z_n) \}$$
Note that the configuration space process covering map is defined over the entirety of the configuration space sequence tree. Only a single path along the tree is ontological, in the sense that it corresponds to a single generated reality. The configuration space sequence tree is a purely heuristic tool used to calculate correlations when the specific path is unknown or unknowable, or when multiple paths may be generated, either sequentially or concurrently, and one wishes to define some statistical measure on the resulting reality.
Both the process covering space and the configuration space covering space are important constructions. The process algebra model clearly distinguishes between the construction of a single causal tapestry by a process, which is ontological and thus a history of events which were actual or real, and these two spaces, which are purely abstract spaces of potentialities: possible causal tapestries which could be generated by a process (but were not) and which are used epistemologically, solely to carry out calculations, particularly of the probabilities of occurrence of various situations. This suggests that the configuration space of physics should likewise be considered heuristically and epistemologically rather than ontologically.

3.9. Measurement

Measurement, within the Processist worldview and the process algebra, requires a process $\mathcal{P}$ to be interrogated. This requires a physical context (apparatus) $\mathcal{M}$, which is itself a process, through which the interrogation takes place, and a detector $\mathcal{D}$, also a process, which produces an enduring token p representing the outcome (value) of the measurement. Measurement thus involves the evolution over time of an interaction among $\mathcal{P}$, $\mathcal{M}$, and $\mathcal{D}$. Probabilities are an emergent outcome of this interaction and require for their calculation reference to a heuristic configuration space covering graph. This is, virtually by definition, a contextual probability which will depend upon $\mathcal{M}$ (at least). Unlike in some objectivist worldviews, the process $\mathcal{P}$ is understood not as the possessor of some property with measured value p but rather as the generator of a collection of potential properties, which yield a measured value p during one measurement interaction.
A different measurement interaction could well result in the generation of a different property with measured value p . Since P is a generator and not a possessor, this makes perfect sense.
Measurement is conjectured to occur in three stages:
  • The system process P S and the measurement apparatus process P M begin as independent processes, P S , P M .
  • The system and measurement apparatus processes move into spatial positions from which their generative activities could potentially interfere with one another. This is the first level of interaction between $P^S$ and $P^M$. The requirement for the system process to navigate the physicality of the measurement apparatus results in a transition $P^S \to \oplus_n a_n P_n^S$, where the $P_n^S$ are subprocesses which resonate with, or are compatible with, eigenstates $P_n^M$ of the measurement apparatus. Thus, we have the transitions $P^S, P^M \to P^S \otimes P^M \to \oplus_n (a_n P_n^S \otimes P_n^M)$.
  • In the final stage, the occurrence of an informon triggers a process-altering interaction between system and measurement apparatus processes, which results in the appearance of a single measured value corresponding to the eigenstate $n'$: the superposition $\oplus_n (a_n P_n^S \otimes P_n^M)$ passes first to its interactive form and finally to the single surviving term $a_{n'} P_{n'}^S \otimes P_{n'}^M$.

4. The Temporal Structure of Process

Temporality gives rise to order, as does causality. The generation of informons by a process includes both causality and succession, and thus ideas of order and graphs play important roles. The role of graphs in describing the structure of complex interactions has already been alluded to. Here, attention turns to the order structures that can be generated through the creation of informons. More details can be found elsewhere [10,11]. Since causal tapestries are defined primarily in terms of graphs, the notion of order will be expressed in graph theoretic terms as acyclic directed graphs. Recall that a causal tapestry possesses two graphical substructures: a directed graph representing information flow (causality and succession) from generation to generation and a non-directed graph which provides topological and metrical structure within each generation. The focus here is on the causal substructure. Some insights concerning the topological and metrical substructure can be found in [11].
Graphs and orders play fundamental roles in the description of processes and causal tapestries, so it is useful to briefly review some basic concepts.
Definition 11. 
(Basic Definitions)
  • An undirected graph is a pair (V, E), where V is a set of vertices and $E \subseteq \{\{x, y\} \mid x, y \in V\}$ is a set of edges. We denote an edge linking x, y by $xEy = yEx$.
  • A directed graph is a pair (V, E), where $E \subseteq \{(x, y) \mid x, y \in V\}$; x is called an initial vertex and y is called a terminal vertex. We shall assume no self-loops, that is, no edges of the form $xEx$, and that $xEy \Rightarrow \neg(yEx)$. This accords with the idea that an event cannot be its own cause.
  • A mixed graph is a pair ( V , E ) , where V is a set of vertices and E is a set of edges which can be undirected { v 1 , v 2 } or directed ( v 1 , v 2 ) as necessary. We denote a generic edge in a mixed graph as [ x , y ] . By graph, we mean a mixed graph unless otherwise specified.
  • Two elements are comparable if either x E y or y E x . Otherwise, they are incomparable, which is denoted by x | | y .
  • An antichain is a set of pairwise incomparable elements.
Definition 12. 
A path $x \to y$ in a graph G is a finite sequence of vertices $x_0, x_1, x_2, \ldots, x_k$, with $x_0 = x$ and $x_k = y$, such that $[x_i, x_{i+1}]$ is an edge of G for all i. The length of the path is k, the number of edges. A path of the form $x_0, x_1, \ldots, x_k, x_0$ is termed cyclic. A path is acyclic if it contains no cycles, and a graph is acyclic if it contains no cycles. An edge $x \to y$ is prime if there is no z such that $x \to z$ and $z \to y$. A graph is primitive if every edge is prime.
Theorem 3. 
A directed graph is ordered if it is acyclic. If ( V , E ) is an ordered graph, then the relation < given by x < y if ( x , y ) E is an order on V.
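A sketch of the order induced by a finite acyclic directed graph follows: x < y is realized as reachability, i.e., the transitive closure of the edge relation. The representation and names are illustrative assumptions.

```python
# Toy realization of the order induced by an acyclic directed graph:
# x < y iff y is reachable from x along directed edges.

def reachable(E, x, y, seen=None):
    seen = set() if seen is None else seen
    for (a, b) in E:
        if a == x:
            if b == y:
                return True
            if b not in seen:
                seen.add(b)
                if reachable(E, b, y, seen):
                    return True
    return False

E = {("a", "b"), ("b", "c")}
print(reachable(E, "a", "c"), reachable(E, "c", "a"))  # True False: a < c only
```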
Definition 13. 
A chain $x \to_\omega y$ in a graph G is an ordinally labeled sequence (a map from an initial segment of an ordinal to V) $x_0, x_1, \ldots, x_{\omega_k}, x_{\omega_k + 1}, \ldots$ such that for successor ordinals i, i+1 we have $x_i E x_{i+1}$, and for a limit ordinal $\omega_k$ we have $x_i E x_{\omega_k}$ for all $i < \omega_k$. The latter type of edge is called a jump. A jump $xEy$ is prime if there is no $z \neq x$ such that there is a jump $xEz$ and an edge $zEy$. Thus, a prime jump crosses a chain of the form $x_0, x_1, \ldots, x_{\omega_0}$, and no larger.
Definition 14. 
Let G = (V, E) be an ordered graph. The completion of G is the graph $\hat{G} = (V, \hat{E})$ such that $(x, y) \in \hat{E}$ if and only if there exists a path $x \to y$ in E. If G is ordered, then so is $\hat{G}$.
Definition 15. 
An ordered graph G is well ordered if every subgraph possesses a minimal element. A well-ordered graph is called a causal order. A directed, acyclic graph is called a local causal order if it can be decomposed into a union of components, each of which is a causal order. Let A be a subgraph of G. A base is a set of minimal elements B such that for all x A , either x B or there exists a b B such that b x .
Definition 16. 
Let G = ( V , E ) be a directed graph. The reverse graph G R = ( V , E R ) , where ( x , y ) E R if ( y , x ) E . Similarly, given an order O = ( V , < ) , the reverse order is O R = ( V , < R ) , where x < R y if y < x .
Definition 17. 
An up-set of a directed graph G is a set of the form $\{ y \mid x \to y \text{ for some } x \in A \}$, where A is an antichain. A is called the base of the up-set. An ordered graph is future causal if every up-set is a causal graph.
Definition 18. 
Let C be a directed graph. A component A of C is a subgraph of C such that if x A , then there exists a y A and either a path x y or y x .
Definition 19. 
Let A , B be subgraphs of an ordered graph G. We write A < B if for any x A , y B , there either exists a path x y or no path y x . If there are neither paths x y nor y x , then we write x | | y .
Theorem 4. 
Let A be a component of a causal graph C. Then, there exists a sequence of antichains $A_0 < A_1 < A_2 < \cdots$ in A such that $\bigcup_i A_i = A$. Moreover, if $x \in A_{i-1}$ and $y \in A_i$ are such that $x \to y$, then $x \to y$ is a step.
Proof. 
Since C is a causal order, it is well ordered, and thus so must be any component A of C. Hence, every path in A has a least element. Let $A_0$ be the set of all such least elements in A. If A is non-empty, then $A_0$ must be non-empty: if $x \in A$ and x is not a least element in A, then there must be a path in A terminating at x, and such a path must have a least element y, so $y \in A_0$. Clearly, there can be no element $y \in A$ such that $\{y\} < A_0$, since there would then be some element of $A_0$ which is not minimal, a contradiction. Now, let $A_1$ consist of all minimal elements of $A \setminus A_0$. Then $A_0 < A_1$, since otherwise there must be a $z \in A_0$ and $y \in A_1$ such that $y < z$, a contradiction. Continuing in this manner, let $A_i$ consist of the set of all minimal elements of $A \setminus \bigcup_{j < i} A_j$. Using a similar argument, one finds that $A_0 < A_1 < \cdots < A_i$. Clearly, each $A_i$ must be an antichain. Assume $x \in A_{i-1}$ and $y \in A_i$ with $x \to y$. Suppose there exists a z such that $x \to z \to y$. Then $z \notin \bigcup_{j < i} A_j$, for if $z \in A_j$ with $j < i$, then either $z \to x$ or $x \| z$, a contradiction. But then y could not be minimal in $A \setminus \bigcup_{j < i} A_j$; hence $y \notin A_i$, a contradiction. Hence, the edge $x \to y$ is a step. If $A \neq \bigcup_i A_i$, then there exists a $y \in A \setminus \bigcup_i A_i$. Since A is well ordered, we may assume that such a y is minimal in $A \setminus \bigcup_i A_i$. If y is incomparable with every element of $\bigcup_i A_i$ and minimal, then by construction $y \in A_0$, a contradiction. Hence, there must exist $z \in \bigcup_i A_i$ and a chain $z \to y$. Since A is well ordered, this chain must also be well ordered and thus possesses the order type of some ordinal $\omega_m$. Then, by applying the construction algorithm to this chain, we find that $y \in A_j$ for some $j \le i + \omega_m$, which is again a contradiction. Hence, $A = \bigcup_i A_i$. □
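For finite causal graphs, the construction in this proof is simply a topological layering; the following sketch strips successive sets of minimal elements, with representation choices that are illustrative assumptions.

```python
# Toy antichain decomposition of a finite acyclic directed graph:
# repeatedly strip the minimal elements, yielding A0 < A1 < ...

def antichain_levels(V, E):
    V, levels = set(V), []
    while V:
        has_pred = {b for (a, b) in E if a in V and b in V}
        minimal = V - has_pred        # current minimal elements
        levels.append(minimal)
        V -= minimal
    return levels

V = {"x", "y", "z", "w"}
E = {("x", "y"), ("x", "z"), ("y", "w"), ("z", "w")}
print(antichain_levels(V, E))   # [{'x'}, {'y', 'z'}, {'w'}]
```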
Theorem 5. 
Let C be a causal order. Then, C can be decomposed into a set of disjoint, maximally connected components $C^i$, each of which can be decomposed into an ordering of antichains $C_0^i < C_1^i < \cdots$.
Theorem 6. 
Let C be a future causal graph. Then, C can be decomposed into a chain of antichains of the form $C_{-n} < \cdots < C_0 < C_1 < \cdots < C_{\omega_m}$.
Proof. 
Take any maximal antichain A. Since C is future causal, the up-set on A is causal and thus can be decomposed into a chain of antichains of the form $C_0 < C_1 < C_2 < \cdots < C_{\omega_m}$. Consider $D = C \setminus \bigcup_{i \ge 0} C_i$. Since A is maximal, every element of D must lie below some element of A; otherwise, it could be added to A to create a larger antichain, a contradiction. Let $A'$ be a maximal antichain in D. Repeating the argument, we find that $A'$ decomposes into a chain of antichains lying below the chain $C_0 < C_1 < C_2 < \cdots < C_{\omega_m}$. Thus, we obtain $C_{-k} < \cdots < C_{-1} < C_0 < C_1 < C_2 < \cdots < C_{\omega_m}$. The argument may be repeated indefinitely; it will either terminate for some n or there will be an infinite backward regression. Nevertheless, there will always be a future-oriented chain of antichains. □
Definition 20. 
Let G be a directed graph. If A is a subgraph of G, then a base for A, $B_A$, is a maximal set of greatest lower bounds for elements of A. That is,
  • If $b \in B_A$, then for all $x \in A$, either $b \le x$ (and if $b \le y \le x$, then $y \in A$) or $b \| x$.
  • If $x \in A$, then either $x \in B_A$ or there exists $b \in B_A$ such that $b \le x$. If $B_A \subseteq A$, then we say that A is grounded.
Definition 21. 
Let C be a directed graph. We say that C is well founded if every subgraph A of C possesses a base.
Theorem 7. 
Let G be a well-ordered graph. Then, it is well founded.
Proof. 
Let G be a well-ordered graph and A any subgraph. By definition, A must possess a minimal element, so $B_A$ is non-empty; clearly, it must contain all minimal elements of A. Let $x \in A$ be any element not in $B_A$. If there is no $b \in B_A$ such that $b \le x$, then there must exist in A an infinite descending sequence $\cdots \to x_n \to x_{n-1} \to \cdots \to x_1 \to x$; otherwise, we have a contradiction. But this sequence is itself a subgraph of G and so must possess a minimal element, say $x_m$; hence, it must terminate, which again is a contradiction. Working one's way up this sequence, one must eventually find $x_m \le x_i \le x$ such that $x_i \in A$ but no earlier $x_j$ lies in A, so that $x_i$ is minimal in A. Again, this is a contradiction. □
Theorem 8. 
Let G be a well-founded graph. Assume further that every subgraph is grounded. Then, G is well ordered.
Proof. 
Let G be a well-founded graph in which every subgraph is grounded, and let A be any subgraph. Then A possesses a base $B_A \subseteq A$. By definition, every $b \in B_A$ must be a greatest lower bound for elements of A, and by assumption, $b \in A$. Suppose that there exists an $x \in A$ such that $x < b$. Then b cannot be a lower bound for A, a contradiction. Hence, b must be minimal in A, and thus G is well ordered. □
Definition 22. 
A directed graph G is skeletal if every edge is either prime or a jump.
Definition 23. 
A skeleton S of a directed graph G is a maximal skeletal subgraph.
Definition 24. 
A spanning subgraph G′ of G is a subgraph whose vertex set is that of G.
Definition 25. 
A directed graph is well founded if every upset is well ordered.
Theorem 9. 
Let G be a connected, well-founded directed graph. Then, G possesses a connected spanning skeleton S.
Proof. 
Let G be a connected, well-founded, directed graph, and let S be the subgraph obtained by restricting to prime and jump edges. If G consists of a single vertex, we are finished. Otherwise, G must have at least two vertices, x, y. Since G is connected, either $x \to y$, $y \to x$, or there exists a z such that either $x \to z$ and $y \to z$, or $z \to x$ and $z \to y$. Assume that each such chain is maximal; otherwise, simply enlarge it until it is. If any chain consists of just a single prime edge or jump, then S is non-empty. Otherwise, each chain must consist of more than two vertices; hence, we may consider the chain formed by eliminating the initial vertex. Since a chain is an up-set, it must be well ordered, so the reduced chain must possess a minimal element, w. The edge $x \to w$ must therefore be prime; otherwise, w would not be minimal. Again, we find that S is non-empty. If $x \to y$ is any chain in G, then it too may be enlarged to a maximal chain which still connects x and y. We can use ordinal induction up this maximal chain to show that any two elements along the chain can be connected either by a prime edge or by a jump followed by a sequence of jumps of lower order and eventually by a sequence of prime edges (i.e., x, y are connected via a chain in S). Hence, S is a connected spanning skeleton. □

5. Order Automata

A simple example of a process strategy is an order automaton. The concept of order automaton as developed in [69] was an early attempt to explore the idea that the temporal structure of a space of events might be emergent, arising from the action of dynamics rather than being a pre-existing container within which events took place. Automata are most commonly studied within the setting of theories of computation.
Definition 26. 
Let M be a set of events and S a set of states. The set of states forms a monoid; this means that we identify one element, 1, called the identity, and a binary operation ∗ on S such that for any $s, t, v \in S$, $s ∗ 1 = 1 ∗ s = s$ and $s ∗ (t ∗ v) = (s ∗ t) ∗ v$. An S-automaton over M is a triple (S, M, f), where $f: M \times S \to M$ is a mapping, called the action, such that
  • $f(a, 1) = a$;
  • $f(a, s ∗ t) = f(f(a, s), t)$.
We sometimes write a s = f ( a , s ) .
Note that each $s \in S$ corresponds to a self map $f_s$ on M, given by $f_s(a) = f(a, s)$.
Definition 27. 
An order automaton is an S-automaton over M, ( S , M , f ) , such that the relation R ( S , f ) defined by ( a , b ) R ( S , f ) if there exists an s S such that b = f ( a , s ) is a partial order on M. We denote it by ( S , M , f , R ( S , f ) ) to emphasize the order. We call S an order monoid for M or say that S generates an order on M.
A sequence of automaton actions
$$a,\ f(a, s),\ f(f(a, s), t),\ f(f(f(a, s), t), v), \ldots$$
will thus correspond to a chain
$$a < f(a, s) < f(f(a, s), t) < f(f(f(a, s), t), v) < \cdots$$
in the partial order, so a trajectory on M generated by the automaton corresponds to a chain in the partial order on M.
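The following sketch realizes a one-generator order automaton in which the free monoid of words acts on integer labels; the induced relation is the usual order along the generated chain. Encoding monoid elements as strings of a generator symbol is an illustrative assumption.

```python
# Toy order automaton: the free monoid on one generator, encoded as
# strings of '+', acts on integer labels by repeated succession.

def f(a, s):
    # Action: f(a, "") = a for the empty word (identity), and
    # f(a, s + t) = f(f(a, s), t) since len(s + t) = len(s) + len(t)
    return a + len(s)

def related(a, b, words):
    # (a, b) is in R(S, f) iff b = f(a, s) for some word s
    return any(f(a, s) == b for s in words)

words = ["", "+", "++", "+++"]      # a finite slice of the free monoid
print(related(0, 2, words), related(2, 0, words))  # True False
```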
This may serve as a strategy of a process if we interpret M not as a set of events but simply as a set of labels which we may attach to events. Given a prior event labeled x, the process generates a nascent event under some action s and assigns it the new label f(x, s). This generates a partial order upon the generated events and a mapping between the ordering on the events and the ordering on their labels. Another way to think of this is to treat M merely as a collection of potentialities rather than actualized events: M represents the set of all potential, unactualized events and, together with the order structure, the set of all possible, potential, unactualized histories.
Let us now examine the order structures which are created in this manner by order automata. The proofs of these results can be found in [69]. Some of these results apply to transfinite sets, but remember that often, we are dealing with heuristic models, not necessarily realist models.
Theorem 10. 
For any partially ordered set ( X , < ) , there exists a monoid M and an action f of M on X such that ( M , X , f , R ( M , f ) ) is an order automaton whose order R ( M , f ) = < . We shall denote this as ( M , X , f , < ) .
Theorem 11. 
For any M-automaton on X with action f, the relation R(M, f) is reflexive and transitive.
Theorem 12. 
An M-automaton on X with action f is an M-order automaton on X if the relation R ( M , f ) is antisymmetric.
Order automata are quite common. The only requirement of the action is that for any two elements x , y X , the existence of n , m M such that f ( x , n ) = y and f ( y , m ) = x implies that x = y and n = m = 1 .
Ordered sets may be described as abelian or non-abelian depending upon whether the monoid of its order automaton is abelian or non-abelian. There are some subtle differences between these two which are not relevant here. Most of the orders that we shall be considering will be non-abelian. As an example, consider the following:
Theorem 13. 
Let us define an ordered set X inductively.
  • Let $X_0 = \{x_0\}$.
  • Let $X_1 = \{x_{01}, x_{02}\}$.
  • Set $x_0 < x_{01}, x_{02}$ and leave the other two incomparable.
  • Assume that we have defined sets up to $X_n$. Define $X_{n+1}$ by taking each $x \in X_n$ and adding two elements $x_1, x_2$ with $x < x_1, x_2$ to $X_{n+1}$, leaving all elements of $X_{n+1}$ pairwise incomparable.
  • Take X to be the transitive closure of the order union of the $X_i$.
This set is a tree in which each node has two forward branches. It is non-abelian.
It is quite simple to show that
Theorem 14. 
Every linearly ordered set is abelian.
Another example of an abelian order is the set N × N with order given by ( x , y ) < ( w , z ) if and only if x < w and y < z .
Definition 28. 
Let (X, <) be an ordered set. The forward cone of $x \in X$ is the set $F_x = \{z \mid x \le z\}$. Conversely, the backward cone of x is the set $B_x = \{z \mid z \le x\}$.
Non-abelian orders are much more difficult to characterize than abelian orders. A simple test for being non-abelian is the following.
Theorem 15. 
Let (X, <) be an ordered set. If there exist elements $x, y \in X$ such that $B_x \cap B_y \neq \emptyset$ but $F_x \cap F_y = \emptyset$, then X is non-abelian.
Definition 29. 
Let L be a set, finite or infinite. A free monoid F over L consists of the set of all finite strings of elements of L, with the monoid operation given by concatenation. Thus, an element of F is a string $x_1 x_2 x_3 \cdots x_n$, where $x_i \in L$, and the concatenation of two strings $x_1 x_2 \cdots x_n$ and $y_1 y_2 \cdots y_m$ is the string $x_1 x_2 \cdots x_n y_1 y_2 \cdots y_m$.
A basic result of monoid theory is that every monoid is the homomorphic image of a free monoid.
Theorem 16. 
Let ( M , X , f , R ( M , f ) ) be an order automaton. Then, there exists a free monoid F and an action g such that ( F , X , g , R ( F , g ) ) is an order automaton and R ( M , f ) = R ( F , g ) . That is, their induced orders are identical.
Definition 30. 
Let M be a free monoid. The dimension of M is the cardinality of its set of generators (usually the cardinality of its associated symbol set L).
Definition 31. 
Let (X, <) be an ordered set. Let
$$\mathcal{M}(X) = \{ M \mid (M, X, f, <) \text{ is an order automaton and } M \text{ is a free monoid} \}$$
Then, the free dimension of X is $\dim_F(X) = \inf_{M \in \mathcal{M}(X)} \dim(M)$.
Theorem 17. 
For any ordered set X, d i m F ( X ) exists.
Every partially ordered set can be generated by some order automaton, so we focus upon those orders which are generated by free order automata. The free dimension provides a useful parameter for describing the structure of these orders.
Definition 32. 
Given an ordered set (X, <) and $x, y \in X$, y is an immediate successor of x if $x < y$ and there is no z such that $x < z < y$. The set of immediate successors of x is denoted $I_x$. Similarly, an immediate predecessor of x is an element y with $y < x$ such that there is no z with $y < z < x$. We denote the set of immediate predecessors of x by $P_x$.
Theorem 18. 
Let (X, <) be an ordered set and (M, X, f, <) a free order automaton. Let A denote the set of generators of M. For any $x \in X$, $I_x \subseteq xA$. If $\dim_F(X) = n$ and $\mathrm{card}(I_x) = n$, then $I_x = xA$.
Definition 33. 
A successor chain is an ascending chain c 0 < c 1 < c 2 < such that c i + 1 is an immediate successor of c i .
We can now describe ordered sets of free dimension 1.
Theorem 19. 
Let (X, <) be an ordered set. Then, $\dim_F(X) = 1$ if and only if X is a disjoint union of ordered sets $X_i$, where each element of $X_i$ has at most one immediate successor, there exists at least one non-trivial interval, and all closed intervals are finite. It is freely generated if and only if the supremum of the set of cardinalities of the forward cones in X is $\aleph_0$.
Discrete dynamical systems are often given by iterations of a single self map f on some state set S. f n ( a ) gives the state of the system at time n given initial state a. We have the following result
Theorem 20. 
Let X be a set and f a self map on X satisfying the conditions
  • For all $x \in X$, if $x = x f^k$ for some $k > 0$, then $x = x f^k$ for all k, i.e., x is a fixed point of f.
  • The functions $f^k$ are all distinct for $0 \le k < \omega$.
Then, the monoid $M = \{ f^k \mid 0 \le k < \omega \}$ is a free monoid and $(M, X, g, R(M, g))$ is a free-order automaton with action $g(x, f^k) = x f^k$ for all $f^k \in M$.
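The sketch below instantiates this theorem for the fixed-point-free self map f(x) = x + 2 (an illustrative choice): its iterates are all distinct, so they generate a free monoid of dimension 1, and the orbit of any point is a successor chain in the induced order.

```python
# Toy instance of the theorem: f(x) = x + 2 has no fixed points and all
# iterates f^k are distinct, so {f^k} is a free monoid of dimension 1.

def iterate(x, k, f=lambda x: x + 2):
    # Compute x f^k by applying f a total of k times
    for _ in range(k):
        x = f(x)
    return x

# The orbit of a point is a successor chain in the induced order
chain = [iterate(1, k) for k in range(5)]
print(chain)   # [1, 3, 5, 7, 9]
```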
The generators of a free monoid act more or less independently of one another.
Theorem 21. 
Let ( X , < ) be an ordered set and ( M , X , f , < ) be a free order automaton on X inducing the order <. Let A denote the set of generators of M. For each a A , let M a denote the free monoid generated by a. Then, ( M a , X , f a , R ( M a , f a ) ) is a free order automaton on X generating a suborder < a of <, where f a is the restriction of f to M a and < a is R ( M a , f a ) . Furthermore, < is the transitive closure of all of the < a .
There is a partial corollary to this.
Theorem 22. 
Let ( X , < ) be an ordered set and A be a set of self maps on ( X , < ) . For each a A , let M a denote the monoid generated by a. Define an action f a by f ( x , a ) = x a for all x X . If, for each a, ( M a , X , f a , R ( M a , f a ) ) is an order automaton, and if the transitive closure of all R ( M a , f a ) exists, then there exists a free monoid M of dimension card(A), and an action f, such that ( M , X , f , R ( M , f ) ) is an order automaton for the transitive closure of all R ( M a , f a ) .
Theorem 23. 
Let ( X , < ) be an ordered set of free dimension n. Then, any element of X has at most n immediate successors. Moreover, if the dimension is finite, then every non-maximal element has at least one immediate successor.
There is an approximate corollary to this.
Definition 34. 
Let (X, <) be an ordered set. For any $x \in X$, let o(x) denote the cardinality of the set of immediate successors of x. If x is maximal, $o(x) = 0$, while if x is non-maximal but has no immediate successor, set $o(x) = \aleph_0$. Define $o(X) = \sup_{x \in X} o(x)$.
Theorem 24. 
Let (X, <) be an ordered set of finite free dimension, $x \in X$, $y \in D_x$. Then, there exists a successor chain $C \subseteq X^x$ with initial element x and $C < y$. Likewise, if $y \in X^x$ and $z \in D_y$, then there exists a successor chain C in $X^x$ with initial element y and $C < z$.
Theorem 25. 
Let (X, <) be an ordered set. Then, $o(X) \le \dim_F(X)$. If X is finite, then $o(X) = \dim_F(X)$.
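For finite orders this gives a direct way to compute the free dimension: count immediate successors. The following sketch does so for a small binary tree; the comparability encoding and all names are illustrative assumptions.

```python
# For a finite order, o(X) (the maximum number of immediate successors)
# equals the free dimension. Sketch on a depth-2 binary tree.

def immediate_successors(X, less, x):
    above = {y for y in X if less(x, y)}
    return {y for y in above
            if not any(less(x, z) and less(z, y) for z in above)}

def o(X, less):
    return max((len(immediate_successors(X, less, x)) for x in X), default=0)

X = {"r", "a", "b", "a1", "a2", "b1", "b2"}
pairs = ({("r", p) for p in X - {"r"}} |
         {("a", "a1"), ("a", "a2"), ("b", "b1"), ("b", "b2")})
less = lambda x, y: (x, y) in pairs
print(o(X, less))   # 2, so this finite tree has free dimension 2
```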
A useful result is the following, which sheds light on the order structure of process covering graphs.
Theorem 26. 
Let ( X , < ) and ( Y , < ) be ordered sets and ( Z , < ) be their order product. Then,
$$\sup \{ \dim_F(X), \dim_F(Y) \} \le \dim_F(Z) \le \dim_F(X) + \dim_F(Y)$$
If either $\dim_F(X)$ or $\dim_F(Y)$ is infinite, then $\dim_F(Z) = \dim_F(X) + \dim_F(Y)$. If both X and Y are finite sets, then $\dim_F(Z) = \dim_F(X) + \dim_F(Y)$.
Theorem 27. 
Let (X, <) be an ordered set such that for all $x \in X$, $\dim_F(F_x) \le \kappa$. Then, $\dim_F(X) \le \kappa$.
Theorem 28. 
Let ( X , < ) be an ordered set with finite free dimension n. Then, there exists an x X such that d i m F ( F x ) = n .
Theorem 29. 
Let (X, <) be an ordered set. Then, $\dim_F(X)$ is finite if and only if every forward cone is countable and o(X) is finite. Furthermore, $o(X) \le \dim_F(X) \le o(X) + 1$.
In the case of countable free dimension, we have
Theorem 30. 
Let (X, <) be an ordered set. Then, $\dim_F(X) = \aleph_0$ if and only if every forward cone is countable and $o(X) = \aleph_0$.
Now, we can begin to examine the larger structure.
Definition 35. 
Let (X, <) be an ordered set and (M, X, f, <) an order automaton. A basis for (M, X, f, <) is a subset B of X of least cardinality such that $X = \bigcup_{x \in B} f(x, M)$.
Definition 36. 
Let (X, <) be an ordered set. Given two chains C, D in (X, <), we say that $C \sqsubset D$ if $D \cap \bigcup_{c \in C} F_c = \emptyset$.
Theorem 31. 
Let ( X , < ) be an ordered set. Then, there exists a basis B for ( X , < ) such that
  • There exists a set $\mathcal{C}$ of chains, $\mathcal{C} = \{c_\beta \mid \beta < \alpha\}$, for some ordinal α;
  • $\beta < \gamma < \alpha$ implies $c_\beta \sqsubset c_\gamma$;
  • $B = \bigcup_{\beta < \alpha} c_\beta$.
Definition 37. 
Let (X, <) be an ordered set and $x \in X$. A primitive neuron, $X^x$, is defined by induction as follows. For n = 0, set $X_0^x = \{x\}$. For n = 1, set $X_1^x = X_0^x \cup I_x$, where $I_x$ is the set of immediate successors of x. Assuming that we have defined up to $X_n^x$, set $X_{n+1}^x = X_n^x \cup \bigcup_{z \in X_n^x} I_z$, and let $X^x = \bigcup_n X_n^x$. Clearly, $X^x \subseteq F_x$. We define $D_x = F_x \setminus X^x$. The neuron $X^x$ consists of all elements which can be reached from x via a finite chain; $D_x$ is the set of all non-finitely reachable elements.
The idea of a neuron generalizes the above. It takes its name from its cartoonish resemblance to a biological neuron.
Definition 38. 
Let (X, <) be an ordered set. A (first-level) neuron is defined by induction. Let $X_{x0}^1 = \{x\}$. Then, set $X_{x1}^1 = I_x \cup P_x$. Assuming we have defined sets up to level n−1, set $X_{xn}^1 = \bigcup_{y \in X_{x(n-1)}^1} I_y \cup P_y$. Finally, define $X_{x\omega}^1 = \bigcup_{0 \le n < \omega} X_{xn}^1$.
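Concretely, a first-level neuron is the closure of x under immediate successors and predecessors; the following sketch computes it by breadth-first search on a finite order, with representation choices that are illustrative assumptions.

```python
# A first-level neuron as the closure of x under immediate successors
# and predecessors, computed by breadth-first search on a finite order.

def neuron(x, succ, pred):
    seen, frontier = {x}, {x}
    while frontier:
        nxt = set()
        for y in frontier:
            nxt |= succ.get(y, set()) | pred.get(y, set())
        frontier = nxt - seen
        seen |= frontier
    return seen

succ = {"x": {"y"}, "y": {"z"}}
pred = {"y": {"x"}, "z": {"y"}, "w": set()}
print(neuron("x", succ, pred))   # {'x', 'y', 'z'}; 'w' lies in another neuron
```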
Theorem 32. 
Let (X, <) be an ordered set of free dimension n. Let $x \in X$ and suppose that for every $y \in F_x$, $I_y$ has cardinality n. Then, $F_x \setminus X_{x\omega}^1 = \emptyset$.
Theorem 33. 
Let (X, <) be an ordered set of free dimension n. Assume that for every $x \in X$, $I_x$ has exactly n distinct elements. Then, there exists a subset C of incomparable elements such that $X = \bigoplus_{x \in C} X_{x\omega}^1$, where ⊕ is the order theoretic disjoint sum.
Theorem 34. 
Let (X, <) be an ordered set, $x, y \in X$. Then, either $X_{x\omega}^1 = X_{y\omega}^1$ or $X_{x\omega}^1 \cap X_{y\omega}^1 = \emptyset$.
Now, things become subtle.
Theorem 35. 
Let (X, <) be an ordered set. Then, there exists a subset C of X such that, as sets, $X = \bigcup_{a \in C} X_{a\omega}^1$.
The point about 'as sets' is critical here. These first-level neurons form a partition of the ordered set X, but in decomposing it in this manner, the order theoretic relationships between the various first-level neurons are lost. We can only describe the possibilities in general terms.
Theorem 36. 
Let (X, <) be an ordered set of finite free dimension. Let $x, y \in X$ and assume that $X_{x\omega}^1 \cap X_{y\omega}^1 = \emptyset$. Suppose that for some $w \in X_{x\omega}^1$ and $z \in X_{y\omega}^1$ we have $w < z$. Then, there exists an infinite successor chain $x_1, x_2, x_3, \ldots$ in $X_{x\omega}^1$ such that $w < x_1 < x_2 < \cdots < z$.
Theorem 37. 
Let (X, <) be an ordered set of finite free dimension. For any $x, z \in X$, if z is maximal in $X_{x\omega}^1$, then it is maximal in X.
We can summarize this as outlined below.
Theorem 38. 
Let (X, <) be an ordered set of finite free dimension. Then, there exists a subset C of elements such that, as a set, $X = \bigcup_{c \in C} X_{c\omega}^1$, where the $X_{c\omega}^1$ are pairwise disjoint, and for any $x, y \in X$ such that $X_{x\omega}^1 \cap X_{y\omega}^1 = \emptyset$ and $x < y$, there exists an infinite ascending chain D in $X_{x\omega}^1$ such that $x < D < y$.
Definition 39. 
Let (X, <) be an ordered set. A set of the form $X_{x\omega}^1$ is called a level 1 neuron, or simply a neuron. Maximal elements are called terminal elements, minimal elements are called receptors, unbounded ascending chains are called transmitting axons, and unbounded descending chains are called receiving axons.
A synaptic union bears some similarities to a sum of causal tapestries—it cannot be defined exactly but must be specified in each individual case.
Definition 40. 
Let (X, <) be an ordered set of finite free dimension and let N, M be neurons of X. Y is a synaptic union of N and M if $Y = N \cup M$ as sets and, whenever there exist $x \in N$, $y \in M$ such that $x < y$ in X, there exists a transmitting axon $C \subseteq X$ and either a receiving axon $D \subseteq X$ or a receptor $z \in M$ such that either $x < C < D < y$ or $x < C < z < y$.
Thus, we have the following result.
Theorem 39. 
Let (X, <) be an ordered set of finite dimension. Then, there exists a subset $C \subseteq X$ such that $X = \bigcup_{a \in C} X_{a\omega}^1$. Thus, every finitely generated ordered set is a synaptic union of neurons.
In the case of free abelian ordered sets, that is, ordered sets having free abelian order monoids, we can specify the structure more precisely. To do that, it is necessary to generalize yet again the notion of neuron.
Definition 41. 
Let ( X , < ) be an ordered set, x X . An n-level neuron is defined inductively as follows:
  • $I_x^0 = \{x\}$
  • $P_x^0 = \{x\}$
  • $X_{x\omega}^0 = \{x\}$
  • $I_x^1 = \{ y \mid x < y \text{ and there exists no } z \text{ such that } x < z < y \}$
  • $P_x^1 = \{ y \mid y < x \text{ and there exists no } z \text{ such that } y < z < x \}$
  • $X_{x1}^1 = I_x^1 \cup P_x^1$
  • $X_{xn}^1 = \bigcup_{a \in X_{x(n-1)}^1} I_a^1 \cup P_a^1$
  • $X_{x\omega}^1 = \bigcup_{n \in \mathbb{N}} X_{xn}^1$
  • $I_x^n = \{ X_{y\omega}^{n-1} \mid x < y \text{ and for no } z \text{ with } X_{z\omega}^{n-1} \cap X_{x\omega}^{n-1} = \emptyset = X_{z\omega}^{n-1} \cap X_{y\omega}^{n-1} \text{ do we have } x < z < y \}$
  • $P_x^n = \{ X_{y\omega}^{n-1} \mid y < x \text{ and for no } z \text{ with } X_{z\omega}^{n-1} \cap X_{x\omega}^{n-1} = \emptyset = X_{z\omega}^{n-1} \cap X_{y\omega}^{n-1} \text{ do we have } y < z < x \}$
  • $X_{x0}^n = X_{x\omega}^{n-1}$
  • $X_{xk}^n = \bigcup_{a \in X_{x(k-1)}^n} I_a^n \cup P_a^n$
  • $X_{x\omega}^n = \bigcup_{k \in \mathbb{N}} X_{xk}^n$
X x ω n is called an n-level neuron.
Finally,
Theorem 40. 
Let (X, <) be an ordered set with finite free abelian dimension n, meaning that it has a free abelian order monoid on n generators and no smaller number of generators suffices. Let C be a component of X and let $x \in C$. Then, $C = X_{x\omega}^n$.
Thus, an ordered set with free abelian dimension n consists of a union of disjoint n-level neurons. Unfortunately, we do not have such a nice result in the case of non-abelian orders, and their characterization remains an open question.

General Process

Let us now turn to the general structure of the temporal graph of an arbitrary process, considering first a single primitive process. The set of informons created by the process will change with each generation. We can build a history E as a union over a sequence of sets $E_1, E_2, E_3, \ldots$, which list the events generated up to and including the n-th iteration of the automaton. We thus think of a new kind of set, $E_{()}$, where () refers to an index referencing which stage of its construction the set is in. We require that $E_1 \subseteq E_2 \subseteq E_3 \subseteq \cdots$. If we are talking about $E_n$ for some n, we may speak about $E_i$ for all $0 \le i \le n$, but we cannot speak about $E_{n+1}$ unless we know that such a set has already been generated. Across the entire history, $E = \bigcup_{n=0}^{\infty} E_n$. If E has been generated up to generation n, we may speak of the potentialities for the version $E_{n+1}$, which we denote $P(E)$ and which consists of a collection of sets such that if $X \in P(E)$, then $E_n \subseteq X$; in other words, every set X is an extension of $E_n$. The set of potentialities thus contains all possible causal tapestries which can be created by the process and is thus closely related to the process covering graph.
A primitive process ( R = 1 ) generates only one informon per round, so we may label each informon by the number of its round. Hence, informon [ n ] is the informon created in the n-th round. [ 0 ] is the initial informon (it may be the empty informon [ 0 ] < ; > { } ).
Beginning with informon [0], the process P will generate informon [1], and informon [0] will cease to exist. Thus, if we examine what is taking place at any moment, we will observe a prior informon $[n-1]$ and a nascent informon $[n]$ undergoing concrescence. The actualized informons $[n-1], [n]$ exist together for just an instant, while the process of concrescence forms an instance and takes place over some duration. If we wish to go to a micro level, we can add a second index to denote which short round the nascent informon is currently in, bearing in mind that such a decomposition is purely heuristic. Thus, the completed concrescence or actualization of $[n-1]$ marks an instant $t_{n-1}$, which is also the beginning of the concrescence of $[n]$; its completion marks another instant, $t_n$, and separating them is a duration $d_n$, which corresponds to the instance of concrescence of $[n]$. The gap between two informons $[n], [m]$ is the number of actualized informons along the chain from $[m]$ to $[n]$. In this example, the gap is 0, since there are no informons along the chain from $[n-1]$ to $[n]$.
Theorem 41. 
Let C be a causal tapestry and P its generating process. Let [ n ] be a nascent informon and [ m ] a prior informon which is propagating information to [ n ] . Then, an edge ( [ m ] , [ n ] ) in the directed subgraph is primitive.
Proof. 
By definition, any such edge, if it exists, must be directed. If there were an informon $[v]$ such that $[m], [v], [n]$ is a path with both edges directed, $([m], [v])$ and $([v], [n])$, then by definition $[v]$ would lie in a tapestry nascent to $\mathcal{C}$ but also prior to $\mathcal{C}'$, which is impossible since $\mathcal{C}$ and $\mathcal{C}'$ are adjacent tapestries. □
This of course is not true if we take a path in the full tapestry because [ v ] could be space-like separated from either [ m ] or [ n ] .
Since the temporal order is associated with a primitive process, the order structure is simply a linear order, ordered by generation,
$[0] < [1] < [2] < [3] < \cdots$
and if $d_i$ is the duration corresponding to the generation of the $i$-th informon, we have the corresponding linear ordering of durations
$d_0 \prec d_1 \prec d_2 \prec d_3 \prec \cdots$
It is possible that orders could have the form
$\omega_{-n} < \omega_{-n+1} < \omega_{-n+2} < \cdots < \omega_{-1} < \omega_0 < \omega_1 < \omega_2 < \omega_3 < \cdots < \omega_m$
Suppose that we have two independent primitive processes. In that case, there is no interaction of any kind between them, and thus no a priori reason to believe that there should be any relationship between their two temporal orders. The temporal order $T$ of the pair $P_1, P_2$ is simply the disjoint sum of the individual temporal orders, $T_1 \oplus T_2$. The same is true of coupled processes, because there is no requirement of any form of simultaneity, merely that they stay out of each other's way when generating informons.
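A minimal sketch of the disjoint sum $T_1 \oplus T_2$ follows; the tagged-pair representation is an illustrative choice, not a construction taken from the paper. Elements of different summands are never comparable.

```python
def disjoint_sum(le1, elems1, le2, elems2):
    """Order relation on the tagged union of two posets: comparable only
    within a summand, never across summands."""
    def le(x, y):
        (i, a), (j, b) = x, y
        if i != j:
            return False          # cross-process pairs are incomparable
        return le1(a, b) if i == 1 else le2(a, b)
    elems = [(1, a) for a in elems1] + [(2, b) for b in elems2]
    return le, elems

# Two independent primitive processes, each linearly ordered by generation.
le, elems = disjoint_sum(lambda a, b: a <= b, range(3),
                         lambda a, b: a <= b, range(3))
assert le((1, 0), (1, 2)) and not le((1, 0), (2, 2))
```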
The case of sums involves single processes, so coherence in the temporal ordering is automatic. The situation changes with products and interactive products. In these circumstances, informons from all of the participating processes must be generated simultaneously; otherwise, it would be possible to carry out measurements in which fewer informons are detected at the end of a single round than there are participating processes, contradicting the number of participants. Thus, in the case of products, the temporal orderings of the participating processes become correlated. This is an important consideration in phenomena such as entanglement. When two independent entities become entangled, their previously unsynchronized generation cycles synchronize, and they thereafter behave as a single entity possessing a single, absolute temporal ordering; simultaneity is now a feature of this new entity. This persists until one of the entities enters into an interaction with another entity. At that point, because information propagation is local, information about the new, local interaction cannot propagate to the other entity; the temporal synchronization between the two original entities is therefore lost, synchronization between the newly interacting entities takes place, and the original entanglement is destroyed. Of course, this need not pertain once the product relationship has ended. The situation within complex products can become quite complicated and will not be discussed further here.
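The synchronization bookkeeping just described can be caricatured as follows. This is a toy illustration only; the names `SyncClasses` and `interact` are hypothetical, and the sketch models only the partitioning of processes into shared-round classes, not the generation of informons.

```python
class SyncClasses:
    """Tracks which processes currently share a common round counter."""
    def __init__(self, names):
        self.block_of = {n: frozenset({n}) for n in names}

    def interact(self, a, b):
        # a and b synchronize with each other and desynchronize from any
        # previous partners, since information flow is local
        for member in self.block_of[a] | self.block_of[b]:
            if member not in (a, b):
                self.block_of[member] = frozenset({member})
        self.block_of[a] = self.block_of[b] = frozenset({a, b})

clocks = SyncClasses(["P1", "P2", "P3"])
clocks.interact("P1", "P2")   # P1 and P2 entangle: one shared temporal ordering
clocks.interact("P2", "P3")   # P2 now syncs with P3; P1 is released
assert clocks.block_of["P1"] == frozenset({"P1"})
```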
Since $r$ is the maximum number of informons to which any given informon may contribute information in each generation cycle, each informon can have at most $Nr$ immediate successors. Note that at least one informon must pass information to $r$ informons in a generation cycle; otherwise, we could assign $r$ a lower value. Thus, the free dimension may lie anywhere in the range $[r, Nr]$. If $C$ is the causal graph of a causal tapestry generated by some process, then there is always a base of initial informons, so the backward regression terminates after a finite number of generations. We may therefore begin observing a process at any generation and know that the causal structure remains consistent back to its origin.
Since every process has a beginning, every process possesses a base, which is either a set of null informons or a set of prior informons from the prior causal tapestry which triggered its activation. The causal tapestry that it generates is therefore decomposable into a partially ordered set of antichains, each antichain corresponding to a generation, with each adjacent pair of antichains taking the form of an $r$-partite graph. This ordering is causal and thus invariant under Lorentz boosts; hence, it provides a local, absolute temporal ordering. Such local temporal becoming is the thesis of Arthur [70].
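A short sketch of the decomposition into generations follows. The longest-chain layering used here is one standard way to realize such a decomposition and is an assumption for illustration, not a construction taken from the paper; each layer is an antichain, and edges run only between layers.

```python
from functools import lru_cache

def generations(predecessors):
    """predecessors maps each informon to the informons feeding it.
    Returns informons grouped by generation; generation 0 is the base."""
    @lru_cache(maxsize=None)
    def depth(v):
        preds = predecessors[v]
        return 0 if not preds else 1 + max(depth(u) for u in preds)
    layers = {}
    for v in predecessors:
        layers.setdefault(depth(v), []).append(v)
    return [layers[g] for g in sorted(layers)]

# A base of two initial informons feeding a second generation of three.
g = generations({"a": (), "b": (), "x": ("a",), "y": ("a", "b"), "z": ("b",)})
assert g[0] == ["a", "b"] and set(g[1]) == {"x", "y", "z"}
```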
Time, in this ordering, is discrete, since each instant of time marks the completion of concrescence of its generated informons. If two processes are independent of one another, or even weakly coupled, there is no interchange of information between them, and thus, a priori, there need be no temporal synchronization of their generation cycles. The relative temporal difference, as reflected in any $\mathcal{M}$ interpretation of the causal tapestry, can take any value. Thus, while each temporal ordering corresponds to discrete time, the range of possible relative times is continuous. This is in keeping with Whitehead's distinction between actual time and potential or abstract time. Time can thus be both continuous and discrete; again, it depends upon what we are referring to.
There are actually two related temporal orderings associated with a process. There is the instant-based ordering discussed above, based upon the termination or commencement of concrescence of informons by a process. This temporal ordering more closely resembles the traditional, though misleading, point-based orders of classical and quantum physics. There is a second temporal ordering, however, which is focused upon the instances of concrescence, and thus upon the informons themselves. Since informons are grounded in instances of temporal becoming, in durations, this is not a point-like order but rather an example of what is called an interval order. An interval order is defined by Fishburn as follows [71]:
Definition 42. 
An interval order is a 4-tuple $(V, P, I, F)$, where
  • $V$ is a non-empty finite set;
  • $P$ is an asymmetric binary relation on $V$;
  • $I$ is a symmetric, reflexive binary relation on $V$;
  • $F$ is a mapping from $V$ into the set of positive-length, closed real intervals;
  • $(V, I)$ is an interval graph if, for all $x, y \in V$, $xIy$ if and only if $F(x) \cap F(y) \neq \emptyset$;
  • $(V, P)$ is an interval order if, for all $x, y \in V$, $xPy$ if and only if $F(x) > F(y)$, i.e., if and only if every point of $F(x)$ lies strictly to the right of every point of $F(y)$.
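Fishburn's two conditions can be checked directly from the interval assignment $F$. The following sketch, with illustrative closed intervals standing in for durations of concrescence, recovers both relations from interval endpoints:

```python
def I(F, x, y):
    """x I y iff the closed intervals F(x) and F(y) intersect."""
    (a1, b1), (a2, b2) = F[x], F[y]
    return a1 <= b2 and a2 <= b1

def P(F, x, y):
    """x P y iff F(x) lies wholly to the right of F(y)."""
    (a1, _), (_, b2) = F[x], F[y]
    return a1 > b2

# Durations of three concrescences: u and v overlap, w follows both.
F = {"u": (0.0, 2.0), "v": (1.0, 3.0), "w": (4.0, 5.0)}
assert I(F, "u", "v") and not P(F, "u", "v")  # overlapping: unordered
assert P(F, "w", "u") and not I(F, "w", "u")  # disjoint: strictly ordered
```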
It is interesting that although any interval order gives rise to exactly one interval graph, called its symmetric-complement graph, the converse is not true: a given interval graph may be the symmetric-complement graph of many different interval orders. The ordering of informons themselves, particularly when they arise from independent processes, is thus fundamentally ambiguous, quite apart from the results of special relativity. In the absence of some primordial über process that synchronizes all processes from the outset, temporal orderings involving independent processes are ambiguous by nature. It is an interesting question whether the process which generated the original Big Bang could serve as such an über process. I have not addressed that question in any depth, but if it could, it would then be the source of an underlying, fundamental, absolute, and global temporal ordering, albeit one which we would have no means of detecting. Nevertheless, the processes of Nature could act in accordance with it.

6. Conclusions

The process algebra approach presented here was originally developed to provide a local, realist model of non-relativistic quantum mechanics [4,5,6,7,8,9,10]. It indeed proved capable of approximating the usual wave function to a high degree of accuracy. Its scope of application has subsequently been expanded to model the behavior of organisms, in keeping with Whitehead's concept of process. The dynamical characteristics of organisms present deep challenges to the development of mathematical methods that promise more than a cartoon version of living systems. They also provide a wonderful opportunity for the creation of new mathematics through a serious interaction between mathematicians and scientists studying living systems in all of their complexity, much as has taken place between mathematics and physics over the past three centuries. The essential characteristics of living systems (generativity, becoming, fungibility, meta-stability, emergence, transience, transients, openness, contextuality, locality, and non-Kolmogorov probability) all pose deep challenges to our usual mathematical approaches, but also great opportunities. Some inroads have already been made in the study of fractal geometry, non-Kolmogorov probability and contextuality, and iterated systems such as Lindenmayer systems.
Some possible lines of future research are offered here:
  • The study of dynamical systems whose dynamics are dominated by generativity, fungibility, transients, transience, and contextuality.
  • Process dynamics appears to sit between fully deterministic and fully random dynamics—the study of the legitimacy of this idea.
  • The application of the process algebra to the study of neurobiological systems, collective intelligence or temperament.
  • The study of generated space–time structures—the relationship between the discrete and interpolated continuous geometries.
  • A study of local, causal temporal orderings, their topological and metrical properties, and the effect that interactions have on linkages between them.
  • An examination of the relationship between process strategies and the classes of functions which can be generated by them.
  • The generation of functions appears to be a novel approach distinct from that afforded by computation theory—the study of the legitimacy of this idea.
  • The study of the relationship between process strategies and the differential/integral equations for which their generated, interpolated functions serve as solutions.
  • The study of the range and limitations of causally local dynamics.
  • The study of the existence and origin of fundamental scales—whether they can be found in the mathematics itself or they must be added ad hoc.
Hopefully, some of these questions will catch the interest of mathematicians and lead to further development and application of the process algebra approach.

Funding

This research received no external funding.

Data Availability Statement

No data were generated in the preparation of this paper.

Acknowledgments

I would like to thank Irina Trofimova for her own work on process models and for countless discussions and debates which inspired and contributed to the development of the process algebra.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Baeten, J.C.M.; Weijland, W.P. Process Algebra; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
  2. Bergstra, J.A.; Ponse, A.; Smolka, S.A. (Eds.) Handbook of Process Algebra; Elsevier: Amsterdam, The Netherlands, 2001. [Google Scholar]
  3. De Nicola, R. Process Algebras. In Encyclopedia of Parallel Computing; Padua, D., Ed.; Springer: Boston, MA, USA, 2011. [Google Scholar] [CrossRef]
  4. Sulis, W. A Process Model of Non-Relativistic Quantum Mechanics. Ph.D. Thesis, University of Waterloo, Waterloo, ON, Canada, 2014. [Google Scholar]
  5. Sulis, W. A Process Model of Quantum Mechanics. J. Mod. Phys. 2014, 5, 1789–1795. [Google Scholar] [CrossRef]
  6. Sulis, W. A process algebra model of quantum electrodynamics. J. Phys. Conf. Ser. 2016. [Google Scholar]
  7. Sulis, W. Modeling stochastic complexity in complex adaptive systems: Non-Kolmogorov probability and the process algebra approach. Nonlinear Dyn. Psychol. Life Sci. 2017, 21, 407–440. [Google Scholar]
  8. Sulis, W. Locality is dead! Long live locality! Front. Phys. 2020, 8, 360. [Google Scholar] [CrossRef]
  9. Sulis, W. Reality Does Not Shine, It Twinkles. Quantum Rep. 2023, 5, 609–624. [Google Scholar] [CrossRef]
  10. Sulis, W. Process and Time. Entropy 2023, 25, 803. [Google Scholar] [CrossRef] [PubMed]
  11. Sulis, W. Process and Time; World Scientific: Singapore, forthcoming 2025. [Google Scholar]
  12. Whiting, H. Human Motor Actions: Bernstein Reassessed; North-Holland: New York, NY, USA, 1984. [Google Scholar]
  13. Quirk, G.J.; Muller, R.U.; Kubie, J.L. The firing of hippocampal place cells in the dark depends on the rat’s recent experience. J. Neurosci. 1990, 10, 2008–2017. [Google Scholar] [CrossRef] [PubMed]
  14. Ziv, Y.; Burns, L.D.; Cocker, E.D.; Hamel, E.O.; Ghosh, K.K.; Kitch, L.J.; El Gamal, A.; Schnitzer, M.J. Long-term dynamics of CA1 hippocampal place codes. Nat. Neurosci. 2013, 16, 264–266. [Google Scholar] [CrossRef]
  15. Barry, D.N.; Maguire, E.A. Consolidating the Case for Transient Hippocampal Memory Traces. Trends Cogn. Sci. 2019, 23, 635–636. [Google Scholar] [CrossRef]
  16. Hudson, A. Metastability of Neuronal Dynamics during General Anesthesia: Time for a Change in Our Assumptions? Front. Neural Circuits 2017, 11, 58. [Google Scholar] [CrossRef]
  17. Cabral, J.; Castaldo, F.; Vohryzek, J.; Litvak, V.; Bick, C.; Lambiotte, R.; Friston, K.; Kringelbach, M.; Deco, G. Metastable oscillatory modes emerge from synchronization in the brain spacetime connectome. Commun. Phys. 2022, 5, 184. [Google Scholar] [CrossRef] [PubMed]
  18. Nilchian, P.; Wilson, M.; Sanders, H. Animal-to-Animal Variability in Partial Hippocampal Remapping in Repeated Environments. J. Neurosci. 2022, 42, 5268–5280. [Google Scholar] [CrossRef]
  19. Azmitia, E.C. Serotonin and Brain: Evolution, Neuroplasticity, and Homeostasis. Int. Rev. Neurobiol. 2007, 77, 31–56. [Google Scholar] [CrossRef] [PubMed]
  20. Gerstein, G.L.; Mandelbrot, B. Random walk models for the spike activity of a single neuron. Biophys. J. 1964, 4, 41–68. [Google Scholar] [CrossRef] [PubMed]
  21. Shadlen, M.; Newsome, W.T. Noise, neural codes and cortical organization. Curr. Opin. Neurobiol. 1994, 4, 569–579. [Google Scholar] [CrossRef] [PubMed]
  22. Badin, A.-S.; Fermani, F.; Greenfield, S. The features and functions of neuronal assemblies: Dependency on mechanisms beyond synaptic transmission. Front. Neural Circuits 2017, 10, 114. [Google Scholar] [CrossRef] [PubMed]
  23. Clapp, M.; Aurora, N.; Herrera, L.; Bhatia, M.; Wilen, E.; Wakefield, S. Gut microbiota’s effect on mental health: The gut-brain axis. Clin. Pract. 2017, 7, 131–136. [Google Scholar] [CrossRef] [PubMed]
  24. Freeman, W.J. Mass Action in the Nervous System; Springer: New York, NY, USA, 1975. [Google Scholar] [CrossRef]
  25. Freeman, W.J. Neurodynamics: An Exploration in Mesoscopic Brain Dynamics; Springer: New York, NY, USA, 2000. [Google Scholar] [CrossRef]
  26. Sulis, W. Contextuality in Neurobehavioural and Collective Intelligence Systems. Quantum Rep. 2021, 3, 592–614. [Google Scholar] [CrossRef]
  27. Buzsaki, G. The Brain from Inside Out; Oxford University Press: Oxford, UK, 2019. [Google Scholar]
  28. Sulis, W. An Information Ontology for the Process Algebra Model of Non-Relativistic Quantum Mechanics. Entropy 2020, 22, 136. [Google Scholar] [CrossRef]
  29. Trofimova, I. Principles, concepts, and phenomena of ensembles with variable structure. In Nonlinear Dynamics in the Life and Social Sciences; Sulis, W., Trofimova, I., Eds.; IOS Press: Amsterdam, The Netherlands, 2001; pp. 217–231. [Google Scholar]
  30. Trofimova, I. Phenomena of Functional Differentiation (FD) and Fractal Functionality (FF). Int. J. Des. Nat. Ecodyn. 2016, 11, 508–521. [Google Scholar] [CrossRef]
  31. Trofimova, I. Functional constructivism: In search of formal descriptors. Nonlinear Dyn. Psychol Life Sci. 2017, 21, 441–474. [Google Scholar]
  32. Sulis, W. TIGoRS and Neural Codes. In Nonlinear Dynamics in Human Behaviour; Sulis, W., Combs, A., Eds.; World Scientific: Singapore; Shah Alam, Malaysia, 1996. [Google Scholar]
  33. Sulis, W. Transients as the basis for information flow in complex adaptive systems. Entropy 2019, 21, 94. [Google Scholar] [CrossRef] [PubMed]
  34. Dufort, P.; Lumsden, C. Dynamics, Complexity, and Computation. In Physical Theory in Biology: Foundations and Explorations; Lumsden, C., Brandts, W., Trainor, L., Eds.; World Scientific: Singapore, 1997; pp. 69–103. [Google Scholar]
  35. Sulis, W. Lessons from collective intelligence. In Chaos Theory in the Social Sciences; Elliot, E., Kiel, D., Eds.; University of Michigan Press: Ann Arbor, MI, USA, 2021. [Google Scholar]
  36. Available online: https://www.biologyonline.com/dictionary/process (accessed on 30 July 2023).
  37. Available online: https://en.wikipedia.org/wiki/Process_(engineering) (accessed on 30 July 2023).
  38. Mallon, E.; Pratt, S.; Franks, N. Individual and collective decision-making during nest site selection by the ant Leptothorax albipennis. Behav. Ecol. Sociobiol. 2001, 50, 352–359. [Google Scholar]
  39. Robinson, E.J.H.; Feinerman, O.; Franks, N.R. How collective comparisons emerge without individual comparisons of the options. Proc. R. Soc. B Biol. Sci. 2014, 281, 20140737. [Google Scholar] [CrossRef] [PubMed]
  40. Hodges, W. Building Models by Games; Dover Publications: New York, NY, USA, 2006. [Google Scholar]
  41. Hirsch, R.; Hodkinson, I. Relation Algebras by Games; Elsevier: New York, NY, USA, 2002. [Google Scholar]
  42. Berlekamp, E.; Conway, J.; Guy, R. Winning Ways for Your Mathematical Plays; A.K. Peters: Natick, MA, USA, 2004; Volume 4. [Google Scholar]
  43. Conway, J.H. On Numbers and Games; A.K. Peters: Natick, MA, USA, 2001. [Google Scholar]
  44. Guy, R.K. (Ed.) Combinatorial Games (Proceedings of Symposia in Applied Mathematics); American Mathematical Society: Providence, RI, USA, 1991; Volume 43. [Google Scholar]
  45. Prusinkiewicz, P.; Hanan, J. Lindenmayer Systems, Fractals, and Plants; Lecture Notes in Biomathematics 79; Springer: New York, NY, USA, 1989. [Google Scholar]
  46. Lai, Y.-C.; Tel, T. Transient Chaos: Complex Dynamics on Finite Time Scales; Springer: New York, NY, USA, 2011. [Google Scholar]
  47. Bergson, H. An Introduction to Metaphysics; SAGA Egmont: Hovedstaden, Denmark, 2020. [Google Scholar]
  48. Heidegger, M. Being and Time; General Press: New Delhi, India, 2023. [Google Scholar]
  49. Whitehead, A.N. Process and Reality; The Free Press: New York, NY, USA, 1978. [Google Scholar]
  50. Prigogine, I. From Being to Becoming: Time and Complexity in the Physical Sciences; Freeman: New York, NY, USA, 1980. [Google Scholar]
  51. Shimony, A. Search for a Naturalistic World View, Volume II, Natural Science and Metaphysics; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
  52. Hansen, N. Spacetime and Becoming: Overcoming the Contradiction Between Special Relativity and the Passage of Time. In Physics and Whitehead: Quantum, Process and Experience; Eastman, T.E., Keeton, H., Eds.; SUNY Press: Albany, NY, USA, 2004; pp. 136–163. [Google Scholar]
  53. Stapp, H. Mindful Universe: Quantum Mechanics and the Participating Observer; Springer: New York, NY, USA, 2011. [Google Scholar]
  54. Chew, G. A Historical Reality That Includes Big Bang, Free Will, and Elementary Particles. In Physics and Whitehead: Quantum, Process and Experience; Eastman, T.E., Keeton, H., Eds.; SUNY Press: Albany, NY, USA, 2004; pp. 84–91. [Google Scholar]
  55. Finkelstein, D. Physical Process and Physical Law. In Physics and Whitehead: Quantum, Process and Experience; Eastman, T.E., Keeton, H., Eds.; SUNY Press: Albany, NY, USA, 2004; pp. 180–186. [Google Scholar]
  56. Cahill, R.T. Process Physics: From Information Theory to Quantum Space and Matter; Nova Science Publishers: New York, NY, USA, 2005. [Google Scholar]
  57. Hiley, B. Process, Distinction, Groupoids and Clifford Algebras: An Alternative View of the Quantum Formalism. In New Structures for Physics; Coecke, B., Ed.; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  58. Eastman, T.E.; Keeton, H. (Eds.) Physics and Whitehead: Quantum, Process and Experience; SUNY Press: Albany, NY, USA, 2004. [Google Scholar]
  59. Epperson, M. Quantum Mechanics and the Philosophy of Alfred North Whitehead; Fordham University Press: New York, NY, USA, 2012. [Google Scholar]
  60. Stengers, I. Thinking with Whitehead: A Free and Wild Creation of Concepts; Harvard University Press: Cambridge, MA, USA, 2014. [Google Scholar]
  61. Sherburne, D. A Key to Whitehead’s Process and Reality; University of Chicago Press: Chicago, IL, USA, 1981. [Google Scholar]
  62. Bancal, J.D.; Pironio, S.; Acin, A.; Liang, Y.C.; Scarani, V.; Gisin, N. Quantum nonlocality based on finite-speed causal influences leads to superluminal signalling. arXiv 2013, arXiv:1110.3795v2. [Google Scholar]
  63. Sulis, W. Intransitivity and Contextuality in the Decision Making of Social Insect Colonies. Nat. Syst. Mind 2022, 1, 24–39. [Google Scholar] [CrossRef]
  64. Sulis, W.; Khan, A. Contextuality in Collective Intelligence: Not There Yet. Entropy 2023, 25, 1193. [Google Scholar] [CrossRef] [PubMed]
  65. Zayed, A.I. Advances in Shannon’s Sampling Theory; CRC Press: Boca Raton, FL, USA, 1993. [Google Scholar]
  66. Landau, H.J. Necessary density conditions for sampling and interpolation of certain entire functions. Acta Math. 1967, 117, 37–52. [Google Scholar] [CrossRef]
  67. Olevskii, A.; Ulanovskii, A. Functions with Disconnected Spectrum; University Lecture Series; AMS Press: Providence, RI, USA, 2016; Volume 65. [Google Scholar]
  68. Feynman, R.P.; Hibbs, A.R. Quantum Mechanics and Path Integrals; Dover Publications: Mineola, NY, USA, 2010. [Google Scholar]
  69. Sulis, W. Order Automata. Ph.D. Thesis, The University of Western Ontario, London, ON, Canada, 1989. [Google Scholar]
  70. Arthur, R. The Reality of Time Flow: Local Becoming in Modern Physics; Springer: New York, NY, USA, 2019. [Google Scholar]
  71. Fishburn, P. Interval Graphs and Interval Orders. Discret. Math. 1985, 55, 135–149. [Google Scholar] [CrossRef]