Article

Digital Genome and Self-Regulating Distributed Software Applications with Associative Memory and Event-Driven History

1 Ageno School of Business, Golden Gate University, San Francisco, CA 94105, USA
2 Barowsky School of Business, Dominican University of California, San Rafael, CA 94901, USA
* Author to whom correspondence should be addressed.
Computers 2024, 13(9), 220; https://doi.org/10.3390/computers13090220
Submission received: 21 June 2024 / Revised: 22 August 2024 / Accepted: 29 August 2024 / Published: 5 September 2024

Abstract

Biological systems have a unique ability inherited through their genome. It allows them to build, operate, and manage a society of cells with complex organizational structures, where autonomous components execute specific tasks and collaborate in groups to fulfill systemic goals with shared knowledge. The system receives information from various senses, makes sense of what is being observed, and acts using its experience while the observations are still in progress. We use the General Theory of Information (GTI) to implement a digital genome, specifying the operational processes that design, deploy, operate, and manage a cloud-agnostic distributed application that is independent of IaaS and PaaS infrastructure, which provides the resources required to execute the software components. The digital genome specifies the functional and non-functional requirements that define the goals and best-practice policies to evolve the system using associative memory and event-driven interaction history to maintain stability and safety while achieving the system’s objectives. We demonstrate a structural machine, cognizing oracles, and knowledge structures derived from GTI used for designing, deploying, operating, and managing a distributed video streaming application with autopoietic self-regulation that maintains structural stability and communication among distributed components with shared knowledge while maintaining expected behaviors dictated by functional requirements.

1. Introduction

Designing, developing, deploying, operating, and managing the lifecycle of distributed software applications is a critical area of study because all our business and personal lives depend on them. Volumes have been written on the subject; the BookAuthority organization lists 20 of the best distributed-systems books of all time [1]. A literature survey on service-oriented architecture examined 65 papers published between 2005 and 2020 [2]. A systematic literature review of microservice architecture (MSA), a more recent proposal to use fine-grained services for distributed software systems, discovered 3842 papers [3]. However, several issues with their design, deployment, operation, and management, as well as their instability under large fluctuations in resource demand or availability, vulnerability to security breaches, and CAP theorem limitations, are ever-present.
In this paper, we present an analysis of the current state-of-the-art distributed software application design, development, deployment, operation, and management. We describe a novel extension based on the foundation of the General Theory of Information (GTI) that infuses the following:
  • Improved resilience using the concept of autopoiesis, which refers to the ability of a system to replicate itself and maintain identity and stability while facing fluctuations caused by external influences;
  • Enhanced cognition using cognitive behaviors that model the system’s state, sense internal and external changes, analyze, predict, and take action to mitigate any risk to its functional fulfillment.
The paper is structured as follows:
  • We discuss the limitations of both the symbolic and sub-symbolic computing structures used in the current implementation of distributed software systems;
  • We discuss GTI and its application to create a knowledge representation in the form of associative memory, as well as the event-driven transaction history of the distributed software system;
  • We demonstrate a distributed software application with autopoietic and enhanced cognitive behaviors. An autopoietic manager configures and manages the components of the distributed software system, and a cognitive network manager provides enhanced cognition to manage the connections between the software components to maintain the quality of service. A policy manager’s policies are defined by best practices and experience to manage deviations from expected behaviors.

1.1. Limitations of the Current State of the Art

1.1.1. CAP Theorem Limitation

The CAP theorem [4], also known as Brewer’s theorem, states that it is impossible for a distributed data system to provide more than two out of three guarantees simultaneously:
  • Consistency: All users see the same data at the same time, no matter which node they connect to. For this to happen, whenever data are written to one node, they must be instantly forwarded or replicated to all the other nodes in the system before the write is deemed ‘successful’.
  • Availability: Any client requesting data receives a response, even if one or more nodes are down. Another way to state this is that all working nodes in the distributed system return a valid response for any request, without exception.
  • Partition Tolerance: The system continues to operate despite an arbitrary number of messages being dropped (or delayed) by the network between nodes.
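The trade-off can be illustrated with a minimal Python sketch (all names here are hypothetical illustrations, not part of the paper's implementation): during a network partition, a replicated write either refuses to proceed while a replica is unreachable (preserving consistency at the cost of availability) or proceeds and lets the replicas diverge (preserving availability at the cost of consistency).

```python
class Node:
    """A toy replica node: stores key/value pairs with a version counter."""
    def __init__(self, name):
        self.name = name
        self.store = {}        # key -> (value, version)
        self.reachable = True  # False while partitioned from its peers

def write(nodes, key, value, version, require_consistency=True):
    """Replicate a write to all nodes.

    With require_consistency=True, the write fails (availability is
    sacrificed) if any replica is unreachable; with False, reachable
    replicas accept the write and the partitioned node serves stale data.
    """
    if require_consistency and any(not n.reachable for n in nodes):
        return False  # refuse the write: stay consistent, lose availability
    for n in nodes:
        if n.reachable:
            n.store[key] = (value, version)
    return True

# Two replicas; n2 becomes partitioned from the network.
n1, n2 = Node("n1"), Node("n2")
write([n1, n2], "x", "v1", version=1)
n2.reachable = False

# CP choice: refuse writes during the partition.
assert write([n1, n2], "x", "v2", version=2, require_consistency=True) is False

# AP choice: accept the write; replicas now diverge (n2 holds stale data).
write([n1, n2], "x", "v2", version=2, require_consistency=False)
```

No configuration can make the final write both succeed and leave the replicas identical while the partition lasts, which is the CAP theorem in miniature.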

1.1.2. Complexity

In addition, the complexity of maintaining availability and performance continues to increase with Hobson’s choice between single-vendor lock-in or multi-vendor complexity [5]. There are solutions available using free and open-source software or through adopting multi-vendor and heterogeneous resources offered by multiple cloud providers [6]. This can help to maintain the scalability and management flexibility of distributed applications. However, this often increases complexity, and layers of management lead to the “who manages the managers” conundrum. Moreover, the advent of many virtualized and disaggregated technologies, as well as the rapid growth of the Internet of Things (IoT), makes end-to-end orchestration difficult at scale.

1.1.3. Computation and Its Limits

Some arguments suggest that the problems of scalability, resiliency, and complexity of distributed software applications are symptoms that point to a foundational shortcoming of the computational model associated with the stored program implementation of the Turing machine from which all current-generation computers are derived [7,8,9,10,11,12,13,14].
As Cockshott et al. [11] p. 215 describe in their book “Computation and its Limits”, the concept of the universal Turing machine has allowed us to create general-purpose computers and “use them to deterministically model any physical system, of which they are not themselves a part to an arbitrary degree of accuracy. Their logical limits arise when we try to obtain them to model a part of the world that includes themselves”. External agents are required for the harmonious execution of the computer and the computation.
The distributed software application consists of a network structure of distributed software components that are dependent on the infrastructure that provides the resources (CPU, memory, and power/energy) that are managed by different service providers with their own infrastructure as a service (IaaS) and platform as a service (PaaS) management systems. In essence, the processes executing the computing structures (hardware and software) behave like a complex adaptive system. They exhibit emergent behavior when faced with local fluctuations impacting the infrastructure. For example, if a failure occurs in any one component, the execution halts, and external entities must fix the problem. Resources must be increased or decreased if the demand fluctuates to maintain efficiency and performance through an external entity. These lead to a single-vendor lock-in or the complexity of a third-party orchestrator that manages various component managers.

1.2. Stored Program Control Implementation of Symbolic and Sub-Symbolic Computing Structures

Current-generation computers are used for process automation, intelligent decision-making, mimicking behaviors by robots, and using transformers to generate text, images, and videos [15,16,17]. Figure 1 shows process automation executed by an algorithm operating on data structures. Insights obtained through machine learning algorithms or deep learning algorithms for intelligent decision-making are derived from data analytics, as shown in Figure 1. In addition, deep learning algorithms (which use multi-layered neural networks to simulate the complex pattern recognition processes of the human brain) are used to perform robotic behaviors or generative AI tasks.
McCulloch and Pitts’s 1943 paper [18] on how neurons might work and Frank Rosenblatt’s introduction of the perceptron [19] led to the current AI revolution with deep learning algorithms using computers.
Robotic Behavior Learning primarily involves training a robot to perform specific tasks or actions. This includes reinforcement learning, where the robot learns from trial and error, receiving rewards for successful actions and penalties for mistakes. Over time, the robot improves its performance by maximizing its rewards and minimizing penalties. This type of learning is useful in environments where explicit programming of all possible scenarios is impractical. On the other hand, transformers in GenAI focus on processing and generating text data. They use attention mechanisms to understand the context within large bodies of text. The input to a transformer model is a sequence of tokens (words, sub-words, or characters), and the output is typically a sequence of tokens that forms a coherent and contextually relevant text. This could be a continuation of the input text, a translation into another language, or an answer to a question. In addition, when the algorithm is trained on a large dataset of images or videos, it learns to understand the underlying patterns and structures in the data and generates new, original content. The results can be surprisingly creative and realistic, opening up new possibilities for art, design, and visual storytelling.
Symbolic computing uses a sequence of symbols (representing algorithms) that operate on another sequence of symbols (representing data structures describing the state of a system that depicts various entities and relationships) to change the state. Sub-symbolic computation is associated with neural networks, where an algorithm mimics the neurons in biological systems (the perceptron). A multi-layer network of perceptrons mimics biological neural networks in converting the information provided as input (text, voice, video, etc.).
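Rosenblatt's perceptron can be sketched in a few lines of Python. The following toy implementation (illustrative only; function names are our own) learns the linearly separable AND function with the classic error-correction rule:

```python
def perceptron_train(samples, epochs=10, lr=0.1):
    """Train a single perceptron on 2-input samples.

    samples: list of ((x1, x2), target) pairs with target in {0, 1}.
    Returns the learned weights [w1, w2] and bias b.
    """
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - y           # error-correction rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    """Fire (output 1) when the weighted sum exceeds the threshold."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Learn the linearly separable AND function.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(and_data, epochs=20)
```

A single perceptron can only separate classes with a line (or hyperplane); stacking such units into multi-layer networks is what gives deep learning its expressive power.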
Several issues with sub-symbolic computing have been identified [20,21]:
  • Lack of Interpretability: Deep learning models, particularly neural networks, are often “black boxes” because it is difficult to understand the reasoning behind their responses to queries.
  • Need for Large Amounts of Data: These models typically require large data sets to train effectively.
  • Overfitting: Deep learning models can overfit the training data, meaning they may not generalize well to unseen data.
  • Vanishing and Exploding Gradient Problems: These are issues that can arise during the training process, making it difficult for the model to learn.
  • Adversarial Attacks: Deep learning models are vulnerable to adversarial attacks, where small, intentionally designed changes to the input can cause the model to make incorrect predictions.
  • Difficulty Incorporating Symbolic Knowledge: Sub-symbolic methods, such as neural networks, often struggle to incorporate symbolic knowledge, such as causal relationships and practitioners’ knowledge.
  • Bias: These methods can learn and reflect biases present in the training data.
  • Lack of Coordination with Symbolic Systems: While sub-symbolic and symbolic systems can operate independently, they often need to coordinate closely together to integrate the knowledge derived from them, which can be challenging.

1.3. The General Theory of Information and Super-Symbolic Computing

Recent advances based on the General Theory of Information provide a new approach that integrates symbolic and sub-symbolic computing structures with a novel super-symbolic structure [9,22,23,24,25,26,27,28,29,30,31,32,33,34,35], which addresses the various foundational shortcomings mentioned above.
Figure 2 depicts [23,24,25,26,27,28,29] how information bridges the material world of structures formed through energy and matter transformations; the mental world that observes the material world and creates mental structures that represent the knowledge received from observed information; and the digital world created using the information generated by both the mental and material structures using the stored program control implementation of the Turing machine.
Mark Burgin’s General Theory of Information (GTI) bridges our understanding of the material world, which consists of matter and energy, as well as the mental worlds of biological systems that utilize information and knowledge. This theory is significant because it offers a model for how operational knowledge is represented and used by biological systems involved in building, operating, and managing life processes [9,34,35]. In addition, it suggests a way to represent operational knowledge and use it to build, deploy, and operate distributed software applications. The result is a new class of digital automata with autopoietic and cognitive behaviors that biological systems exhibit [9,22]. Autopoietic behavior refers to the self-producing and self-maintaining nature of living systems. Cognitive behavior refers to obtaining and using knowledge.
The genome, through natural selection, has evolved to capture and transmit the knowledge to build, operate, and manage a structure that receives information from the material world and converts it into knowledge in the mental world using genes and neurons. The result is an associative memory and event-driven transaction history that the system uses to interact with the material world. In symbolic computing structures, the knowledge is represented as data structures (entities and their relationships) depicting the system state and its evolution using an algorithm. In sub-symbolic computing, the algorithm creates a neural network, and knowledge is represented by the optimized parameters that result from training the neural network with data structures. The observations of Turing, Neumann, McCulloch, Pitts, and Rosenblatt have led to the creation of a digital world where information is converted into knowledge in the digital form, as shown in Figure 1. The super-symbolic structure derived from the GTI provides a higher-level knowledge representation in the form of fundamental triads/named sets [24,25,26,27,28,29], shown in Figure 2.
In essence, three contributions from GTI enable the developing, deploying, operating, and managing of a distributed application using heterogeneous IaaS and PaaS resources while overcoming the shortcomings discussed in this paper:
Digital Automata: Burgin’s construction of a new class of digital automata to overcome the barrier posed by the Church–Turing thesis has significant implications for AI. This allows for creating more advanced AI systems that can perform tasks beyond the capabilities of traditional Turing machines [9,33,35].
Super-symbolic Computing: His contribution to super-symbolic computing with knowledge structures, cognizing oracles, and structural machines changes how we design and develop self-regulating distributed applications. These tools also allow AI systems to process and understand information in a more complex and nuanced way, similar to how humans do this by interacting with sub-symbolic and super-symbolic computing structures with common knowledge representation using super-symbolic computing [33,34,35], which is different from neuro-symbolic computing [36].
Digital Genome: The schema and associated operations derived from GTI are used to model a digital genome specifying the operational knowledge of algorithms executing software life processes. The digital genome specifies operational knowledge that defines and executes domain-specific functional requirements, non-functional requirements, and best-practice policies that maintain the system behavior, conforming to the design’s expectations. This results in a digital software system with a super-symbolic computing structure exhibiting autopoietic and cognitive behaviors that biological systems also exhibit [9,35,37,38].
Figure 3 shows the super-symbolic computing structure implementing a domain-specific digital genome using structural machines, cognizing oracles, and knowledge structures derived from GTI to create a knowledge network with two essential features:
  • The knowledge network captures the system state and its evolution caused by the event-driven interactions of various entities interacting with each other in the form of associative memory and event-driven interaction history. It is important to emphasize that the digital genome and super-symbolic computing structures differ from using symbolic and sub-symbolic structures together. For example, the new frameworks [36] from the MIT Computer Science and Artificial Intelligence Laboratory provide essential context for language models that perform coding, AI planning, and robotic tasks. However, this approach does not use associative memory and event-driven transaction history as long-term memory. The digital genome provides a schema for creating them using knowledge derived from both symbolic and sub-symbolic computing.
  • GTI provides a schema and operations [22,33,34,35,37,38] for representing the system state and its evolution. These operations define and execute various processes that fulfill the functional and non-functional requirements and the best-practice policies and constraints.
Figure 3 shows the theoretical GTI-based implementation model of the digital genome specifying the functional and non-functional requirements, along with adaptable policies derived from experience to maintain the expected behaviors based on the genome specification. In this paper, we describe an implementation of a distributed software application using the digital genome specification and demonstrate the policy-based management of functional and non-functional requirements. Section 2 describes the distributed software application and its implementation using GTI. Section 3 describes the Video-on-Demand (VoD) service with associative memory and event-driven interaction history. Section 4 describes the results of the implementation. Section 5 draws some conclusions, compares our approach with related work, and discusses future directions.
Figure 4 shows two information processing structures: one based on the Turing machine and von Neumann architecture, and another based on structural machines and a knowledge network composed of knowledge structures and cognizing oracles.
The current state of the art uses the von Neumann stored program control implementation of the Turing machine to execute algorithms using symbolic, sub-symbolic, or a combination called neuro-symbolic computing structures. The approach presented in this paper uses structural machines with a schema and operations defining a knowledge network based on GTI. The structural machine uses various process execution methods, such as symbolic and sub-symbolic computing, using general-purpose computers, people, or analog devices with sensors and actuators. Figure 4 shows the two models. Each knowledge structure executes a process specified by the functional and non-functional requirements using best-practice policies and constraints to fulfill the design goals. In the next section, we describe the knowledge network and how to use it to design, deploy, operate, and manage a distributed software application.

2. Distributed Software Application and Its Implementation

A distributed software application is designed to operate on multiple computers or devices across a network. Its functionality is spread across different components, each with a specific role; the components work together and communicate using shared knowledge to accomplish the application’s overall goals. The overall functionality and operation are defined using functional and non-functional requirements, and policies and constraints are specified using best practices that ensure the application’s functionality, availability, scalability, performance, and security while executing its mission. We describe a process to design, develop, deploy, operate, and manage a distributed software application using functional and non-functional requirements, policies, and constraints specified to achieve a specific goal. The goal is determined by the domain knowledge representing various entities, their relationships, and the behaviors that result from their interactions.
Both symbolic and sub-symbolic computing structures execute processes that receive input, fulfill functional requirements, and share knowledge with other wired components to fulfill system-level functional requirements. Figure 5 shows the computing models and the knowledge representation as a knowledge network used in the new approach based on GTI.
As discussed in [23], p. 13, “Information processing in triadic structural (entities, relationships, and behaviors) machines is accomplished through operations on knowledge structures, which are graphs representing nodes, links, and their behaviors. Knowledge structures contain named sets and their evolution containing named entities/objects, named attributes, and their relationships. Ontology-based models of domain knowledge structures contain information about known knowns, known unknowns, and processes for dealing with unknown unknowns through verification and consensus. Inter-object and intra-object behaviors are encapsulated as named sets and their chains. Events and associated behaviors are defined as algorithmic workflows, which determine the system’s state evolution. A named set chain of knowledge structures (knowledge network) provides a genealogy representing the system’s state history. This genealogy can be treated as the deep memory and used for reasoning about the system’s behavior, as well as for its modification and optimization”.
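The idea of a knowledge structure as a named set with an event-driven genealogy can be sketched in Python. This is a simplified illustration only; the class and field names are our own, not the schema used in the paper's implementation:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Triad:
    """A fundamental triad (named set): source entity, named relationship, target."""
    source: str
    name: str
    target: str

@dataclass
class KnowledgeStructure:
    """A node in a knowledge network: an entity, its attributes and
    relationships, and an event-driven history of its state evolution
    (the 'genealogy' described in the quoted passage)."""
    entity: str
    attributes: dict = field(default_factory=dict)
    triads: list = field(default_factory=list)
    history: list = field(default_factory=list)  # (event, state snapshot) pairs

    def apply_event(self, event, **changes):
        """Record an event and the attribute changes it caused."""
        self.attributes.update(changes)
        self.history.append((event, dict(self.attributes)))

# A video-server node linked to a client, evolving through two events.
ks = KnowledgeStructure("video_server")
ks.triads.append(Triad("video_server", "serves", "video_client"))
ks.apply_event("deployed", status="running")
ks.apply_event("overload", status="degraded")
```

The `history` list plays the role of the deep memory mentioned above: every past state remains available for reasoning about, modifying, or optimizing the system's behavior.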
The domain knowledge for each node and the knowledge network are obtained from different sources (including Large Language Models) and specified as functional and non-functional requirements derived from the system’s desired availability, performance, security, and stability.
The digital genome specifies the functionality and operation of the system that deploys, operates, and manages the evolution of the knowledge network with the knowledge about where to obtain the computing resources and use them. The autopoietic network manager is designed to deploy the software components with appropriate computing resources (e.g., IaaS and PaaS in a cloud) as services. The cognitive network manager manages the communication connections between the nodes executing various processes. The service workflow manager controls the workflow among the nodes delivering the service. An event monitor captures the events in the system to create the associative memory and the event-driven interaction history. A cognitive red flag manager captures deviations from the normal workflow and alerts the autopoietic manager, which takes corrective action by coordinating with the cognitive network manager. The architecture provides a self-regulating distributed software application using resources from different providers.
We describe an example implemented using this architecture to demonstrate its feasibility and benefits. A video-on-demand service is deployed in a cloud with auto-failover. This demonstration aims to show the feasibility of creating associative memory and event-driven transaction history that provides real-time evolution of the system as long-term memory. They can be used to perform data analytics using a transparent model to gain insights in contrast to the current state of the art, as shown in Figure 3.

3. Video-on-Demand (VoD) Service with Associative Memory and Event-Driven Interaction History

The design begins with defining the functional requirements, non-functional requirements, best-practice policies, and constraints.
The Functional Requirements for User Interaction are the following:
  • The user is given a service URL.
  • The user registers for the service.
  • An administrator authenticates with a user ID and password.
  • The user logs into the URL with a user ID and password.
  • The user is presented with a menu of videos.
  • The user selects a video.
  • The user is presented with a video and controls to interact.
  • The user uses the controls (pause, start, rewind, and fast forward) and watches the video.
The Functional Requirements for Video Service Delivery are the following:
  • The Video Service consists of several components working together:
    A VoD service workflow manager;
    A video content manager;
    A video server;
    A video client.
The Non-functional Requirements, Policies, and Constraints are the following:
  • Auto-Failover: When a video service is interrupted by the failure of any component, the user service should not experience any service interruption.
  • Auto-Scaling: When the end-to-end service response time exceeds a threshold, necessary resource adjustments should be made to restore the response time to the desired value.
  • Live Migration: Any component should be easily migrated from one infrastructure to another without service interruption.
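The auto-failover requirement above can be sketched as follows (a hypothetical, minimal Python illustration, not the actual VoD implementation): a request is transparently redirected to the secondary component when the primary fails, so the user experiences no interruption.

```python
class Component:
    """A toy service component that can fail."""
    def __init__(self, name):
        self.name = name
        self.alive = True

    def serve(self, request):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name}:{request}"

def serve_with_failover(primary, secondary, request):
    """Auto-failover: if the primary fails, transparently switch to the
    secondary and report which component is now active."""
    try:
        return primary.serve(request), primary
    except ConnectionError:
        return secondary.serve(request), secondary

p, s = Component("client-primary"), Component("client-secondary")
p.alive = False  # simulate the failure of the primary video client
response, active = serve_with_failover(p, s, "play")
```

In the architecture described below, the switch itself is performed by the service workflow manager, while restoring the failed component is delegated to the autopoietic network manager.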
Figure 6 shows a digital genome-based architecture with various components that fulfill these requirements. Each node is a process-executing engine that receives input and executes the process using a symbolic or sub-symbolic computing structure. An example is a Docker container deployed in a cloud using local IaaS and PaaS. The roles of the digital genome, the autopoietic network manager, and the cognitive network manager are well discussed in several papers [9,22,37,38].
Just as the genome in living organisms contains the information required to manage life processes, the digital genome contains all the information about the distributed software application in the form of knowledge structures to build itself, reproduce itself, and maintain its structural stability while using its cognitive processes to fulfill the functional requirements.
We summarize the functions of the schema of the knowledge network, which specifies the processes executing the system’s functional and non-functional requirements, as well as best-practice policies and constraints:
The Digital Genome Node: It is the master controller that provides the operational knowledge to deploy the knowledge network that contains various nodes executing different processes and communicating with other nodes wired together with shared knowledge. It initiates the autopoietic and cognitive network managers responsible for managing the structure and workflow fulfilling the functional and non-functional requirements.
Autopoietic Network Manager (APM) Node: It contains knowledge about where to deploy the nodes using resources from various sources such as IaaS and PaaS from cloud providers. It receives a list of Docker containers to be deployed as nodes and determines how they are wired together. At t = 0, the APM deploys various containers using the desired cloud resources. It passes on the URLs of these nodes and the network connections to the cognitive network manager. For example, if the nodes are duplicated to fulfill non-functional requirements, it assigns which connection is the primary and which is the secondary.
Cognitive Network Manager (CNM) Node: Using the URLs and their wiring map, the CNM consults the policy manager, which specifies the requirements for fulfilling the non-functional requirements, such as auto-failover, auto-scaling, and live migration. The CNM then sets up the connection list and passes it on to the service workflow manager.
Service Workflow Manager (SWM): The SWM provides service workflow control by managing the connections between various nodes participating in the service workflow. In the VoD service, it manages the workflow of the video service sub-network and the user interface manager sub-network, as shown in Figure 5. It also manages deviations from the expected workflow by using policies that define actions to correct them.
User Interface Management Sub-Network (UIM): It manages the user interface workflow by providing registration, login, video selection, and other interactions.
Video Service Management Sub-Network: It provides video services, from content to video server and client management.
Cognitive Red Flag Manager: When a deviation from the normal workflow occurs, such as the failure of one of the video clients, the SWM switches to the secondary video client, as shown in Figure 6. It also raises a red flag, which is communicated to the APM to take corrective action: in this case, restoring the video client that went down and notifying the CNM to designate it as secondary.
Event Monitor: It monitors events from video service and user interface workflows and creates an associative memory and an event-driven interaction history with a time stamp. These provide the long-term memory for other nodes to use the information in multiple ways, including performing data analytics and gaining insights to take appropriate action.
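A minimal sketch of such an event monitor (illustrative Python; the names are our assumptions, not the implementation described here) records each event twice: once in a timestamped, ordered interaction history, and once in an associative view keyed by the entity involved.

```python
import time

class EventMonitor:
    """Captures timestamped events from the service workflows and builds
    two long-term-memory views: an event-driven interaction history and
    an associative memory keyed by entity."""
    def __init__(self):
        self.history = []      # ordered (timestamp, source, event) records
        self.associative = {}  # entity -> list of events involving it

    def record(self, source, event):
        stamp = time.time()
        self.history.append((stamp, source, event))
        self.associative.setdefault(source, []).append(event)

monitor = EventMonitor()
monitor.record("user_42", "login")
monitor.record("user_42", "select_video")
monitor.record("video_client_1", "failure")
```

The ordered history supports replaying and analyzing the system's evolution, while the associative view answers questions such as "what has this user or component done?" without scanning the full log.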
We summarize the development workflow, translating the functional and non-functional requirements and best-practice policies as follows:
  • Developers design the process workflow based on the functional requirements.
  • Each process (a knowledge structure with a schema consisting of entities, relationships, and their event-driven interactions, which are defined by its inputs and actions) executes based on the inputs and generated outputs communicated with other processes using shared knowledge.
  • All the knowledge structures are containerized and deployed as a knowledge network. For example, the user interface subnetwork contains the user registration, login, video selection, and use processes with specified inputs, behaviors, and outputs. The video service subprocess deals with video management and delivery processes. Wired knowledge structures fire together to execute autopoietic and enhanced cognitive behaviors managed by a service workflow manager under the supervision of the autopoietic and cognitive network managers.
  • The autopoietic manager manages the deployment of knowledge structures using cloud resources.
  • The cognitive network manager manages the workflow connections between the knowledge structures.
  • Autopoietic and cognitive managers, along with a policy manager that dictates best-practice rules, manage the deviations from expected behavior caused by fluctuations in the availability of or demand for resources or by workflow disruptions. The best-practice policies are derived from history and experience. For example, if the service response time exceeds a threshold, auto-scaling is used to reduce it. Using the recovery time objective (RTO) and the recovery point objective (RPO), the structure of the knowledge network is configured by the autopoietic and cognitive network managers to maintain the quality of service using auto-failover or live migration.
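The auto-scaling policy mentioned above can be sketched as a simple threshold rule (a hypothetical illustration; the thresholds, limits, and function name are assumed, not taken from the implementation):

```python
def autoscale(current_instances, response_time_ms, threshold_ms=500,
              min_instances=1, max_instances=10):
    """Best-practice policy sketch: scale out when the end-to-end response
    time exceeds the threshold, scale in when there is ample headroom."""
    if response_time_ms > threshold_ms:
        # Response time too high: add an instance, up to the maximum.
        return min(current_instances + 1, max_instances)
    if response_time_ms < threshold_ms / 2 and current_instances > min_instances:
        # Well under the threshold: release an instance, down to the minimum.
        return current_instances - 1
    return current_instances
```

In the architecture described here, such a rule would live in the policy manager, with the autopoietic network manager carrying out the actual deployment or removal of instances.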
In the next section, we discuss the results.

4. Results

Various processes shown in Figure 6 were implemented using these three technologies:
  • Python programming for creating the schema and its evolution with various instances;
  • Containers that were deployed using the Google Cloud;
  • A graph database (TigerGraph) that represents the schema and its evolution with various instances, using the events that capture the interactions as associative memory and event-driven interaction history.
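The core idea of the third item — recording event-driven interactions as edges in a graph that doubles as associative memory — can be sketched with a plain-Python stand-in for the graph database. The class and entity names below are illustrative assumptions, not the TigerGraph schema used in the implementation.

```python
import time
from collections import defaultdict

class KnowledgeGraph:
    """Minimal stand-in for the graph database: entities are vertices,
    timestamped interactions are edges, and the adjacency index acts as
    the associative memory used for recall."""

    def __init__(self):
        self.events = []               # event-driven interaction history
        self.assoc = defaultdict(set)  # associative memory: entity -> related entities

    def record(self, source, event, target, ts=None):
        self.events.append({"source": source, "event": event,
                            "target": target, "ts": ts or time.time()})
        self.assoc[source].add(target)
        self.assoc[target].add(source)

    def history(self, entity):
        """Replay every interaction an entity took part in, in time order."""
        return [e for e in sorted(self.events, key=lambda e: e["ts"])
                if entity in (e["source"], e["target"])]

g = KnowledgeGraph()
g.record("user:alice", "login", "ui:login", ts=1.0)
g.record("user:alice", "select", "video:v42", ts=2.0)
g.record("video:v42", "stream", "server:vod-1", ts=3.0)

print(sorted(g.assoc["user:alice"]))  # entities associated with alice
print(len(g.history("video:v42")))    # interactions involving the video
```

A graph database provides the same two views — associative traversal from any vertex and a time-ordered replay of interactions — at scale and with query support.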
The video https://triadicautomata.com/digital-genome-vod-presentation/ (accessed on 30 August 2024) demonstrates the implementation of associative memory and the event-driven transaction history of a distributed software application with a digital software genome specification discussed in this paper. The video details the schema and operation of the video-on-demand service:
  • A knowledge sub-network in action, where users interact with various entities delivering the service. They can register, log in, choose a video from a menu, and interact with it.
  • A knowledge sub-network that manages and serves the video-on-demand service.
  • A higher-level knowledge network with the service workflow manager, policy manager, autopoietic manager, and cognitive network manager that provides structural stability and enhanced cognitive workflow management, addressing fluctuations in the interactions that would otherwise disrupt the quality of service.
  • A graph database demonstrates the system evolution using a service schema, associative memory, and event-driven interaction history of all the users.
Figure 7 shows the deployment of the VoD service using a cloud infrastructure, containers, and various services fulfilling both functional and non-functional requirements discussed in this paper.
The system is structured to be resilient: if one of the critical components fails, the video continues to be delivered to the user without interruption. The switchover occurs without the user noticing the failure.
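The transparent switchover can be illustrated with a small failover sketch, assuming a primary and standby video server; the class names `VideoServer` and `FailoverManager` are hypothetical, not taken from the implementation.

```python
class VideoServer:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def serve(self, video_id):
        if not self.healthy:
            raise RuntimeError(f"{self.name} is down")
        return f"{self.name} streaming {video_id}"

class FailoverManager:
    """Routes each request to the first healthy replica so a component
    failure never reaches the user (the switchover described above)."""

    def __init__(self, replicas):
        self.replicas = replicas

    def serve(self, video_id):
        for server in self.replicas:
            try:
                return server.serve(video_id)
            except RuntimeError:
                continue  # skip the failed replica and try the next one
        raise RuntimeError("no healthy replica available")

primary, standby = VideoServer("vod-1"), VideoServer("vod-2")
manager = FailoverManager([primary, standby])

print(manager.serve("v42"))  # -> vod-1 streaming v42
primary.healthy = False      # simulate failure of the critical component
print(manager.serve("v42"))  # -> vod-2 streaming v42 (user sees no interruption)
```

In the actual deployment, the autopoietic manager plays the role of the failover manager, using the RTO/RPO policies to decide between auto-failover and live migration.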

5. Conclusions

This paper aims to explain how GTI helps us understand knowledge and its representation, which belong to the mental worlds of biological systems. It helps them build, operate, and manage a society of cells with complex organizational structures. Self-regulation, associative memory, and event-driven interaction history are key attributes that allow them to make sense of what they are observing and act while the observation is still in progress to control the outcomes and manage risk. GTI, articulated by Mark Burgin, provides a unified context for existing directions in information studies. It allows us to elaborate on a comprehensive definition and explain the relationships between information, data, and knowledge. It demonstrates how different mathematical models of information and information processes are related [37,38,39,40,41]. According to GTI, information is the bridge between the material structures, their state evolution, and the mental structures of biological systems that model and interact with the material structures. Information is converted into knowledge and represented by composable knowledge structures forming multi-layer knowledge networks.
This paper outlines our endeavor to apply the same knowledge representation to construct, operate, and manage a society of autonomous distributed software components with complex organizational structures. The new approach offers practical benefits such as self-regulating distributed application design, development, deployment, operation, and management. Importantly, these benefits are independent of the infrastructure as a service (IaaS) or platform as a service (PaaS) used to execute them, promising a versatile and adaptable solution.
Event-driven interaction history allows for real-time sharing of business moments as asynchronous events. The schema-based knowledge network implementation improves a system’s scalability, agility, and adaptability through the dynamic control and feedback signals exchanged among the system’s components. This architecture separates the traditional sensing, analysis, control, and actuation elements for a given system across a distributed network. It uses shared information between nodes, and nodes that are wired together fire together to execute collective behavior.
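The "wired together, fire together" behavior across separated sense/analyze/actuate elements can be sketched as follows. This is a minimal illustration under assumed names (`KnowledgeNode`, the 0.8 load threshold); the real knowledge network distributes these nodes across containers.

```python
class KnowledgeNode:
    """A node that fires (executes its process) when an event arrives
    and shares its output with the nodes it is wired to."""

    def __init__(self, name, process):
        self.name, self.process = name, process
        self.wired = []  # downstream nodes ("wired together")
        self.log = []    # local event-driven interaction history

    def fire(self, event):
        self.log.append(event)
        output = self.process(event)
        for node in self.wired:  # wired nodes fire together
            node.fire(output)

# Sensing, analysis, and actuation separated across the network:
actuator = KnowledgeNode("actuate", lambda e: e)
analyzer = KnowledgeNode("analyze",
                         lambda e: {"action": "scale" if e["load"] > 0.8 else "hold"})
sensor = KnowledgeNode("sense", lambda e: {"load": e["load"]})

sensor.wired.append(analyzer)
analyzer.wired.append(actuator)

sensor.fire({"load": 0.9})
print(actuator.log)  # -> [{'action': 'scale'}]
```

Each node keeps only local knowledge (its process and its log) while the wiring carries the shared information that produces the collective behavior.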
The knowledge network nodes and sub-networks can be composed without impacting the rest of the system, thus improving scalability, agility, and adaptability. Communication between nodes can be directed through cryptographic security to enhance the system’s security [9]. Other examples of networks with a control network overlay are communication networks with signaling overlay, cellular organisms with networks of genes and neurons, and human organizational networks with hierarchical and matrix management control overlay. They all behave as a society of autonomous components with local knowledge of their role, function, and operation. They use shared knowledge about their role, function, and operation to contribute to the systemic goals. The system is designed to optimize the global process execution while considering the constraints of the components’ local processes. The digital genome specifies how to build, operate, and manage a society of software components with complex organizational structures where autonomous components execute specific tasks and collaborate in groups to fulfill systemic goals with shared knowledge. The GTI-based schema of the knowledge network allows us to model a society of autonomous components that execute systemic goals with autopoietic and cognitive behaviors. The VoD implementation using the schema defined in this paper illustrates this process.

5.1. Future Directions

Our results show that schema-based distributed software applications with specific functional and non-functional requirements and policies based on best practices improve the system’s structural stability and enhance cognitive workflow management. The term digital genome, as used here, is a metaphor referring to the genomes of biological systems, which carry the knowledge to build, operate, and manage a society of cells with self-regulation and autopoietic and cognitive behaviors. We have demonstrated how the digital genome is designed and implemented to replicate a VoD service.
This is an emerging area where GTI provides a theoretical and practical foundation for advancing the implementation of AI, enabling the creation of more intelligent and autonomous systems [34,37,38]. It offers a new perspective on how we can infuse autopoietic and cognitive behaviors into digital machines, leading to the evolution of machine intelligence. As Naidoo [41] points out, “The presentation of the overall qualitative framework, comprising a qualitative analysis of information, data, and knowledge, will be valuable and of great assistance in delineating regulatory, ethical, and strategic trajectories. In addition, this framework provides insights (and answers) regarding (1) data privacy and protection; (2) delineations between information, data, and knowledge based on the important notion of trust; (3) a structured approach to establishing the necessary conditions for an open society and system, and the maintenance of said openness, based on the work of Karl Popper and Georg Wilhelm Friedrich Hegel; (4) an active agent approach that promotes autonomy and freedom and protects the open society; and (5) a data governance mechanism based on the work of Friedrich Hayek, which structures the current legal–ethical–financial and social society”.
GTI provides a framework that infuses structural stability to manage availability, scalability, performance, security, survival, and enhanced cognition and reasoning through associative memory and the evolution history of the system’s state.

5.2. Related Work and Contributions of This Paper

As David Baden [42] points out, “According to the GTI, information exists in an abstract world of structures, somewhat analogous to Plato’s ideal world of Forms. This world of ideal structures interacts with the physical and mental worlds, a conceptualization strongly similar to Popper’s three-world ontology. The relation between information and the structure of reality has been noted by other theorists, including Tom Stonier and Luciano Floridi”. He goes on to say that GTI is one of several ‘gap bridging’ theories attempting to integrate information ideas from different domains. As a mathematical theory aiming to incorporate previous approaches, including those of Shannon, Bar-Hillel, Dretske, and others, GTI provides a formalism to unify the varied ways in which information is understood through a series of ontological and axiological principles that express what information is and how it may be measured. It defines information as that which can cause changes in a system so that information may be seen as a form of energy. GTI may, in principle, encompass all forms of information, including the physical, biological, mental, and social, thereby encompassing syntactic, semantic, and pragmatic information. However, it has not yet been extended in any detail into social and ethical domains.
Unfortunately, few applications of GTI explain its practice in building new information technologies. The references by Burgin and Mikkilineni cited in this paper are the only ones that discuss this subject. We hope this paper will stimulate a new generation of computer scientists and information technology professionals to take the research to the next level.
Regarding associative memory and event-driven transaction history, our implementation is based on a schema, and operations on that schema, derived from GTI; it differs from the following approaches:
  • Event-Driven Associative Memory Networks for Knowledge Graph Completion by X. Wang et al. [43]: This paper explores how event-driven associative memory networks can enhance knowledge graph completion tasks. It introduces a novel approach that combines temporal information with associative memory mechanisms to improve link prediction in knowledge graphs.
  • Memory Networks by J. Weston et al. [44]: Although not exclusively focused on associative memory, this influential paper introduces the concept of memory networks. It discusses how external memory can augment neural networks, allowing them to store and retrieve information more effectively.
  • Neural Turing Machines by A. Graves et al. [45]: While not directly related to event-driven transaction history, this paper proposes a model called neural Turing machines (NTMs). NTMs combine neural networks with external memory, enabling them to learn algorithmic tasks and perform associative recall.
How is this work related to AGI?
According to Ben Goertzel [46], “At the moment, AGI system design is as much artistic as scientific, relying heavily on the designer’s scientific intuition. AGI implementation and testing are interwoven with (more or less) inspired tinkering, according to which systems are progressively improved internally as their behaviors are observed in various situations. This sort of approach is not unworkable, and many great inventions have been created via similar processes. It’s unclear how necessary or useful a more advanced AGI theory will be for the creation of practical AGI systems. But it seems likely that, the further we can get toward a theory providing tools to address questions like those listed above, the more systematic and scientific the AGI design process will become, and the more capable the resulting systems”.
In addition, many safety problems for AGI have been identified [47]. While some propose neuro-symbolic computing to augment deep learning [36,48,49,50,51,52], these proposals do not address how to model associative memory and event-driven interaction history. Neuro-symbolic AI integrates neural and symbolic AI architectures to address their weaknesses by adding knowledge graphs that enhance reliability and accuracy with structured knowledge and domain-specific facts. The GTI approach with super-symbolic computing, however, integrates the knowledge generated by symbolic and sub-symbolic computing with higher-level reasoning using associative memory and event-driven interaction history, similar to how the neocortex integrates the knowledge received from various senses.
As shown in Figure 3, perhaps super-symbolic computing and structural machines may provide a path to introduce higher-level reasoning with experience, ethics, and safety using the associative memory and the event-driven interaction history discussed in this paper.
Our work uses the current implementations of AI as nodes in our knowledge network to augment the total knowledge describing the system’s state and its evolution based on event-driven interactions between nodes. GTI also points to the role of knowledge representation in biological systems, which allows for autopoietic and cognitive behaviors, and provides a schema and tools to infuse these behaviors into digital automata. Associative memory and the event-driven interaction history of knowledge structures representing the entities, relationships, and behaviors provide a path to implement higher-level reasoning systems using experience, common sense, and wisdom.
What are the limitations of this approach?
The limitations for rapid adoption of the schema-based approach stem from the need for computer scientists and information technology professionals to overcome the learning curve involved in adopting new concepts from GTI, genomics, and neuroscience, as well as limitations of the current state of the art. Fortunately, this requires no new tools or technologies. A new schema-based architecture that integrates existing symbolic and sub-symbolic computing structures with knowledge acquisition from the new large language models may hasten adoption.
The video of the implementation describing the use of associative memory and event-driven transaction history is available [53] at Digital Genome Implementation Presentations: Autopoietic Machines (triadicautomata.com). Appendix A includes a brief textual explanation of the figures used in the paper. A glossary of the terms used in this paper is included in Appendix B.
We conclude this paper with an excerpt from Mark Burgin [28], p. 4: “Knowledge processing and management make problem solving much more efficient and are crucial for big companies and institutions [54,55,56]. To achieve this goal, it is necessary to make distinction between knowledge and knowledge representation to know regularities of knowledge structure, functioning and representation, and develop software (and in some cases, hardware) that is based on these regularities. Many intelligent systems search knowledge spaces, which are explicitly or implicitly predefined by the choice of knowledge representation. In effect, the knowledge representation serves as strong bias”.

Supplementary Materials

The following supporting information can be downloaded at [53]: Digital Genome Implementation Presentations: Autopoietic Machines (triadicautomata.com) (accessed on 31 August 2024).

Author Contributions

Conceptualization, R.M.; methodology, R.M. and W.P.K.; software, W.P.K. and G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Materials; further inquiries can be directed to the corresponding author(s).

Acknowledgments

One of the authors, Rao Mikkilineni, expresses his gratitude to the late Mark Burgin, who spent countless hours explaining the General Theory of Information and helped him understand its applications regarding software development. Rao Mikkilineni and Patrick Kelly also acknowledge their many discussions with Justin Kromelow, CEO at Opos Solutions, and his continued support.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Explanation of Figures

Figure 1: Current state of the art of information processing structures.
This diagram illustrates the flow and relationship between different components of data processing and artificial intelligence (AI). Here is a breakdown of each part:
  • Data Sources, Data Structures, and Algorithms: Data can come from various sources such as text, audio, pictures, and videos. Data are organized using data structures. Algorithms are applied to process and manipulate this data.
  • Computing Paradigms: Symbolic computing involves using symbols to represent problems and logical rules to solve them. Sub-symbolic computing involves techniques like neural networks and other forms of machine learning.
  • Application in Robotics and Generative AI: The knowledge gained from machine learning and deep learning can be applied to robotics for automation and intelligent behavior.
  • Transformers: A type of model architecture, especially useful for processing sequential data like language. Examples include BERT and GPT.
  • GenAI: Generative AI models, which can generate new data similar to the data they were trained on.
  • Question Answering: The ability of AI to provide answers to questions posed in natural language.
  • Sentiment Analysis: Determining the sentiment expressed in text, such as positive, negative, or neutral.
  • Information Extraction: Extracting structured information from unstructured data.
  • AI Image Generation: Creating new images from textual descriptions or other inputs.
  • Object Recognition: Identifying objects within images or videos.
Figure 2: According to the General Theory of Information, information is the bridge between the material, mental, and digital worlds.
This diagram elaborates on the relationship between information, knowledge, and computing or information processing structures in various worlds (physical, mental, and digital):
  • World of Ideal Structures: Information is seen as fundamental to the world of ideal structures. According to GTI, the ideal structures are represented by named sets/fundamental triads, where entities with established relationships interact with each other and evolve their state based on event-driven behaviors, forming a basic knowledge structure.
  • World of Material Structures: In the physical world, energy relates to matter, and material structures evolve as they are governed by the laws of the conversion of energy and matter.
  • World of Mental Structures: In the mental world, information received by observers is processed by the neural networks in biological systems. Neurons that fire together wire together to create associative memory and event-driven transaction history.
  • World of Digital Structures: In the digital world created by humans, information is processed in digital form using symbolic and sub-symbolic computing structures. In essence, this figure illustrates the comprehensive view of how information is processed, structured, and transformed into knowledge across different realms, linking theoretical foundations to practical implementations in computing.
Figure 3: The digital genome implementation integrating symbolic and sub-symbolic computing.
This figure highlights how symbolic and sub-symbolic computing can be combined, using the tools derived from the General Theory of Information, to create a robust, adaptive software system. The red and blue lines illustrate the interconnected nature of symbolic and sub-symbolic computing structures in creating the knowledge representation. Super-symbolic computing, suggested by GTI, uses symbolic and sub-symbolic elements to create an integrated knowledge representation as knowledge networks. The resulting system is capable of generating knowledge, learning from data, and providing intelligent advice and options.
Figure 4: Two information processing structures.
This image illustrates the evolution from traditional computing models to the more advanced structural machines and knowledge networks suggested by GTI. The Turing machine-based computing model was derived from the observation that “we may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions”. The computing model with structural machines and knowledge networks is derived from the observation that “nodes wired together fire together in a knowledge network to exhibit autopoietic and cognitive behaviors”.
The image contrasts traditional computing, limited by sequential processing and finite state machines, with a future vision of structural machines and knowledge networks, where interconnected nodes process and share information, enabling complex cognitive and self-maintaining behaviors. These nodes are components of a knowledge network driven by a digital genome, which integrates both symbolic and sub-symbolic computing to manage and process information effectively.
Figure 5: Digital genome-driven distributed application with associative memory and event-driven interaction history.
This figure depicts entities, relationships, and behaviors of autonomous distributed system components constituting a knowledge network. The digital genome provides the core framework that integrates various software components to form a cohesive system with specific functional and non-functional requirements, as well as best-practice-based policies. The autopoietic network manager ensures the deployment of the computing structures and maintains structural stability. The cognitive network manager manages the cognitive process workflow, ensuring information flow within the knowledge network.
Knowledge network nodes are specialized units, with each focusing on different aspects of computation and information management:
  • Reasoning Genome: Handles logical reasoning and decision making.
  • Service Workflow Genome: Manages workflows and service operations.
  • Event Genome: Tracks and processes events within the network.
  • User Interface Genome: Manages interactions with users.
  • Red Flag Genome: Identifies and handles anomalies or critical issues.
  • Symbolic Computing Genomes: Handle different aspects of symbolic computation.
  • Sub-Symbolic Computing Genomes: Handle pattern recognition and machine learning tasks.
Associative memory stores information based on associations, allowing the network to recall and utilize relevant knowledge. Event history records all relevant events and interactions, capturing the system’s state and evolution, in addition to providing a single point of truth. The knowledge is represented as a network where nodes receive input, execute the process, and share information with wired nodes capturing the system’s state and evolution history based on node interactions.
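The event history as a single point of truth, from which any node can recover the system's state and evolution, can be sketched as an append-only event log with replay. This is a minimal illustration under assumed names (`EventHistory`, the VoD entity labels), not the paper's implementation.

```python
class EventHistory:
    """Append-only record of interactions: the single point of truth from
    which any node can reconstruct the system's state and its evolution."""

    def __init__(self):
        self._events = []

    def append(self, entity, attribute, value, ts):
        self._events.append((ts, entity, attribute, value))

    def state_at(self, entity, ts):
        """Replay events up to time `ts` to recover the entity's state then."""
        state = {}
        for t, ent, attr, val in sorted(self._events):
            if ent == entity and t <= ts:
                state[attr] = val
        return state

history = EventHistory()
history.append("video:v42", "status", "queued", ts=1)
history.append("video:v42", "status", "streaming", ts=2)
history.append("video:v42", "viewers", 3, ts=3)

print(history.state_at("video:v42", ts=2))  # -> {'status': 'streaming'}
print(history.state_at("video:v42", ts=3))  # -> {'status': 'streaming', 'viewers': 3}
```

Because the history is never overwritten, the same log supports both current-state queries and the retrospective analytics described for the event monitor.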
Figure 6: Schema-based Service Architecture with various components.
This figure depicts the architecture of a video-on-demand (VoD) system driven by a digital genome specification described in the paper and demonstrated in the video, integrating various components to manage and deliver video content efficiently while leveraging associative memory and event history for improved functionality.
It leverages the digital genome framework to ensure self-maintenance, cognitive processing, and effective information flow. The associative memory and event history components provide enhanced decision-making capabilities based on past interactions and stored knowledge. Users interact with the system through a user-friendly interface, selecting and watching video content delivered by the video server. The system continuously monitors events to maintain optimal performance and reliability.
Figure 7: Deployment of the VoD service using Cloud resources. An implementation is presented in the video mentioned in this paper.
This figure demonstrates how a sophisticated VoD system can be built using cloud services and advanced data management techniques. Developers define the system’s functionality, which is then executed by process engines on the Google Cloud infrastructure. The system components, including autopoietic and cognitive network managers, work together to manage workflows, handle anomalies, and ensure the efficient delivery of video content. The knowledge network, supported by a schema, associative memory, and event history, provides the necessary intelligence and adaptability to meet user needs and maintain optimal performance. Python programs are used to execute symbolic and sub-symbolic processes. Containers are used to deploy, operate, and manage the components in a cloud infrastructure. Graph database technology is used to manage complex data relationships, further enhancing the system’s capabilities.

Appendix B. Glossary of Terms

Associative Memory: Associative memory refers to the ability to remember relationships between concepts, not just the individual concepts themselves. For instance, it involves remembering how two words are related (e.g., “man”–“woman”) or recognizing an object and its alternate name (e.g., a guitar). In psychology, it is defined as the capacity to learn and recall the relationship between unrelated items, such as remembering someone’s name or the scent of a particular perfume. Neural associative memories map specific input representations to specific output representations, allowing recall based on associations. In the context of this paper, it refers to the schema and its evolution history of entities, relationships, and interactions that help maintain stability and safety while achieving the system’s objectives through enhanced cognition.
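The mapping-and-recall behavior in this definition can be illustrated with a toy cue-based memory. The class `AssociativeMemory` and its overlap-counting recall rule are illustrative assumptions, far simpler than either neural associative memories or the schema-based memory used in the paper.

```python
class AssociativeMemory:
    """Maps sets of input cues to associated outputs and recalls the best
    match for a partial cue, echoing the glossary definition above."""

    def __init__(self):
        self._pairs = []  # list of (cue set, associated value)

    def learn(self, cues, value):
        self._pairs.append((set(cues), value))

    def recall(self, cues):
        """Return the value whose stored cues overlap the query the most."""
        cues = set(cues)
        best = max(self._pairs, key=lambda p: len(p[0] & cues), default=None)
        return best[1] if best and best[0] & cues else None

memory = AssociativeMemory()
memory.learn({"guitar", "strings", "frets"}, "instrument")
memory.learn({"perfume", "scent"}, "fragrance")

print(memory.recall({"strings", "frets"}))  # -> instrument
print(memory.recall({"scent"}))             # -> fragrance
```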
Autopoiesis: A system’s ability to reproduce and maintain itself. It refers to the self-producing and self-maintaining nature of living systems, which is applied here to distributed software systems. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1618936/ (accessed on 30 July 2024)
CAP Theorem: A principle that states that it is impossible for a distributed data system to simultaneously provide more than two out of three guarantees: consistency, availability, and partition tolerance. https://www.ibm.com/topics/cap-theorem (accessed on 30 July 2024)
Cognitive Network Manager (CNM): A component that manages the communication connections between the nodes executing various processes in a distributed system. It ensures that the system’s workflow and connections align with non-functional requirements. https://www.mdpi.com/2813-0324/8/1/70 (accessed on 30 July 2024)
Cognizing Oracles: Components derived from the General Theory of Information that are used in super-symbolic computing structures to process and understand information in a complex and nuanced way. They play a crucial role in enabling self-regulation and cognitive behaviors in distributed software applications. https://www.researchgate.net/figure/Oracles-as-cognizing-agents-managing-the-execution-and-evolution-of-knowledge-structures_fig5_342741582 (accessed on 30 July 2024)
Digital Genome: A digital specification of operational knowledge defining and executing the life processes of distributed software applications. It includes functional requirements, non-functional requirements, and best-practice policies to maintain system behavior. https://www.mdpi.com/2409-9287/8/6/107 (accessed on 30 July 2024)
Event-Driven Interaction History: A record of interactions and events within a system used to maintain the system’s long-term memory and aid in decision making and stability. https://www.preprints.org/manuscript/202404.1298/v1 (accessed on 30 July 2024)
General Theory of Information (GTI): A theoretical framework that integrates symbolic and sub-symbolic computing structures with a novel super-symbolic structure to address foundational shortcomings in current computational models. https://www.researchgate.net/publication/318740204_The_General_Theory_of_Information_as_a_Unifying_Factor_for_Information_Studies_The_Noble_Eight-Fold_Path (accessed on 30 July 2024)
Infrastructure as a Service (IaaS): A cloud computing service that provides virtualized computing resources over the internet, allowing users to run software without managing the underlying hardware. https://cloud.google.com/learn/what-is-iaas (accessed on 30 July 2024)
Microservice Architecture (MSA): A design approach for distributed systems, where applications are composed of small, independent services that communicate over network protocols, with each serving a single purpose. https://cloud.google.com/learn/what-is-microservices-architecture (accessed on 30 July 2024)
Non-Functional Requirements: Requirements that specify criteria for the operation of a system, such as performance, scalability, and reliability, rather than specific behaviors or functions. https://www.geeksforgeeks.org/non-functional-requirements-in-software-engineering/ (accessed on 30 July 2024)
Platform as a Service (PaaS): A cloud computing service that provides a platform allowing customers to develop, run, and manage applications without dealing with the underlying infrastructure. https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-is-paas (accessed on 30 July 2024)
Service Workflow Manager (SWM): A component that controls the workflow among the nodes delivering a service, ensuring that the system operates as expected and takes corrective actions when deviations occur.
Structural Machines: Machines that use knowledge structures to represent and manage system states and their evolution. They perform operations on these structures to maintain system stability and achieve desired behaviors. https://www.mdpi.com/2504-3900/47/1/26 (accessed on 30 July 2024)
Super-Symbolic Computing: An advanced computing paradigm that integrates symbolic and sub-symbolic representations to create a more nuanced and complex understanding of information, enabling self-regulation and cognitive behaviors in digital systems. https://easychair.org/publications/preprint/PMjC (accessed on 30 July 2024)
Symbolic Computing: A type of computing where symbols and operations on symbols represent and manipulate data, which is often used in traditional artificial intelligence systems. https://easychair.org/publications/preprint/PMjC (accessed on 30 July 2024)
Turing Machine: A mathematical model of computation that defines an abstract machine that manipulates symbols on a strip of tape according to a table of rules. It is the basis for the theory of computation. https://plato.stanford.edu/entries/turing-machine/ (accessed on 30 July 2024)
User Interface Management Sub-Network (UIM): A component that manages the user interface workflow, including registration, login, and video selection interactions in a video-on-demand service.
Video Service Management Sub-Network: A sub-network responsible for managing the video service from content management to video server and client interactions.
Von Neumann Architecture: A computer architecture model where a single storage structure holds both the instructions and data that are used in most contemporary computers. https://www.computerscience.gcse.guru/theory/von-neumann-architecture (accessed on 30 July 2024).

References

  1. 20 Best Distributed System Books of All Time—BookAuthority. Available online: https://bookauthority.org/books/best-distributed-system-books (accessed on 17 June 2024).
  2. Bohloul, S.M. Service-oriented Architecture: A review of state-of-the-art literature from an organizational perspective. J. Ind. Integr. Manag. 2021, 6, 353–382. [Google Scholar] [CrossRef]
  3. Söylemez, M.; Tekinerdogan, B.; Kolukısa Tarhan, A. Challenges and Solution Directions of Microservice Architectures: A Systematic Literature Review. Appl. Sci. 2022, 12, 5507. [Google Scholar] [CrossRef]
  4. CAP Theorem (Explained). YouTube, uploaded by Techdose, 9 December 2018. Available online: https://youtu.be/PyLMoN8kHwI?si=gtHWzvt2gelf3kly (accessed on 17 June 2024).
  5. Opara-Martins, J.; Sahandi, R.; Tian, F. Critical analysis of vendor lock-in and its impact on cloud computing migration: A business perspective. J. Cloud. Comp. 2016, 5, 4. [Google Scholar] [CrossRef]
  6. Vaquero, L.M.; Cuadrado, F.; Elkhatib, Y.; Bernal-Bernabe, J.; Srirama, S.N.; Zhani, M.F. Research challenges in nextgen service orchestration. Future Gener. Comput. Syst. 2019, 90, 20–38. [Google Scholar] [CrossRef]
  7. Burgin, M. Super-Recursive Algorithms; Monographs in Computer Science; Springer: Berlin/Heidelberg, Germany, 2005; ISBN 0-387-95569-0. [Google Scholar]
  8. Dodig Crnkovic, G. Info-Computationalism and Morphological Computing of Informational Structure; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar] [CrossRef]
  9. Burgin, M.; Mikkilineni, R. General Theory of Information Paves the Way to a Secure, Service-Oriented Internet Connecting People, Things, and Businesses. In Proceedings of the 2022 12th International Congress on Advanced Applied Informatics (IIAI-AAI), Kanazawa, Japan, 2–8 July 2022; pp. 144–149. [Google Scholar]
  10. Dodig Crnkovic, G. Significance of Models of Computation, from Turing Model to Natural Computation. Minds Mach. 2011, 21, 301–322. [Google Scholar] [CrossRef]
  11. Cockshott, P.; MacKenzie, L.M.; Michaelson, G. Computation and Its Limits; Oxford University Press: Oxford, UK, 2012; p. 215. [Google Scholar]
  12. van Leeuwen, J.; Wiedermann, J. The Turing machine paradigm in contemporary computing. In Mathematics Unlimited—2001 and Beyond; Enquist, B., Schmidt, W., Eds.; LNCS; Springer: New York, NY, USA, 2000. [Google Scholar]
  13. Wegner, P.; Eberbach, E. New Models of Computation. Comput. J. 2004, 47, 4–9. [Google Scholar] [CrossRef]
  14. Wegner, P.; Goldin, D. Computation beyond Turing Machines: Seeking appropriate methods to model computing and human thought. Commun. ACM 2003, 46, 100. [Google Scholar] [CrossRef]
  15. Rothman, D. Transformers for Natural Language Processing and Computer Vision: Explore Generative AI and Large Language Models with Hugging Face, ChatGPT, GPT-4V, and DALL-E 3; Packt Publishing Ltd.: Birmingham, UK, 2024. [Google Scholar]
  16. Hu, W.; Li, X.; Li, C.; Li, R.; Jiang, T.; Sun, H.; Huang, X.; Grzegorzek, M.; Li, X. A state-of-the-art survey of artificial neural networks for whole-slide image analysis: From popular convolutional neural networks to potential visual transformers. Comput. Biol. Med. 2023, 161, 107034. [Google Scholar] [CrossRef]
  17. Soori, M.; Arezoo, B.; Dastres, R. Artificial intelligence, machine learning and deep learning in advanced robotics, a review. Cogn. Robot. 2023, 3, 54–70. [Google Scholar] [CrossRef]
  18. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  19. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386–408. [Google Scholar] [CrossRef] [PubMed]
  20. Karimian, G.; Petelos, E.; Evers, S.M. The ethical issues of the application of artificial intelligence in healthcare: A systematic scoping review. AI Ethics 2022, 2, 539–551. [Google Scholar] [CrossRef]
  21. Groumpos, P.P. Artificial intelligence: Issues, challenges, opportunities and threats. In Creativity in Intelligent Technologies and Data Science: Third Conference, CIT&DS 2019, Volgograd, Russia, 16–19 September 2019, Proceedings, Part I 3; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 19–33. [Google Scholar]
  22. Mikkilineni, R. A New Class of Autopoietic and Cognitive Machines. Information 2022, 13, 24. [Google Scholar] [CrossRef]
  23. Burgin, M.; Mikkilineni, R. From Data Processing to Knowledge Processing: Working with Operational Schemas by Autopoietic Machines. Big Data Cogn. Comput. 2021, 5, 13. [Google Scholar] [CrossRef]
  24. Burgin, M. Mathematical Schema Theory for Modeling in Business and Industry. In Proceedings of the 2006 Spring Simulation Multi Conference (SpringSim ’06), Huntsville, AL, USA, 2–6 April 2006; pp. 229–234. [Google Scholar]
  25. Burgin, M. Unified Foundations of Mathematics. arXiv 2004, arXiv:math/0403186. [Google Scholar] [CrossRef]
  26. Burgin, M. Theory of Named Sets, Mathematics Research Developments; Nova Science: New York, NY, USA, 2011. [Google Scholar]
  27. Burgin, M. Structural Reality; Nova Science Publishers: New York, NY, USA, 2012. [Google Scholar]
  28. Burgin, M. Theory of Knowledge: Structures and Processes; World Scientific: New York, NY, USA; London, UK; Singapore, 2016. [Google Scholar]
  29. Burgin, M. Theory of Information: Fundamentality, Diversity, and Unification; World Scientific: Singapore, 2010. [Google Scholar]
  30. Burgin, M. Ideas of Plato in the Context of Contemporary Science and Mathematics. Athens J. Humanit. Arts 2017, 4, 161–182. [Google Scholar] [CrossRef]
  31. Burgin, M. Information Processing by Structural Machines. In Theoretical Information Studies: Information in the World; World Scientific: New York, NY, USA; London, UK; Singapore, 2020; pp. 323–371. [Google Scholar]
  32. Burgin, M. Elements of the Theory of Nested Named Sets. Theory Appl. Math. Comput. Sci. 2020, 10, 46–70. [Google Scholar]
  33. Renard, D.A. From Data to Knowledge Processing Machines. Proceedings 2022, 81, 26. [Google Scholar] [CrossRef]
  34. Burgin, M. Triadic Automata and Machines as Information Transformers. Information 2020, 11, 102. [Google Scholar] [CrossRef]
  35. Burgin, M.; Mikkilineni, R. Information Theoretic Principles of Software Development, EasyChair Preprint No. 9222. 2022. Available online: https://easychair.org/publications/preprint/jnMd (accessed on 31 August 2024).
  36. Shipps, A. MIT News. 2024. Available online: https://news.mit.edu/2024/natural-language-boosts-llm-performance-coding-planning-robotics-0501 (accessed on 20 June 2024).
  37. Mikkilineni, R. Mark Burgin’s Legacy: The General Theory of Information, the Digital Genome, and the Future of Machine Intelligence. Philosophies 2023, 8, 107. [Google Scholar] [CrossRef]
  38. Mikkilineni, R. Infusing Autopoietic and Cognitive Behaviors into Digital Automata to Improve Their Sentience, Resilience, and Intelligence. Big Data Cogn. Comput. 2022, 6, 7. [Google Scholar] [CrossRef]
  39. Krzanowski, R. Information: What We Do and Do Not Know—A Review. Available online: https://www.researchgate.net/publication/370105722_Information_What_We_Do_and_Do_Not_Know-A_Review (accessed on 30 June 2023).
  40. Floridi, L. Information: A Very Short Introduction; Oxford University Press: Oxford, UK, 2010. [Google Scholar]
  41. Naidoo, M. The open ontology and information society. Front. Genet. 2024, 15, 1290658. [Google Scholar] [CrossRef] [PubMed]
  42. Bawden, D. The Occasional Informationist. 2023. Available online: https://theoccasionalinformationist.com/2023/03/05/mark-burgin-1946-2023/ (accessed on 28 June 2024).
  43. Wang, X.; Zhang, J.; Wang, Y. Event-Driven Associative Memory Networks for Knowledge Graph Completion. In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI), Virtual Event, 2–9 February 2021. [Google Scholar]
  44. Weston, J.; Chopra, S.; Bordes, A. Memory Networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS), Montreal, QC, Canada, 7–12 December 2015. [Google Scholar]
  45. Graves, A.; Wayne, G.; Danihelka, I. Neural Turing Machines. arXiv 2014, arXiv:1410.5401. [Google Scholar]
  46. Goertzel, B. Artificial General Intelligence: Concept, State of the Art, and Future Prospects. J. Artif. Gen. Intell. 2009, 5, 1–46. [Google Scholar] [CrossRef]
  47. Everitt, T.; Lea, G.; Hutter, M. AGI safety literature review. arXiv 2018, arXiv:1805.01109. [Google Scholar]
  48. Garcez, A.D.; Besold, T.R.; de Raedt, L.; Földiak, P.; Hitzler, P.; Icard, T.; Kühnberger, K.-U.; Lamb, L.C.; Miikkulainen, R.; Silver, D.L. Neural-symbolic learning and reasoning: Contributions and challenges. In 2015 AAAI Spring Symposium Series; Association for the Advancement of Artificial Intelligence: Washington, DC, USA, 2015. [Google Scholar]
  49. Garcez, A.D.A.; Gori, M.; Lamb, L.C.; Serafini, L.; Spranger, M.; Tran, S.N. Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning. arXiv 2019, arXiv:1905.06088. [Google Scholar]
  50. Lamb, L.C.; Garcez, A.; Gori, M.; Prates, M.; Avelar, P.; Vardi, M. Graph neural networks meet neural-symbolic computing: A survey and perspective. arXiv 2020, arXiv:2003.00330. [Google Scholar]
  51. Besold, T.R.; Garcez, A.D.; Bader, S.; Bowman, H.; Domingos, P.; Hitzler, P.; Kühnberger, K.-U.; Lamb, L.C.; Lima, P.M.V.; de Penning, L.; et al. Neural-symbolic learning and reasoning: A survey and interpretation 1. In Neuro-Symbolic Artificial Intelligence: The State of the Art; IOS Press: Amsterdam, The Netherlands, 2021; pp. 1–51. [Google Scholar]
  52. Hitzler, P.; Eberhart, A.; Ebrahimi, M.; Sarker, M.K.; Zhou, L. Neuro-symbolic approaches in artificial intelligence. Natl. Sci. Rev. 2022, 9, nwac035. [Google Scholar] [CrossRef]
  53. Digital Genome Implementation Presentations, Autopoietic Machines (triadicautomata.com). Available online: https://triadicautomata.com/digital-genome-vod-presentation/ (accessed on 30 August 2024).
  54. Ueno, H.; Koyama, T.; Okamoto, T.; Matsubi, B.; Isidzuka, M. Knowledge Representation and Utilization; Mir: Moscow, Russia, 1987; (Russian translation from the Japanese). [Google Scholar]
  55. Osuga, S. Knowledge Processing; Mir: Moscow, Russia, 1989; (Russian translation from the Japanese). [Google Scholar]
  56. Dalkir, K. Knowledge Management Theory and Practice; Butterworth-Heinemann: Boston, MA, USA, 2005. [Google Scholar]
Figure 1. Current state of the art of information processing structures.
Figure 2. According to the General Theory of Information [29], information is the bridge between the material, mental, and digital worlds.
Figure 3. The digital genome implementation integrates symbolic and sub-symbolic computing.
Figure 4. Two information processing structures.
Figure 5. Digital genome-driven distributed application with associative memory and event-driven interaction history. Both the associative memory and the event history, shown as networks, are described in detail in the video https://triadicautomata.com/digital-genome-vod-presentation/ (accessed on 30 August 2024).
Figure 6. Schema-based service architecture with its various components. Both the associative memory and the event history, shown as networks, are described in detail in the video https://triadicautomata.com/digital-genome-vod-presentation/ (accessed on 30 August 2024).
Figure 7. Deployment of the VoD service using cloud resources. The video referred to above presents an implementation and describes its various modules in detail.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Mikkilineni, R.; Kelly, W.P.; Crawley, G. Digital Genome and Self-Regulating Distributed Software Applications with Associative Memory and Event-Driven History. Computers 2024, 13, 220. https://doi.org/10.3390/computers13090220
